I'm announcing the release of the 6.12.2 kernel.
All users of the 6.12 kernel series must upgrade.
The updated 6.12.y git tree can be found at:
	git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git linux-6.12.y
and can be browsed at the normal kernel.org git web browser:
	https://git.kernel.org/?p=linux/kernel/git/stable/linux-stable.git;a=summa...
thanks,
greg k-h
------------
Documentation/ABI/testing/sysfs-fs-f2fs | 7 Documentation/RCU/stallwarn.rst | 2 Documentation/admin-guide/blockdev/zram.rst | 2 Documentation/admin-guide/media/building.rst | 2 Documentation/admin-guide/media/saa7134.rst | 2 Documentation/arch/x86/boot.rst | 17 Documentation/devicetree/bindings/cache/qcom,llcc.yaml | 36 Documentation/devicetree/bindings/clock/adi,axi-clkgen.yaml | 22 Documentation/devicetree/bindings/iio/dac/adi,ad3552r.yaml | 2 Documentation/devicetree/bindings/pci/mediatek-pcie-gen3.yaml | 5 Documentation/devicetree/bindings/pinctrl/samsung,pinctrl-wakeup-interrupt.yaml | 19 Documentation/devicetree/bindings/serial/rs485.yaml | 19 Documentation/devicetree/bindings/sound/mt6359.yaml | 10 Documentation/devicetree/bindings/vendor-prefixes.yaml | 2 Documentation/filesystems/mount_api.rst | 3 Documentation/locking/seqlock.rst | 2 MAINTAINERS | 335 +-- Makefile | 2 arch/arc/kernel/devtree.c | 2 arch/arm/boot/dts/allwinner/sun9i-a80-cubieboard4.dts | 4 arch/arm/boot/dts/microchip/sam9x60.dtsi | 12 arch/arm/boot/dts/renesas/r7s72100-genmai.dts | 2 arch/arm/boot/dts/ti/omap/omap36xx.dtsi | 1 arch/arm/kernel/devtree.c | 2 arch/arm64/boot/dts/freescale/imx8mn-tqma8mqnl-mba8mx-usbotg.dtso | 29 arch/arm64/boot/dts/mediatek/mt6358.dtsi | 4 arch/arm64/boot/dts/mediatek/mt8173-elm-hana.dtsi | 8 arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-burnet.dts | 3 arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-cozmo.dts | 2 arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-damu.dts | 3 arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-fennel.dtsi | 3 arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi.dtsi | 30 arch/arm64/boot/dts/mediatek/mt8183-kukui-kakadu.dtsi | 4 arch/arm64/boot/dts/mediatek/mt8183-kukui-kodama.dtsi | 4 arch/arm64/boot/dts/mediatek/mt8183-kukui-krane.dtsi | 4 arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi | 5 arch/arm64/boot/dts/mediatek/mt8183.dtsi | 4 arch/arm64/boot/dts/mediatek/mt8186-corsola-voltorb.dtsi | 21 arch/arm64/boot/dts/mediatek/mt8186-corsola.dtsi | 8 arch/arm64/boot/dts/mediatek/mt8188.dtsi | 5 arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi | 6 arch/arm64/boot/dts/mediatek/mt8195.dtsi | 4 arch/arm64/boot/dts/mediatek/mt8395-genio-1200-evk.dts | 2 arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi | 2 arch/arm64/boot/dts/qcom/qcs6490-rb3gen2.dts | 2 arch/arm64/boot/dts/qcom/sc8180x.dtsi | 2 arch/arm64/boot/dts/qcom/sda660-inforce-ifc6560.dts | 2 arch/arm64/boot/dts/qcom/sm6350.dtsi | 14 arch/arm64/boot/dts/qcom/x1e80100-asus-vivobook-s15.dts | 4 arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts | 4 arch/arm64/boot/dts/qcom/x1e80100.dtsi | 8 arch/arm64/boot/dts/renesas/hihope-rev2.dtsi | 3 arch/arm64/boot/dts/renesas/hihope-rev4.dtsi | 3 arch/arm64/boot/dts/rockchip/rk3568-wolfvision-pf5-io-expander.dtso | 1 arch/arm64/boot/dts/rockchip/rk3588s-indiedroid-nova.dts | 2 arch/arm64/boot/dts/rockchip/rk3588s-orangepi-5.dts | 1 arch/arm64/boot/dts/ti/k3-am62x-phyboard-lyra.dtsi | 2 arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts | 2 arch/arm64/boot/dts/ti/k3-j7200-main.dtsi | 38 arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi | 6 arch/arm64/boot/dts/ti/k3-j721e-mcu-wakeup.dtsi | 6 arch/arm64/boot/dts/ti/k3-j721s2-main.dtsi | 16 arch/arm64/boot/dts/ti/k3-j721s2-mcu-wakeup.dtsi | 6 arch/arm64/include/asm/insn.h | 1 arch/arm64/include/asm/kvm_host.h | 2 arch/arm64/kernel/cpufeature.c | 1 arch/arm64/kernel/probes/decode-insn.c | 7 arch/arm64/kernel/process.c | 2 arch/arm64/kernel/setup.c | 6 arch/arm64/kernel/vmlinux.lds.S | 6 arch/arm64/kvm/arch_timer.c 
| 3 arch/arm64/kvm/arm.c | 18 arch/arm64/kvm/mmio.c | 32 arch/arm64/kvm/pmu-emul.c | 1 arch/arm64/kvm/vgic/vgic-its.c | 32 arch/arm64/kvm/vgic/vgic-mmio-v3.c | 7 arch/arm64/kvm/vgic/vgic.h | 23 arch/arm64/net/bpf_jit_comp.c | 47 arch/csky/kernel/setup.c | 4 arch/loongarch/Makefile | 4 arch/loongarch/kernel/setup.c | 2 arch/loongarch/net/bpf_jit.c | 2 arch/loongarch/vdso/Makefile | 2 arch/m68k/coldfire/device.c | 8 arch/m68k/include/asm/mcfgpio.h | 2 arch/m68k/include/asm/mvme147hw.h | 4 arch/m68k/kernel/early_printk.c | 5 arch/m68k/mvme147/config.c | 30 arch/m68k/mvme147/mvme147.h | 6 arch/microblaze/kernel/microblaze_ksyms.c | 10 arch/microblaze/kernel/prom.c | 2 arch/mips/include/asm/switch_to.h | 2 arch/mips/kernel/prom.c | 2 arch/mips/kernel/relocate.c | 2 arch/nios2/kernel/prom.c | 4 arch/openrisc/Kconfig | 3 arch/openrisc/include/asm/fixmap.h | 21 arch/openrisc/kernel/prom.c | 2 arch/openrisc/mm/init.c | 37 arch/parisc/kernel/ftrace.c | 2 arch/powerpc/include/asm/dtl.h | 4 arch/powerpc/include/asm/fadump.h | 9 arch/powerpc/include/asm/kvm_book3s_64.h | 4 arch/powerpc/include/asm/sstep.h | 5 arch/powerpc/include/asm/vdso.h | 1 arch/powerpc/kernel/dt_cpu_ftrs.c | 2 arch/powerpc/kernel/fadump.c | 40 arch/powerpc/kernel/prom.c | 5 arch/powerpc/kernel/setup-common.c | 6 arch/powerpc/kernel/setup_64.c | 1 arch/powerpc/kexec/file_load_64.c | 9 arch/powerpc/kvm/book3s_hv.c | 14 arch/powerpc/kvm/book3s_hv_nested.c | 14 arch/powerpc/kvm/trace_hv.h | 2 arch/powerpc/lib/sstep.c | 12 arch/powerpc/mm/fault.c | 10 arch/powerpc/platforms/pseries/dtl.c | 8 arch/powerpc/platforms/pseries/lpar.c | 8 arch/powerpc/platforms/pseries/plpks.c | 2 arch/riscv/include/asm/cpufeature.h | 2 arch/riscv/kernel/setup.c | 2 arch/riscv/kernel/traps_misaligned.c | 14 arch/riscv/kernel/unaligned_access_speed.c | 1 arch/riscv/kvm/aia_aplic.c | 3 arch/riscv/kvm/vcpu_sbi.c | 11 arch/s390/include/asm/facility.h | 18 arch/s390/include/asm/pci.h | 4 arch/s390/include/asm/set_memory.h | 1 arch/s390/kernel/perf_cpum_sf.c | 2 arch/s390/kernel/syscalls/Makefile | 2 arch/s390/mm/pageattr.c | 15 arch/s390/pci/pci.c | 37 arch/s390/pci/pci_debug.c | 10 arch/sh/kernel/cpu/proc.c | 2 arch/sh/kernel/setup.c | 2 arch/um/drivers/net_kern.c | 2 arch/um/drivers/ubd_kern.c | 4 arch/um/drivers/vector_kern.c | 3 arch/um/kernel/dtb.c | 2 arch/um/kernel/physmem.c | 6 arch/um/kernel/process.c | 2 arch/um/kernel/sysrq.c | 2 arch/x86/coco/tdx/tdx.c | 111 + arch/x86/crypto/aegis128-aesni-asm.S | 29 arch/x86/events/intel/pt.c | 11 arch/x86/events/intel/pt.h | 2 arch/x86/include/asm/atomic64_32.h | 3 arch/x86/include/asm/cmpxchg_32.h | 6 arch/x86/include/asm/kvm_host.h | 4 arch/x86/include/asm/shared/tdx.h | 11 arch/x86/include/asm/tlb.h | 4 arch/x86/kernel/cpu/amd.c | 1 arch/x86/kernel/cpu/common.c | 4 arch/x86/kernel/cpu/microcode/amd.c | 25 arch/x86/kernel/devicetree.c | 2 arch/x86/kernel/unwind_orc.c | 2 arch/x86/kvm/Kconfig | 5 arch/x86/kvm/mmu/mmu.c | 68 arch/x86/kvm/mmu/spte.c | 18 arch/x86/kvm/vmx/vmx.c | 54 arch/x86/mm/tlb.c | 3 arch/x86/platform/pvh/head.S | 9 arch/xtensa/kernel/setup.c | 2 block/bfq-cgroup.c | 1 block/bfq-iosched.c | 43 block/blk-core.c | 18 block/blk-merge.c | 45 block/blk-mq.c | 148 + block/blk-mq.h | 13 block/blk-settings.c | 7 block/blk-sysfs.c | 6 block/blk-zoned.c | 14 block/blk.h | 30 block/elevator.c | 10 block/fops.c | 25 block/genhd.c | 24 crypto/pcrypt.c | 12 drivers/accel/ivpu/ivpu_ipc.c | 35 drivers/accel/ivpu/ivpu_ipc.h | 7 drivers/accel/ivpu/ivpu_jsm_msg.c | 19 drivers/acpi/arm64/gtdt.c | 2 
drivers/acpi/cppc_acpi.c | 1 drivers/base/firmware_loader/main.c | 5 drivers/base/regmap/regmap-irq.c | 4 drivers/base/trace.h | 6 drivers/block/brd.c | 70 drivers/block/loop.c | 6 drivers/block/ublk_drv.c | 17 drivers/block/virtio_blk.c | 46 drivers/block/zram/Kconfig | 1 drivers/block/zram/zram_drv.c | 21 drivers/block/zram/zram_drv.h | 1 drivers/bluetooth/btbcm.c | 4 drivers/bluetooth/btintel.c | 62 drivers/bluetooth/btintel.h | 7 drivers/bluetooth/btintel_pcie.c | 265 ++ drivers/bluetooth/btintel_pcie.h | 16 drivers/bluetooth/btmtk.c | 1 drivers/bluetooth/btusb.c | 1 drivers/bus/mhi/host/trace.h | 25 drivers/clk/.kunitconfig | 1 drivers/clk/Kconfig | 2 drivers/clk/clk-apple-nco.c | 3 drivers/clk/clk-axi-clkgen.c | 22 drivers/clk/clk-en7523.c | 264 ++ drivers/clk/clk-loongson2.c | 6 drivers/clk/imx/clk-fracn-gppll.c | 10 drivers/clk/imx/clk-imx8-acm.c | 4 drivers/clk/imx/clk-lpcg-scu.c | 37 drivers/clk/imx/clk-scu.c | 2 drivers/clk/mediatek/Kconfig | 15 drivers/clk/qcom/Kconfig | 4 drivers/clk/ralink/clk-mtmips.c | 26 drivers/clk/renesas/rzg2l-cpg.c | 11 drivers/clk/sophgo/clk-sg2042-pll.c | 2 drivers/clk/sunxi-ng/ccu-sun20i-d1.c | 2 drivers/clocksource/Kconfig | 3 drivers/clocksource/timer-ti-dm-systimer.c | 4 drivers/comedi/comedi_fops.c | 12 drivers/counter/stm32-timer-cnt.c | 17 drivers/counter/ti-ecap-capture.c | 7 drivers/cpufreq/amd-pstate.c | 26 drivers/cpufreq/cppc_cpufreq.c | 63 drivers/cpufreq/loongson2_cpufreq.c | 4 drivers/cpufreq/loongson3_cpufreq.c | 7 drivers/cpufreq/mediatek-cpufreq-hw.c | 2 drivers/crypto/bcm/cipher.c | 5 drivers/crypto/caam/caampkc.c | 11 drivers/crypto/caam/qi.c | 2 drivers/crypto/cavium/cpt/cptpf_main.c | 6 drivers/crypto/hisilicon/hpre/hpre_main.c | 35 drivers/crypto/hisilicon/qm.c | 47 drivers/crypto/hisilicon/sec2/sec_main.c | 35 drivers/crypto/hisilicon/zip/zip_main.c | 35 drivers/crypto/inside-secure/safexcel_hash.c | 2 drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c | 2 drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c | 2 drivers/crypto/intel/qat/qat_common/adf_aer.c | 5 drivers/crypto/intel/qat/qat_common/adf_dbgfs.c | 13 drivers/crypto/intel/qat/qat_common/adf_hw_arbiter.c | 4 drivers/crypto/mxs-dcp.c | 20 drivers/dax/pmem/Makefile | 7 drivers/dax/pmem/pmem.c | 10 drivers/dma-buf/Kconfig | 1 drivers/dma-buf/udmabuf.c | 44 drivers/edac/bluefield_edac.c | 2 drivers/edac/fsl_ddr_edac.c | 22 drivers/edac/i10nm_base.c | 1 drivers/edac/igen6_edac.c | 2 drivers/edac/skx_common.c | 57 drivers/edac/skx_common.h | 8 drivers/firmware/arm_scpi.c | 3 drivers/firmware/efi/libstub/efi-stub.c | 2 drivers/firmware/efi/tpm.c | 17 drivers/firmware/google/gsmi.c | 6 drivers/gpio/gpio-exar.c | 10 drivers/gpio/gpio-zevio.c | 6 drivers/gpu/drm/Kconfig | 1 drivers/gpu/drm/amd/amdgpu/amdgpu_aca.c | 2 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c | 13 drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c | 8 drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c | 63 drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c | 7 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 2 drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c | 18 drivers/gpu/drm/amd/amdkfd/kfd_process.c | 14 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 32 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h | 3 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c | 4 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c | 13 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c | 15 drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c | 3 drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c | 6 
drivers/gpu/drm/bridge/analogix/anx7625.c | 2 drivers/gpu/drm/bridge/ite-it6505.c | 2 drivers/gpu/drm/bridge/tc358767.c | 7 drivers/gpu/drm/drm_file.c | 2 drivers/gpu/drm/drm_mm.c | 2 drivers/gpu/drm/etnaviv/etnaviv_drv.c | 10 drivers/gpu/drm/etnaviv/etnaviv_gpu.c | 28 drivers/gpu/drm/fsl-dcu/Kconfig | 1 drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.c | 15 drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.h | 3 drivers/gpu/drm/imagination/pvr_ccb.c | 2 drivers/gpu/drm/imagination/pvr_vm.c | 4 drivers/gpu/drm/imx/dcss/dcss-crtc.c | 6 drivers/gpu/drm/imx/ipuv3/ipuv3-crtc.c | 6 drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 4 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h | 12 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h | 14 drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c | 2 drivers/gpu/drm/msm/msm_gpu_devfreq.c | 9 drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c | 1 drivers/gpu/drm/omapdrm/dss/base.c | 25 drivers/gpu/drm/omapdrm/dss/omapdss.h | 3 drivers/gpu/drm/omapdrm/omap_drv.c | 4 drivers/gpu/drm/omapdrm/omap_gem.c | 10 drivers/gpu/drm/panel/panel-newvision-nv3052c.c | 2 drivers/gpu/drm/panel/panel-novatek-nt35510.c | 15 drivers/gpu/drm/panfrost/panfrost_devfreq.c | 3 drivers/gpu/drm/panfrost/panfrost_gpu.c | 1 drivers/gpu/drm/panthor/panthor_devfreq.c | 29 drivers/gpu/drm/panthor/panthor_device.h | 28 drivers/gpu/drm/panthor/panthor_sched.c | 333 +++ drivers/gpu/drm/radeon/radeon_audio.c | 12 drivers/gpu/drm/v3d/v3d_drv.h | 1 drivers/gpu/drm/v3d/v3d_irq.c | 2 drivers/gpu/drm/v3d/v3d_mmu.c | 31 drivers/gpu/drm/v3d/v3d_sched.c | 46 drivers/gpu/drm/vc4/tests/vc4_mock.c | 12 drivers/gpu/drm/vc4/vc4_bo.c | 28 drivers/gpu/drm/vc4/vc4_crtc.c | 13 drivers/gpu/drm/vc4/vc4_drv.c | 22 drivers/gpu/drm/vc4/vc4_drv.h | 8 drivers/gpu/drm/vc4/vc4_gem.c | 24 drivers/gpu/drm/vc4/vc4_hdmi.c | 22 drivers/gpu/drm/vc4/vc4_hvs.c | 64 drivers/gpu/drm/vc4/vc4_irq.c | 10 drivers/gpu/drm/vc4/vc4_kms.c | 14 drivers/gpu/drm/vc4/vc4_perfmon.c | 20 drivers/gpu/drm/vc4/vc4_plane.c | 12 drivers/gpu/drm/vc4/vc4_render_cl.c | 2 drivers/gpu/drm/vc4/vc4_v3d.c | 10 drivers/gpu/drm/vc4/vc4_validate.c | 8 drivers/gpu/drm/vc4/vc4_validate_shaders.c | 2 drivers/gpu/drm/vkms/vkms_output.c | 5 drivers/gpu/drm/xe/display/xe_hdcp_gsc.c | 2 drivers/gpu/drm/xe/xe_sync.c | 6 drivers/gpu/drm/xlnx/zynqmp_disp.c | 3 drivers/gpu/drm/xlnx/zynqmp_kms.c | 2 drivers/hid/hid-hyperv.c | 58 drivers/hid/wacom_wac.c | 4 drivers/hwmon/aquacomputer_d5next.c | 2 drivers/hwmon/nct6775-core.c | 7 drivers/hwmon/pmbus/pmbus_core.c | 12 drivers/hwmon/tps23861.c | 2 drivers/i2c/i2c-dev.c | 17 drivers/i3c/master.c | 13 drivers/iio/accel/adxl380.c | 2 drivers/iio/adc/ad4000.c | 6 drivers/iio/adc/pac1921.c | 4 drivers/iio/dac/adi-axi-dac.c | 2 drivers/iio/industrialio-backend.c | 4 drivers/iio/industrialio-gts-helper.c | 2 drivers/iio/light/al3010.c | 11 drivers/infiniband/core/roce_gid_mgmt.c | 30 drivers/infiniband/core/uverbs.h | 2 drivers/infiniband/core/uverbs_main.c | 43 drivers/infiniband/hw/bnxt_re/ib_verbs.c | 7 drivers/infiniband/hw/bnxt_re/main.c | 28 drivers/infiniband/hw/bnxt_re/qplib_fp.h | 2 drivers/infiniband/hw/hns/hns_roce_cq.c | 4 drivers/infiniband/hw/hns/hns_roce_debugfs.c | 3 drivers/infiniband/hw/hns/hns_roce_device.h | 14 drivers/infiniband/hw/hns/hns_roce_hem.c | 48 drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 257 +- drivers/infiniband/hw/hns/hns_roce_hw_v2.h | 8 drivers/infiniband/hw/hns/hns_roce_main.c | 7 drivers/infiniband/hw/hns/hns_roce_mr.c | 11 drivers/infiniband/hw/hns/hns_roce_qp.c | 77 drivers/infiniband/hw/hns/hns_roce_srq.c | 
4 drivers/infiniband/hw/mlx5/main.c | 69 drivers/infiniband/hw/mlx5/mlx5_ib.h | 2 drivers/infiniband/sw/rxe/rxe_qp.c | 1 drivers/infiniband/sw/rxe/rxe_req.c | 6 drivers/input/misc/cs40l50-vibra.c | 6 drivers/interconnect/qcom/icc-rpmh.c | 3 drivers/iommu/amd/io_pgtable_v2.c | 3 drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c | 5 drivers/iommu/intel/iommu.c | 40 drivers/iommu/s390-iommu.c | 73 drivers/irqchip/irq-mvebu-sei.c | 2 drivers/irqchip/irq-riscv-aplic-main.c | 3 drivers/irqchip/irq-riscv-aplic-msi.c | 3 drivers/leds/flash/leds-ktd2692.c | 1 drivers/leds/leds-max5970.c | 5 drivers/mailbox/arm_mhuv2.c | 8 drivers/mailbox/mtk-cmdq-mailbox.c | 2 drivers/mailbox/omap-mailbox.c | 1 drivers/media/i2c/adv7604.c | 5 drivers/media/i2c/adv7842.c | 13 drivers/media/i2c/ds90ub960.c | 2 drivers/media/i2c/max96717.c | 6 drivers/media/i2c/vgxy61.c | 2 drivers/media/pci/intel/ipu6/Kconfig | 6 drivers/media/pci/intel/ipu6/ipu6-bus.c | 6 drivers/media/pci/intel/ipu6/ipu6-buttress.c | 34 drivers/media/pci/intel/ipu6/ipu6-cpd.c | 18 drivers/media/pci/intel/ipu6/ipu6-dma.c | 202 +- drivers/media/pci/intel/ipu6/ipu6-dma.h | 34 drivers/media/pci/intel/ipu6/ipu6-fw-com.c | 14 drivers/media/pci/intel/ipu6/ipu6-mmu.c | 28 drivers/media/pci/intel/ipu6/ipu6.c | 3 drivers/media/radio/wl128x/fmdrv_common.c | 3 drivers/media/test-drivers/vivid/vivid-vid-cap.c | 15 drivers/media/v4l2-core/v4l2-dv-timings.c | 132 - drivers/message/fusion/mptsas.c | 4 drivers/mfd/da9052-spi.c | 2 drivers/mfd/intel_soc_pmic_bxtwc.c | 132 - drivers/mfd/rt5033.c | 4 drivers/mfd/tps65010.c | 8 drivers/misc/apds990x.c | 12 drivers/misc/lkdtm/bugs.c | 2 drivers/mmc/host/mmc_spi.c | 9 drivers/mtd/hyperbus/rpc-if.c | 7 drivers/mtd/nand/raw/atmel/pmecc.c | 8 drivers/mtd/nand/raw/atmel/pmecc.h | 2 drivers/mtd/spi-nor/core.c | 2 drivers/mtd/spi-nor/spansion.c | 1 drivers/mtd/ubi/attach.c | 12 drivers/mtd/ubi/fastmap-wl.c | 19 drivers/mtd/ubi/vmt.c | 2 drivers/mtd/ubi/wl.c | 11 drivers/mtd/ubi/wl.h | 3 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 37 drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c | 9 drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c | 4 drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h | 3 drivers/net/ethernet/broadcom/tg3.c | 3 drivers/net/ethernet/google/gve/gve_adminq.c | 4 drivers/net/ethernet/intel/i40e/i40e_ethtool.c | 2 drivers/net/ethernet/intel/ice/ice_virtchnl.c | 21 drivers/net/ethernet/marvell/octeontx2/af/cgx.c | 70 drivers/net/ethernet/marvell/octeontx2/af/cgx.h | 5 drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h | 7 drivers/net/ethernet/marvell/octeontx2/af/rpm.c | 87 drivers/net/ethernet/marvell/octeontx2/af/rpm.h | 18 drivers/net/ethernet/marvell/octeontx2/af/rvu.c | 1 drivers/net/ethernet/marvell/octeontx2/af/rvu.h | 1 drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c | 45 drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c | 5 drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c | 4 drivers/net/ethernet/marvell/octeontx2/nic/otx2_dcbnl.c | 5 drivers/net/ethernet/marvell/octeontx2/nic/otx2_dmac_flt.c | 9 drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c | 10 drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c | 10 drivers/net/ethernet/marvell/pxa168_eth.c | 14 drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c | 12 drivers/net/ethernet/meta/fbnic/fbnic_pci.c | 2 drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c | 17 drivers/net/ethernet/realtek/rtase/rtase.h | 7 drivers/net/ethernet/realtek/rtase/rtase_main.c | 43 drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c | 2 
drivers/net/ethernet/wangxun/txgbe/txgbe_irq.c | 24 drivers/net/ethernet/wangxun/txgbe/txgbe_main.c | 1 drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c | 168 - drivers/net/ethernet/wangxun/txgbe/txgbe_phy.h | 2 drivers/net/ethernet/wangxun/txgbe/txgbe_type.h | 7 drivers/net/mdio/mdio-ipq4019.c | 5 drivers/net/netdevsim/ipsec.c | 11 drivers/net/usb/lan78xx.c | 40 drivers/net/wireless/ath/ath10k/mac.c | 4 drivers/net/wireless/ath/ath11k/qmi.c | 3 drivers/net/wireless/ath/ath12k/dp.c | 19 drivers/net/wireless/ath/ath12k/mac.c | 5 drivers/net/wireless/ath/ath12k/wow.c | 2 drivers/net/wireless/ath/ath9k/htc_hst.c | 3 drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c | 3 drivers/net/wireless/intel/iwlegacy/3945.c | 2 drivers/net/wireless/intel/iwlegacy/4965-mac.c | 2 drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c | 8 drivers/net/wireless/intersil/p54/p54spi.c | 4 drivers/net/wireless/marvell/mwifiex/cmdevt.c | 2 drivers/net/wireless/marvell/mwifiex/fw.h | 2 drivers/net/wireless/marvell/mwifiex/main.c | 4 drivers/net/wireless/marvell/mwifiex/util.c | 2 drivers/net/wireless/microchip/wilc1000/netdev.c | 6 drivers/net/wireless/realtek/rtl8xxxu/core.c | 6 drivers/net/wireless/realtek/rtlwifi/efuse.c | 11 drivers/net/wireless/realtek/rtw89/cam.c | 259 ++ drivers/net/wireless/realtek/rtw89/cam.h | 24 drivers/net/wireless/realtek/rtw89/chan.c | 215 +- drivers/net/wireless/realtek/rtw89/chan.h | 4 drivers/net/wireless/realtek/rtw89/coex.c | 157 + drivers/net/wireless/realtek/rtw89/coex.h | 6 drivers/net/wireless/realtek/rtw89/core.c | 887 ++++++---- drivers/net/wireless/realtek/rtw89/core.h | 421 +++- drivers/net/wireless/realtek/rtw89/debug.c | 127 + drivers/net/wireless/realtek/rtw89/fw.c | 637 ++++--- drivers/net/wireless/realtek/rtw89/fw.h | 192 +- drivers/net/wireless/realtek/rtw89/mac.c | 700 ++++--- drivers/net/wireless/realtek/rtw89/mac.h | 98 - drivers/net/wireless/realtek/rtw89/mac80211.c | 653 +++++-- drivers/net/wireless/realtek/rtw89/mac_be.c | 69 drivers/net/wireless/realtek/rtw89/phy.c | 399 ++-- drivers/net/wireless/realtek/rtw89/phy.h | 7 drivers/net/wireless/realtek/rtw89/ps.c | 109 - drivers/net/wireless/realtek/rtw89/ps.h | 14 drivers/net/wireless/realtek/rtw89/regd.c | 79 drivers/net/wireless/realtek/rtw89/rtw8851b.c | 13 drivers/net/wireless/realtek/rtw89/rtw8852a.c | 12 drivers/net/wireless/realtek/rtw89/rtw8852b.c | 13 drivers/net/wireless/realtek/rtw89/rtw8852bt.c | 13 drivers/net/wireless/realtek/rtw89/rtw8852c.c | 12 drivers/net/wireless/realtek/rtw89/rtw8922a.c | 10 drivers/net/wireless/realtek/rtw89/ser.c | 37 drivers/net/wireless/realtek/rtw89/wow.c | 217 +- drivers/net/wireless/realtek/rtw89/wow.h | 10 drivers/net/wireless/silabs/wfx/main.c | 17 drivers/net/wireless/st/cw1200/cw1200_spi.c | 2 drivers/nvme/host/core.c | 5 drivers/nvme/host/multipath.c | 21 drivers/nvme/host/pci.c | 55 drivers/of/fdt.c | 14 drivers/of/kexec.c | 2 drivers/pci/controller/cadence/pci-j721e.c | 26 drivers/pci/controller/dwc/pcie-qcom-ep.c | 6 drivers/pci/controller/dwc/pcie-qcom.c | 4 drivers/pci/controller/dwc/pcie-tegra194.c | 7 drivers/pci/endpoint/functions/pci-epf-mhi.c | 6 drivers/pci/hotplug/cpqphp_pci.c | 15 drivers/pci/pci.c | 5 drivers/pci/slot.c | 4 drivers/perf/arm-cmn.c | 4 drivers/perf/arm_smmuv3_pmu.c | 19 drivers/phy/phy-airoha-pcie-regs.h | 6 drivers/phy/phy-airoha-pcie.c | 8 drivers/phy/realtek/phy-rtk-usb2.c | 2 drivers/phy/realtek/phy-rtk-usb3.c | 2 drivers/pinctrl/pinctrl-k210.c | 2 drivers/pinctrl/pinctrl-zynqmp.c | 1 drivers/pinctrl/qcom/pinctrl-spmi-gpio.c | 2 
drivers/pinctrl/renesas/Kconfig | 1 drivers/pinctrl/renesas/pinctrl-rzg2l.c | 2 drivers/platform/chrome/cros_ec_typec.c | 1 drivers/platform/x86/asus-wmi.c | 64 drivers/platform/x86/intel/bxtwc_tmu.c | 22 drivers/platform/x86/intel/pmt/class.c | 8 drivers/platform/x86/intel/pmt/class.h | 2 drivers/platform/x86/intel/pmt/telemetry.c | 2 drivers/platform/x86/panasonic-laptop.c | 10 drivers/pmdomain/ti/ti_sci_pm_domains.c | 4 drivers/power/reset/Kconfig | 1 drivers/power/sequencing/Kconfig | 1 drivers/power/supply/bq27xxx_battery.c | 37 drivers/power/supply/power_supply_core.c | 2 drivers/power/supply/rt9471.c | 52 drivers/pwm/core.c | 10 drivers/pwm/pwm-imx27.c | 98 + drivers/regulator/qcom_smd-regulator.c | 2 drivers/regulator/rk808-regulator.c | 15 drivers/remoteproc/Kconfig | 6 drivers/remoteproc/qcom_q6v5_adsp.c | 11 drivers/remoteproc/qcom_q6v5_mss.c | 6 drivers/remoteproc/qcom_q6v5_pas.c | 22 drivers/rpmsg/qcom_glink_native.c | 3 drivers/rtc/interface.c | 7 drivers/rtc/rtc-ab-eoz9.c | 7 drivers/rtc/rtc-abx80x.c | 2 drivers/rtc/rtc-rzn1.c | 8 drivers/rtc/rtc-st-lpc.c | 5 drivers/s390/cio/cio.c | 6 drivers/s390/cio/device.c | 18 drivers/s390/virtio/virtio_ccw.c | 4 drivers/scsi/bfa/bfad.c | 3 drivers/scsi/hisi_sas/hisi_sas_main.c | 8 drivers/scsi/qedf/qedf_main.c | 1 drivers/scsi/qedi/qedi_main.c | 1 drivers/scsi/sg.c | 9 drivers/sh/intc/core.c | 2 drivers/soc/fsl/qe/qmc.c | 4 drivers/soc/fsl/rcpm.c | 1 drivers/soc/qcom/qcom-geni-se.c | 3 drivers/soc/ti/smartreflex.c | 4 drivers/soc/xilinx/xlnx_event_manager.c | 4 drivers/spi/atmel-quadspi.c | 2 drivers/spi/spi-fsl-lpspi.c | 12 drivers/spi/spi-tegra210-quad.c | 2 drivers/spi/spi-zynqmp-gqspi.c | 2 drivers/spi/spi.c | 13 drivers/staging/media/atomisp/pci/sh_css_params.c | 2 drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c | 6 drivers/target/target_core_pscsi.c | 2 drivers/thermal/testing/zone.c | 32 drivers/thermal/thermal_core.c | 123 - drivers/thermal/thermal_core.h | 12 drivers/tty/serial/8250/8250_fintek.c | 14 drivers/tty/serial/8250/8250_omap.c | 4 drivers/tty/serial/amba-pl011.c | 7 drivers/tty/tty_io.c | 2 drivers/usb/dwc3/core.h | 4 drivers/usb/dwc3/ep0.c | 2 drivers/usb/dwc3/gadget.c | 69 drivers/usb/gadget/composite.c | 18 drivers/usb/gadget/function/uvc_video.c | 4 drivers/usb/host/ehci-spear.c | 7 drivers/usb/host/xhci-pci.c | 10 drivers/usb/host/xhci-ring.c | 73 drivers/usb/host/xhci.c | 40 drivers/usb/host/xhci.h | 3 drivers/usb/misc/chaoskey.c | 35 drivers/usb/misc/iowarrior.c | 50 drivers/usb/misc/usb-ljca.c | 20 drivers/usb/misc/yurex.c | 5 drivers/usb/musb/musb_gadget.c | 13 drivers/usb/typec/tcpm/wcove.c | 4 drivers/usb/typec/ucsi/ucsi_ccg.c | 5 drivers/usb/typec/ucsi/ucsi_glink.c | 2 drivers/vdpa/mlx5/core/mr.c | 4 drivers/vfio/pci/mlx5/cmd.c | 6 drivers/vfio/pci/mlx5/main.c | 35 drivers/vfio/pci/vfio_pci_config.c | 16 drivers/video/fbdev/sh7760fb.c | 3 drivers/watchdog/Kconfig | 4 drivers/xen/xenbus/xenbus_probe.c | 8 fs/binfmt_elf.c | 2 fs/binfmt_elf_fdpic.c | 5 fs/binfmt_misc.c | 7 fs/cachefiles/interface.c | 14 fs/cachefiles/ondemand.c | 38 fs/dlm/ast.c | 2 fs/dlm/recoverd.c | 2 fs/efs/super.c | 43 fs/erofs/data.c | 10 fs/erofs/internal.h | 1 fs/erofs/super.c | 6 fs/erofs/zmap.c | 17 fs/exec.c | 23 fs/exfat/file.c | 10 fs/exfat/namei.c | 21 fs/ext4/balloc.c | 4 fs/ext4/ext4.h | 12 fs/ext4/extents.c | 2 fs/ext4/fsmap.c | 54 fs/ext4/ialloc.c | 5 fs/ext4/indirect.c | 2 fs/ext4/inode.c | 4 fs/ext4/mballoc.c | 18 fs/ext4/mballoc.h | 1 fs/ext4/mmp.c | 2 fs/ext4/move_extent.c | 2 fs/ext4/resize.c | 2 
fs/ext4/super.c | 42 fs/f2fs/checkpoint.c | 2 fs/f2fs/data.c | 29 fs/f2fs/f2fs.h | 3 fs/f2fs/file.c | 17 fs/f2fs/gc.c | 2 fs/f2fs/node.c | 10 fs/f2fs/segment.c | 5 fs/f2fs/segment.h | 35 fs/f2fs/super.c | 37 fs/fcntl.c | 3 fs/fuse/file.c | 62 fs/fuse/fuse_i.h | 6 fs/fuse/virtio_fs.c | 1 fs/gfs2/glock.c | 19 fs/gfs2/glock.h | 1 fs/gfs2/incore.h | 2 fs/gfs2/rgrp.c | 2 fs/gfs2/super.c | 2 fs/hfsplus/hfsplus_fs.h | 3 fs/hfsplus/wrapper.c | 2 fs/hostfs/hostfs_kern.c | 4 fs/isofs/inode.c | 8 fs/jffs2/erase.c | 7 fs/jfs/xattr.c | 2 fs/netfs/fscache_volume.c | 3 fs/nfs/blocklayout/blocklayout.c | 15 fs/nfs/blocklayout/dev.c | 6 fs/nfs/internal.h | 2 fs/nfs/localio.c | 6 fs/nfs/nfs4proc.c | 8 fs/nfs/write.c | 49 fs/nfs_common/nfslocalio.c | 8 fs/nfsd/export.c | 31 fs/nfsd/export.h | 4 fs/nfsd/filecache.c | 14 fs/nfsd/filecache.h | 2 fs/nfsd/nfs4callback.c | 16 fs/nfsd/nfs4proc.c | 7 fs/nfsd/nfs4recover.c | 3 fs/nfsd/nfs4state.c | 5 fs/nfsd/nfs4xdr.c | 2 fs/nfsd/nfsfh.c | 20 fs/nfsd/nfsfh.h | 3 fs/notify/fsnotify.c | 23 fs/notify/mark.c | 12 fs/ntfs3/file.c | 2 fs/ocfs2/aops.h | 2 fs/ocfs2/file.c | 4 fs/proc/kcore.c | 10 fs/read_write.c | 15 fs/smb/client/cached_dir.c | 229 +- fs/smb/client/cached_dir.h | 6 fs/smb/client/cifsacl.c | 50 fs/smb/client/cifsfs.c | 12 fs/smb/client/cifsglob.h | 4 fs/smb/client/cifsproto.h | 5 fs/smb/client/connect.c | 59 fs/smb/client/fs_context.c | 85 fs/smb/client/fs_context.h | 1 fs/smb/client/inode.c | 8 fs/smb/client/reparse.c | 95 - fs/smb/client/reparse.h | 6 fs/smb/client/smb1ops.c | 4 fs/smb/client/smb2file.c | 21 fs/smb/client/smb2inode.c | 6 fs/smb/client/smb2ops.c | 2 fs/smb/client/smb2pdu.c | 6 fs/smb/client/smb2proto.h | 11 fs/smb/client/smb2transport.c | 56 fs/smb/client/trace.h | 3 fs/smb/server/server.c | 4 fs/ubifs/super.c | 6 fs/ubifs/tnc_commit.c | 2 fs/unicode/utf8-core.c | 2 fs/xfs/xfs_bmap_util.c | 8 include/asm-generic/vmlinux.lds.h | 4 include/kunit/skbuff.h | 2 include/linux/blk-mq.h | 2 include/linux/blkdev.h | 12 include/linux/bpf.h | 9 include/linux/cleanup.h | 4 include/linux/compiler_attributes.h | 13 include/linux/compiler_types.h | 19 include/linux/f2fs_fs.h | 6 include/linux/fs.h | 2 include/linux/hisi_acc_qm.h | 8 include/linux/intel_vsec.h | 3 include/linux/jiffies.h | 2 include/linux/kfifo.h | 1 include/linux/kvm_host.h | 6 include/linux/lockdep.h | 2 include/linux/mmdebug.h | 6 include/linux/netpoll.h | 2 include/linux/nfslocalio.h | 18 include/linux/of_fdt.h | 5 include/linux/once.h | 4 include/linux/once_lite.h | 2 include/linux/rcupdate.h | 2 include/linux/rwlock_rt.h | 10 include/linux/sched/ext.h | 1 include/linux/seqlock.h | 98 - include/linux/spinlock_rt.h | 23 include/media/v4l2-dv-timings.h | 18 include/net/bluetooth/hci.h | 4 include/net/bluetooth/hci_core.h | 63 include/net/net_debug.h | 2 include/rdma/ib_verbs.h | 11 include/uapi/linux/rtnetlink.h | 2 init/Kconfig | 9 init/initramfs.c | 15 io_uring/memmap.c | 11 ipc/namespace.c | 4 kernel/bpf/bpf_struct_ops.c | 114 + kernel/bpf/btf.c | 5 kernel/bpf/dispatcher.c | 3 kernel/bpf/trampoline.c | 9 kernel/bpf/verifier.c | 103 + kernel/cgroup/cgroup.c | 21 kernel/fork.c | 26 kernel/rcu/rcuscale.c | 6 kernel/rcu/srcutiny.c | 2 kernel/rcu/tree.c | 14 kernel/rcu/tree_nocb.h | 13 kernel/sched/cpufreq_schedutil.c | 3 kernel/sched/ext.c | 9 kernel/time/time.c | 4 kernel/time/timer.c | 3 kernel/trace/bpf_trace.c | 5 kernel/trace/trace_event_perf.c | 6 lib/overflow_kunit.c | 2 lib/string_helpers.c | 2 lib/strncpy_from_user.c | 5 mm/internal.h | 2 net/9p/trans_usbg.c | 4 
net/9p/trans_xen.c | 9 net/bluetooth/hci_conn.c | 219 +- net/bluetooth/hci_event.c | 39 net/bluetooth/hci_sysfs.c | 15 net/bluetooth/iso.c | 101 - net/bluetooth/mgmt.c | 38 net/bluetooth/rfcomm/sock.c | 10 net/core/filter.c | 88 net/core/netdev-genl.c | 2 net/core/skmsg.c | 4 net/hsr/hsr_device.c | 4 net/ipv4/inet_connection_sock.c | 2 net/ipv4/ipmr.c | 42 net/ipv6/addrconf.c | 41 net/ipv6/ip6_fib.c | 8 net/ipv6/ip6mr.c | 38 net/ipv6/route.c | 51 net/iucv/af_iucv.c | 26 net/l2tp/l2tp_core.c | 22 net/llc/af_llc.c | 2 net/netfilter/ipset/ip_set_bitmap_ip.c | 7 net/netfilter/nf_tables_api.c | 60 net/netlink/af_netlink.c | 21 net/rfkill/rfkill-gpio.c | 8 net/rxrpc/af_rxrpc.c | 7 net/sched/sch_fq.c | 6 net/sunrpc/cache.c | 4 net/sunrpc/svcsock.c | 4 net/sunrpc/xprtrdma/svc_rdma.c | 19 net/sunrpc/xprtrdma/svc_rdma_recvfrom.c | 8 net/sunrpc/xprtsock.c | 17 net/wireless/core.c | 64 net/wireless/mlme.c | 6 net/wireless/nl80211.c | 1 net/xdp/xsk.c | 11 rust/helpers/spinlock.c | 8 rust/kernel/block/mq/request.rs | 67 rust/kernel/lib.rs | 2 rust/kernel/rbtree.rs | 9 rust/macros/lib.rs | 2 samples/bpf/xdp_adjust_tail_kern.c | 1 samples/kfifo/dma-example.c | 1 scripts/checkpatch.pl | 37 scripts/faddr2line | 2 scripts/kernel-doc | 47 scripts/mod/file2alias.c | 5 scripts/package/builddeb | 22 security/apparmor/capability.c | 2 security/apparmor/policy_unpack_test.c | 6 sound/core/pcm_native.c | 6 sound/core/rawmidi.c | 4 sound/core/sound_kunit.c | 11 sound/core/ump.c | 5 sound/pci/hda/patch_realtek.c | 153 - sound/soc/amd/acp/acp-sdw-sof-mach.c | 16 sound/soc/amd/yc/acp6x-mach.c | 25 sound/soc/codecs/da7213.c | 1 sound/soc/codecs/da7219.c | 9 sound/soc/codecs/rt722-sdca.c | 8 sound/soc/fsl/fsl-asoc-card.c | 8 sound/soc/fsl/fsl_micfil.c | 4 sound/soc/fsl/imx-audmix.c | 3 sound/soc/mediatek/mt8188/mt8188-mt6359.c | 9 sound/soc/mediatek/mt8192/mt8192-mt6359-rt1015-rt5682.c | 4 sound/soc/mediatek/mt8195/mt8195-mt6359.c | 9 sound/usb/6fire/chip.c | 10 sound/usb/caiaq/audio.c | 10 sound/usb/caiaq/audio.h | 1 sound/usb/caiaq/device.c | 19 sound/usb/caiaq/input.c | 12 sound/usb/caiaq/input.h | 1 sound/usb/clock.c | 24 sound/usb/quirks.c | 27 sound/usb/usx2y/us122l.c | 5 sound/usb/usx2y/usbusx2y.c | 2 tools/bpf/bpftool/jit_disasm.c | 40 tools/gpio/gpio-sloppy-logic-analyzer.sh | 2 tools/include/nolibc/arch-s390.h | 1 tools/lib/bpf/Makefile | 3 tools/lib/bpf/libbpf.c | 99 - tools/lib/bpf/linker.c | 2 tools/lib/thermal/commands.c | 52 tools/perf/Makefile.config | 2 tools/perf/builtin-ftrace.c | 2 tools/perf/builtin-list.c | 4 tools/perf/builtin-stat.c | 52 tools/perf/builtin-trace.c | 23 tools/perf/pmu-events/empty-pmu-events.c | 2 tools/perf/pmu-events/jevents.py | 2 tools/perf/tests/attr/test-stat-default | 90 - tools/perf/tests/attr/test-stat-detailed-1 | 106 - tools/perf/tests/attr/test-stat-detailed-2 | 130 - tools/perf/tests/attr/test-stat-detailed-3 | 138 + tools/perf/util/bpf-filter.c | 2 tools/perf/util/cs-etm.c | 25 tools/perf/util/disasm.c | 15 tools/perf/util/evlist.c | 19 tools/perf/util/evlist.h | 1 tools/perf/util/machine.c | 2 tools/perf/util/mem-events.c | 8 tools/perf/util/pfm.c | 4 tools/perf/util/pmus.c | 2 tools/perf/util/probe-finder.c | 21 tools/perf/util/probe-finder.h | 4 tools/power/x86/turbostat/turbostat.c | 5 tools/testing/selftests/arm64/abi/hwcap.c | 6 tools/testing/selftests/arm64/mte/check_tags_inclusion.c | 4 tools/testing/selftests/arm64/mte/mte_common_util.c | 4 tools/testing/selftests/bpf/Makefile | 3 tools/testing/selftests/bpf/network_helpers.h | 1 
tools/testing/selftests/bpf/prog_tests/timer_lockup.c | 6 tools/testing/selftests/bpf/progs/test_spin_lock_fail.c | 4 tools/testing/selftests/bpf/progs/test_tp_btf_nullable.c | 6 tools/testing/selftests/bpf/test_progs.c | 9 tools/testing/selftests/bpf/test_sockmap.c | 165 + tools/testing/selftests/bpf/uprobe_multi.c | 4 tools/testing/selftests/mount_setattr/mount_setattr_test.c | 2 tools/testing/selftests/net/Makefile | 1 tools/testing/selftests/net/ipv6_route_update_soft_lockup.sh | 262 ++ tools/testing/selftests/net/netfilter/conntrack_dump_flush.c | 6 tools/testing/selftests/net/pmtu.sh | 2 tools/testing/selftests/resctrl/fill_buf.c | 2 tools/testing/selftests/resctrl/mbm_test.c | 16 tools/testing/selftests/resctrl/resctrl_val.c | 3 tools/testing/selftests/vDSO/parse_vdso.c | 3 tools/testing/selftests/wireguard/netns.sh | 1 tools/tracing/rtla/src/timerlat_hist.c | 2 tools/tracing/rtla/src/timerlat_top.c | 2 virt/kvm/kvm_main.c | 103 - 896 files changed, 13140 insertions(+), 6525 deletions(-)
Abel Vesa (3):
      arm64: dts: qcom: x1e80100-slim7x: Drop orientation-switch from USB SS[0-1] QMP PHYs
      arm64: dts: qcom: x1e80100-vivobook-s15: Drop orientation-switch from USB SS[0-1] QMP PHYs
      dt-bindings: cache: qcom,llcc: Fix X1E80100 reg entries
Adrian Hunter (1): perf/x86/intel/pt: Fix buffer full but size is 0 case
Adrián Larumbe (4):
      drm/panfrost: Add missing OPP table refcnt decremental
      drm/panthor: introduce job cycle and timestamp accounting
      drm/panthor: record current and maximum device clock frequencies
      drm/panthor: Fix OPP refcnt leaks in devfreq initialisation
Ahmed Ehab (1): locking/lockdep: Avoid creating new name string literals in lockdep_set_subclass()
Ahsan Atta (1): crypto: qat - remove faulty arbiter config reset
Alan Maguire (1): selftests/bpf: Fix uprobe_multi compilation error
Aleksa Savic (1): hwmon: (aquacomputer_d5next) Fix length of speed_input array
Aleksandr Mishin (1): acpi/arm64: Adjust error handling procedure in gtdt_parse_timer_block()
Aleksei Vetrov (1): wifi: nl80211: fix bounds checker error in nl80211_parse_sched_scan
Alex Zenla (2):
      9p/xen: fix init sequence
      9p/xen: fix release of IRQ
Alexander Aring (2):
      dlm: fix swapped args sb_flags vs sb_status
      dlm: fix dlm_recover_members refcount on error
Alexis Lothoré (eBPF Foundation) (1): selftests/bpf: add missing header include for htons
Alper Nebi Yasak (2):
      arm64: dts: mediatek: mt8183-kukui: Disable DPI display interface
      wifi: mwifiex: Fix memcpy() field-spanning write warning in mwifiex_config_scan()
Amir Goldstein (1): fsnotify: fix sending inotify event with unexpected filename
Andre Przywara (5):
      kselftest/arm64: hwcap: fix f8dp2 cpuinfo name
      kselftest/arm64: mte: fix printf type warnings about __u64
      kselftest/arm64: mte: fix printf type warnings about longs
      ARM: dts: cubieboard4: Fix DCDC5 regulator constraints
      clk: sunxi-ng: d1: Fix PLL_AUDIO0 preset
Andreas Gruenbacher (3):
      gfs2: Rename GLF_VERIFY_EVICT to GLF_VERIFY_DELETE
      gfs2: Allow immediate GLF_VERIFY_DELETE work
      gfs2: Fix unlinked inode cleanup
Andreas Kemnade (1): ARM: dts: omap36xx: declare 1GHz OPP as turbo again
Andrei Simion (1): ARM: dts: microchip: sam9x60: Add missing property atmel,usart-mode
Andrej Shadura (1): Bluetooth: Fix type of len in rfcomm_sock_getsockopt{,_old}()
Andrii Nakryiko (4):
      libbpf: fix sym_is_subprog() logic for weak global subprogs
      libbpf: never interpret subprogs in .text as entry programs
      selftests/bpf: fix test_spin_lock_fail.c's global vars usage
      libbpf: move global data mmap()'ing into bpf_object__load()
André Almeida (1): unicode: Fix utf8_load() error path
Andy Shevchenko (11):
      regmap: irq: Set lockdep class for hierarchical IRQ domains
      drm/mm: Mark drm_mm_interval_tree*() functions with __maybe_unused
      mfd: intel_soc_pmic_bxtwc: Use IRQ domain for USB Type-C device
      mfd: intel_soc_pmic_bxtwc: Use IRQ domain for TMU device
      mfd: intel_soc_pmic_bxtwc: Use IRQ domain for PMIC devices
      mfd: intel_soc_pmic_bxtwc: Fix IRQ domain names duplication
      cpufreq: loongson3: Check for error code from devm_mutex_init() call
      gpio: zevio: Add missed label initialisation
      iio: adc: ad4000: Check for error code from devm_mutex_init() call
      iio: adc: pac1921: Check for error code from devm_mutex_init() call
      x86/Documentation: Update algo in init_size description of boot protocol
Angelo Dureghello (2):
      iio: dac: adi-axi-dac: fix wrong register bitfield
      dt-bindings: iio: dac: ad3552r: fix maximum spi speed
Antonio Quartulli (1): m68k: coldfire/device.c: only build FEC when HW macros are defined
Antoniu Miclaus (1): iio: accel: adxl380: fix raw sample read
Anurag Dutta (3):
      arm64: dts: ti: k3-j7200: Fix clock ids for MCSPI instances
      arm64: dts: ti: k3-j721e: Fix clock IDs for MCSPI instances
      arm64: dts: ti: k3-j721s2: Fix clock IDs for MCSPI instances
Ard Biesheuvel (1): x86/pvh: Call C code via the kernel virtual mapping
Armin Wolf (1): platform/x86: asus-wmi: Fix inconsistent use of thermal policies
Arnaldo Carvalho de Melo (1): perf ftrace latency: Fix unit on histogram first entry when using --use-nsec
Arnd Bergmann (5):
      wifi: ath12k: fix one more memcpy size error
      mailbox, remoteproc: k3-m4+: fix compile testing
      power: reset: ep93xx: add AUXILIARY_BUS dependency
      KVM: x86: add back X86_LOCAL_APIC dependency
      serial: amba-pl011: fix build regression
Artem Sadovnikov (1): jfs: xattr: check invalid xattr size more strictly
Aurabindo Pillai (1): drm/amd/display: fix a memleak issue when driver is removed
Avihai Horon (1): vfio/pci: Properly hide first-in-list PCIe extended capability
Balaji Pothunoori (1): wifi: ath11k: Fix CE offset address calculation for WCN6750 in SSR
Baochen Qiang (2):
      wifi: ath10k: fix invalid VHT parameters in supported_vht_mcs_rate_nss1
      wifi: ath10k: fix invalid VHT parameters in supported_vht_mcs_rate_nss2
Baolin Liu (1): scsi: target: Fix incorrect function name in pscsi_create_type_disk()
Barnabás Czémán (1): power: supply: bq27xxx: Fix registers of bq27426
Bart Van Assche (3):
      scsi: sg: Enable runtime power management
      power: supply: core: Remove might_sleep() from power_supply_put()
      blk-mq: Make blk_mq_quiesce_tagset() hold the tag list mutex less long
Bartosz Golaszewski (4):
      mmc: mmc_spi: drop buggy snprintf()
      power: sequencing: make the QCom PMU pwrseq driver depend on CONFIG_OF
      pinctrl: zynqmp: drop excess struct member description
      lib: string_helpers: silence snprintf() output truncation warning
Baruch Siach (1): doc: rcu: update printed dynticks counter bits
Benjamin Coddington (3):
      SUNRPC: timeout and cancel TLS handshake with -ETIMEDOUT
      nfs/blocklayout: Don't attempt unregister for invalid block device
      nfs/blocklayout: Limit repeat device registration on failure
Benjamin Peterson (3):
      perf trace: avoid garbage when not printing a trace event's arguments
      perf trace: Do not lose last events in a race
      perf trace: Avoid garbage when not printing a syscall's arguments
Benoît Sevens (1): ALSA: usb-audio: Fix potential out-of-bound accesses for Extigy and Mbox devices
Biju Das (3):
      pinctrl: renesas: rzg2l: Fix missing return in rzg2l_pinctrl_register()
      mtd: hyperbus: rpc-if: Add missing MODULE_DEVICE_TABLE
      clk: renesas: rzg2l: Fix FOUTPOSTDIV clk
Bill O'Donnell (1): efs: fix the efs new mount api implementation
Bin Liu (1): serial: 8250: omap: Move pm_runtime_get_sync
Bingbu Cao (2):
      media: ipu6: not override the dma_ops of device in driver
      media: ipu6: remove architecture DMA ops dependency in Kconfig
Björn Töpel (3):
      libbpf: Add missing per-arch include path
      selftests: bpf: Add missing per-arch include path
      riscv: kvm: Fix out-of-bounds array access
Borislav Petkov (AMD) (2):
      x86/mm: Carve out INVLPG inline asm for use by others
      x86/microcode/AMD: Flush patch buffer mapping after application
Breno Leitao (3):
      spi: tegra210-quad: Avoid shift-out-of-bounds
      netpoll: Use rcu_access_pointer() in netpoll_poll_lock
      nvme/multipath: Fix RCU list traversal to use SRCU primitive
Cabiddu, Giovanni (1): crypto: qat - remove check after debugfs_create_dir()
Carl Vanderlip (1): bus: mhi: host: Switch trace_mhi_gen_tre fields to native endian
Carlos Llamas (1): Revert "scripts/faddr2line: Check only two symbols when calculating symbol size"
Chao Yu (6):
      f2fs: fix to account dirty data in __get_secs_required()
      f2fs: fix to avoid potential deadlock in f2fs_record_stop_reason()
      f2fs: fix to map blocks correctly for direct write
      f2fs: fix to avoid forcing direct write to use buffered IO on inline_data inode
      f2fs: fix to do cast in F2FS_{BLK_TO_BYTES, BTYES_TO_BLK} to avoid overflow
      f2fs: fix to do sanity check on node blkaddr in truncate_node()
Charles Han (4):
      clk: clk-apple-nco: Add NULL check in applnco_probe
      phy: realtek: usb: fix NULL deref in rtk_usb2phy_probe
      phy: realtek: usb: fix NULL deref in rtk_usb3phy_probe
      ASoC: imx-audmix: Add NULL check in imx_audmix_probe
Chen Ridong (4):
      crypto: caam - add error check to caam_rsa_set_priv_key_form
      crypto: bcm - add error check in the ahash_hmac_init function
      Revert "cgroup: Fix memory leak caused by missing cgroup_bpf_offline"
      cgroup/bpf: only cgroup v2 can be attached by bpf programs
Chen Yufan (1): drm/imagination: Convert to use time_before macro
Chen-Yu Tsai (5):
      scripts/kernel-doc: Do not track section counter across processed files
      arm64: dts: mediatek: mt8173-elm-hana: Add vdd-supply to second source trackpad
      arm64: dts: mediatek: mt8183-kukui-jacuzzi: Fix DP bridge supply names
      arm64: dts: mediatek: mt8183-kukui-jacuzzi: Add supplies for fixed regulators
      arm64: dts: mediatek: mt8186-corsola-voltorb: Merge speaker codec nodes
Cheng Ming Lin (1): mtd: spi-nor: core: replace dummy buswidth from addr to data
Chengchang Tang (2):
      RDMA/core: Provide rdma_user_mmap_disassociate() to disassociate mmap pages
      RDMA/hns: Disassociate mmap pages for all uctx when HW is being reset
ChiYuan Huang (2):
      power: supply: rt9471: Fix wrong WDT function regfield declaration
      power: supply: rt9471: Use IC status regfield to report real charger status
Chiara Meiohas (3):
      RDMA/mlx5: Call dev_put() after the blocking notifier
      RDMA/core: Implement RoCE GID port rescan and export delete function
      RDMA/mlx5: Ensure active slave attachment to the bond IB device
Chris Lu (1): Bluetooth: btmtk: adjust the position to init iso data anchor
Chris Morgan (1): arm64: dts: rockchip: correct analog audio name on Indiedroid Nova
Christian Brauner (2):
      fcntl: make F_DUPFD_QUERY associative
      Revert "fs: don't block i_writecount during exec"
Christian Loehle (1): sched/cpufreq: Ensure sd is rebuilt for EAS check
Christoph Hellwig (7):
      nvme-pci: fix freeing of the HMB descriptor table
      block: take chunk_sectors into account in bio_split_write_zeroes
      block: fix bio_split_rw_at to take zone_write_granularity into account
      nvme-pci: reverse request order in nvme_queue_rqs
      virtio_blk: reverse request order in virtio_queue_rqs
      kfifo: don't include dma-mapping.h in kfifo.h
      block: return unsigned int from bdev_io_min
Christophe JAILLET (4):
      crypto: caam - Fix the pointer passed to caam_qi_shutdown()
      crypto: cavium - Fix an error handling path in cpt_ucode_load_fw()
      media: i2c: vgxy61: Fix an error handling path in vgxy61_detect()
      iio: light: al3010: Fix an error handling path in al3010_probe()
Christophe Leroy (1): powerpc/vdso: Flag VDSO64 entry points as functions
Chuck Lever (5):
      svcrdma: Address an integer overflow
      NFSD: Prevent NULL dereference in nfsd4_process_cb_update()
      NFSD: Cap the number of bytes copied by nfs4_reset_recoverydir()
      NFSD: Fix nfsd4_shutdown_copy()
      NFSD: Prevent a potential integer overflow
Chun-Tse Shao (1): perf/arm-smmuv3: Fix lockdep assert in ->event_init()
Clark Wang (1): pwm: imx27: Workaround of the pwm output bug when decrease the duty cycle
Claudiu Beznea (2):
      ASoC: da7213: Populate max_register to regmap_config
      serial: sh-sci: Clean sci_ports[0] after at earlycon exit
Colin Ian King (1): media: i2c: ds90ub960: Fix missing return check on ub960_rxport_read call
Csókás, Bence (1): spi: atmel-quadspi: Fix register name in verbose logging function
Damien Le Moal (1): block: Prevent potential deadlock in blk_revalidate_disk_zones()
Dan Carpenter (10):
      crypto: qat/qat_420xx - fix off by one in uof_get_name()
      crypto: qat/qat_4xxx - fix off by one in uof_get_name()
      soc: qcom: geni-se: fix array underflow in geni_se_clk_tbl_get()
      media: i2c: max96717: clean up on error in max96717_subdev_init()
      wifi: rtw89: unlock on error path in rtw89_ops_unassign_vif_chanctx()
      kunit: skb: use "gfp" variable instead of hardcoding GFP_KERNEL
      mailbox: arm_mhuv2: clean up loop in get_irq_chan_comb()
      usb: typec: fix potential array underflow in ucsi_ccg_sync_control()
      cifs: unlock on error in smb3_reconfigure()
      sh: intc: Fix use-after-free bug in register_intc_controller()
Daniel Lezcano (2):
      tools/lib/thermal: Make more generic the command encoding function
      thermal/lib: Fix memory leak on error in thermal_genl_auto()
Daniel Palmer (2):
      m68k: mvme147: Fix SCSI controller IRQ numbers
      m68k: mvme147: Reinstate early console
Daolong Zhu (4):
      arm64: dts: mt8183: fennel: add i2c2's i2c-scl-internal-delay-ns
      arm64: dts: mt8183: burnet: add i2c2's i2c-scl-internal-delay-ns
      arm64: dts: mt8183: cozmo: add i2c2's i2c-scl-internal-delay-ns
      arm64: dts: mt8183: Damu: add i2c2's i2c-scl-internal-delay-ns
Darrick J. Wong (2):
      MAINTAINERS: appoint myself the XFS maintainer for 6.12 LTS
      xfs: fix simplify extent lookup in xfs_can_free_eofblocks
Dave Stevenson (7):
      drm/vc4: hvs: Don't write gamma luts on 2711
      drm/vc4: hvs: Fix dlist debug not resetting the next entry pointer
      drm/vc4: hvs: Remove incorrect limit from hvs_dlist debugfs function
      drm/vc4: hvs: Correct logic on stopping an HVS channel
      drm/vc4: Match drm_dev_enter and exit calls in vc4_hvs_lut_load
      drm/vc4: Match drm_dev_enter and exit calls in vc4_hvs_atomic_flush
      drm/vc4: Correct generation check in vc4_hvs_lut_load
David Disseldorp (1): initramfs: avoid filename buffer overrun
David Laight (1): x86: fix off-by-one in access_ok()
David Lechner (1): iio: adc: ad4000: fix reading unsigned data
David Thompson (1): EDAC/bluefield: Fix potential integer overflow
Dinesh Kumar (1): ALSA: hda/realtek: Fix Internal Speaker and Mic boost of Infinix Y4 Max
Dipendra Khadka (6):
      octeontx2-pf: handle otx2_mbox_get_rsp errors in otx2_common.c
      octeontx2-pf: handle otx2_mbox_get_rsp errors in otx2_ethtool.c
      octeontx2-pf: handle otx2_mbox_get_rsp errors in otx2_flows.c
      octeontx2-pf: handle otx2_mbox_get_rsp errors in cn10k.c
      octeontx2-pf: handle otx2_mbox_get_rsp errors in otx2_dmac_flt.c
      octeontx2-pf: handle otx2_mbox_get_rsp errors in otx2_dcbnl.c
Dirk Su (1): ALSA: hda/realtek: fix mute/micmute LEDs don't work for EliteBook X G1i
Dmitry Antipov (2):
      Bluetooth: fix use-after-free in device_for_each_child()
      ocfs2: fix uninitialized value in ocfs2_file_read_iter()
Dmitry Baryshkov (7):
      arm64: dts: qcom: qcs6390-rb3gen2: use modem.mbn for modem DSP
      arm64: dts: qcom: sda660-ifc6560: fix l10a voltage ranges
      drm/msm/dpu: on SDM845 move DSPP_3 to LM_5 block
      drm/msm/dpu: drop LM_3 / LM_4 on SDM845
      drm/msm/dpu: drop LM_3 / LM_4 on MSM8998
      remoteproc: qcom: pas: add minidump_id to SM8350 resources
      usb: typec: ucsi: glink: fix off-by-one in connector_status
Dom Cobley (2):
      drm/vc4: hdmi: Avoid hang with debug registers when suspended
      drm/vc4: hdmi: Increase audio MAI fifo dreq threshold
Dong Aisheng (1): clk: imx: clk-scu: fix clk enable state save and restore
Dragan Simic (1): regulator: rk808: Restrict DVS GPIOs to the RK808 variant only
Eder Zulian (1): rust: helpers: Avoid raw_spin_lock initialization for PREEMPT_RT
Eduard Zingerman (1): selftests/bpf: Fix backtrace printing for selftests crashes
Edward Adam Davis (1): USB: chaoskey: Fix possible deadlock chaoskey_list_lock
Emmanuel Grumbach (2):
      wifi: iwlwifi: allow fast resume on ax200
      wifi: iwlwifi: mvm: tell iwlmei when we finished suspending
Eric Biggers (1): crypto: x86/aegis128 - access 32-bit arguments as 32-bit
Eric Dumazet (1): net: hsr: fix hsr_init_sk() vs network/transport headers.
Everest K.C (2):
      crypto: cavium - Fix the if condition to exit loop after timeout
      ASoC: rt722-sdca: Remove logically deadcode in rt722-sdca.c
Fangzhi Zuo (3):
      drm/amd/display: Skip Invalid Streams from DSC Policy
      drm/amd/display: Fix incorrect DSC recompute trigger
      drm/amd/display: Reduce HPD Detection Interval for IPS
Fei Shao (3):
      arm64: dts: mediatek: mt8188: Fix USB3 PHY port default status
      arm64: dts: mediatek: mt8195-cherry: Use correct audio codec DAI
      dt-bindings: PCI: mediatek-gen3: Allow exact number of clocks only
Felix Maurer (1): xsk: Free skb when TX metadata options are invalid
Feng Fang (1): RDMA/hns: Fix different dgids mapping to the same dip_idx
Filip Brozovic (1): serial: 8250_fintek: Add support for F81216E
Florian Westphal (3):
      netfilter: nf_tables: avoid false-positive lockdep splat on rule deletion
      netfilter: nf_tables: must hold rcu read lock while iterating expression type list
      netfilter: nf_tables: must hold rcu read lock while iterating object type list
Francesco Zardi (1): rust: block: fix formatting of `kernel::block::mq::request` module
Frank Li (2):
      arm64: dts: imx8mn-tqma8mqnl-mba8mx-usbot: fix coexistence of output-low and output-high in GPIO
      i3c: master: Remove i3c_dev_disable_ibi_locked(olddev) on device hotjoin
Gao Xiang (2):
      erofs: fix file-backed mounts over FUSE
      erofs: handle NONHEAD !delta[1] lclusters gracefully
Gaosheng Cui (2):
      drivers: soc: xilinx: add the missing kfree in xlnx_add_cb_for_suspend()
      firmware_loader: Fix possible resource leak in fw_log_firmware_info()
Gautam Menghani (3):
      KVM: PPC: Book3S HV: Stop using vc->dpdes for nested KVM guests
      KVM: PPC: Book3S HV: Avoid returning to nested hypervisor on pending doorbells
      powerpc/pseries: Fix KVM guest detection for disabling hardlockup detector
Gautham R. Shenoy (1): amd-pstate: Set min_perf to nominal_perf for active mode performance gov
Geert Uytterhoeven (2):
      ASoC: fsl-asoc-card: Add missing handling of {hp,mic}-dt-gpios
      zram: ZRAM_DEF_COMP should depend on ZRAM
Gerd Bayer (1): s390/facilities: Fix warning about shadow of global variable
Greg Kroah-Hartman (2):
      Revert "serial: sh-sci: Clean sci_ports[0] after at earlycon exit"
      Linux 6.12.2
Gregory Price (1): tpm: fix signed/unsigned bug when checking event logs
Guenter Roeck (1): net: microchip: vcap: Add typegroup table terminators in kunit tests
Guilherme G. Piccoli (1): wifi: rtlwifi: Drastically reduce the attempts to read efuse in case of failures
Gustavo A. R. Silva (2):
      clk: clk-loongson2: Fix memory corruption bug in struct loongson2_clk_provider
      clk: clk-loongson2: Fix potential buffer overflow in flexible-array member access
Halil Pasic (1): s390/virtio_ccw: Fix dma_parm pointer not set up
Hangbin Liu (3):
      netdevsim: copy addresses for both in and out paths
      wireguard: selftests: load nf_conntrack if not present
      net/ipv6: delete temporary address if mngtmpaddr is removed or unmanaged
Hans Verkuil (1): media: v4l2-core: v4l2-dv-timings: check cvt/gtf result
Hao Ge (2):
      isofs: avoid memory leak in iocharset
      perf bpf-filter: Return -ENOMEM directly when pfi allocation fails
Hari Bathini (1): powerpc/fadump: allocate memory for additional parameters early
Hariprasad Kelam (5):
      octeontx2-af: RPM: Fix mismatch in lmac type
      octeontx2-af: RPM: Fix low network performance
      octeontx2-af: RPM: fix stale RSFEC counters
      octeontx2-af: RPM: fix stale FCFEC counters
      octeontx2-af: Quiesce traffic before NIX block reset
Harshit Mogalapalli (1): dax: delete a stale directory pmem
Heiko Carstens (1): s390/pageattr: Implement missing kernel_page_present()
Heiko Stuebner (1): arm64: dts: rockchip: Remove 'enable-active-low' from two boards
Henrique Carvalho (1): smb: client: disable directory caching when dir_cache_timeout is zero
Herve Codina (1): soc: fsl: cpm1: qmc: Set the ret error code on platform_get_irq() failure
Hongzhen Luo (1): erofs: fix blksize < PAGE_SIZE for file-backed mounts
Hou Tao (1): virtiofs: use pages instead of pointer for kernel direct IO
Howard Chu (1): perf trace: Fix tracing itself, creating feedback loops
Hsin-Te Yuan (2):
      arm64: dts: mt8183: krane: Fix the address of eeprom at i2c4
      arm64: dts: mt8183: kukui: Fix the address of eeprom at i2c4
Huacai Chen (2):
      LoongArch: Explicitly specify code model in Makefile
      sh: cpuinfo: Fix a warning for CONFIG_CPUMASK_OFFSTACK
Huan Yang (2):
      udmabuf: change folios array from kmalloc to kvmalloc
      udmabuf: fix vmap_udmabuf error page set
Hubert Wiśniewski (1): usb: musb: Fix hardware lockup on first Rx endpoint request
Ian Rogers (3):
      perf stat: Fix affinity memory leaks on error path
      perf disasm: Fix capstone memory leak
      perf probe: Fix libdw memory leak
Igor Prusov (1): dt-bindings: vendor-prefixes: Add NeoFidelity, Inc
Igor Pylypiv (1): i2c: dev: Fix memory leak when underlying adapter does not support I2C
Ilpo Järvinen (1): PCI: cpqphp: Fix PCIBIOS_* return value confusion
Ilya Zverev (1): ASoC: amd: yc: Add a quirk for microfone on Lenovo ThinkPad P14s Gen 5 21MES00B00
Iulia Tanasescu (3):
      Bluetooth: ISO: Do not emit LE PA Create Sync if previous is pending
      Bluetooth: ISO: Do not emit LE BIG Create Sync if previous is pending
      Bluetooth: ISO: Send BIG Create Sync via hci_sync
Jacob Keller (1): ice: consistently use q_idx in ice_vc_cfg_qs_msg()
Jaegeuk Kim (1): Revert "f2fs: remove unreachable lazytime mount option parsing"
Jakub Kicinski (3):
      eth: fbnic: don't disable the PCI device twice
      netlink: fix false positive warning in extack during dumps
      net_sched: sch_fq: don't follow the fast path if Tx is behind now
James Chapman (1): net/l2tp: fix warning in l2tp_exit_net found by syzbot
James Clark (1): perf cs-etm: Don't flush when packet_queue fills up
Jan Hendrik Farr (1): Compiler Attributes: disable __counted_by for clang < 19.1.3
Jan Kara (1): ext4: avoid remount errors with 'abort' mount option
Jann Horn (2):
      fsnotify: Fix ordering of iput() and watched_objects decrement
      comedi: Flush partial mappings in error case
Jared McArthur (1): arm64: dts: ti: k3-j7200: Fix register map for main domain pmx
Jason Gerecke (1): HID: wacom: Interpret tilt data from Intuos Pro BT as signed values
Javier Carrasco (9):
      clocksource/drivers/timer-ti-dm: Fix child node refcount handling
      Bluetooth: btbcm: fix missing of_node_put() in btbcm_get_board_name()
      leds: max5970: Fix unreleased fwnode_handle in probe function
      wifi: brcmfmac: release 'root' node in all execution paths
      platform/chrome: cros_ec_typec: fix missing fwnode reference decrement
      mtd: ubi: fix unreleased fwnode_handle in find_volume_fwnode()
      soc: fsl: rcpm: fix missing of_node_put() in copy_ippdexpcr1_setting()
      staging: vchiq_arm: Fix missing refcount decrement in error path for fw_node
      counter: stm32-timer-cnt: fix device_node handling in probe_encoder()
Jean-Michel Hautbois (1): m68k: mcfgpio: Fix incorrect register offset for CONFIG_M5441x
Jean-Philippe Romain (1): perf list: Fix topic and pmu_name argument order
Jeff Layton (1): nfsd: drop inode parameter from nfsd4_change_attribute()
Jeongjun Park (4):
      wifi: ath9k: add range check for conn_rsp_epid in htc_connect_service()
      usb: using mutex lock and supporting O_NONBLOCK flag in iowarrior_read()
      ext4: supress data-race warnings in ext4_free_inodes_{count,set}()
      netfilter: ipset: add missing range check in bitmap_ip_uadt
Jerome Brunet (1): hwmon: (pmbus/core) clear faults after setting smbalert mask
Jesse Taube (2):
      RISC-V: Scalar unaligned access emulated on hotplug CPUs
      RISC-V: Check scalar unaligned access on all CPUs
Jiasheng Jiang (2):
      counter: stm32-timer-cnt: Add check for clk_enable()
      counter: ti-ecap-capture: Add check for clk_enable()
Jiawen Wu (2):
      net: txgbe: remove GPIO interrupt controller
      net: txgbe: fix null pointer to pcs
Jiayuan Chen (1): bpf: fix recursive lock when verdict program return SK_PASS
Jie Zhan (1): cppc_cpufreq: Use desired perf if feedback ctrs are 0 or unchanged
Jing Zhang (1): KVM: arm64: vgic-its: Add a data length check in vgic_its_save_*
Jinjie Ruan (17):
      spi: spi-fsl-lpspi: Use IRQF_NO_AUTOEN flag in request_irq()
      soc: ti: smartreflex: Use IRQF_NO_AUTOEN flag in request_irq()
      spi: zynqmp-gqspi: Undo runtime PM changes at driver exit time
      wifi: p54: Use IRQF_NO_AUTOEN flag in request_irq()
      wifi: mwifiex: Use IRQF_NO_AUTOEN flag in request_irq()
      drm/imx/dcss: Use IRQF_NO_AUTOEN flag in request_irq()
      drm/imx/ipuv3: Use IRQF_NO_AUTOEN flag in request_irq()
      drm/msm/adreno: Use IRQF_NO_AUTOEN flag in request_irq()
      mfd: tps65010: Use IRQF_NO_AUTOEN flag in request_irq() to fix race
      cpufreq: CPPC: Fix possible null-ptr-deref for cpufreq_cpu_get_raw()
      cpufreq: CPPC: Fix possible null-ptr-deref for cppc_get_cpu_cost()
      cpufreq: CPPC: Fix wrong return value in cppc_get_cpu_cost()
      cpufreq: CPPC: Fix wrong return value in cppc_get_cpu_power()
      misc: apds990x: Fix missing pm_runtime_disable()
      apparmor: test: Fix memory leak for aa_unpack_strdup()
      cpufreq: mediatek-hw: Fix wrong return value in mtk_cpufreq_get_cpu_power()
      rtc: st-lpc: Use IRQF_NO_AUTOEN flag in request_irq()
Jiri Olsa (2):
      bpf: Allow return values 0 and 1 for kprobe session
      bpf: Force uprobe bpf program to always return 0
Joe Damato (1): netdev-genl: Hold rcu_read_lock in napi_get
Joe Hattori (2): remoteproc: qcom: pas: Remove subdevs on the error path of adsp_probe() remoteproc: qcom: adsp: Remove subdevs on the error path of adsp_probe()
Johan Hovold (1): pinctrl: qcom: spmi: fix debugfs drive strength
John Garry (3): block/fs: Pass an iocb to generic_atomic_write_valid() fs/block: Check for IOCB_DIRECT in generic_atomic_write_valid() block: Don't allow an atomic write be truncated in blkdev_write_iter()
Jonas Gorski (1): mips: asm: fix warning when disabling MIPS_FP_SUPPORT
Jonathan Gray (1): drm: use ATOMIC64_INIT() for atomic64_t
Jonathan Marek (3): efi/libstub: fix efi_parse_options() ignoring the default command line clk: qcom: videocc-sm8550: depend on either gcc-sm8550 or gcc-sm8650 rpmsg: glink: use only lower 16-bits of param2 for CMD_OPEN name length
Jose Ignacio Tornos Martinez (2): wifi: ath12k: fix warning when unbinding wifi: ath12k: fix crash when unbinding
Josh Poimboeuf (1): parisc/ftrace: Fix function graph tracing disablement
José Expósito (1): drm/vkms: Drop unnecessary call to drm_crtc_cleanup()
Junxian Huang (3): RDMA/hns: Use dev_* printings in hem code instead of ibdev_* RDMA/hns: Fix out-of-order issue of requester when setting FENCE RDMA/hns: Fix NULL pointer derefernce in hns_roce_map_mr_sg()
Justin Lai (3): rtase: Refactor the rtase_check_mac_version_valid() function rtase: Correct the speed for RTL907XD-V1 rtase: Corrects error handling of the rtase_check_mac_version_valid()
Kailang Yang (4):
      ALSA: hda/realtek: Update ALC256 depop procedure
      ALSA: hda/realtek: Update ALC225 depop procedure
      ALSA: hda/realtek: Enable speaker pins for Medion E15443 platform
      ALSA: hda/realtek: Set PCBeep to default value for ALC274
Kajol Jain (1): KVM: PPC: Book3S HV: Fix kmv -> kvm typo
Kalesh AP (1): RDMA/bnxt_re: Correct the sequence of device suspend
Kalle Valo (1): Revert "wifi: iwlegacy: do not skip frames with bad FCS"
Kan Liang (1): perf jevents: Don't stop at the first matched pmu when searching a events table
Karol Wachowski (1): accel/ivpu: Prevent recovery invocation during probe and resume
Karthikeyan Periyasamy (1): wifi: cfg80211: check radio iface combination for multi radio per wiphy
Kartik Rajput (1): serial: amba-pl011: Fix RX stall when DMA is used
Kashyap Desai (1): RDMA/bnxt_re: Check cqe flags to know imm_data vs inv_irkey
Keita Morisaki (1): devres: Fix page faults when tracing devres from unloaded modules
Kiran K (2): Bluetooth: btintel_pcie: Add handshake between driver and firmware Bluetooth: btintel: Do no pass vendor events to stack
Kirill A. Shutemov (3): x86/tdx: Introduce wrappers to read and write TD metadata x86/tdx: Rename tdx_parse_tdinfo() to tdx_setup() x86/tdx: Dynamically disable SEPT violations from causing #VEs
Konrad Dybcio (2): arm64: dts: qcom: x1e80100: Update C4/C5 residency/exit numbers arm64: dts: qcom: sc8180x: Add a SoC-specific compatible to cpufreq-hw
Konstantin Komarov (1): fs/ntfs3: Equivalent transition from page to folio
Kristina Martsenko (1): arm64: probes: Disable kprobes/uprobes on MOPS instructions
Krzysztof Kozlowski (1): dt-bindings: pinctrl: samsung: Fix interrupt constraint for variants with fallbacks
Kuangyi Chiang (4):
      xhci: Fix control transfer error on Etron xHCI host
      xhci: Combine two if statements for Etron xHCI host
      xhci: Don't perform Soft Retry for Etron xHCI host
      xhci: Don't issue Reset Device command to Etron xHCI host
Kumar Kartikeya Dwivedi (2): bpf: Tighten tail call checks for lingering locks, RCU, preempt_disable bpf: Mark raw_tp arguments with PTR_MAYBE_NULL
Kuniyuki Iwashima (1): tcp: Fix use-after-free of nreq in reqsk_timer_handler().
Kunkun Jiang (2): KVM: arm64: vgic-its: Clear ITE when DISCARD frees an ITE KVM: arm64: vgic-its: Clear DTE when MAPD unmaps a device
Lad Prabhakar (2): arm64: dts: renesas: hihope: Drop #sound-dai-cells pinctrl: renesas: Select PINCTRL_RZG2L for RZ/V2H(P) SoC
Leo Yan (1): perf probe: Correct demangled symbols in C++ program
Leon Hwang (1): bpf, bpftool: Fix incorrect disasm pc
Levi Yun (2): trace/trace_event_perf: remove duplicate samples on the first tracepoint event perf stat: Close cork_fd when create_perf_stat_counter() failed
Li Huafei (6):
      crypto: inside-secure - Fix the return value of safexcel_xcbcmac_cra_init()
      media: atomisp: Add check for rgby_data memory allocation failure
      drm/nouveau/gr/gf100: Fix missing unlock in gf100_gr_chan_new()
      drm/amdgpu: Fix the memory allocation issue in amdgpu_discovery_get_nps_info()
      perf disasm: Use disasm_line__free() to properly free disasm_line
      perf disasm: Fix not cleaning up disasm_line in symbol__disassemble_raw()
Li Lingfeng (1): nfs: ignore SB_RDONLY when mounting nfs
Li Wang (1): loop: fix type of block size
Lifeng Zheng (1): ACPI: CPPC: Fix _CPC register setting issue
Lijo Lazar (2): drm/amdgpu: Fix JPEG v4.0.3 register write drm/amdgpu: Fix map/unmap queue logic
Lingbo Kong (1): wifi: cfg80211: Remove the Medium Synchronization Delay validity check
Linus Walleij (2): drm/panel: nt35510: Make new commands optional wifi: cw1200: Fix potential NULL dereference
Liu Jian (3): RDMA/rxe: Set queue pair cur_qp_state when being queried sunrpc: clear XPRT_SOCK_UPD_TIMEOUT when reset transport sunrpc: fix one UAF issue caused by sunrpc kernel tcp socket
Liu Shixin (1): zram: fix NULL pointer in comp_algorithm_show()
Long Li (2): ext4: fix race in buffer_head read fault injection f2fs: fix race in concurrent f2fs_stop_gc_thread
LongPing Wei (1): f2fs: fix the wrong f2fs_bug_on condition in f2fs_do_replace_block
Lorenzo Bianconi (8):
      clk: en7523: remove REG_PCIE*_{MEM,MEM_MASK} configuration
      clk: en7523: move clock_register in hw_init callback
      clk: en7523: introduce chip_scu regmap
      clk: en7523: fix estimation of fixed rate for EN7581
      phy: airoha: Fix REG_CSR_2L_PLL_CMN_RESERVE0 config in airoha_pcie_phy_init_clk_out()
      phy: airoha: Fix REG_PCIE_PMA_TX_RESET config in airoha_pcie_phy_init_csr_2l()
      phy: airoha: Fix REG_CSR_2L_JCPLL_SDM_HREN config in airoha_pcie_phy_init_ssc_jcpll()
      phy: airoha: Fix REG_CSR_2L_RX{0,1}_REV0 definitions
Luca Weiss (1): arm64: dts: qcom: sm6350: Fix GPU frequencies missing on some speedbins
Lucas Stach (1): drm/etnaviv: hold GPU lock across perfmon sampling
Luiz Augusto von Dentz (3): Bluetooth: ISO: Use kref to track lifetime of iso_conn Bluetooth: MGMT: Fix slab-use-after-free Read in set_powered_sync Bluetooth: MGMT: Fix possible deadlocks
Lukas Bulwahn (1): clk: mediatek: drop two dead config options
Lukas Wunner (1): PCI: Fix use-after-free of slot->bus on hot remove
Lukasz Luba (1): drm/msm/gpu: Check the status of registration to PM QoS
Luo Qiu (1): firmware: arm_scpi: Check the DVFS OPP count returned by the firmware
Ma Wupeng (1): ipc: fix memleak if msg_init_ns failed in create_ipc_ns
Macpaul Lin (5):
      arm64: dts: mediatek: mt8395-genio-1200-evk: Fix dtbs_check error for phy
      arm64: dts: mt8195: Fix dtbs_check error for mutex node
      arm64: dts: mt8195: Fix dtbs_check error for infracfg_ao node
      arm64: dts: mediatek: mt6358: fix dtbs_check error
      ASoC: dt-bindings: mt6359: Update generic node name and dmic-mode
Manivannan Sadhasivam (3): PCI: qcom: Enable MSI interrupts together with Link up if 'Global IRQ' is supported PCI: qcom-ep: Move controller cleanups to qcom_pcie_perst_deassert() PCI: tegra194: Move controller cleanups to pex_ep_event_pex_rst_deassert()
Marc Zyngier (2): arm64: Expose ID_AA64ISAR1_EL1.XS to sanitised feature consumers KVM: arm64: vgic-v3: Sanitise guest writes to GICR_INVLPIR
Marco Elver (2): kcsan, seqlock: Support seqcount_latch_t kcsan, seqlock: Fix incorrect assumption in read_seqbegin()
Marcus Folkesson (1): mfd: da9052-spi: Change read-mask to write-mask
Marek Vasut (1): wifi: wilc1000: Set MAC after operation mode
Mario Limonciello (1): cpufreq/amd-pstate: Don't update CPPC request in amd_pstate_cpu_boost_update()
Mark Brown (2): kselftest/arm64: Fix encoding for SVE B16B16 test clocksource/drivers:sp804: Make user selectable
Martin Kaistra (1): wifi: rtl8xxxu: Perform update_beacon_work when beaconing is enabled
Masahiro Yamada (5):
      arm64: fix .data.rel.ro size assertion when CONFIG_LTO_CLANG
      s390/syscalls: Avoid creation of arch/arch/ directory
      Rename .data.unlikely to .data..unlikely
      Rename .data.once to .data..once to fix resetting WARN*_ONCE
      modpost: remove incorrect code in do_eisa_entry()
Matt Coster (1): drm/imagination: Use pvr_vm_context_get()
Matt Fleming (1): kbuild: deb-pkg: Don't fail if modules.order is missing
Matthew Rosato (1): iommu/s390: Implement blocking domain
Matthias Schiffer (1): drm: fsl-dcu: enable PIXCLK on LS1021A
Maurice Lambert (1): netlink: typographical error in nlmsg_type constants definition
Mauro Carvalho Chehab (2): MAINTAINERS: update location of media main tree docs: media: update location of the media patches
Maxime Chevallier (2): net: stmmac: dwmac-socfpga: Set RX watchdog interrupt as broken rtc: ab-eoz9: don't fail temperature reads on undervoltage notification
Maxime Ripard (1): drm/vc4: Introduce generation number enum
Maíra Canal (2): drm/v3d: Address race-condition in MMU flush drm/v3d: Flush the MMU before we supply more memory to the binner
Meetakshi Setiya (1): cifs: support mounting with alternate password to allow password rotation
Michael Chan (2): bnxt_en: Refactor bnxt_ptp_init() bnxt_en: Unregister PTP during PCI shutdown and suspend
Michael Ellerman (2): powerpc/pseries: Fix dtl_access_lock to be a rw_semaphore selftests/mount_setattr: Fix failures on 64K PAGE_SIZE kernels
Michael Grzeschik (1): usb: gadget: uvc: wake pump everytime we update the free list
Michael J. Ruhl (1): platform/x86/intel/pmt: allow user offset for PMT callbacks
Michael Petlan (1): perf trace: Keep exited threads for summary
Michal Luczaj (2): llc: Improve setsockopt() handling of malformed user input rxrpc: Improve setsockopt() handling of malformed user input
Michal Pecio (3): usb: xhci: Limit Stop Endpoint retries usb: xhci: Fix TD invalidation under pending Set TR Dequeue usb: xhci: Avoid queuing redundant Stop Endpoint commands
Michal Schmidt (1): rcu/srcutiny: don't return before reenabling preemption
Michal Simek (2): microblaze: Export xmb_manager functions dt-bindings: serial: rs485: Fix rs485-rts-delay property
Michal Suchanek (1): powerpc/sstep: make emulate_vsx_load and emulate_vsx_store static
Michal Vrastil (1): Revert "usb: gadget: composite: fix OS descriptors w_value logic"
Miguel Ojeda (4):
      time: Partially revert cleanup on msecs_to_jiffies() documentation
      time: Fix references to _msecs_to_jiffies() handling of values
      drm/panic: Select ZLIB_DEFLATE for DRM_PANIC_SCREEN_QR_CODE
      rust: rbtree: fix `SAFETY` comments that should be `# Safety` sections
Mike Snitzer (1): nfs_common: must not hold RCU while calling nfsd_file_put_local
Mikulas Patocka (1): blk-settings: round down io_opt to physical_block_size
Min-Hua Chen (1): regulator: qcom-smd: make smd_vreg_rpm static
Ming Lei (6):
      ublk: fix ublk_ch_mmap() for 64K page size
      ublk: fix error code for unsupported command
      blk-mq: add non_owner variant of start_freeze/unfreeze queue APIs
      block: model freeze & enter queue as lock for supporting lockdep
      block: always verify unfreeze lock on the owner task
      block: don't verify IO lock for freeze/unfreeze in elevator_init_mq()
Mingwei Zheng (1): net: rfkill: gpio: Add check for clk_enable()
Miquel Raynal (1): mtd: rawnand: atmel: Fix possible memory leak
Mirsad Todorovac (2): fs/proc/kcore.c: fix coccinelle reported ERROR instances net/9p/usbg: fix handling of the failed kzalloc() memory allocation
Muchun Song (3): block: fix missing dispatching request when queue is started or unquiesced block: fix ordering between checking QUEUE_FLAG_QUIESCED request adding block: fix ordering between checking BLK_MQ_S_STOPPED request adding
Murad Masimov (1): hwmon: (tps23861) Fix reporting of negative temperatures
Namhyung Kim (1): perf/arm-cmn: Ensure port and device id bits are set properly
Namjae Jeon (1): exfat: fix uninit-value in __exfat_get_dentry_set
Nathan Morrisson (1): arm64: dts: ti: k3-am62x-phyboard-lyra: Drop unnecessary McASP AFIFOs
NeilBrown (1): nfs/localio: must clear res.replen in nfs_local_read_done
Nicolas Bouchinet (1): tty: ldsic: fix tty_ldisc_autoload sysctl's proc_handler
Nicolin Chen (2): iommu/tegra241-cmdqv: Staticize cmdqv_debugfs_dir iommu/tegra241-cmdqv: Fix alignment failure at max_n_shift
Niklas Schnelle (2): watchdog: Add HAS_IOPORT dependency for SBC8360 and SBC7240 s390/pci: Fix potential double remove of hotplug slot
Nilay Shroff (1): nvme-fabrics: fix kernel crash while shutting down controller
Nirmoy Das (1): drm/xe/ufence: Wake up waiters after setting ufence->signalled
Nobuhiro Iwamatsu (1): rtc: abx80x: Fix WDT bit position of the status register
Nuno Sa (2): dt-bindings: clock: axi-clkgen: include AXI clk clk: clk-axi-clkgen: make sure to enable the AXI bus clock
Nícolas F. R. A. Prado (1): ASoC: mediatek: Check num_codecs is not zero to avoid panic during probe
Oleksij Rempel (3): net: usb: lan78xx: Fix double free issue with interrupt buffer allocation net: usb: lan78xx: Fix memory leak on device unplug by freeing PHY device net: usb: lan78xx: Fix refcounting and autosuspend on invalid WoL configuration
Oliver Neukum (2): usb: yurex: make waiting on yurex_write interruptible USB: chaoskey: fail open after removal
Oliver Upton (1): KVM: arm64: Don't retire aborted MMIO instruction
Omid Ehtemam-Haghighi (1): ipv6: Fix soft lockups in fib6_select_path under high next hop churn
Orange Kao (1): EDAC/igen6: Avoid segmentation fault on module unload
Pablo Sun (1): arm64: dts: mediatek: mt8188: Fix wrong clock provider in MFG1 power domain
Pali Rohár (2): cifs: Fix parsing native symlinks relative to the export cifs: Fix parsing reparse point with native symlink in SMB1 non-UNICODE session
Paolo Abeni (4):
      ipv6: release nexthop on device removal
      selftests: net: really check for bg process completion
      ip6mr: fix tables suspicious RCU usage
      ipmr: fix tables suspicious RCU usage
Paolo Bonzini (2): rust: macros: fix documentation of the paste! macro KVM: x86: switch hugepage recovery thread to vhost_task
Patrisious Haddad (1): RDMA/mlx5: Move events notifier registration to be after device registration
Patryk Wlazlyn (1): tools/power turbostat: Fix child's argument forwarding
Paul Aurich (5):
      smb: cached directories can be more than root file handle
      smb: Don't leak cfid when reconnect races with open_cached_dir
      smb: prevent use-after-free due to open_cached_dir error paths
      smb: During unmount, ensure all cached dir instances drop their dentry
      smb: Initialize cfid->tcon before performing network ops
Paulo Alcantara (3): smb: client: fix NULL ptr deref in crypto_aead_setkey() smb: client: fix use-after-free of signing key smb: client: handle max length for SMB symlinks
Pavan Chebbi (1): tg3: Set coherent DMA mask bits to 31 for BCM57766 chipsets
Pavel Begunkov (2): io_uring: fix corner case forgetting to vunmap io_uring: check for overflows in io_pin_pages
Pei Xiao (2): hwmon: (nct6775-core) Fix overflows seen when writing limit attributes wifi: rtw89: coex: check NULL return of kmalloc in btc_fw_set_monreg()
Peng Fan (3): clk: imx: lpcg-scu: SW workaround for errata (e10858) clk: imx: fracn-gppll: correct PLL initialization flow clk: imx: fracn-gppll: fix pll power up
Peter Große (1): i40e: Fix handling changed priv flags
Pin-yen Lin (3): arm64: dts: mt8183: Add port node to dpi node drm/bridge: anx7625: Drop EDID cache on bridge power off drm/bridge: it6505: Drop EDID cache on bridge power off
Po-Hao Huang (1): wifi: rtw89: Fix TX fail with A2DP after scanning
Priyanka Singh (1): EDAC/fsl_ddr: Fix bad bit shift operations
Qi Han (1): f2fs: compress: fix inconsistent update of i_blocks in release_compress_blocks and reserve_compress_blocks
Qingfang Deng (1): jffs2: fix use of uninitialized variable
Qiu-ji Chen (3): xen: Fix the issue of resource not being properly released in xenbus_dev_probe() ASoC: codecs: Fix atomicity violation in snd_soc_component_get_drvdata() media: wl128x: Fix atomicity violation in fmc_send_cmd()
Qiuxu Zhuo (2): EDAC/skx_common: Differentiate memory error sources EDAC/{skx_common,i10nm}: Fix incorrect far-memory error source indicator
Rafael J. Wysocki (7):
      thermal: core: Initialize thermal zones before registering them
      thermal: core: Rearrange PM notification code
      thermal: core: Represent suspend-related thermal zone flags as bits
      thermal: core: Mark thermal zones as initializing to start with
      thermal: core: Fix race between zone registration and system suspend
      thermal: testing: Use DEFINE_FREE() and __free() to simplify code
      thermal: testing: Initialize some variables annoteded with _free()
Raghavendra Rao Ananta (2): KVM: arm64: Ignore PMCNTENSET_EL0 while checking for overflow status KVM: arm64: Get rid of userspace_irqchip_in_use
Ralph Boehme (1): fs/smb/client: implement chmod() for SMB3 POSIX Extensions
Rameshkumar Sundaram (1): wifi: ath12k: fix use-after-free in ath12k_dp_cc_cleanup()
Ramya Gnanasekar (1): wifi: ath12k: Skip Rx TID cleanup for self peer
Randy Dunlap (2): kernel-doc: allow object-like macros in ReST output fs_parser: update mount_api doc to match function signature
Raviteja Laggyshetty (1): interconnect: qcom: icc-rpmh: probe defer incase of missing QoS clock dependency
Raymond Hackley (1): leds: ktd2692: Set missing timing properties
Reinette Chatre (3): selftests/resctrl: Print accurate buffer size as part of MBM results selftests/resctrl: Fix memory overflow due to unhandled wraparound selftests/resctrl: Protect against array overrun during iMC config parsing
Ritesh Harjani (IBM) (3): powerpc/fadump: Refactor and prepare fadump_cma_init for late init powerpc/fadump: Move fadump_cma_init to setup_arch() after initmem_init() powerpc/mm/fault: Fix kfence page fault reporting
Roman Li (1): drm/amd/display: Increase idle worker HPD detection time
Rosen Penev (1): net: mdio-ipq4019: add missing error check
Russell King (Oracle) (1): irqchip/irq-mvebu-sei: Move misplaced select() callback to SEI CP domain
Ryan Walklin (1): drm: panel: nv3052c: correct spi_device_id for RG35XX panel
Sabyrzhan Tasbolatov (1): kasan: move checks to do_strncpy_from_user
Sai Kumar Cholleti (1): gpio: exar: set value when external pull-up or pull-down is present
Sakari Ailus (1): media: ipu6: Fix DMA and physical address debugging messages for 32-bit
Samuel Holland (1): irqchip/riscv-aplic: Prevent crash when MSI domain is missing
Saravanan Vajravel (1): bnxt_en: Reserve rings after PCIe AER recovery if NIC interface is down
Sascha Hauer (1): wifi: mwifiex: add missing locking for cfg80211 calls
Sean Anderson (1): drm: zynqmp_kms: Unplug DRM device before removal
Sean Christopherson (3): KVM: x86/mmu: Skip the "try unsync" path iff the old SPTE was a leaf SPTE KVM: x86: Break CONFIG_KVM_X86's direct dependency on KVM_INTEL || KVM_AMD Revert "KVM: VMX: Move LOAD_IA32_PERF_GLOBAL_CTRL errata handling out of setup_vmcs_config()"
Sebastian Andrzej Siewior (2): locking/rt: Add sparse annotation PREEMPT_RT's sleeping locks. x86/CPU/AMD: Terminate the erratum_1386_microcode array
Selvarasu Ganesan (1): usb: dwc3: gadget: Add missing check for single port RAM in TxFIFO resizing logic
Sergey Senozhatsky (1): zram: permit only one post-processing operation at a time
Sergio Paracuellos (2): clk: ralink: mtmips: fix clock plan for Ralink SoC RT3883 clk: ralink: mtmips: fix clocks probe order in oldest ralink SoCs
Shengjiu Wang (1): ASoC: fsl_micfil: fix regmap_write_bits usage
Shravya KN (2): bnxt_en: Set backplane link modes correctly for ethtool bnxt_en: Fix receive ring space parameters when XDP is active
Shyam Prasad N (1): cifs: during remount, make sure passwords are in sync
Si-Wei Liu (1): vdpa/mlx5: Fix suboptimal range on iotlb iteration
Sibi Sankar (2): arm64: dts: qcom: x1e80100: Resize GIC Redistributor register region remoteproc: qcom_q6v5_mss: Re-order writes to the IMEM region
Siddharth Vadapalli (1): PCI: j721e: Deassert PERST# after a delay of PCIE_T_PVPERL_MS milliseconds
Sidraya Jayagond (1): s390/iucv: MSG_PEEK causes memory leak in iucv_sock_destruct()
Somnath Kotur (1): bnxt_en: Fix queue start to update vnic RSS table
Sourabh Jain (1): fadump: reserve param area if below boot_mem_top
Srikar Dronamraju (1): gpio: sloppy-logic-analyzer remove reference to rcu_momentary_dyntick_idle()
Srinivasan Shanmugam (2): drm/amdgpu/gfx9: Add Cleaner Shader Deinitialization in gfx_v9_0 Module drm/amdkfd: Use dynamic allocation for CU occupancy array in 'kfd_get_cu_occupancy()'
Stafford Horne (1): openrisc: Implement fixmap to fix earlycon
Stanislaw Gruszka (4):
      spi: Fix acpi deferred irq probe
      media: intel/ipu6: do not handle interrupts when device is disabled
      usb: misc: ljca: set small runtime autosuspend delay
      usb: misc: ljca: move usb_autopm_put_interface() after wait for response
Steffen Dirkwinkel (1): drm: xlnx: zynqmp_disp: layer may be null while releasing
Stephen Boyd (1): clk: Allow kunit tests to run without OF_OVERLAY enabled
Steve French (1): smb3: request handle caching when caching directories
Steven 'Steve' Kendall (1): drm/radeon: Fix spurious unplug event on radeon HDMI
Steven Price (1): drm/panfrost: Remove unused id_mask from struct panfrost_model
Suraj Kandpal (1): drm/xe/hdcp: Fix gsc structure check in fw check status
Takahiro Kuwano (1): mtd: spi-nor: spansion: Use nor->addr_nbytes in octal DTR mode in RD_ANY_REG_OP
Takashi Iwai (9):
      ALSA: usx2y: Use snd_card_free_when_closed() at disconnection
      ALSA: us122l: Use snd_card_free_when_closed() at disconnection
      ALSA: caiaq: Use snd_card_free_when_closed() at disconnection
      ALSA: 6fire: Release resources at card release
      ALSA: usb-audio: Fix out of bounds reads when finding clock sources
      ALSA: rawmidi: Fix kvfree() call in spinlock
      ALSA: ump: Fix evaluation of MIDI 1.0 FB info
      ALSA: pcm: Add sanity NULL check for the default mmap fault handler
      ALSA: hda/realtek: Apply quirk for Medion E15433
Tamir Duberstein (1): checkpatch: always parse orig_commit in fixes tag
Tao Chen (1): libbpf: Fix expected_attach_type set handling in program load callback
Tejun Heo (1): sched_ext: scx_bpf_dispatch_from_dsq_set_*() are allowed from unlocked context
Thadeu Lima de Souza Cascardo (1): hfsplus: don't query the device logical block size multiple times
Theodore Ts'o (1): ext4: fix FS_IOC_GETFSMAP handling
Thinh Nguyen (3): usb: dwc3: ep0: Don't clear ep0 DWC3_EP_TRANSFER_STARTED usb: dwc3: gadget: Fix checking for number of TRBs left usb: dwc3: gadget: Fix looping of queued SG entries
Thomas Falcon (1): perf mem: Fix printing PERF_MEM_LVLNUM_{L2_MHB|MSC}
Thomas Gleixner (2): timers: Add missing READ_ONCE() in __run_timer_base() sched/ext: Remove sched_fork() hack
Thomas Richter (1): s390/cpum_sf: Fix and protect memory allocation of SDBs with mutex
Thomas Weißschuh (1): tools/nolibc: s390: include std.h
Tiezhu Yang (2): LoongArch: Fix build failure with GCC 15 (-std=gnu23) LoongArch: BPF: Sign-extend return values
Tiwei Bie (7):
      um: ubd: Do not use drvdata in release
      um: net: Do not use drvdata in release
      um: vector: Do not use drvdata in release
      um: Fix potential integer overflow during physmem setup
      um: Fix the return value of elf_core_copy_task_fpregs
      um: ubd: Initialize ubd's disk pointer in ubd_add
      um: Always dump trace for specified task in show_stack
Todd Kjos (1): PCI: Fix reset_method_store() memory leak
Tomas Glozar (1): rtla/timerlat: Do not set params->user_workload with -U
Tomas Paukrt (1): crypto: mxs-dcp - Fix AES-CBC with hardware-bound keys
Tomasz Maciej Nowak (1): arm64: tegra: p2180: Add mandatory compatible for WiFi node
Tomi Valkeinen (3): drm/omap: Fix possible NULL dereference drm/omap: Fix locking in omap_gem_new_dmabuf() drm/bridge: tc358767: Fix link properties discovery
Tony Ambardar (1): libbpf: Fix output .symtab byte-order during linking
Trond Myklebust (2): NFSv4.0: Fix a use-after-free problem in the asynchronous open() Revert "nfs: don't reuse partially completed requests in nfs_lock_and_join_requests"
Tvrtko Ursulin (1): drm/v3d: Appease lockdep while updating GPU stats
Uladzislau Rezki (Sony) (2): rcu/kvfree: Fix data-race in __mod_timer / kvfree_call_rcu rcuscale: Do a proper cleanup if kfree_scale_init() fails
Uros Bizjak (3): cleanup: Remove address space of returned pointer locking/atomic/x86: Use ALT_OUTPUT_SP() for __alternative_atomic64() locking/atomic/x86: Use ALT_OUTPUT_SP() for __arch_{,try_}cmpxchg64_emu()
Usama Arif (1): of/fdt: add dt_phys arg to early_init_dt_scan and early_init_dt_verify
Uwe Kleine-König (1): pwm: Assume a disabled PWM to emit a constant inactive output
Vasant Hegde (1): iommu/amd/pgtbl_v2: Take protection domain lock before invalidating TLB
Venkata Prasad Potturu (1): ASoC: amd: yc: Fix for enabling DMIC on acp6x via _DSD entry
Veronika Molnarova (2): perf test attr: Add back missing topdown events perf dso: Fix symtab_type for kmod compression
Vijendar Mukunda (2): ASoC: amd: acp: fix for inconsistent indenting ASoC: amd: acp: fix for cpu dai index logic
Viktor Malik (1): selftests/bpf: skip the timer_lockup test for single-CPU nodes
Vineeth Vijayan (1): s390/cio: Do not unregister the subchannel based on DNV
Vitalii Mordan (2): marvell: pxa168_eth: fix call balance of pep->clk handling routines usb: ehci-spear: fix call balance of sehci clk handling routines
Vitaly Kuznetsov (1): HID: hyperv: streamline driver probe to avoid devres issues
Wang Hai (1): crypto: qat - Fix missing destroy_workqueue in adf_init_aer()
Waqar Hameed (1): ubifs: authentication: Fix use-after-free in ubifs_tnc_end_commit
Weili Qian (1): crypto: hisilicon/qm - disable same error report before resetting
Will Deacon (1): arm64: tls: Fix context-switching of tpidrro_el0 when kpti is enabled
Wolfram Sang (2): ARM: dts: renesas: genmai: Fix partition size for QSPI NOR Flash rtc: rzn1: fix BCD to rtc_time conversion errors
Xiaolei Wang (1): drm/etnaviv: Request pages from DMA32 zone on addressing_limited
Xiuhong Wang (1): f2fs: fix fiemap failure issue when page size is 16KB
Xu Kuohai (3): bpf, arm64: Remove garbage frame for struct_ops trampoline bpf: Use function pointers count as struct_ops links count bpf: Add kernel symbol for struct_ops trampoline
Yang Erkun (3): brd: defer automatic disk creation until module initialization succeeds nfsd: release svc_expkey/svc_export with rcu_work SUNRPC: make sure cache entry active before cache_show
Yang Wang (1): drm/amdgpu: fix ACA bank count boundary check error
Yang Yingliang (3): clk: imx: imx8-acm: Fix return value check in clk_imx_acm_attach_pm_domains() mailbox: mtk-cmdq: fix wrong use of sizeof in cmdq_get_clocks() iio: backend: fix wrong pointer passed to IS_ERR()
Yao Zi (1): platform/x86: panasonic-laptop: Return errno correctly in show callback
Ye Bin (3): scsi: bfa: Fix use-after-free in bfad_im_module_exit() f2fs: fix null-ptr-deref in f2fs_submit_page_bio() svcrdma: fix miss destroy percpu_counter in svc_rdma_proc_init()
Yi Yang (1): crypto: pcrypt - Call crypto layer directly when padata_do_parallel() return -EBUSY
Yicong Yang (1): perf build: Add missing cflags when building with custom libtraceevent
Yihang Li (1): scsi: hisi_sas: Enable all PHYs that are not disabled by user during controller reset
Yishai Hadas (2): vfio/mlx5: Fix an unwind issue in mlx5vf_add_migration_pages() vfio/mlx5: Fix unwind flows in mlx5vf_pci_save/resume_device_data()
Yong-Xuan Wang (1): RISC-V: KVM: Fix APLIC in_clrip and clripnum write emulation
Yongliang Gao (1): rtc: check if __rtc_read_time was successful in rtc_timer_do_work()
Yongpeng Yang (1): f2fs: check curseg->inited before write_sum_page in change_curseg
Yu Kuai (2): block: fix uaf for flush rq while iterating tags block, bfq: fix bfqq uaf in bfq_limit_depth()
Yuan Can (5):
      firmware: google: Unregister driver_info on failure
      wifi: wfx: Fix error handling in wfx_core_init()
      drm/amdkfd: Fix wrong usage of INIT_WORK()
      cpufreq: loongson2: Unregister platform_driver on failure
      Input: cs40l50 - fix wrong usage of INIT_WORK()
Yuan Chen (1): bpf: Fix the xdp_adjust_tail sample prog issue
Yuezhang Mo (2): exfat: fix file being changed by unaligned direct write exfat: fix out-of-bounds access of directory entries
Yunseong Kim (1): ksmbd: fix use-after-free in SMB request handling
Yutaro Ohno (1): rust: kernel: fix THIS_MODULE header path in ThisModule doc comment
Yuyu Li (1): RDMA/hns: Modify debugfs name
Zach Wade (1): Revert "block, bfq: merge bfq_release_process_ref() into bfq_put_cooperator()"
Zeng Heng (2): scsi: fusion: Remove unused variable 'rc' f2fs: Fix not used variable 'index'
Zhang Changzhong (1): mfd: rt5033: Fix missing regmap_del_irq_chip()
Zhang Rui (1): tools/power turbostat: Fix trailing '\n' parsing
Zhang Xianwei (1): brd: decrease the number of allocated pages which discarded
Zhang Zekun (2): pmdomain: ti-sci: Add missing of_node_put() for args.np powerpc/kexec: Fix return of uninitialized variable
ZhangPeng (1): hostfs: Fix the NULL vs IS_ERR() bug for __filemap_get_folio()
Zhen Lei (3): scsi: qedf: Fix a possible memory leak in qedf_alloc_and_init_sb() scsi: qedi: Fix a possible memory leak in qedi_alloc_and_init_sb() fbdev: sh7760fb: Fix a possible memory leak in sh7760fb_alloc_mem()
Zheng Yejian (1): x86/unwind/orc: Fix unwind for newly forked tasks
Zhenzhong Duan (2): iommu/vt-d: Fix checks and print in dmar_fault_dump_ptes() iommu/vt-d: Fix checks and print in pgtable_walk()
Zhiguo Niu (1): f2fs: fix to avoid use GC_AT when setting gc_mode as GC_URGENT_LOW or GC_URGENT_MID
Zhihao Cheng (4):
      ubi: wl: Put source PEB into correct list if trying locking LEB failed
      ubi: fastmap: wl: Schedule fm_work if wear-leveling pool is empty
      ubifs: Correct the total block count by deducting journal reservation
      ubi: fastmap: Fix duplicate slab cache names while attaching
Zhongqiu Han (1): PCI: endpoint: epf-mhi: Avoid NULL dereference if DT lacks 'mmio'
Zhu Yanjun (1): RDMA/rxe: Fix the qp flush warnings in req
Zichen Xie (3): drm/msm/dpu: cast crtc_clk calculation to u64 in _dpu_core_perf_calc_clk() clk: sophgo: avoid integer overflow in sg2042_pll_recalc_rate() ALSA: core: Fix possible NULL dereference caused by kunit_kzalloc()
Zicheng Qu (3): drm/amd/display: Fix null check for pipe_ctx->plane_state in dcn20_program_pipe drm/amd/display: Fix null check for pipe_ctx->plane_state in hwss_setup_dpp iio: gts: Fix uninitialized symbol 'ret'
Zijian Zhang (9):
      selftests/bpf: Fix msg_verify_data in test_sockmap
      selftests/bpf: Fix txmsg_redir of test_txmsg_pull in test_sockmap
      selftests/bpf: Add txmsg_pass to pull/push/pop in test_sockmap
      selftests/bpf: Fix SENDPAGE data logic in test_sockmap
      selftests/bpf: Fix total_bytes in msg_loop_rx in test_sockmap
      selftests/bpf: Add push/pop checking for msg_verify_data in test_sockmap
      bpf, sockmap: Several fixes to bpf_msg_push_data
      bpf, sockmap: Several fixes to bpf_msg_pop_data
      bpf, sockmap: Fix sk_msg_reset_curr
Ziwei Xiao (1): gve: Flow steering trigger reset only for timeout error
Zizhi Wo (4):
      cachefiles: Fix incorrect length return value in cachefiles_ondemand_fd_write_iter()
      cachefiles: Fix missing pos updates in cachefiles_ondemand_fd_write_iter()
      cachefiles: Fix NULL pointer dereference in object->file
      netfs/fscache: Add a memory barrier for FSCACHE_VOLUME_CREATING
Zong-Zhe Yang (7):
      wifi: rtw89: rename rtw89_vif to rtw89_vif_link ahead for MLO
      wifi: rtw89: rename rtw89_sta to rtw89_sta_link ahead for MLO
      wifi: rtw89: read bss_conf corresponding to the link
      wifi: rtw89: read link_sta corresponding to the link
      wifi: rtw89: refactor VIF related func ahead for MLO
      wifi: rtw89: refactor STA related func ahead for MLO
      wifi: rtw89: tweak driver architecture for impending MLO support
Zqiang (1): rcu/nocb: Fix missed RCU barrier on deoffloading
chao liu (1): apparmor: fix 'Do simple duplicate message elimination'
guanjing (1): selftests: netfilter: Fix missing return values in conntrack_dump_flush
wenglianfa (3): RDMA/hns: Fix an AEQE overflow error caused by untimely update of eq_db_ci RDMA/hns: Fix flush cqe error when racing with destroy qp RDMA/hns: Fix cpu stuck caused by printings during reset
zhang jiao (1): pinctrl: k210: Undef K210_PC_DEFAULT
diff --git a/Documentation/ABI/testing/sysfs-fs-f2fs b/Documentation/ABI/testing/sysfs-fs-f2fs index fdedf1ea944b..513296bb6f29 100644 --- a/Documentation/ABI/testing/sysfs-fs-f2fs +++ b/Documentation/ABI/testing/sysfs-fs-f2fs @@ -311,10 +311,13 @@ Description: Do background GC aggressively when set. Set to 0 by default. GC approach and turns SSR mode on. gc urgent low(2): lowers the bar of checking I/O idling in order to process outstanding discard commands and GC a - little bit aggressively. uses cost benefit GC approach. + little bit aggressively. always uses cost benefit GC approach, + and will override age-threshold GC approach if ATGC is enabled + at the same time. gc urgent mid(3): does GC forcibly in a period of given gc_urgent_sleep_time and executes a mid level of I/O idling check. - uses cost benefit GC approach. + always uses cost benefit GC approach, and will override + age-threshold GC approach if ATGC is enabled at the same time.
What: /sys/fs/f2fs/<disk>/gc_urgent_sleep_time Date: August 2017 diff --git a/Documentation/RCU/stallwarn.rst b/Documentation/RCU/stallwarn.rst index ca7b7cd806a1..30080ff6f406 100644 --- a/Documentation/RCU/stallwarn.rst +++ b/Documentation/RCU/stallwarn.rst @@ -249,7 +249,7 @@ ticks this GP)" indicates that this CPU has not taken any scheduling-clock interrupts during the current stalled grace period.
The "idle=" portion of the message prints the dyntick-idle state. -The hex number before the first "/" is the low-order 12 bits of the +The hex number before the first "/" is the low-order 16 bits of the dynticks counter, which will have an even-numbered value if the CPU is in dyntick-idle mode and an odd-numbered value otherwise. The hex number between the two "/"s is the value of the nesting, which will be diff --git a/Documentation/admin-guide/blockdev/zram.rst b/Documentation/admin-guide/blockdev/zram.rst index 678d70d6e1c3..714a5171bfc0 100644 --- a/Documentation/admin-guide/blockdev/zram.rst +++ b/Documentation/admin-guide/blockdev/zram.rst @@ -47,6 +47,8 @@ The list of possible return codes: -ENOMEM zram was not able to allocate enough memory to fulfil your needs. -EINVAL invalid input has been provided. +-EAGAIN re-try operation later (e.g. when attempting to run recompress + and writeback simultaneously). ======== =============================================================
If you use 'echo', the returned value is set by the 'echo' utility, diff --git a/Documentation/admin-guide/media/building.rst b/Documentation/admin-guide/media/building.rst index a06473429916..7a413ba07f93 100644 --- a/Documentation/admin-guide/media/building.rst +++ b/Documentation/admin-guide/media/building.rst @@ -15,7 +15,7 @@ Please notice, however, that, if:
you should use the main media development tree ``master`` branch:
- https://git.linuxtv.org/media_tree.git/ + https://git.linuxtv.org/media.git/
In this case, you may find some useful information at the `LinuxTv wiki pages https://linuxtv.org/wiki`_: diff --git a/Documentation/admin-guide/media/saa7134.rst b/Documentation/admin-guide/media/saa7134.rst index 51eae7eb5ab7..18d7cbc897db 100644 --- a/Documentation/admin-guide/media/saa7134.rst +++ b/Documentation/admin-guide/media/saa7134.rst @@ -67,7 +67,7 @@ Changes / Fixes Please mail to linux-media AT vger.kernel.org unified diffs against the linux media git tree:
- https://git.linuxtv.org/media_tree.git/ + https://git.linuxtv.org/media.git/
This is done by committing a patch at a clone of the git tree and submitting the patch using ``git send-email``. Don't forget to diff --git a/Documentation/arch/x86/boot.rst b/Documentation/arch/x86/boot.rst index 4fd492cb4970..ad2d8ddad27f 100644 --- a/Documentation/arch/x86/boot.rst +++ b/Documentation/arch/x86/boot.rst @@ -896,10 +896,19 @@ Offset/size: 0x260/4
The kernel runtime start address is determined by the following algorithm::
- if (relocatable_kernel) - runtime_start = align_up(load_address, kernel_alignment) - else - runtime_start = pref_address + if (relocatable_kernel) { + if (load_address < pref_address) + load_address = pref_address; + runtime_start = align_up(load_address, kernel_alignment); + } else { + runtime_start = pref_address; + } + +Hence the necessary memory window location and size can be estimated by +a boot loader as:: + + memory_window_start = runtime_start; + memory_window_size = init_size;
============ =============== Field name: handover_offset diff --git a/Documentation/devicetree/bindings/cache/qcom,llcc.yaml b/Documentation/devicetree/bindings/cache/qcom,llcc.yaml index 68ea5f70b75f..ee7edc6f60e2 100644 --- a/Documentation/devicetree/bindings/cache/qcom,llcc.yaml +++ b/Documentation/devicetree/bindings/cache/qcom,llcc.yaml @@ -39,11 +39,11 @@ properties:
reg: minItems: 2 - maxItems: 9 + maxItems: 10
reg-names: minItems: 2 - maxItems: 9 + maxItems: 10
interrupts: maxItems: 1 @@ -134,6 +134,36 @@ allOf: - qcom,qdu1000-llcc - qcom,sc8180x-llcc - qcom,sc8280xp-llcc + then: + properties: + reg: + items: + - description: LLCC0 base register region + - description: LLCC1 base register region + - description: LLCC2 base register region + - description: LLCC3 base register region + - description: LLCC4 base register region + - description: LLCC5 base register region + - description: LLCC6 base register region + - description: LLCC7 base register region + - description: LLCC broadcast base register region + reg-names: + items: + - const: llcc0_base + - const: llcc1_base + - const: llcc2_base + - const: llcc3_base + - const: llcc4_base + - const: llcc5_base + - const: llcc6_base + - const: llcc7_base + - const: llcc_broadcast_base + + - if: + properties: + compatible: + contains: + enum: - qcom,x1e80100-llcc then: properties: @@ -148,6 +178,7 @@ allOf: - description: LLCC6 base register region - description: LLCC7 base register region - description: LLCC broadcast base register region + - description: LLCC broadcast AND register region reg-names: items: - const: llcc0_base @@ -159,6 +190,7 @@ allOf: - const: llcc6_base - const: llcc7_base - const: llcc_broadcast_base + - const: llcc_broadcast_and_base
- if: properties: diff --git a/Documentation/devicetree/bindings/clock/adi,axi-clkgen.yaml b/Documentation/devicetree/bindings/clock/adi,axi-clkgen.yaml index 5e942bccf277..2b2041818a0a 100644 --- a/Documentation/devicetree/bindings/clock/adi,axi-clkgen.yaml +++ b/Documentation/devicetree/bindings/clock/adi,axi-clkgen.yaml @@ -26,9 +26,21 @@ properties: description: Specifies the reference clock(s) from which the output frequency is derived. This must either reference one clock if only the first clock - input is connected or two if both clock inputs are connected. - minItems: 1 - maxItems: 2 + input is connected or two if both clock inputs are connected. The last + clock is the AXI bus clock that needs to be enabled so we can access the + core registers. + minItems: 2 + maxItems: 3 + + clock-names: + oneOf: + - items: + - const: clkin1 + - const: s_axi_aclk + - items: + - const: clkin1 + - const: clkin2 + - const: s_axi_aclk
'#clock-cells': const: 0 @@ -40,6 +52,7 @@ required: - compatible - reg - clocks + - clock-names - '#clock-cells'
additionalProperties: false @@ -50,5 +63,6 @@ examples: compatible = "adi,axi-clkgen-2.00.a"; #clock-cells = <0>; reg = <0xff000000 0x1000>; - clocks = <&osc 1>; + clocks = <&osc 1>, <&clkc 15>; + clock-names = "clkin1", "s_axi_aclk"; }; diff --git a/Documentation/devicetree/bindings/iio/dac/adi,ad3552r.yaml b/Documentation/devicetree/bindings/iio/dac/adi,ad3552r.yaml index fc8b97f82077..41fe00034742 100644 --- a/Documentation/devicetree/bindings/iio/dac/adi,ad3552r.yaml +++ b/Documentation/devicetree/bindings/iio/dac/adi,ad3552r.yaml @@ -30,7 +30,7 @@ properties: maxItems: 1
spi-max-frequency: - maximum: 30000000 + maximum: 66000000
reset-gpios: maxItems: 1 diff --git a/Documentation/devicetree/bindings/pci/mediatek-pcie-gen3.yaml b/Documentation/devicetree/bindings/pci/mediatek-pcie-gen3.yaml index 898c1be2d6a4..f05aab2b1add 100644 --- a/Documentation/devicetree/bindings/pci/mediatek-pcie-gen3.yaml +++ b/Documentation/devicetree/bindings/pci/mediatek-pcie-gen3.yaml @@ -149,7 +149,7 @@ allOf: then: properties: clocks: - minItems: 4 + minItems: 6
clock-names: items: @@ -178,7 +178,7 @@ allOf: then: properties: clocks: - minItems: 4 + minItems: 6
clock-names: items: @@ -207,6 +207,7 @@ allOf: properties: clocks: minItems: 4 + maxItems: 4
clock-names: items: diff --git a/Documentation/devicetree/bindings/pinctrl/samsung,pinctrl-wakeup-interrupt.yaml b/Documentation/devicetree/bindings/pinctrl/samsung,pinctrl-wakeup-interrupt.yaml index 4dfb49b0e07f..f82a3c7e6c29 100644 --- a/Documentation/devicetree/bindings/pinctrl/samsung,pinctrl-wakeup-interrupt.yaml +++ b/Documentation/devicetree/bindings/pinctrl/samsung,pinctrl-wakeup-interrupt.yaml @@ -91,14 +91,17 @@ allOf: - if: properties: compatible: - # Match without "contains", to skip newer variants which are still - # compatible with samsung,exynos7-wakeup-eint - enum: - - samsung,s5pv210-wakeup-eint - - samsung,exynos4210-wakeup-eint - - samsung,exynos5433-wakeup-eint - - samsung,exynos7-wakeup-eint - - samsung,exynos7885-wakeup-eint + oneOf: + # Match without "contains", to skip newer variants which are still + # compatible with samsung,exynos7-wakeup-eint + - enum: + - samsung,exynos4210-wakeup-eint + - samsung,exynos7-wakeup-eint + - samsung,s5pv210-wakeup-eint + - contains: + enum: + - samsung,exynos5433-wakeup-eint + - samsung,exynos7885-wakeup-eint then: properties: interrupts: diff --git a/Documentation/devicetree/bindings/serial/rs485.yaml b/Documentation/devicetree/bindings/serial/rs485.yaml index 9418fd66a8e9..b93254ad2a28 100644 --- a/Documentation/devicetree/bindings/serial/rs485.yaml +++ b/Documentation/devicetree/bindings/serial/rs485.yaml @@ -18,16 +18,15 @@ properties: description: prop-encoded-array <a b> $ref: /schemas/types.yaml#/definitions/uint32-array items: - items: - - description: Delay between rts signal and beginning of data sent in - milliseconds. It corresponds to the delay before sending data. - default: 0 - maximum: 100 - - description: Delay between end of data sent and rts signal in milliseconds. - It corresponds to the delay after sending data and actual release - of the line. - default: 0 - maximum: 100 + - description: Delay between rts signal and beginning of data sent in + milliseconds. It corresponds to the delay before sending data. + default: 0 + maximum: 100 + - description: Delay between end of data sent and rts signal in milliseconds. + It corresponds to the delay after sending data and actual release + of the line. + default: 0 + maximum: 100
rs485-rts-active-high: description: drive RTS high when sending (this is the default). diff --git a/Documentation/devicetree/bindings/sound/mt6359.yaml b/Documentation/devicetree/bindings/sound/mt6359.yaml index 23d411fc4200..128698630c86 100644 --- a/Documentation/devicetree/bindings/sound/mt6359.yaml +++ b/Documentation/devicetree/bindings/sound/mt6359.yaml @@ -23,8 +23,8 @@ properties: Indicates how many data pins are used to transmit two channels of PDM signal. 0 means two wires, 1 means one wire. Default value is 0. enum: - - 0 # one wire - - 1 # two wires + - 0 # two wires + - 1 # one wire
mediatek,mic-type-0: $ref: /schemas/types.yaml#/definitions/uint32 @@ -53,9 +53,9 @@ additionalProperties: false
examples: - | - mt6359codec: mt6359codec { - mediatek,dmic-mode = <0>; - mediatek,mic-type-0 = <2>; + mt6359codec: audio-codec { + mediatek,dmic-mode = <0>; + mediatek,mic-type-0 = <2>; };
... diff --git a/Documentation/devicetree/bindings/vendor-prefixes.yaml b/Documentation/devicetree/bindings/vendor-prefixes.yaml index b320a39de7fe..fbfce9b4ae6b 100644 --- a/Documentation/devicetree/bindings/vendor-prefixes.yaml +++ b/Documentation/devicetree/bindings/vendor-prefixes.yaml @@ -1013,6 +1013,8 @@ patternProperties: description: Shanghai Neardi Technology Co., Ltd. "^nec,.*": description: NEC LCD Technologies, Ltd. + "^neofidelity,.*": + description: Neofidelity Inc. "^neonode,.*": description: Neonode Inc. "^netgear,.*": diff --git a/Documentation/filesystems/mount_api.rst b/Documentation/filesystems/mount_api.rst index 317934c9e8fc..d92c276f1575 100644 --- a/Documentation/filesystems/mount_api.rst +++ b/Documentation/filesystems/mount_api.rst @@ -770,7 +770,8 @@ process the parameters it is given.
* ::
- bool fs_validate_description(const struct fs_parameter_description *desc); + bool fs_validate_description(const char *name, + const struct fs_parameter_description *desc);
This performs some validation checks on a parameter description. It returns true if the description is good and false if it is not. It will diff --git a/Documentation/locking/seqlock.rst b/Documentation/locking/seqlock.rst index bfda1a5fecad..ec6411d02ac8 100644 --- a/Documentation/locking/seqlock.rst +++ b/Documentation/locking/seqlock.rst @@ -153,7 +153,7 @@ Use seqcount_latch_t when the write side sections cannot be protected from interruption by readers. This is typically the case when the read side can be invoked from NMI handlers.
-Check `raw_write_seqcount_latch()` for more information. +Check `write_seqcount_latch()` for more information.
.. _seqlock_t: diff --git a/MAINTAINERS b/MAINTAINERS index b878ddc99f94..6bb4ec0c162a 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -701,7 +701,7 @@ M: Hans Verkuil hverkuil@xs4all.nl L: linux-media@vger.kernel.org S: Maintained W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/radio/radio-aimslab*
AIO @@ -809,7 +809,7 @@ ALLWINNER A10 CSI DRIVER M: Maxime Ripard mripard@kernel.org L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/allwinner,sun4i-a10-csi.yaml F: drivers/media/platform/sunxi/sun4i-csi/
@@ -818,7 +818,7 @@ M: Yong Deng yong.deng@magewell.com M: Paul Kocialkowski paul.kocialkowski@bootlin.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/allwinner,sun6i-a31-csi.yaml F: drivers/media/platform/sunxi/sun6i-csi/
@@ -826,7 +826,7 @@ ALLWINNER A31 ISP DRIVER M: Paul Kocialkowski paul.kocialkowski@bootlin.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/allwinner,sun6i-a31-isp.yaml F: drivers/staging/media/sunxi/sun6i-isp/ F: drivers/staging/media/sunxi/sun6i-isp/uapi/sun6i-isp-config.h @@ -835,7 +835,7 @@ ALLWINNER A31 MIPI CSI-2 BRIDGE DRIVER M: Paul Kocialkowski paul.kocialkowski@bootlin.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/allwinner,sun6i-a31-mipi-csi2.yaml F: drivers/media/platform/sunxi/sun6i-mipi-csi2/
@@ -3348,7 +3348,7 @@ ASAHI KASEI AK7375 LENS VOICE COIL DRIVER M: Tianshu Qiu tian.shu.qiu@intel.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/asahi-kasei,ak7375.yaml F: drivers/media/i2c/ak7375.c
@@ -3765,7 +3765,7 @@ M: Mauro Carvalho Chehab mchehab@kernel.org L: linux-media@vger.kernel.org S: Maintained W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/usb/dvb-usb-v2/az6007.c
AZTECH FM RADIO RECEIVER DRIVER @@ -3773,7 +3773,7 @@ M: Hans Verkuil hverkuil@xs4all.nl L: linux-media@vger.kernel.org S: Maintained W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/radio/radio-aztech*
B43 WIRELESS DRIVER @@ -3857,7 +3857,7 @@ M: Fabien Dessenne fabien.dessenne@foss.st.com L: linux-media@vger.kernel.org S: Supported W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/platform/st/sti/bdisp
BECKHOFF CX5020 ETHERCAT MASTER DRIVER @@ -4865,7 +4865,7 @@ M: Mauro Carvalho Chehab mchehab@kernel.org L: linux-media@vger.kernel.org S: Odd fixes W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/driver-api/media/drivers/bttv* F: drivers/media/pci/bt8xx/bttv*
@@ -4979,13 +4979,13 @@ M: Hans Verkuil hverkuil@xs4all.nl L: linux-media@vger.kernel.org S: Maintained W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/radio/radio-cadet*
CAFE CMOS INTEGRATED CAMERA CONTROLLER DRIVER L: linux-media@vger.kernel.org S: Orphan -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/admin-guide/media/cafe_ccic* F: drivers/media/platform/marvell/
@@ -5169,7 +5169,7 @@ M: Hans Verkuil hverkuil-cisco@xs4all.nl L: linux-media@vger.kernel.org S: Supported W: http://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/ABI/testing/debugfs-cec-error-inj F: Documentation/devicetree/bindings/media/cec/cec-common.yaml F: Documentation/driver-api/media/cec-core.rst @@ -5186,7 +5186,7 @@ M: Hans Verkuil hverkuil-cisco@xs4all.nl L: linux-media@vger.kernel.org S: Supported W: http://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/cec/cec-gpio.yaml F: drivers/media/cec/platform/cec-gpio/
@@ -5393,7 +5393,7 @@ CHRONTEL CH7322 CEC DRIVER M: Joe Tessler jrt@google.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/chrontel,ch7322.yaml F: drivers/media/cec/i2c/ch7322.c
@@ -5582,7 +5582,7 @@ M: Hans Verkuil hverkuil-cisco@xs4all.nl L: linux-media@vger.kernel.org S: Supported W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/pci/cobalt/
COCCINELLE/Semantic Patches (SmPL) @@ -6026,7 +6026,7 @@ M: Hans Verkuil hverkuil@xs4all.nl L: linux-media@vger.kernel.org S: Odd Fixes W: http://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/i2c/cs3308.c
CS5535 Audio ALSA driver @@ -6057,7 +6057,7 @@ M: Andy Walls awalls@md.metrocast.net L: linux-media@vger.kernel.org S: Maintained W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/pci/cx18/ F: include/uapi/linux/ivtv*
@@ -6066,7 +6066,7 @@ M: Hans Verkuil hverkuil@xs4all.nl L: linux-media@vger.kernel.org S: Maintained W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/common/cx2341x* F: include/media/drv-intf/cx2341x.h
@@ -6084,7 +6084,7 @@ M: Mauro Carvalho Chehab mchehab@kernel.org L: linux-media@vger.kernel.org S: Odd fixes W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/driver-api/media/drivers/cx88* F: drivers/media/pci/cx88/
@@ -6320,7 +6320,7 @@ DEINTERLACE DRIVERS FOR ALLWINNER H3 M: Jernej Skrabec jernej.skrabec@gmail.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/allwinner,sun8i-h3-deinterlace.yaml F: drivers/media/platform/sunxi/sun8i-di/
@@ -6447,7 +6447,7 @@ M: Hugues Fruchet hugues.fruchet@foss.st.com L: linux-media@vger.kernel.org S: Supported W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/platform/st/sti/delta
DENALI NAND DRIVER @@ -6855,7 +6855,7 @@ DONGWOON DW9714 LENS VOICE COIL DRIVER M: Sakari Ailus sakari.ailus@linux.intel.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/dongwoon,dw9714.yaml F: drivers/media/i2c/dw9714.c
@@ -6863,13 +6863,13 @@ DONGWOON DW9719 LENS VOICE COIL DRIVER M: Daniel Scally djrscally@gmail.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/i2c/dw9719.c
DONGWOON DW9768 LENS VOICE COIL DRIVER L: linux-media@vger.kernel.org S: Orphan -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/dongwoon,dw9768.yaml F: drivers/media/i2c/dw9768.c
@@ -6877,7 +6877,7 @@ DONGWOON DW9807 LENS VOICE COIL DRIVER M: Sakari Ailus sakari.ailus@linux.intel.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/dongwoon,dw9807-vcm.yaml F: drivers/media/i2c/dw9807-vcm.c
@@ -7860,7 +7860,7 @@ DSBR100 USB FM RADIO DRIVER M: Alexey Klimov klimov.linux@gmail.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/radio/dsbr100.c
DT3155 MEDIA DRIVER @@ -7868,7 +7868,7 @@ M: Hans Verkuil hverkuil@xs4all.nl L: linux-media@vger.kernel.org S: Odd Fixes W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/pci/dt3155/
DVB_USB_AF9015 MEDIA DRIVER @@ -7913,7 +7913,7 @@ S: Maintained W: https://linuxtv.org W: http://github.com/mkrufky Q: http://patchwork.linuxtv.org/project/linux-media/list/ -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/usb/dvb-usb/cxusb*
DVB_USB_EC168 MEDIA DRIVER @@ -8282,7 +8282,7 @@ M: Mauro Carvalho Chehab mchehab@kernel.org L: linux-media@vger.kernel.org S: Maintained W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/admin-guide/media/em28xx* F: drivers/media/usb/em28xx/
@@ -8578,7 +8578,7 @@ EXTRON DA HD 4K PLUS CEC DRIVER M: Hans Verkuil hverkuil@xs4all.nl L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/cec/usb/extron-da-hd-4k-plus/
EXYNOS DP DRIVER @@ -9400,7 +9400,7 @@ GALAXYCORE GC2145 SENSOR DRIVER M: Alain Volmat alain.volmat@foss.st.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/galaxycore,gc2145.yaml F: drivers/media/i2c/gc2145.c
@@ -9448,7 +9448,7 @@ M: Hans Verkuil hverkuil@xs4all.nl L: linux-media@vger.kernel.org S: Maintained W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/radio/radio-gemtek*
GENERIC ARCHITECTURE TOPOLOGY @@ -9830,56 +9830,56 @@ GS1662 VIDEO SERIALIZER M: Charles-Antoine Couret charles-antoine.couret@nexvision.fr L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/spi/gs1662.c
GSPCA FINEPIX SUBDRIVER M: Frank Zago frank@zago.net L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/usb/gspca/finepix.c
GSPCA GL860 SUBDRIVER M: Olivier Lorin o.lorin@laposte.net L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/usb/gspca/gl860/
GSPCA M5602 SUBDRIVER M: Erik Andren erik.andren@gmail.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/usb/gspca/m5602/
GSPCA PAC207 SONIXB SUBDRIVER M: Hans Verkuil hverkuil@xs4all.nl L: linux-media@vger.kernel.org S: Odd Fixes -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/usb/gspca/pac207.c
GSPCA SN9C20X SUBDRIVER M: Brian Johnson brijohn@gmail.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/usb/gspca/sn9c20x.c
GSPCA T613 SUBDRIVER M: Leandro Costantino lcostantino@gmail.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/usb/gspca/t613.c
GSPCA USB WEBCAM DRIVER M: Hans Verkuil hverkuil@xs4all.nl L: linux-media@vger.kernel.org S: Odd Fixes -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/usb/gspca/
GTP (GPRS Tunneling Protocol) @@ -9996,7 +9996,7 @@ M: Hans Verkuil hverkuil@xs4all.nl L: linux-media@vger.kernel.org S: Odd Fixes W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/usb/hdpvr/
HEWLETT PACKARD ENTERPRISE ILO CHIF DRIVER @@ -10503,7 +10503,7 @@ M: Jean-Christophe Trotin jean-christophe.trotin@foss.st.com L: linux-media@vger.kernel.org S: Supported W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/platform/st/sti/hva
HWPOISON MEMORY FAILURE HANDLING @@ -10531,7 +10531,7 @@ HYNIX HI556 SENSOR DRIVER M: Sakari Ailus sakari.ailus@linux.intel.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/i2c/hi556.c
HYNIX HI846 SENSOR DRIVER @@ -11502,7 +11502,7 @@ M: Dan Scally djrscally@gmail.com R: Tianshu Qiu tian.shu.qiu@intel.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/userspace-api/media/v4l/pixfmt-srggb10-ipu3.rst F: drivers/media/pci/intel/ipu3/
@@ -11523,7 +11523,7 @@ M: Bingbu Cao bingbu.cao@intel.com R: Tianshu Qiu tian.shu.qiu@intel.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/admin-guide/media/ipu6-isys.rst F: drivers/media/pci/intel/ipu6/
@@ -12036,7 +12036,7 @@ M: Hans Verkuil hverkuil@xs4all.nl L: linux-media@vger.kernel.org S: Maintained W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/radio/radio-isa*
ISAPNP @@ -12138,7 +12138,7 @@ M: Andy Walls awalls@md.metrocast.net L: linux-media@vger.kernel.org S: Maintained W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/admin-guide/media/ivtv* F: drivers/media/pci/ivtv/ F: include/uapi/linux/ivtv* @@ -12286,7 +12286,7 @@ M: Hans Verkuil hverkuil@xs4all.nl L: linux-media@vger.kernel.org S: Maintained W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/radio/radio-keene*
KERNEL AUTOMOUNTER @@ -13573,7 +13573,7 @@ MA901 MASTERKIT USB FM RADIO DRIVER M: Alexey Klimov klimov.linux@gmail.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/radio/radio-ma901.c
MAC80211 @@ -13868,7 +13868,7 @@ MAX2175 SDR TUNER DRIVER M: Ramesh Shanmugasundaram rashanmu@gmail.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/max2175.txt F: Documentation/userspace-api/media/drivers/max2175.rst F: drivers/media/i2c/max2175* @@ -14048,7 +14048,7 @@ M: Hans Verkuil hverkuil@xs4all.nl L: linux-media@vger.kernel.org S: Maintained W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/radio/radio-maxiradio*
MAXLINEAR ETHERNET PHY DRIVER @@ -14131,7 +14131,7 @@ M: Laurent Pinchart laurent.pinchart@ideasonboard.com L: linux-media@vger.kernel.org S: Supported W: https://www.linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/mc/ F: include/media/media-*.h F: include/uapi/linux/media.h @@ -14140,7 +14140,7 @@ MEDIA DRIVER FOR FREESCALE IMX PXP M: Philipp Zabel p.zabel@pengutronix.de L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/platform/nxp/imx-pxp.[ch]
MEDIA DRIVERS FOR ASCOT2E @@ -14149,7 +14149,7 @@ L: linux-media@vger.kernel.org S: Supported W: https://linuxtv.org W: http://netup.tv/ -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/dvb-frontends/ascot2e*
MEDIA DRIVERS FOR CXD2099AR CI CONTROLLERS @@ -14157,7 +14157,7 @@ M: Jasmin Jessich jasmin@anw.at L: linux-media@vger.kernel.org S: Maintained W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/dvb-frontends/cxd2099*
MEDIA DRIVERS FOR CXD2841ER @@ -14166,7 +14166,7 @@ L: linux-media@vger.kernel.org S: Supported W: https://linuxtv.org W: http://netup.tv/ -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/dvb-frontends/cxd2841er*
MEDIA DRIVERS FOR CXD2880 @@ -14174,7 +14174,7 @@ M: Yasunari Takiguchi Yasunari.Takiguchi@sony.com L: linux-media@vger.kernel.org S: Supported W: http://linuxtv.org/ -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/dvb-frontends/cxd2880/* F: drivers/media/spi/cxd2880*
@@ -14182,7 +14182,7 @@ MEDIA DRIVERS FOR DIGITAL DEVICES PCIE DEVICES L: linux-media@vger.kernel.org S: Orphan W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/pci/ddbridge/*
MEDIA DRIVERS FOR FREESCALE IMX @@ -14190,7 +14190,7 @@ M: Steve Longerbeam slongerbeam@gmail.com M: Philipp Zabel p.zabel@pengutronix.de L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/admin-guide/media/imx.rst F: Documentation/devicetree/bindings/media/imx.txt F: drivers/staging/media/imx/ @@ -14204,7 +14204,7 @@ M: Martin Kepplinger martin.kepplinger@puri.sm R: Purism Kernel Team kernel@puri.sm L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/admin-guide/media/imx7.rst F: Documentation/devicetree/bindings/media/nxp,imx-mipi-csi2.yaml F: Documentation/devicetree/bindings/media/nxp,imx7-csi.yaml @@ -14219,7 +14219,7 @@ L: linux-media@vger.kernel.org S: Supported W: https://linuxtv.org W: http://netup.tv/ -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/dvb-frontends/helene*
MEDIA DRIVERS FOR HORUS3A @@ -14228,7 +14228,7 @@ L: linux-media@vger.kernel.org S: Supported W: https://linuxtv.org W: http://netup.tv/ -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/dvb-frontends/horus3a*
MEDIA DRIVERS FOR LNBH25 @@ -14237,14 +14237,14 @@ L: linux-media@vger.kernel.org S: Supported W: https://linuxtv.org W: http://netup.tv/ -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/dvb-frontends/lnbh25*
MEDIA DRIVERS FOR MXL5XX TUNER DEMODULATORS L: linux-media@vger.kernel.org S: Orphan W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/dvb-frontends/mxl5xx*
MEDIA DRIVERS FOR NETUP PCI UNIVERSAL DVB devices @@ -14253,7 +14253,7 @@ L: linux-media@vger.kernel.org S: Supported W: https://linuxtv.org W: http://netup.tv/ -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/pci/netup_unidvb/*
MEDIA DRIVERS FOR NVIDIA TEGRA - VDE @@ -14261,7 +14261,7 @@ M: Dmitry Osipenko digetx@gmail.com L: linux-media@vger.kernel.org L: linux-tegra@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/nvidia,tegra-vde.yaml F: drivers/media/platform/nvidia/tegra-vde/
@@ -14270,7 +14270,7 @@ M: Jacopo Mondi jacopo@jmondi.org L: linux-media@vger.kernel.org L: linux-renesas-soc@vger.kernel.org S: Supported -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/renesas,ceu.yaml F: drivers/media/platform/renesas/renesas-ceu.c F: include/media/drv-intf/renesas-ceu.h @@ -14280,7 +14280,7 @@ M: Fabrizio Castro fabrizio.castro.jz@renesas.com L: linux-media@vger.kernel.org L: linux-renesas-soc@vger.kernel.org S: Supported -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/renesas,drif.yaml F: drivers/media/platform/renesas/rcar_drif.c
@@ -14289,7 +14289,7 @@ M: Laurent Pinchart laurent.pinchart@ideasonboard.com L: linux-media@vger.kernel.org L: linux-renesas-soc@vger.kernel.org S: Supported -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/renesas,fcp.yaml F: drivers/media/platform/renesas/rcar-fcp.c F: include/media/rcar-fcp.h @@ -14299,7 +14299,7 @@ M: Kieran Bingham kieran.bingham+renesas@ideasonboard.com L: linux-media@vger.kernel.org L: linux-renesas-soc@vger.kernel.org S: Supported -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/renesas,fdp1.yaml F: drivers/media/platform/renesas/rcar_fdp1.c
@@ -14308,7 +14308,7 @@ M: Niklas Söderlund niklas.soderlund@ragnatech.se L: linux-media@vger.kernel.org L: linux-renesas-soc@vger.kernel.org S: Supported -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/renesas,csi2.yaml F: Documentation/devicetree/bindings/media/renesas,isp.yaml F: Documentation/devicetree/bindings/media/renesas,vin.yaml @@ -14322,7 +14322,7 @@ M: Kieran Bingham kieran.bingham+renesas@ideasonboard.com L: linux-media@vger.kernel.org L: linux-renesas-soc@vger.kernel.org S: Supported -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/renesas,vsp1.yaml F: drivers/media/platform/renesas/vsp1/
@@ -14330,14 +14330,14 @@ MEDIA DRIVERS FOR ST STV0910 DEMODULATOR ICs L: linux-media@vger.kernel.org S: Orphan W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/dvb-frontends/stv0910*
MEDIA DRIVERS FOR ST STV6111 TUNER ICs L: linux-media@vger.kernel.org S: Orphan W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/dvb-frontends/stv6111*
MEDIA DRIVERS FOR STM32 - DCMI / DCMIPP @@ -14345,7 +14345,7 @@ M: Hugues Fruchet hugues.fruchet@foss.st.com M: Alain Volmat alain.volmat@foss.st.com L: linux-media@vger.kernel.org S: Supported -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/st,stm32-dcmi.yaml F: Documentation/devicetree/bindings/media/st,stm32-dcmipp.yaml F: drivers/media/platform/st/stm32/stm32-dcmi.c @@ -14357,7 +14357,7 @@ L: linux-media@vger.kernel.org S: Maintained W: https://linuxtv.org Q: http://patchwork.kernel.org/project/linux-media/list/ -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/admin-guide/media/ F: Documentation/devicetree/bindings/media/ F: Documentation/driver-api/media/ @@ -14933,7 +14933,7 @@ L: linux-media@vger.kernel.org L: linux-amlogic@lists.infradead.org S: Supported W: http://linux-meson.com/ -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/cec/amlogic,meson-gx-ao-cec.yaml F: drivers/media/cec/platform/meson/ao-cec-g12a.c F: drivers/media/cec/platform/meson/ao-cec.c @@ -14943,7 +14943,7 @@ M: Neil Armstrong neil.armstrong@linaro.org L: linux-media@vger.kernel.org L: linux-amlogic@lists.infradead.org S: Supported -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/amlogic,axg-ge2d.yaml F: drivers/media/platform/amlogic/meson-ge2d/
@@ -14959,7 +14959,7 @@ M: Neil Armstrong neil.armstrong@linaro.org L: linux-media@vger.kernel.org L: linux-amlogic@lists.infradead.org S: Supported -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/amlogic,gx-vdec.yaml F: drivers/staging/media/meson/vdec/
@@ -15557,7 +15557,7 @@ M: Hans Verkuil hverkuil@xs4all.nl L: linux-media@vger.kernel.org S: Odd Fixes W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/radio/radio-miropcm20*
MITSUMI MM8013 FG DRIVER @@ -15709,7 +15709,7 @@ MR800 AVERMEDIA USB FM RADIO DRIVER M: Alexey Klimov klimov.linux@gmail.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/radio/radio-mr800.c
MRF24J40 IEEE 802.15.4 RADIO DRIVER @@ -15776,7 +15776,7 @@ MT9M114 ONSEMI SENSOR DRIVER M: Laurent Pinchart laurent.pinchart@ideasonboard.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/onnn,mt9m114.yaml F: drivers/media/i2c/mt9m114.c
@@ -15784,7 +15784,7 @@ MT9P031 APTINA CAMERA SENSOR M: Laurent Pinchart laurent.pinchart@ideasonboard.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/aptina,mt9p031.yaml F: drivers/media/i2c/mt9p031.c F: include/media/i2c/mt9p031.h @@ -15793,7 +15793,7 @@ MT9T112 APTINA CAMERA SENSOR M: Jacopo Mondi jacopo@jmondi.org L: linux-media@vger.kernel.org S: Odd Fixes -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/i2c/mt9t112.c F: include/media/i2c/mt9t112.h
@@ -15801,7 +15801,7 @@ MT9V032 APTINA CAMERA SENSOR M: Laurent Pinchart laurent.pinchart@ideasonboard.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/mt9v032.txt F: drivers/media/i2c/mt9v032.c F: include/media/i2c/mt9v032.h @@ -15810,7 +15810,7 @@ MT9V111 APTINA CAMERA SENSOR M: Jacopo Mondi jacopo@jmondi.org L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/aptina,mt9v111.yaml F: drivers/media/i2c/mt9v111.c
@@ -17005,13 +17005,13 @@ OMNIVISION OV01A10 SENSOR DRIVER M: Bingbu Cao bingbu.cao@intel.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/i2c/ov01a10.c
OMNIVISION OV02A10 SENSOR DRIVER L: linux-media@vger.kernel.org S: Orphan -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/ovti,ov02a10.yaml F: drivers/media/i2c/ov02a10.c
@@ -17019,28 +17019,28 @@ OMNIVISION OV08D10 SENSOR DRIVER M: Jimmy Su jimmy.su@intel.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/i2c/ov08d10.c
OMNIVISION OV08X40 SENSOR DRIVER M: Jason Chen jason.z.chen@intel.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/i2c/ov08x40.c
OMNIVISION OV13858 SENSOR DRIVER M: Sakari Ailus sakari.ailus@linux.intel.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/i2c/ov13858.c
OMNIVISION OV13B10 SENSOR DRIVER M: Arec Kao arec.kao@intel.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/i2c/ov13b10.c
OMNIVISION OV2680 SENSOR DRIVER @@ -17048,7 +17048,7 @@ M: Rui Miguel Silva rmfrfs@gmail.com M: Hans de Goede hansg@kernel.org L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/ovti,ov2680.yaml F: drivers/media/i2c/ov2680.c
@@ -17056,7 +17056,7 @@ OMNIVISION OV2685 SENSOR DRIVER M: Shunqian Zheng zhengsq@rock-chips.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/ovti,ov2685.yaml F: drivers/media/i2c/ov2685.c
@@ -17066,14 +17066,14 @@ R: Sakari Ailus sakari.ailus@linux.intel.com R: Bingbu Cao bingbu.cao@intel.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/i2c/ov2740.c
OMNIVISION OV4689 SENSOR DRIVER M: Mikhail Rudenko mike.rudenko@gmail.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/ovti,ov4689.yaml F: drivers/media/i2c/ov4689.c
@@ -17081,7 +17081,7 @@ OMNIVISION OV5640 SENSOR DRIVER M: Steve Longerbeam slongerbeam@gmail.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/i2c/ov5640.c
OMNIVISION OV5647 SENSOR DRIVER @@ -17089,7 +17089,7 @@ M: Dave Stevenson dave.stevenson@raspberrypi.com M: Jacopo Mondi jacopo@jmondi.org L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/ovti,ov5647.yaml F: drivers/media/i2c/ov5647.c
@@ -17097,7 +17097,7 @@ OMNIVISION OV5670 SENSOR DRIVER M: Sakari Ailus sakari.ailus@linux.intel.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/ovti,ov5670.yaml F: drivers/media/i2c/ov5670.c
@@ -17105,7 +17105,7 @@ OMNIVISION OV5675 SENSOR DRIVER M: Sakari Ailus sakari.ailus@linux.intel.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/ovti,ov5675.yaml F: drivers/media/i2c/ov5675.c
@@ -17113,7 +17113,7 @@ OMNIVISION OV5693 SENSOR DRIVER M: Daniel Scally djrscally@gmail.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/ovti,ov5693.yaml F: drivers/media/i2c/ov5693.c
@@ -17121,21 +17121,21 @@ OMNIVISION OV5695 SENSOR DRIVER M: Shunqian Zheng zhengsq@rock-chips.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/i2c/ov5695.c
OMNIVISION OV64A40 SENSOR DRIVER M: Jacopo Mondi jacopo.mondi@ideasonboard.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/ovti,ov64a40.yaml F: drivers/media/i2c/ov64a40.c
OMNIVISION OV7670 SENSOR DRIVER L: linux-media@vger.kernel.org S: Orphan -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/ov7670.txt F: drivers/media/i2c/ov7670.c
@@ -17143,7 +17143,7 @@ OMNIVISION OV772x SENSOR DRIVER M: Jacopo Mondi jacopo@jmondi.org L: linux-media@vger.kernel.org S: Odd fixes -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/ovti,ov772x.yaml F: drivers/media/i2c/ov772x.c F: include/media/i2c/ov772x.h @@ -17151,7 +17151,7 @@ F: include/media/i2c/ov772x.h OMNIVISION OV7740 SENSOR DRIVER L: linux-media@vger.kernel.org S: Orphan -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/ov7740.txt F: drivers/media/i2c/ov7740.c
@@ -17159,7 +17159,7 @@ OMNIVISION OV8856 SENSOR DRIVER M: Sakari Ailus sakari.ailus@linux.intel.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/ovti,ov8856.yaml F: drivers/media/i2c/ov8856.c
@@ -17168,7 +17168,7 @@ M: Jacopo Mondi jacopo.mondi@ideasonboard.com M: Nicholas Roth nicholas@rothemail.net L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/ovti,ov8858.yaml F: drivers/media/i2c/ov8858.c
@@ -17176,7 +17176,7 @@ OMNIVISION OV9282 SENSOR DRIVER M: Dave Stevenson dave.stevenson@raspberrypi.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/ovti,ov9282.yaml F: drivers/media/i2c/ov9282.c
@@ -17192,7 +17192,7 @@ R: Akinobu Mita akinobu.mita@gmail.com R: Sylwester Nawrocki s.nawrocki@samsung.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/ov9650.txt F: drivers/media/i2c/ov9650.c
@@ -17201,7 +17201,7 @@ M: Tianshu Qiu tian.shu.qiu@intel.com R: Bingbu Cao bingbu.cao@intel.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/i2c/ov9734.c
ONBOARD USB HUB DRIVER @@ -18646,7 +18646,7 @@ PULSE8-CEC DRIVER M: Hans Verkuil hverkuil@xs4all.nl L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/cec/usb/pulse8/
PURELIFI PLFXLC DRIVER @@ -18661,7 +18661,7 @@ L: pvrusb2@isely.net (subscribers-only) L: linux-media@vger.kernel.org S: Maintained W: http://www.isely.net/pvrusb2/ -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/driver-api/media/drivers/pvrusb2* F: drivers/media/usb/pvrusb2/
@@ -18669,7 +18669,7 @@ PWC WEBCAM DRIVER M: Hans Verkuil hverkuil@xs4all.nl L: linux-media@vger.kernel.org S: Odd Fixes -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/usb/pwc/* F: include/trace/events/pwc.h
@@ -19173,7 +19173,7 @@ R: Bryan O'Donoghue bryan.odonoghue@linaro.org L: linux-media@vger.kernel.org L: linux-arm-msm@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/*venus* F: drivers/media/platform/qcom/venus/
@@ -19218,14 +19218,14 @@ RADIOSHARK RADIO DRIVER M: Hans Verkuil hverkuil@xs4all.nl L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/radio/radio-shark.c
RADIOSHARK2 RADIO DRIVER M: Hans Verkuil hverkuil@xs4all.nl L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/radio/radio-shark2.c F: drivers/media/radio/radio-tea5777.c
@@ -19249,7 +19249,7 @@ RAINSHADOW-CEC DRIVER M: Hans Verkuil hverkuil@xs4all.nl L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/cec/usb/rainshadow/
RALINK MIPS ARCHITECTURE @@ -19333,7 +19333,7 @@ M: Sean Young sean@mess.org L: linux-media@vger.kernel.org S: Maintained W: http://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/driver-api/media/rc-core.rst F: Documentation/userspace-api/media/rc/ F: drivers/media/rc/ @@ -20077,7 +20077,7 @@ ROTATION DRIVER FOR ALLWINNER A83T M: Jernej Skrabec jernej.skrabec@gmail.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/allwinner,sun8i-a83t-de2-rotate.yaml F: drivers/media/platform/sunxi/sun8i-rotate/
@@ -20331,7 +20331,7 @@ M: Hans Verkuil hverkuil@xs4all.nl L: linux-media@vger.kernel.org S: Odd Fixes W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/i2c/saa6588*
SAA7134 VIDEO4LINUX DRIVER @@ -20339,7 +20339,7 @@ M: Mauro Carvalho Chehab mchehab@kernel.org L: linux-media@vger.kernel.org S: Odd fixes W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/driver-api/media/drivers/saa7134* F: drivers/media/pci/saa7134/
@@ -20347,7 +20347,7 @@ SAA7146 VIDEO4LINUX-2 DRIVER M: Hans Verkuil hverkuil@xs4all.nl L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/common/saa7146/ F: drivers/media/pci/saa7146/ F: include/media/drv-intf/saa7146* @@ -20965,7 +20965,7 @@ SHARP RJ54N1CB0C SENSOR DRIVER M: Jacopo Mondi jacopo@jmondi.org L: linux-media@vger.kernel.org S: Odd fixes -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/i2c/rj54n1cb0c.c F: include/media/i2c/rj54n1cb0c.h
@@ -21015,7 +21015,7 @@ M: Hans Verkuil hverkuil@xs4all.nl L: linux-media@vger.kernel.org S: Odd Fixes W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/silabs,si470x.yaml F: drivers/media/radio/si470x/radio-si470x-i2c.c
@@ -21024,7 +21024,7 @@ M: Hans Verkuil hverkuil@xs4all.nl L: linux-media@vger.kernel.org S: Maintained W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/radio/si470x/radio-si470x-common.c F: drivers/media/radio/si470x/radio-si470x-usb.c F: drivers/media/radio/si470x/radio-si470x.h @@ -21034,7 +21034,7 @@ M: Eduardo Valentin edubezval@gmail.com L: linux-media@vger.kernel.org S: Odd Fixes W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/radio/si4713/si4713.?
SI4713 FM RADIO TRANSMITTER PLATFORM DRIVER @@ -21042,7 +21042,7 @@ M: Eduardo Valentin edubezval@gmail.com L: linux-media@vger.kernel.org S: Odd Fixes W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/radio/si4713/radio-platform-si4713.c
SI4713 FM RADIO TRANSMITTER USB DRIVER @@ -21050,7 +21050,7 @@ M: Hans Verkuil hverkuil@xs4all.nl L: linux-media@vger.kernel.org S: Maintained W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/radio/si4713/radio-usb-si4713.c
SIANO DVB DRIVER @@ -21058,7 +21058,7 @@ M: Mauro Carvalho Chehab mchehab@kernel.org L: linux-media@vger.kernel.org S: Odd fixes W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/common/siano/ F: drivers/media/mmc/siano/ F: drivers/media/usb/siano/ @@ -21434,14 +21434,14 @@ SONY IMX208 SENSOR DRIVER M: Sakari Ailus sakari.ailus@linux.intel.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/i2c/imx208.c
SONY IMX214 SENSOR DRIVER M: Ricardo Ribalda ribalda@kernel.org L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/sony,imx214.yaml F: drivers/media/i2c/imx214.c
@@ -21449,7 +21449,7 @@ SONY IMX219 SENSOR DRIVER M: Dave Stevenson dave.stevenson@raspberrypi.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/imx219.yaml F: drivers/media/i2c/imx219.c
@@ -21457,7 +21457,7 @@ SONY IMX258 SENSOR DRIVER M: Sakari Ailus sakari.ailus@linux.intel.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/sony,imx258.yaml F: drivers/media/i2c/imx258.c
@@ -21465,7 +21465,7 @@ SONY IMX274 SENSOR DRIVER M: Leon Luo leonl@leopardimaging.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/sony,imx274.yaml F: drivers/media/i2c/imx274.c
@@ -21474,7 +21474,7 @@ M: Kieran Bingham kieran.bingham@ideasonboard.com M: Umang Jain umang.jain@ideasonboard.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/sony,imx283.yaml F: drivers/media/i2c/imx283.c
@@ -21482,7 +21482,7 @@ SONY IMX290 SENSOR DRIVER M: Manivannan Sadhasivam manivannan.sadhasivam@linaro.org L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/sony,imx290.yaml F: drivers/media/i2c/imx290.c
@@ -21491,7 +21491,7 @@ M: Laurent Pinchart laurent.pinchart@ideasonboard.com M: Manivannan Sadhasivam manivannan.sadhasivam@linaro.org L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/sony,imx296.yaml F: drivers/media/i2c/imx296.c
@@ -21499,20 +21499,20 @@ SONY IMX319 SENSOR DRIVER M: Bingbu Cao bingbu.cao@intel.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/i2c/imx319.c
SONY IMX334 SENSOR DRIVER L: linux-media@vger.kernel.org S: Orphan -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/sony,imx334.yaml F: drivers/media/i2c/imx334.c
SONY IMX335 SENSOR DRIVER L: linux-media@vger.kernel.org S: Orphan -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/sony,imx335.yaml F: drivers/media/i2c/imx335.c
@@ -21520,13 +21520,13 @@ SONY IMX355 SENSOR DRIVER M: Tianshu Qiu tian.shu.qiu@intel.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/i2c/imx355.c
SONY IMX412 SENSOR DRIVER L: linux-media@vger.kernel.org S: Orphan -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/sony,imx412.yaml F: drivers/media/i2c/imx412.c
@@ -21534,7 +21534,7 @@ SONY IMX415 SENSOR DRIVER M: Michael Riesch michael.riesch@wolfvision.net L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/sony,imx415.yaml F: drivers/media/i2c/imx415.c
@@ -21823,7 +21823,7 @@ M: Benjamin Mugnier benjamin.mugnier@foss.st.com M: Sylvain Petinot sylvain.petinot@foss.st.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/st,st-mipid02.yaml F: drivers/media/i2c/st-mipid02.c
@@ -21859,7 +21859,7 @@ M: Benjamin Mugnier benjamin.mugnier@foss.st.com M: Sylvain Petinot sylvain.petinot@foss.st.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/st,st-vgxy61.yaml F: Documentation/userspace-api/media/drivers/vgxy61.rst F: drivers/media/i2c/vgxy61.c @@ -22149,7 +22149,7 @@ STK1160 USB VIDEO CAPTURE DRIVER M: Ezequiel Garcia ezequiel@vanguardiasur.com.ar L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/usb/stk1160/
STM32 AUDIO (ASoC) DRIVERS @@ -22586,7 +22586,7 @@ L: linux-media@vger.kernel.org S: Maintained W: https://linuxtv.org Q: http://patchwork.linuxtv.org/project/linux-media/list/ -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/tuners/tda18250*
TDA18271 MEDIA DRIVER @@ -22632,7 +22632,7 @@ M: Hans Verkuil hverkuil@xs4all.nl L: linux-media@vger.kernel.org S: Maintained W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/i2c/tda9840*
TEA5761 TUNER DRIVER @@ -22640,7 +22640,7 @@ M: Mauro Carvalho Chehab mchehab@kernel.org L: linux-media@vger.kernel.org S: Odd fixes W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/tuners/tea5761.*
TEA5767 TUNER DRIVER @@ -22648,7 +22648,7 @@ M: Mauro Carvalho Chehab mchehab@kernel.org L: linux-media@vger.kernel.org S: Maintained W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/tuners/tea5767.*
TEA6415C MEDIA DRIVER @@ -22656,7 +22656,7 @@ M: Hans Verkuil hverkuil@xs4all.nl L: linux-media@vger.kernel.org S: Maintained W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/i2c/tea6415c*
TEA6420 MEDIA DRIVER @@ -22664,7 +22664,7 @@ M: Hans Verkuil hverkuil@xs4all.nl L: linux-media@vger.kernel.org S: Maintained W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/i2c/tea6420*
TEAM DRIVER @@ -22952,7 +22952,7 @@ M: Hans Verkuil hverkuil@xs4all.nl L: linux-media@vger.kernel.org S: Maintained W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/radio/radio-raremono.c
THERMAL @@ -23028,7 +23028,7 @@ M: Laurent Pinchart laurent.pinchart@ideasonboard.com M: Paul Elder paul.elder@ideasonboard.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/i2c/thine,thp7312.yaml F: Documentation/userspace-api/media/drivers/thp7312.rst F: drivers/media/i2c/thp7312.c @@ -23615,7 +23615,7 @@ M: Hans Verkuil hverkuil@xs4all.nl L: linux-media@vger.kernel.org S: Odd Fixes W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/pci/tw68/
TW686X VIDEO4LINUX DRIVER @@ -23623,7 +23623,7 @@ M: Ezequiel Garcia ezequiel@vanguardiasur.com.ar L: linux-media@vger.kernel.org S: Maintained W: http://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/pci/tw686x/
U-BOOT ENVIRONMENT VARIABLES @@ -24106,7 +24106,7 @@ M: Laurent Pinchart laurent.pinchart@ideasonboard.com L: linux-media@vger.kernel.org S: Maintained W: http://www.ideasonboard.org/uvc/ -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/usb/uvc/ F: include/uapi/linux/uvcvideo.h
@@ -24212,7 +24212,7 @@ V4L2 ASYNC AND FWNODE FRAMEWORKS M: Sakari Ailus sakari.ailus@linux.intel.com L: linux-media@vger.kernel.org S: Maintained -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/v4l2-core/v4l2-async.c F: drivers/media/v4l2-core/v4l2-fwnode.c F: include/media/v4l2-async.h @@ -24378,7 +24378,7 @@ M: Hans Verkuil hverkuil-cisco@xs4all.nl L: linux-media@vger.kernel.org S: Maintained W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/test-drivers/vicodec/*
VIDEO I2C POLLING DRIVER @@ -24406,7 +24406,7 @@ M: Daniel W. S. Almeida dwlsalmeida@gmail.com L: linux-media@vger.kernel.org S: Maintained W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/test-drivers/vidtv/*
VIMC VIRTUAL MEDIA CONTROLLER DRIVER @@ -24415,7 +24415,7 @@ R: Kieran Bingham kieran.bingham@ideasonboard.com L: linux-media@vger.kernel.org S: Maintained W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/test-drivers/vimc/*
VIRT LIB @@ -24663,7 +24663,7 @@ M: Hans Verkuil hverkuil@xs4all.nl L: linux-media@vger.kernel.org S: Maintained W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/test-drivers/vivid/*
VM SOCKETS (AF_VSOCK) @@ -25217,7 +25217,7 @@ M: Mauro Carvalho Chehab mchehab@kernel.org L: linux-media@vger.kernel.org S: Maintained W: https://linuxtv.org -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: drivers/media/tuners/xc2028.*
XDP (eXpress Data Path) @@ -25358,8 +25358,7 @@ F: include/xen/arm/swiotlb-xen.h F: include/xen/swiotlb-xen.h
XFS FILESYSTEM -M: Carlos Maiolino cem@kernel.org -R: Darrick J. Wong djwong@kernel.org +M: Darrick J. Wong djwong@kernel.org L: linux-xfs@vger.kernel.org S: Supported W: http://xfs.org/ @@ -25441,7 +25440,7 @@ XILINX VIDEO IP CORES M: Laurent Pinchart laurent.pinchart@ideasonboard.com L: linux-media@vger.kernel.org S: Supported -T: git git://linuxtv.org/media_tree.git +T: git git://linuxtv.org/media.git F: Documentation/devicetree/bindings/media/xilinx/ F: drivers/media/platform/xilinx/ F: include/uapi/linux/xilinx-v4l2-controls.h diff --git a/Makefile b/Makefile index 70070e64d267..da6e99309a4d 100644 --- a/Makefile +++ b/Makefile @@ -1,7 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 VERSION = 6 PATCHLEVEL = 12 -SUBLEVEL = 1 +SUBLEVEL = 2 EXTRAVERSION = NAME = Baby Opossum Posse
diff --git a/arch/arc/kernel/devtree.c b/arch/arc/kernel/devtree.c index 4c9e61457b2f..cc6ac7d128aa 100644 --- a/arch/arc/kernel/devtree.c +++ b/arch/arc/kernel/devtree.c @@ -62,7 +62,7 @@ const struct machine_desc * __init setup_machine_fdt(void *dt) const struct machine_desc *mdesc; unsigned long dt_root;
- if (!early_init_dt_scan(dt)) + if (!early_init_dt_scan(dt, __pa(dt))) return NULL;
mdesc = of_flat_dt_match_machine(NULL, arch_get_next_mach); diff --git a/arch/arm/boot/dts/allwinner/sun9i-a80-cubieboard4.dts b/arch/arm/boot/dts/allwinner/sun9i-a80-cubieboard4.dts index c8ca8cb7f5c9..52ad95a2063a 100644 --- a/arch/arm/boot/dts/allwinner/sun9i-a80-cubieboard4.dts +++ b/arch/arm/boot/dts/allwinner/sun9i-a80-cubieboard4.dts @@ -280,8 +280,8 @@ reg_dcdc4: dcdc4 {
reg_dcdc5: dcdc5 { regulator-always-on; - regulator-min-microvolt = <1425000>; - regulator-max-microvolt = <1575000>; + regulator-min-microvolt = <1450000>; + regulator-max-microvolt = <1550000>; regulator-name = "vcc-dram"; };
diff --git a/arch/arm/boot/dts/microchip/sam9x60.dtsi b/arch/arm/boot/dts/microchip/sam9x60.dtsi index 04a6d716ecaf..1e8fcb5d4700 100644 --- a/arch/arm/boot/dts/microchip/sam9x60.dtsi +++ b/arch/arm/boot/dts/microchip/sam9x60.dtsi @@ -186,6 +186,7 @@ AT91_XDMAC_DT_PER_IF(1) | dma-names = "tx", "rx"; clocks = <&pmc PMC_TYPE_PERIPHERAL 13>; clock-names = "usart"; + atmel,usart-mode = <AT91_USART_MODE_SERIAL>; atmel,use-dma-rx; atmel,use-dma-tx; atmel,fifo-size = <16>; @@ -388,6 +389,7 @@ AT91_XDMAC_DT_PER_IF(1) | dma-names = "tx", "rx"; clocks = <&pmc PMC_TYPE_PERIPHERAL 32>; clock-names = "usart"; + atmel,usart-mode = <AT91_USART_MODE_SERIAL>; atmel,use-dma-rx; atmel,use-dma-tx; atmel,fifo-size = <16>; @@ -439,6 +441,7 @@ AT91_XDMAC_DT_PER_IF(1) | dma-names = "tx", "rx"; clocks = <&pmc PMC_TYPE_PERIPHERAL 33>; clock-names = "usart"; + atmel,usart-mode = <AT91_USART_MODE_SERIAL>; atmel,use-dma-rx; atmel,use-dma-tx; atmel,fifo-size = <16>; @@ -598,6 +601,7 @@ AT91_XDMAC_DT_PER_IF(1) | dma-names = "tx", "rx"; clocks = <&pmc PMC_TYPE_PERIPHERAL 9>; clock-names = "usart"; + atmel,usart-mode = <AT91_USART_MODE_SERIAL>; atmel,use-dma-rx; atmel,use-dma-tx; atmel,fifo-size = <16>; @@ -649,6 +653,7 @@ AT91_XDMAC_DT_PER_IF(1) | dma-names = "tx", "rx"; clocks = <&pmc PMC_TYPE_PERIPHERAL 10>; clock-names = "usart"; + atmel,usart-mode = <AT91_USART_MODE_SERIAL>; atmel,use-dma-rx; atmel,use-dma-tx; atmel,fifo-size = <16>; @@ -700,6 +705,7 @@ AT91_XDMAC_DT_PER_IF(1) | dma-names = "tx", "rx"; clocks = <&pmc PMC_TYPE_PERIPHERAL 11>; clock-names = "usart"; + atmel,usart-mode = <AT91_USART_MODE_SERIAL>; atmel,use-dma-rx; atmel,use-dma-tx; atmel,fifo-size = <16>; @@ -751,6 +757,7 @@ AT91_XDMAC_DT_PER_IF(1) | dma-names = "tx", "rx"; clocks = <&pmc PMC_TYPE_PERIPHERAL 5>; clock-names = "usart"; + atmel,usart-mode = <AT91_USART_MODE_SERIAL>; atmel,use-dma-rx; atmel,use-dma-tx; atmel,fifo-size = <16>; @@ -821,6 +828,7 @@ AT91_XDMAC_DT_PER_IF(1) | dma-names = "tx", "rx"; clocks = <&pmc PMC_TYPE_PERIPHERAL 6>; clock-names = "usart"; + atmel,usart-mode = <AT91_USART_MODE_SERIAL>; atmel,use-dma-rx; atmel,use-dma-tx; atmel,fifo-size = <16>; @@ -891,6 +899,7 @@ AT91_XDMAC_DT_PER_IF(1) | dma-names = "tx", "rx"; clocks = <&pmc PMC_TYPE_PERIPHERAL 7>; clock-names = "usart"; + atmel,usart-mode = <AT91_USART_MODE_SERIAL>; atmel,use-dma-rx; atmel,use-dma-tx; atmel,fifo-size = <16>; @@ -961,6 +970,7 @@ AT91_XDMAC_DT_PER_IF(1) | dma-names = "tx", "rx"; clocks = <&pmc PMC_TYPE_PERIPHERAL 8>; clock-names = "usart"; + atmel,usart-mode = <AT91_USART_MODE_SERIAL>; atmel,use-dma-rx; atmel,use-dma-tx; atmel,fifo-size = <16>; @@ -1086,6 +1096,7 @@ AT91_XDMAC_DT_PER_IF(1) | dma-names = "tx", "rx"; clocks = <&pmc PMC_TYPE_PERIPHERAL 15>; clock-names = "usart"; + atmel,usart-mode = <AT91_USART_MODE_SERIAL>; atmel,use-dma-rx; atmel,use-dma-tx; atmel,fifo-size = <16>; @@ -1137,6 +1148,7 @@ AT91_XDMAC_DT_PER_IF(1) | dma-names = "tx", "rx"; clocks = <&pmc PMC_TYPE_PERIPHERAL 16>; clock-names = "usart"; + atmel,usart-mode = <AT91_USART_MODE_SERIAL>; atmel,use-dma-rx; atmel,use-dma-tx; atmel,fifo-size = <16>; diff --git a/arch/arm/boot/dts/renesas/r7s72100-genmai.dts b/arch/arm/boot/dts/renesas/r7s72100-genmai.dts index 29ba098f5dd5..28e703e0f152 100644 --- a/arch/arm/boot/dts/renesas/r7s72100-genmai.dts +++ b/arch/arm/boot/dts/renesas/r7s72100-genmai.dts @@ -53,7 +53,7 @@ partition@0 {
partition@4000000 { label = "user1"; - reg = <0x04000000 0x40000000>; + reg = <0x04000000 0x04000000>; }; }; }; diff --git a/arch/arm/boot/dts/ti/omap/omap36xx.dtsi b/arch/arm/boot/dts/ti/omap/omap36xx.dtsi index c3d79ecd56e3..c217094b50ab 100644 --- a/arch/arm/boot/dts/ti/omap/omap36xx.dtsi +++ b/arch/arm/boot/dts/ti/omap/omap36xx.dtsi @@ -72,6 +72,7 @@ opp-1000000000 { <1375000 1375000 1375000>; /* only on am/dm37x with speed-binned bit set */ opp-supported-hw = <0xffffffff 2>; + turbo-mode; }; };
diff --git a/arch/arm/kernel/devtree.c b/arch/arm/kernel/devtree.c index fdb74e64206a..3b78966e750a 100644 --- a/arch/arm/kernel/devtree.c +++ b/arch/arm/kernel/devtree.c @@ -200,7 +200,7 @@ const struct machine_desc * __init setup_machine_fdt(void *dt_virt)
mdesc_best = &__mach_desc_GENERIC_DT;
- if (!dt_virt || !early_init_dt_verify(dt_virt)) + if (!dt_virt || !early_init_dt_verify(dt_virt, __pa(dt_virt))) return NULL;
mdesc = of_flat_dt_match_machine(mdesc_best, arch_get_next_mach); diff --git a/arch/arm64/boot/dts/freescale/imx8mn-tqma8mqnl-mba8mx-usbotg.dtso b/arch/arm64/boot/dts/freescale/imx8mn-tqma8mqnl-mba8mx-usbotg.dtso index 96db07fc9bec..1f2a0fe70a0a 100644 --- a/arch/arm64/boot/dts/freescale/imx8mn-tqma8mqnl-mba8mx-usbotg.dtso +++ b/arch/arm64/boot/dts/freescale/imx8mn-tqma8mqnl-mba8mx-usbotg.dtso @@ -29,12 +29,37 @@ usb_dr_connector: endpoint { }; };
+/* + * rst_usb_hub_hog and sel_usb_hub_hog have the property 'output-high', + * and DT overlays do not support /delete-property/. If the overlay set + * 'output-low', both 'output-low' and 'output-high' would end up under the + * hog nodes. The workaround is to disable these hogs and create new hogs + * with 'output-low'. + */ + &rst_usb_hub_hog { - output-low; + status = "disabled"; +}; + +&expander0 { + rst-usb-low-hub-hog { + gpio-hog; + gpios = <13 0>; + output-low; + line-name = "RST_USB_HUB#"; + }; };
&sel_usb_hub_hog { - output-low; + status = "disabled"; +}; + +&gpio2 { + sel-usb-low-hub-hog { + gpio-hog; + gpios = <1 GPIO_ACTIVE_HIGH>; + output-low; + }; };
&usbotg1 { diff --git a/arch/arm64/boot/dts/mediatek/mt6358.dtsi b/arch/arm64/boot/dts/mediatek/mt6358.dtsi index 641d452fbc08..e23672a2eea4 100644 --- a/arch/arm64/boot/dts/mediatek/mt6358.dtsi +++ b/arch/arm64/boot/dts/mediatek/mt6358.dtsi @@ -15,12 +15,12 @@ pmic_adc: adc { #io-channel-cells = <1>; };
- mt6358codec: mt6358codec { + mt6358codec: audio-codec { compatible = "mediatek,mt6358-sound"; mediatek,dmic-mode = <0>; /* two-wires */ };
- mt6358regulator: mt6358regulator { + mt6358regulator: regulators { compatible = "mediatek,mt6358-regulator";
mt6358_vdram1_reg: buck_vdram1 { diff --git a/arch/arm64/boot/dts/mediatek/mt8173-elm-hana.dtsi b/arch/arm64/boot/dts/mediatek/mt8173-elm-hana.dtsi index 8d1cbc92bce3..ae0379fd42a9 100644 --- a/arch/arm64/boot/dts/mediatek/mt8173-elm-hana.dtsi +++ b/arch/arm64/boot/dts/mediatek/mt8173-elm-hana.dtsi @@ -49,6 +49,14 @@ trackpad2: trackpad@2c { interrupts-extended = <&pio 117 IRQ_TYPE_LEVEL_LOW>; reg = <0x2c>; hid-descr-addr = <0x0020>; + /* + * The trackpad needs a post-power-on delay of 100ms, + * but at time of writing, the power supply for it on + * this board is always on. The delay is therefore not + * added to avoid impacting the readiness of the + * trackpad. + */ + vdd-supply = <&mt6397_vgp6_reg>; wakeup-source; }; }; diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-burnet.dts b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-burnet.dts index 19c1e2bee494..20b71f2e7159 100644 --- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-burnet.dts +++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-burnet.dts @@ -30,3 +30,6 @@ touchscreen@2c { }; };
+&i2c2 { + i2c-scl-internal-delay-ns = <4100>; +}; diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-cozmo.dts b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-cozmo.dts index f34964afe39b..83bbcfe62083 100644 --- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-cozmo.dts +++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-cozmo.dts @@ -18,6 +18,8 @@ &i2c_tunnel { };
&i2c2 { + i2c-scl-internal-delay-ns = <25000>; + trackpad@2c { compatible = "hid-over-i2c"; reg = <0x2c>; diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-damu.dts b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-damu.dts index 0b45aee2e299..65860b33c01f 100644 --- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-damu.dts +++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-damu.dts @@ -30,3 +30,6 @@ &qca_wifi { qcom,ath10k-calibration-variant = "GO_DAMU"; };
+&i2c2 { + i2c-scl-internal-delay-ns = <20000>; +}; diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-fennel.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-fennel.dtsi index bbe6c338f465..f9c1ec366b26 100644 --- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-fennel.dtsi +++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-fennel.dtsi @@ -25,3 +25,6 @@ trackpad@2c { }; };
+&i2c2 { + i2c-scl-internal-delay-ns = <21500>; +}; diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi.dtsi index 783c333107bc..49e053b932e7 100644 --- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi.dtsi +++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi.dtsi @@ -8,28 +8,32 @@ #include <arm/cros-ec-keyboard.dtsi>
/ { - pp1200_mipibrdg: pp1200-mipibrdg { + pp1000_mipibrdg: pp1000-mipibrdg { compatible = "regulator-fixed"; - regulator-name = "pp1200_mipibrdg"; + regulator-name = "pp1000_mipibrdg"; + regulator-min-microvolt = <1000000>; + regulator-max-microvolt = <1000000>; pinctrl-names = "default"; - pinctrl-0 = <&pp1200_mipibrdg_en>; + pinctrl-0 = <&pp1000_mipibrdg_en>;
enable-active-high; regulator-boot-on;
gpio = <&pio 54 GPIO_ACTIVE_HIGH>; + vin-supply = <&pp1800_alw>; };
pp1800_mipibrdg: pp1800-mipibrdg { compatible = "regulator-fixed"; regulator-name = "pp1800_mipibrdg"; pinctrl-names = "default"; - pinctrl-0 = <&pp1800_lcd_en>; + pinctrl-0 = <&pp1800_mipibrdg_en>;
enable-active-high; regulator-boot-on;
gpio = <&pio 36 GPIO_ACTIVE_HIGH>; + vin-supply = <&pp1800_alw>; };
pp3300_panel: pp3300-panel { @@ -44,18 +48,20 @@ pp3300_panel: pp3300-panel { regulator-boot-on;
gpio = <&pio 35 GPIO_ACTIVE_HIGH>; + vin-supply = <&pp3300_alw>; };
- vddio_mipibrdg: vddio-mipibrdg { + pp3300_mipibrdg: pp3300-mipibrdg { compatible = "regulator-fixed"; - regulator-name = "vddio_mipibrdg"; + regulator-name = "pp3300_mipibrdg"; pinctrl-names = "default"; - pinctrl-0 = <&vddio_mipibrdg_en>; + pinctrl-0 = <&pp3300_mipibrdg_en>;
enable-active-high; regulator-boot-on;
gpio = <&pio 37 GPIO_ACTIVE_HIGH>; + vin-supply = <&pp3300_alw>; };
volume_buttons: volume-buttons { @@ -146,9 +152,9 @@ anx_bridge: anx7625@58 { pinctrl-0 = <&anx7625_pins>; enable-gpios = <&pio 45 GPIO_ACTIVE_HIGH>; reset-gpios = <&pio 73 GPIO_ACTIVE_HIGH>; - vdd10-supply = <&pp1200_mipibrdg>; + vdd10-supply = <&pp1000_mipibrdg>; vdd18-supply = <&pp1800_mipibrdg>; - vdd33-supply = <&vddio_mipibrdg>; + vdd33-supply = <&pp3300_mipibrdg>;
ports { #address-cells = <1>; @@ -391,14 +397,14 @@ &pio { "", "";
- pp1200_mipibrdg_en: pp1200-mipibrdg-en { + pp1000_mipibrdg_en: pp1000-mipibrdg-en { pins1 { pinmux = <PINMUX_GPIO54__FUNC_GPIO54>; output-low; }; };
- pp1800_lcd_en: pp1800-lcd-en { + pp1800_mipibrdg_en: pp1800-mipibrdg-en { pins1 { pinmux = <PINMUX_GPIO36__FUNC_GPIO36>; output-low; @@ -460,7 +466,7 @@ trackpad-int { }; };
- vddio_mipibrdg_en: vddio-mipibrdg-en { + pp3300_mipibrdg_en: pp3300-mipibrdg-en { pins1 { pinmux = <PINMUX_GPIO37__FUNC_GPIO37>; output-low; diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-kakadu.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui-kakadu.dtsi index bfb9e42c8aca..ff02f63bac29 100644 --- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-kakadu.dtsi +++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-kakadu.dtsi @@ -92,9 +92,9 @@ &i2c4 { clock-frequency = <400000>; vbus-supply = <&mt6358_vcn18_reg>;
- eeprom@54 { + eeprom@50 { compatible = "atmel,24c32"; - reg = <0x54>; + reg = <0x50>; pagesize = <32>; vcc-supply = <&mt6358_vcn18_reg>; }; diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-kodama.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui-kodama.dtsi index 5c1bf6a1e475..da6e767b4cee 100644 --- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-kodama.dtsi +++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-kodama.dtsi @@ -79,9 +79,9 @@ &i2c4 { clock-frequency = <400000>; vbus-supply = <&mt6358_vcn18_reg>;
- eeprom@54 { + eeprom@50 { compatible = "atmel,24c64"; - reg = <0x54>; + reg = <0x50>; pagesize = <32>; vcc-supply = <&mt6358_vcn18_reg>; }; diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-krane.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui-krane.dtsi index 0f5fa893a774..8b56b8564ed7 100644 --- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-krane.dtsi +++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-krane.dtsi @@ -88,9 +88,9 @@ &i2c4 { clock-frequency = <400000>; vbus-supply = <&mt6358_vcn18_reg>;
- eeprom@54 { + eeprom@50 { compatible = "atmel,24c32"; - reg = <0x54>; + reg = <0x50>; pagesize = <32>; vcc-supply = <&mt6358_vcn18_reg>; }; diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi index 22924f61ec9e..07ae3c8e897b 100644 --- a/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi +++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi @@ -290,6 +290,11 @@ dsi_out: endpoint { }; };
+&dpi0 { + /* TODO Re-enable after DP to Type-C port muxing can be described */ + status = "disabled"; +}; + &gic { mediatek,broken-save-restore-fw; }; diff --git a/arch/arm64/boot/dts/mediatek/mt8183.dtsi b/arch/arm64/boot/dts/mediatek/mt8183.dtsi index 266441e999f2..0a6578aacf82 100644 --- a/arch/arm64/boot/dts/mediatek/mt8183.dtsi +++ b/arch/arm64/boot/dts/mediatek/mt8183.dtsi @@ -1845,6 +1845,10 @@ dpi0: dpi@14015000 { <&mmsys CLK_MM_DPI_MM>, <&apmixedsys CLK_APMIXED_TVDPLL>; clock-names = "pixel", "engine", "pll"; + + port { + dpi_out: endpoint { }; + }; };
mutex: mutex@14016000 { diff --git a/arch/arm64/boot/dts/mediatek/mt8186-corsola-voltorb.dtsi b/arch/arm64/boot/dts/mediatek/mt8186-corsola-voltorb.dtsi index 52ec58128d56..b495a241b443 100644 --- a/arch/arm64/boot/dts/mediatek/mt8186-corsola-voltorb.dtsi +++ b/arch/arm64/boot/dts/mediatek/mt8186-corsola-voltorb.dtsi @@ -10,12 +10,6 @@
/ { chassis-type = "laptop"; - - max98360a: max98360a { - compatible = "maxim,max98360a"; - sdmode-gpios = <&pio 150 GPIO_ACTIVE_HIGH>; - #sound-dai-cells = <0>; - }; };
&cpu6 { @@ -59,19 +53,14 @@ &cluster1_opp_15 { opp-hz = /bits/ 64 <2200000000>; };
-&rt1019p{ - status = "disabled"; -}; - &sound { compatible = "mediatek,mt8186-mt6366-rt5682s-max98360-sound"; - status = "okay"; +};
- spk-hdmi-playback-dai-link { - codec { - sound-dai = <&it6505dptx>, <&max98360a>; - }; - }; +&speaker_codec { + compatible = "maxim,max98360a"; + sdmode-gpios = <&pio 150 GPIO_ACTIVE_HIGH>; + /delete-property/ sdb-gpios; };
&spmi { diff --git a/arch/arm64/boot/dts/mediatek/mt8186-corsola.dtsi b/arch/arm64/boot/dts/mediatek/mt8186-corsola.dtsi index 682c6ad2574d..0c0b3ac59745 100644 --- a/arch/arm64/boot/dts/mediatek/mt8186-corsola.dtsi +++ b/arch/arm64/boot/dts/mediatek/mt8186-corsola.dtsi @@ -259,15 +259,15 @@ spk-hdmi-playback-dai-link { mediatek,clk-provider = "cpu"; /* RT1019P and IT6505 connected to the same I2S line */ codec { - sound-dai = <&it6505dptx>, <&rt1019p>; + sound-dai = <&it6505dptx>, <&speaker_codec>; }; }; };
- rt1019p: speaker-codec { + speaker_codec: speaker-codec { compatible = "realtek,rt1019p"; pinctrl-names = "default"; - pinctrl-0 = <&rt1019p_pins_default>; + pinctrl-0 = <&speaker_codec_pins_default>; #sound-dai-cells = <0>; sdb-gpios = <&pio 150 GPIO_ACTIVE_HIGH>; }; @@ -1179,7 +1179,7 @@ pins { }; };
- rt1019p_pins_default: rt1019p-default-pins { + speaker_codec_pins_default: speaker-codec-default-pins { pins-sdb { pinmux = <PINMUX_GPIO150__FUNC_GPIO150>; output-low; diff --git a/arch/arm64/boot/dts/mediatek/mt8188.dtsi b/arch/arm64/boot/dts/mediatek/mt8188.dtsi index cd27966d2e3c..91beef22e0a9 100644 --- a/arch/arm64/boot/dts/mediatek/mt8188.dtsi +++ b/arch/arm64/boot/dts/mediatek/mt8188.dtsi @@ -956,9 +956,9 @@ mfg0: power-domain@MT8188_POWER_DOMAIN_MFG0 { #size-cells = <0>; #power-domain-cells = <1>;
- power-domain@MT8188_POWER_DOMAIN_MFG1 { + mfg1: power-domain@MT8188_POWER_DOMAIN_MFG1 { reg = <MT8188_POWER_DOMAIN_MFG1>; - clocks = <&topckgen CLK_APMIXED_MFGPLL>, + clocks = <&apmixedsys CLK_APMIXED_MFGPLL>, <&topckgen CLK_TOP_MFG_CORE_TMP>; clock-names = "mfg", "alt"; mediatek,infracfg = <&infracfg_ao>; @@ -1689,7 +1689,6 @@ u3port1: usb-phy@700 { <&clk26m>; clock-names = "ref", "da_ref"; #phy-cells = <1>; - status = "disabled"; }; };
diff --git a/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi b/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi index 75d56b2d5a3d..2c7b2223ee76 100644 --- a/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi +++ b/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi @@ -438,7 +438,7 @@ audio_codec: codec@1a { /* Realtek RT5682i or RT5682s, sharing the same configuration */ reg = <0x1a>; interrupts-extended = <&pio 89 IRQ_TYPE_EDGE_BOTH>; - #sound-dai-cells = <0>; + #sound-dai-cells = <1>; realtek,jd-src = <1>;
AVDD-supply = <&mt6359_vio18_ldo_reg>; @@ -1181,7 +1181,7 @@ hs-playback-dai-link { link-name = "ETDM1_OUT_BE"; mediatek,clk-provider = "cpu"; codec { - sound-dai = <&audio_codec>; + sound-dai = <&audio_codec 0>; }; };
@@ -1189,7 +1189,7 @@ hs-capture-dai-link { link-name = "ETDM2_IN_BE"; mediatek,clk-provider = "cpu"; codec { - sound-dai = <&audio_codec>; + sound-dai = <&audio_codec 0>; }; };
diff --git a/arch/arm64/boot/dts/mediatek/mt8195.dtsi b/arch/arm64/boot/dts/mediatek/mt8195.dtsi index e89ba384c4aa..ade685ed2190 100644 --- a/arch/arm64/boot/dts/mediatek/mt8195.dtsi +++ b/arch/arm64/boot/dts/mediatek/mt8195.dtsi @@ -487,7 +487,7 @@ topckgen: syscon@10000000 { };
infracfg_ao: syscon@10001000 { - compatible = "mediatek,mt8195-infracfg_ao", "syscon", "simple-mfd"; + compatible = "mediatek,mt8195-infracfg_ao", "syscon"; reg = <0 0x10001000 0 0x1000>; #clock-cells = <1>; #reset-cells = <1>; @@ -3331,11 +3331,9 @@ &larb19 &larb21 &larb24 &larb25 mutex1: mutex@1c101000 { compatible = "mediatek,mt8195-disp-mutex"; reg = <0 0x1c101000 0 0x1000>; - reg-names = "vdo1_mutex"; interrupts = <GIC_SPI 494 IRQ_TYPE_LEVEL_HIGH 0>; power-domains = <&spm MT8195_POWER_DOMAIN_VDOSYS1>; clocks = <&vdosys1 CLK_VDO1_DISP_MUTEX>; - clock-names = "vdo1_mutex"; mediatek,gce-client-reg = <&gce0 SUBSYS_1c10XXXX 0x1000 0x1000>; mediatek,gce-events = <CMDQ_EVENT_VDO1_STREAM_DONE_ENG_0>; }; diff --git a/arch/arm64/boot/dts/mediatek/mt8395-genio-1200-evk.dts b/arch/arm64/boot/dts/mediatek/mt8395-genio-1200-evk.dts index 1ef6262b65c9..b4b48eb93f3c 100644 --- a/arch/arm64/boot/dts/mediatek/mt8395-genio-1200-evk.dts +++ b/arch/arm64/boot/dts/mediatek/mt8395-genio-1200-evk.dts @@ -187,7 +187,7 @@ mdio { compatible = "snps,dwmac-mdio"; #address-cells = <1>; #size-cells = <0>; - eth_phy0: eth-phy0@1 { + eth_phy0: ethernet-phy@1 { compatible = "ethernet-phy-id001c.c916"; reg = <0x1>; }; diff --git a/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi b/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi index c00db75e3910..1c53ccc5e3cb 100644 --- a/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi +++ b/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi @@ -351,7 +351,7 @@ mmc@700b0200 { #size-cells = <0>;
wifi@1 { - compatible = "brcm,bcm4354-fmac"; + compatible = "brcm,bcm4354-fmac", "brcm,bcm4329-fmac"; reg = <1>; interrupt-parent = <&gpio>; interrupts = <TEGRA_GPIO(H, 2) IRQ_TYPE_LEVEL_HIGH>; diff --git a/arch/arm64/boot/dts/qcom/qcs6490-rb3gen2.dts b/arch/arm64/boot/dts/qcom/qcs6490-rb3gen2.dts index 0d45662b8028..5d0167fbc709 100644 --- a/arch/arm64/boot/dts/qcom/qcs6490-rb3gen2.dts +++ b/arch/arm64/boot/dts/qcom/qcs6490-rb3gen2.dts @@ -707,7 +707,7 @@ &remoteproc_cdsp { };
&remoteproc_mpss { - firmware-name = "qcom/qcs6490/modem.mdt"; + firmware-name = "qcom/qcs6490/modem.mbn"; status = "okay"; };
diff --git a/arch/arm64/boot/dts/qcom/sc8180x.dtsi b/arch/arm64/boot/dts/qcom/sc8180x.dtsi index 0e9429684dd9..60f71b490261 100644 --- a/arch/arm64/boot/dts/qcom/sc8180x.dtsi +++ b/arch/arm64/boot/dts/qcom/sc8180x.dtsi @@ -3889,7 +3889,7 @@ lmh@18358800 { };
cpufreq_hw: cpufreq@18323000 { - compatible = "qcom,cpufreq-hw"; + compatible = "qcom,sc8180x-cpufreq-hw", "qcom,cpufreq-hw"; reg = <0 0x18323000 0 0x1400>, <0 0x18325800 0 0x1400>; reg-names = "freq-domain0", "freq-domain1";
diff --git a/arch/arm64/boot/dts/qcom/sda660-inforce-ifc6560.dts b/arch/arm64/boot/dts/qcom/sda660-inforce-ifc6560.dts index 60412281ab27..962c8aa40044 100644 --- a/arch/arm64/boot/dts/qcom/sda660-inforce-ifc6560.dts +++ b/arch/arm64/boot/dts/qcom/sda660-inforce-ifc6560.dts @@ -104,7 +104,7 @@ vreg_l10a_1p8: vreg-l10a-regulator { compatible = "regulator-fixed"; regulator-name = "vreg_l10a_1p8"; regulator-min-microvolt = <1804000>; - regulator-max-microvolt = <1896000>; + regulator-max-microvolt = <1804000>; regulator-always-on; regulator-boot-on; }; diff --git a/arch/arm64/boot/dts/qcom/sm6350.dtsi b/arch/arm64/boot/dts/qcom/sm6350.dtsi index 7986ddb30f6e..4f8477de7e1b 100644 --- a/arch/arm64/boot/dts/qcom/sm6350.dtsi +++ b/arch/arm64/boot/dts/qcom/sm6350.dtsi @@ -1376,43 +1376,43 @@ gpu_opp_table: opp-table { opp-850000000 { opp-hz = /bits/ 64 <850000000>; opp-level = <RPMH_REGULATOR_LEVEL_TURBO_L1>; - opp-supported-hw = <0x02>; + opp-supported-hw = <0x03>; };
opp-800000000 { opp-hz = /bits/ 64 <800000000>; opp-level = <RPMH_REGULATOR_LEVEL_TURBO>; - opp-supported-hw = <0x04>; + opp-supported-hw = <0x07>; };
opp-650000000 { opp-hz = /bits/ 64 <650000000>; opp-level = <RPMH_REGULATOR_LEVEL_NOM_L1>; - opp-supported-hw = <0x08>; + opp-supported-hw = <0x0f>; };
opp-565000000 { opp-hz = /bits/ 64 <565000000>; opp-level = <RPMH_REGULATOR_LEVEL_NOM>; - opp-supported-hw = <0x10>; + opp-supported-hw = <0x1f>; };
opp-430000000 { opp-hz = /bits/ 64 <430000000>; opp-level = <RPMH_REGULATOR_LEVEL_SVS_L1>; - opp-supported-hw = <0xff>; + opp-supported-hw = <0x1f>; };
opp-355000000 { opp-hz = /bits/ 64 <355000000>; opp-level = <RPMH_REGULATOR_LEVEL_SVS>; - opp-supported-hw = <0xff>; + opp-supported-hw = <0x1f>; };
opp-253000000 { opp-hz = /bits/ 64 <253000000>; opp-level = <RPMH_REGULATOR_LEVEL_LOW_SVS>; - opp-supported-hw = <0xff>; + opp-supported-hw = <0x1f>; }; }; }; diff --git a/arch/arm64/boot/dts/qcom/x1e80100-asus-vivobook-s15.dts b/arch/arm64/boot/dts/qcom/x1e80100-asus-vivobook-s15.dts index fb4a48a1e2a8..2926a1aba768 100644 --- a/arch/arm64/boot/dts/qcom/x1e80100-asus-vivobook-s15.dts +++ b/arch/arm64/boot/dts/qcom/x1e80100-asus-vivobook-s15.dts @@ -594,8 +594,6 @@ &usb_1_ss0_qmpphy { vdda-phy-supply = <&vreg_l3e_1p2>; vdda-pll-supply = <&vreg_l1j_0p8>;
- orientation-switch; - status = "okay"; };
@@ -628,8 +626,6 @@ &usb_1_ss1_qmpphy { vdda-phy-supply = <&vreg_l3e_1p2>; vdda-pll-supply = <&vreg_l2d_0p9>;
- orientation-switch; - status = "okay"; };
diff --git a/arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts b/arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts index 0cdaff9c8cf0..f22e5c840a2e 100644 --- a/arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts +++ b/arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts @@ -898,8 +898,6 @@ &usb_1_ss0_qmpphy { vdda-phy-supply = <&vreg_l3e_1p2>; vdda-pll-supply = <&vreg_l1j_0p8>;
- orientation-switch; - status = "okay"; };
@@ -932,8 +930,6 @@ &usb_1_ss1_qmpphy { vdda-phy-supply = <&vreg_l3e_1p2>; vdda-pll-supply = <&vreg_l2d_0p9>;
- orientation-switch; - status = "okay"; };
diff --git a/arch/arm64/boot/dts/qcom/x1e80100.dtsi b/arch/arm64/boot/dts/qcom/x1e80100.dtsi index 0510abc0edf0..914f9cb3aca2 100644 --- a/arch/arm64/boot/dts/qcom/x1e80100.dtsi +++ b/arch/arm64/boot/dts/qcom/x1e80100.dtsi @@ -279,8 +279,8 @@ CLUSTER_C4: cpu-sleep-0 { idle-state-name = "ret"; arm,psci-suspend-param = <0x00000004>; entry-latency-us = <180>; - exit-latency-us = <320>; - min-residency-us = <1000>; + exit-latency-us = <500>; + min-residency-us = <600>; }; };
@@ -299,7 +299,7 @@ CLUSTER_CL5: cluster-sleep-1 { idle-state-name = "ret-pll-off"; arm,psci-suspend-param = <0x01000054>; entry-latency-us = <2200>; - exit-latency-us = <2500>; + exit-latency-us = <4000>; min-residency-us = <7000>; }; }; @@ -5752,7 +5752,7 @@ apps_smmu: iommu@15000000 { intc: interrupt-controller@17000000 { compatible = "arm,gic-v3"; reg = <0 0x17000000 0 0x10000>, /* GICD */ - <0 0x17080000 0 0x480000>; /* GICR * 12 */ + <0 0x17080000 0 0x300000>; /* GICR * 12 */
interrupts = <GIC_PPI 9 IRQ_TYPE_LEVEL_HIGH>;
diff --git a/arch/arm64/boot/dts/renesas/hihope-rev2.dtsi b/arch/arm64/boot/dts/renesas/hihope-rev2.dtsi index 8e2db1d6ca81..25c55b32aafe 100644 --- a/arch/arm64/boot/dts/renesas/hihope-rev2.dtsi +++ b/arch/arm64/boot/dts/renesas/hihope-rev2.dtsi @@ -69,9 +69,6 @@ &rcar_sound {
status = "okay";
- /* Single DAI */ - #sound-dai-cells = <0>; - rsnd_port: port { rsnd_endpoint: endpoint { remote-endpoint = <&dw_hdmi0_snd_in>; diff --git a/arch/arm64/boot/dts/renesas/hihope-rev4.dtsi b/arch/arm64/boot/dts/renesas/hihope-rev4.dtsi index 66f3affe0469..deb69c272775 100644 --- a/arch/arm64/boot/dts/renesas/hihope-rev4.dtsi +++ b/arch/arm64/boot/dts/renesas/hihope-rev4.dtsi @@ -84,9 +84,6 @@ &rcar_sound { pinctrl-names = "default"; status = "okay";
- /* Single DAI */ - #sound-dai-cells = <0>; - /* audio_clkout0/1/2/3 */ #clock-cells = <1>; clock-frequency = <12288000 11289600>; diff --git a/arch/arm64/boot/dts/rockchip/rk3568-wolfvision-pf5-io-expander.dtso b/arch/arm64/boot/dts/rockchip/rk3568-wolfvision-pf5-io-expander.dtso index ebcaeafc3800..fa61633aea15 100644 --- a/arch/arm64/boot/dts/rockchip/rk3568-wolfvision-pf5-io-expander.dtso +++ b/arch/arm64/boot/dts/rockchip/rk3568-wolfvision-pf5-io-expander.dtso @@ -49,7 +49,6 @@ vcc1v8_eth: vcc1v8-eth-regulator {
vcc3v3_eth: vcc3v3-eth-regulator { compatible = "regulator-fixed"; - enable-active-low; gpio = <&gpio0 RK_PC0 GPIO_ACTIVE_LOW>; pinctrl-names = "default"; pinctrl-0 = <&vcc3v3_eth_enn>; diff --git a/arch/arm64/boot/dts/rockchip/rk3588s-indiedroid-nova.dts b/arch/arm64/boot/dts/rockchip/rk3588s-indiedroid-nova.dts index 8ba111d9283f..d9d2bf822443 100644 --- a/arch/arm64/boot/dts/rockchip/rk3588s-indiedroid-nova.dts +++ b/arch/arm64/boot/dts/rockchip/rk3588s-indiedroid-nova.dts @@ -62,7 +62,7 @@ sdio_pwrseq: sdio-pwrseq {
sound { compatible = "audio-graph-card"; - label = "rockchip,es8388-codec"; + label = "rockchip,es8388"; widgets = "Microphone", "Mic Jack", "Headphone", "Headphones"; routing = "LINPUT2", "Mic Jack", diff --git a/arch/arm64/boot/dts/rockchip/rk3588s-orangepi-5.dts b/arch/arm64/boot/dts/rockchip/rk3588s-orangepi-5.dts index feea6b20a6bf..6b77be643249 100644 --- a/arch/arm64/boot/dts/rockchip/rk3588s-orangepi-5.dts +++ b/arch/arm64/boot/dts/rockchip/rk3588s-orangepi-5.dts @@ -71,7 +71,6 @@ vcc5v0_sys: vcc5v0-sys-regulator {
vcc_3v3_sd_s0: vcc-3v3-sd-s0-regulator { compatible = "regulator-fixed"; - enable-active-low; gpios = <&gpio4 RK_PB5 GPIO_ACTIVE_LOW>; regulator-name = "vcc_3v3_sd_s0"; regulator-boot-on; diff --git a/arch/arm64/boot/dts/ti/k3-am62x-phyboard-lyra.dtsi b/arch/arm64/boot/dts/ti/k3-am62x-phyboard-lyra.dtsi index e4633af87eb9..d6ce53c6d748 100644 --- a/arch/arm64/boot/dts/ti/k3-am62x-phyboard-lyra.dtsi +++ b/arch/arm64/boot/dts/ti/k3-am62x-phyboard-lyra.dtsi @@ -433,8 +433,6 @@ &mcasp2 { 0 0 0 0 0 0 0 0 >; - tx-num-evt = <32>; - rx-num-evt = <32>; status = "okay"; };
diff --git a/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts b/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts index 6593c5da82c0..df39f2b1ff6b 100644 --- a/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts +++ b/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts @@ -254,7 +254,7 @@ J721E_IOPAD(0x38, PIN_OUTPUT, 0) /* (Y21) MCAN3_TX */ }; };
-&main_pmx1 { +&main_pmx2 { main_usbss0_pins_default: main-usbss0-default-pins { pinctrl-single,pins = < J721E_IOPAD(0x04, PIN_OUTPUT, 0) /* (T4) USB0_DRVVBUS */ diff --git a/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi b/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi index 9386bf3ef9f6..1d11da926a87 100644 --- a/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi +++ b/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi @@ -426,10 +426,28 @@ main_pmx0: pinctrl@11c000 { pinctrl-single,function-mask = <0xffffffff>; };
- main_pmx1: pinctrl@11c11c { + main_pmx1: pinctrl@11c110 { compatible = "ti,j7200-padconf", "pinctrl-single"; /* Proxy 0 addressing */ - reg = <0x00 0x11c11c 0x00 0xc>; + reg = <0x00 0x11c110 0x00 0x004>; + #pinctrl-cells = <1>; + pinctrl-single,register-width = <32>; + pinctrl-single,function-mask = <0xffffffff>; + }; + + main_pmx2: pinctrl@11c11c { + compatible = "ti,j7200-padconf", "pinctrl-single"; + /* Proxy 0 addressing */ + reg = <0x00 0x11c11c 0x00 0x00c>; + #pinctrl-cells = <1>; + pinctrl-single,register-width = <32>; + pinctrl-single,function-mask = <0xffffffff>; + }; + + main_pmx3: pinctrl@11c164 { + compatible = "ti,j7200-padconf", "pinctrl-single"; + /* Proxy 0 addressing */ + reg = <0x00 0x11c164 0x00 0x008>; #pinctrl-cells = <1>; pinctrl-single,register-width = <32>; pinctrl-single,function-mask = <0xffffffff>; @@ -1145,7 +1163,7 @@ main_spi0: spi@2100000 { #address-cells = <1>; #size-cells = <0>; power-domains = <&k3_pds 266 TI_SCI_PD_EXCLUSIVE>; - clocks = <&k3_clks 266 1>; + clocks = <&k3_clks 266 4>; status = "disabled"; };
@@ -1156,7 +1174,7 @@ main_spi1: spi@2110000 { #address-cells = <1>; #size-cells = <0>; power-domains = <&k3_pds 267 TI_SCI_PD_EXCLUSIVE>; - clocks = <&k3_clks 267 1>; + clocks = <&k3_clks 267 4>; status = "disabled"; };
@@ -1167,7 +1185,7 @@ main_spi2: spi@2120000 { #address-cells = <1>; #size-cells = <0>; power-domains = <&k3_pds 268 TI_SCI_PD_EXCLUSIVE>; - clocks = <&k3_clks 268 1>; + clocks = <&k3_clks 268 4>; status = "disabled"; };
@@ -1178,7 +1196,7 @@ main_spi3: spi@2130000 { #address-cells = <1>; #size-cells = <0>; power-domains = <&k3_pds 269 TI_SCI_PD_EXCLUSIVE>; - clocks = <&k3_clks 269 1>; + clocks = <&k3_clks 269 4>; status = "disabled"; };
@@ -1189,7 +1207,7 @@ main_spi4: spi@2140000 { #address-cells = <1>; #size-cells = <0>; power-domains = <&k3_pds 270 TI_SCI_PD_EXCLUSIVE>; - clocks = <&k3_clks 270 1>; + clocks = <&k3_clks 270 2>; status = "disabled"; };
@@ -1200,7 +1218,7 @@ main_spi5: spi@2150000 { #address-cells = <1>; #size-cells = <0>; power-domains = <&k3_pds 271 TI_SCI_PD_EXCLUSIVE>; - clocks = <&k3_clks 271 1>; + clocks = <&k3_clks 271 4>; status = "disabled"; };
@@ -1211,7 +1229,7 @@ main_spi6: spi@2160000 { #address-cells = <1>; #size-cells = <0>; power-domains = <&k3_pds 272 TI_SCI_PD_EXCLUSIVE>; - clocks = <&k3_clks 272 1>; + clocks = <&k3_clks 272 4>; status = "disabled"; };
@@ -1222,7 +1240,7 @@ main_spi7: spi@2170000 { #address-cells = <1>; #size-cells = <0>; power-domains = <&k3_pds 273 TI_SCI_PD_EXCLUSIVE>; - clocks = <&k3_clks 273 1>; + clocks = <&k3_clks 273 4>; status = "disabled"; };
diff --git a/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi b/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi index 5097d192c2b2..b18b2f2deb96 100644 --- a/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi +++ b/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi @@ -494,7 +494,7 @@ mcu_spi0: spi@40300000 { #address-cells = <1>; #size-cells = <0>; power-domains = <&k3_pds 274 TI_SCI_PD_EXCLUSIVE>; - clocks = <&k3_clks 274 0>; + clocks = <&k3_clks 274 4>; status = "disabled"; };
@@ -505,7 +505,7 @@ mcu_spi1: spi@40310000 { #address-cells = <1>; #size-cells = <0>; power-domains = <&k3_pds 275 TI_SCI_PD_EXCLUSIVE>; - clocks = <&k3_clks 275 0>; + clocks = <&k3_clks 275 4>; status = "disabled"; };
@@ -516,7 +516,7 @@ mcu_spi2: spi@40320000 { #address-cells = <1>; #size-cells = <0>; power-domains = <&k3_pds 276 TI_SCI_PD_EXCLUSIVE>; - clocks = <&k3_clks 276 0>; + clocks = <&k3_clks 276 2>; status = "disabled"; };
diff --git a/arch/arm64/boot/dts/ti/k3-j721e-mcu-wakeup.dtsi b/arch/arm64/boot/dts/ti/k3-j721e-mcu-wakeup.dtsi index 3731ffb4a5c9..6f5c1401ebd6 100644 --- a/arch/arm64/boot/dts/ti/k3-j721e-mcu-wakeup.dtsi +++ b/arch/arm64/boot/dts/ti/k3-j721e-mcu-wakeup.dtsi @@ -654,7 +654,7 @@ mcu_spi0: spi@40300000 { #address-cells = <1>; #size-cells = <0>; power-domains = <&k3_pds 274 TI_SCI_PD_EXCLUSIVE>; - clocks = <&k3_clks 274 0>; + clocks = <&k3_clks 274 1>; status = "disabled"; };
@@ -665,7 +665,7 @@ mcu_spi1: spi@40310000 { #address-cells = <1>; #size-cells = <0>; power-domains = <&k3_pds 275 TI_SCI_PD_EXCLUSIVE>; - clocks = <&k3_clks 275 0>; + clocks = <&k3_clks 275 1>; status = "disabled"; };
@@ -676,7 +676,7 @@ mcu_spi2: spi@40320000 { #address-cells = <1>; #size-cells = <0>; power-domains = <&k3_pds 276 TI_SCI_PD_EXCLUSIVE>; - clocks = <&k3_clks 276 0>; + clocks = <&k3_clks 276 1>; status = "disabled"; };
diff --git a/arch/arm64/boot/dts/ti/k3-j721s2-main.dtsi b/arch/arm64/boot/dts/ti/k3-j721s2-main.dtsi index 9ed6949b40e9..fae534b5c8a4 100644 --- a/arch/arm64/boot/dts/ti/k3-j721s2-main.dtsi +++ b/arch/arm64/boot/dts/ti/k3-j721s2-main.dtsi @@ -1708,7 +1708,7 @@ main_spi0: spi@2100000 { #address-cells = <1>; #size-cells = <0>; power-domains = <&k3_pds 339 TI_SCI_PD_EXCLUSIVE>; - clocks = <&k3_clks 339 1>; + clocks = <&k3_clks 339 2>; status = "disabled"; };
@@ -1719,7 +1719,7 @@ main_spi1: spi@2110000 { #address-cells = <1>; #size-cells = <0>; power-domains = <&k3_pds 340 TI_SCI_PD_EXCLUSIVE>; - clocks = <&k3_clks 340 1>; + clocks = <&k3_clks 340 2>; status = "disabled"; };
@@ -1730,7 +1730,7 @@ main_spi2: spi@2120000 { #address-cells = <1>; #size-cells = <0>; power-domains = <&k3_pds 341 TI_SCI_PD_EXCLUSIVE>; - clocks = <&k3_clks 341 1>; + clocks = <&k3_clks 341 2>; status = "disabled"; };
@@ -1741,7 +1741,7 @@ main_spi3: spi@2130000 { #address-cells = <1>; #size-cells = <0>; power-domains = <&k3_pds 342 TI_SCI_PD_EXCLUSIVE>; - clocks = <&k3_clks 342 1>; + clocks = <&k3_clks 342 2>; status = "disabled"; };
@@ -1752,7 +1752,7 @@ main_spi4: spi@2140000 { #address-cells = <1>; #size-cells = <0>; power-domains = <&k3_pds 343 TI_SCI_PD_EXCLUSIVE>; - clocks = <&k3_clks 343 1>; + clocks = <&k3_clks 343 2>; status = "disabled"; };
@@ -1763,7 +1763,7 @@ main_spi5: spi@2150000 { #address-cells = <1>; #size-cells = <0>; power-domains = <&k3_pds 344 TI_SCI_PD_EXCLUSIVE>; - clocks = <&k3_clks 344 1>; + clocks = <&k3_clks 344 2>; status = "disabled"; };
@@ -1774,7 +1774,7 @@ main_spi6: spi@2160000 { #address-cells = <1>; #size-cells = <0>; power-domains = <&k3_pds 345 TI_SCI_PD_EXCLUSIVE>; - clocks = <&k3_clks 345 1>; + clocks = <&k3_clks 345 2>; status = "disabled"; };
@@ -1785,7 +1785,7 @@ main_spi7: spi@2170000 { #address-cells = <1>; #size-cells = <0>; power-domains = <&k3_pds 346 TI_SCI_PD_EXCLUSIVE>; - clocks = <&k3_clks 346 1>; + clocks = <&k3_clks 346 2>; status = "disabled"; };
diff --git a/arch/arm64/boot/dts/ti/k3-j721s2-mcu-wakeup.dtsi b/arch/arm64/boot/dts/ti/k3-j721s2-mcu-wakeup.dtsi index 9d96b19d0e7c..8232d308c23c 100644 --- a/arch/arm64/boot/dts/ti/k3-j721s2-mcu-wakeup.dtsi +++ b/arch/arm64/boot/dts/ti/k3-j721s2-mcu-wakeup.dtsi @@ -425,7 +425,7 @@ mcu_spi0: spi@40300000 { #address-cells = <1>; #size-cells = <0>; power-domains = <&k3_pds 347 TI_SCI_PD_EXCLUSIVE>; - clocks = <&k3_clks 347 0>; + clocks = <&k3_clks 347 2>; status = "disabled"; };
@@ -436,7 +436,7 @@ mcu_spi1: spi@40310000 { #address-cells = <1>; #size-cells = <0>; power-domains = <&k3_pds 348 TI_SCI_PD_EXCLUSIVE>; - clocks = <&k3_clks 348 0>; + clocks = <&k3_clks 348 2>; status = "disabled"; };
@@ -447,7 +447,7 @@ mcu_spi2: spi@40320000 { #address-cells = <1>; #size-cells = <0>; power-domains = <&k3_pds 349 TI_SCI_PD_EXCLUSIVE>; - clocks = <&k3_clks 349 0>; + clocks = <&k3_clks 349 2>; status = "disabled"; };
diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h index 8c0a36f72d6f..bc77869dbd43 100644 --- a/arch/arm64/include/asm/insn.h +++ b/arch/arm64/include/asm/insn.h @@ -353,6 +353,7 @@ __AARCH64_INSN_FUNCS(ldrsw_lit, 0xFF000000, 0x98000000) __AARCH64_INSN_FUNCS(exclusive, 0x3F800000, 0x08000000) __AARCH64_INSN_FUNCS(load_ex, 0x3F400000, 0x08400000) __AARCH64_INSN_FUNCS(store_ex, 0x3F400000, 0x08000000) +__AARCH64_INSN_FUNCS(mops, 0x3B200C00, 0x19000400) __AARCH64_INSN_FUNCS(stp, 0x7FC00000, 0x29000000) __AARCH64_INSN_FUNCS(ldp, 0x7FC00000, 0x29400000) __AARCH64_INSN_FUNCS(stp_post, 0x7FC00000, 0x28800000) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index bf64fed9820e..c315bc1a4e9a 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -74,8 +74,6 @@ enum kvm_mode kvm_get_mode(void); static inline enum kvm_mode kvm_get_mode(void) { return KVM_MODE_NONE; }; #endif
-DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use); - extern unsigned int __ro_after_init kvm_sve_max_vl; extern unsigned int __ro_after_init kvm_host_sve_max_vl; int __init kvm_arm_init_sve(void); diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 718728a85430..db994d1fd97e 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -228,6 +228,7 @@ static const struct arm64_ftr_bits ftr_id_aa64isar0[] = { };
static const struct arm64_ftr_bits ftr_id_aa64isar1[] = { + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_EL1_XS_SHIFT, 4, 0), ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_EL1_I8MM_SHIFT, 4, 0), ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_EL1_DGH_SHIFT, 4, 0), ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_EL1_BF16_SHIFT, 4, 0), diff --git a/arch/arm64/kernel/probes/decode-insn.c b/arch/arm64/kernel/probes/decode-insn.c index 3496d6169e59..42b69936cee3 100644 --- a/arch/arm64/kernel/probes/decode-insn.c +++ b/arch/arm64/kernel/probes/decode-insn.c @@ -58,10 +58,13 @@ static bool __kprobes aarch64_insn_is_steppable(u32 insn) * Instructions which load PC relative literals are not going to work * when executed from an XOL slot. Instructions doing an exclusive * load/store are not going to complete successfully when single-step - * exception handling happens in the middle of the sequence. + * exception handling happens in the middle of the sequence. Memory + * copy/set instructions require that all three instructions be placed + * consecutively in memory. */ if (aarch64_insn_uses_literal(insn) || - aarch64_insn_is_exclusive(insn)) + aarch64_insn_is_exclusive(insn) || + aarch64_insn_is_mops(insn)) return false;
return true; diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c index 3e7c8c8195c3..2bbcbb11d844 100644 --- a/arch/arm64/kernel/process.c +++ b/arch/arm64/kernel/process.c @@ -442,7 +442,7 @@ static void tls_thread_switch(struct task_struct *next)
if (is_compat_thread(task_thread_info(next))) write_sysreg(next->thread.uw.tp_value, tpidrro_el0); - else if (!arm64_kernel_unmapped_at_el0()) + else write_sysreg(0, tpidrro_el0);
write_sysreg(*task_user_tls(next), tpidr_el0); diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c index b22d28ec8028..87f61fd6783c 100644 --- a/arch/arm64/kernel/setup.c +++ b/arch/arm64/kernel/setup.c @@ -175,7 +175,11 @@ static void __init setup_machine_fdt(phys_addr_t dt_phys) if (dt_virt) memblock_reserve(dt_phys, size);
- if (!dt_virt || !early_init_dt_scan(dt_virt)) { + /* + * dt_virt is a fixmap address, hence __pa(dt_virt) can't be used. + * Pass dt_phys directly. + */ + if (!early_init_dt_scan(dt_virt, dt_phys)) { pr_crit("\n" "Error: invalid device tree blob at physical address %pa (virtual address 0x%px)\n" "The dtb must be 8-byte aligned and must not exceed 2 MB in size\n" diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S index 58d89d997d05..f84c71f04d9e 100644 --- a/arch/arm64/kernel/vmlinux.lds.S +++ b/arch/arm64/kernel/vmlinux.lds.S @@ -287,6 +287,9 @@ SECTIONS __initdata_end = .; __init_end = .;
+ .data.rel.ro : { *(.data.rel.ro) } + ASSERT(SIZEOF(.data.rel.ro) == 0, "Unexpected RELRO detected!") + _data = .; _sdata = .; RW_DATA(L1_CACHE_BYTES, PAGE_SIZE, THREAD_ALIGN) @@ -343,9 +346,6 @@ SECTIONS *(.plt) *(.plt.*) *(.iplt) *(.igot .igot.plt) } ASSERT(SIZEOF(.plt) == 0, "Unexpected run-time procedure linkages detected!") - - .data.rel.ro : { *(.data.rel.ro) } - ASSERT(SIZEOF(.data.rel.ro) == 0, "Unexpected RELRO detected!") }
#include "image-vars.h" diff --git a/arch/arm64/kvm/arch_timer.c b/arch/arm64/kvm/arch_timer.c index 879982b1cc73..1215df590418 100644 --- a/arch/arm64/kvm/arch_timer.c +++ b/arch/arm64/kvm/arch_timer.c @@ -206,8 +206,7 @@ void get_timer_map(struct kvm_vcpu *vcpu, struct timer_map *map)
static inline bool userspace_irqchip(struct kvm *kvm) { - return static_branch_unlikely(&userspace_irqchip_in_use) && - unlikely(!irqchip_in_kernel(kvm)); + return unlikely(!irqchip_in_kernel(kvm)); }
static void soft_timer_start(struct hrtimer *hrt, u64 ns) diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 48cafb65d6ac..70ff9a20ef3a 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -69,7 +69,6 @@ DECLARE_KVM_NVHE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt); static bool vgic_present, kvm_arm_initialised;
static DEFINE_PER_CPU(unsigned char, kvm_hyp_initialized); -DEFINE_STATIC_KEY_FALSE(userspace_irqchip_in_use);
bool is_kvm_arm_initialised(void) { @@ -503,9 +502,6 @@ void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu) { - if (vcpu_has_run_once(vcpu) && unlikely(!irqchip_in_kernel(vcpu->kvm))) - static_branch_dec(&userspace_irqchip_in_use); - kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache); kvm_timer_vcpu_terminate(vcpu); kvm_pmu_vcpu_destroy(vcpu); @@ -848,14 +844,6 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu) return ret; }
- if (!irqchip_in_kernel(kvm)) { - /* - * Tell the rest of the code that there are userspace irqchip - * VMs in the wild. - */ - static_branch_inc(&userspace_irqchip_in_use); - } - /* * Initialize traps for protected VMs. * NOTE: Move to run in EL2 directly, rather than via a hypercall, once @@ -1077,7 +1065,7 @@ static bool kvm_vcpu_exit_request(struct kvm_vcpu *vcpu, int *ret) * state gets updated in kvm_timer_update_run and * kvm_pmu_update_run below). */ - if (static_branch_unlikely(&userspace_irqchip_in_use)) { + if (unlikely(!irqchip_in_kernel(vcpu->kvm))) { if (kvm_timer_should_notify_user(vcpu) || kvm_pmu_should_notify_user(vcpu)) { *ret = -EINTR; @@ -1199,7 +1187,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) vcpu->mode = OUTSIDE_GUEST_MODE; isb(); /* Ensure work in x_flush_hwstate is committed */ kvm_pmu_sync_hwstate(vcpu); - if (static_branch_unlikely(&userspace_irqchip_in_use)) + if (unlikely(!irqchip_in_kernel(vcpu->kvm))) kvm_timer_sync_user(vcpu); kvm_vgic_sync_hwstate(vcpu); local_irq_enable(); @@ -1245,7 +1233,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) * we don't want vtimer interrupts to race with syncing the * timer virtual interrupt state. */ - if (static_branch_unlikely(&userspace_irqchip_in_use)) + if (unlikely(!irqchip_in_kernel(vcpu->kvm))) kvm_timer_sync_user(vcpu);
kvm_arch_vcpu_ctxsync_fp(vcpu); diff --git a/arch/arm64/kvm/mmio.c b/arch/arm64/kvm/mmio.c index cd6b7b83e2c3..ab365e839874 100644 --- a/arch/arm64/kvm/mmio.c +++ b/arch/arm64/kvm/mmio.c @@ -72,6 +72,31 @@ unsigned long kvm_mmio_read_buf(const void *buf, unsigned int len) return data; }
+static bool kvm_pending_sync_exception(struct kvm_vcpu *vcpu) +{ + if (!vcpu_get_flag(vcpu, PENDING_EXCEPTION)) + return false; + + if (vcpu_el1_is_32bit(vcpu)) { + switch (vcpu_get_flag(vcpu, EXCEPT_MASK)) { + case unpack_vcpu_flag(EXCEPT_AA32_UND): + case unpack_vcpu_flag(EXCEPT_AA32_IABT): + case unpack_vcpu_flag(EXCEPT_AA32_DABT): + return true; + default: + return false; + } + } else { + switch (vcpu_get_flag(vcpu, EXCEPT_MASK)) { + case unpack_vcpu_flag(EXCEPT_AA64_EL1_SYNC): + case unpack_vcpu_flag(EXCEPT_AA64_EL2_SYNC): + return true; + default: + return false; + } + } +} + /** * kvm_handle_mmio_return -- Handle MMIO loads after user space emulation * or in-kernel IO emulation @@ -84,8 +109,11 @@ int kvm_handle_mmio_return(struct kvm_vcpu *vcpu) unsigned int len; int mask;
- /* Detect an already handled MMIO return */ - if (unlikely(!vcpu->mmio_needed)) + /* + * Detect if the MMIO return was already handled or if userspace aborted + * the MMIO access. + */ + if (unlikely(!vcpu->mmio_needed || kvm_pending_sync_exception(vcpu))) return 1;
vcpu->mmio_needed = 0; diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c index ac36c438b8c1..3940fe893783 100644 --- a/arch/arm64/kvm/pmu-emul.c +++ b/arch/arm64/kvm/pmu-emul.c @@ -342,7 +342,6 @@ static u64 kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
if ((kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E)) { reg = __vcpu_sys_reg(vcpu, PMOVSSET_EL0); - reg &= __vcpu_sys_reg(vcpu, PMCNTENSET_EL0); reg &= __vcpu_sys_reg(vcpu, PMINTENSET_EL1); }
diff --git a/arch/arm64/kvm/vgic/vgic-its.c b/arch/arm64/kvm/vgic/vgic-its.c index ba945ba78cc7..198296933e7e 100644 --- a/arch/arm64/kvm/vgic/vgic-its.c +++ b/arch/arm64/kvm/vgic/vgic-its.c @@ -782,6 +782,9 @@ static int vgic_its_cmd_handle_discard(struct kvm *kvm, struct vgic_its *its,
ite = find_ite(its, device_id, event_id); if (ite && its_is_collection_mapped(ite->collection)) { + struct its_device *device = find_its_device(its, device_id); + int ite_esz = vgic_its_get_abi(its)->ite_esz; + gpa_t gpa = device->itt_addr + ite->event_id * ite_esz; /* * Though the spec talks about removing the pending state, we * don't bother here since we clear the ITTE anyway and the @@ -790,7 +793,8 @@ static int vgic_its_cmd_handle_discard(struct kvm *kvm, struct vgic_its *its, vgic_its_invalidate_cache(its);
its_free_ite(kvm, ite); - return 0; + + return vgic_its_write_entry_lock(its, gpa, 0, ite_esz); }
return E_ITS_DISCARD_UNMAPPED_INTERRUPT; @@ -1139,9 +1143,11 @@ static int vgic_its_cmd_handle_mapd(struct kvm *kvm, struct vgic_its *its, bool valid = its_cmd_get_validbit(its_cmd); u8 num_eventid_bits = its_cmd_get_size(its_cmd); gpa_t itt_addr = its_cmd_get_ittaddr(its_cmd); + int dte_esz = vgic_its_get_abi(its)->dte_esz; struct its_device *device; + gpa_t gpa;
- if (!vgic_its_check_id(its, its->baser_device_table, device_id, NULL)) + if (!vgic_its_check_id(its, its->baser_device_table, device_id, &gpa)) return E_ITS_MAPD_DEVICE_OOR;
if (valid && num_eventid_bits > VITS_TYPER_IDBITS) @@ -1162,7 +1168,7 @@ static int vgic_its_cmd_handle_mapd(struct kvm *kvm, struct vgic_its *its, * is an error, so we are done in any case. */ if (!valid) - return 0; + return vgic_its_write_entry_lock(its, gpa, 0, dte_esz);
device = vgic_its_alloc_device(its, device_id, itt_addr, num_eventid_bits); @@ -2086,7 +2092,6 @@ static int scan_its_table(struct vgic_its *its, gpa_t base, int size, u32 esz, static int vgic_its_save_ite(struct vgic_its *its, struct its_device *dev, struct its_ite *ite, gpa_t gpa, int ite_esz) { - struct kvm *kvm = its->dev->kvm; u32 next_offset; u64 val;
@@ -2095,7 +2100,8 @@ static int vgic_its_save_ite(struct vgic_its *its, struct its_device *dev, ((u64)ite->irq->intid << KVM_ITS_ITE_PINTID_SHIFT) | ite->collection->collection_id; val = cpu_to_le64(val); - return vgic_write_guest_lock(kvm, gpa, &val, ite_esz); + + return vgic_its_write_entry_lock(its, gpa, val, ite_esz); }
/** @@ -2239,7 +2245,6 @@ static int vgic_its_restore_itt(struct vgic_its *its, struct its_device *dev) static int vgic_its_save_dte(struct vgic_its *its, struct its_device *dev, gpa_t ptr, int dte_esz) { - struct kvm *kvm = its->dev->kvm; u64 val, itt_addr_field; u32 next_offset;
@@ -2250,7 +2255,8 @@ static int vgic_its_save_dte(struct vgic_its *its, struct its_device *dev, (itt_addr_field << KVM_ITS_DTE_ITTADDR_SHIFT) | (dev->num_eventid_bits - 1)); val = cpu_to_le64(val); - return vgic_write_guest_lock(kvm, ptr, &val, dte_esz); + + return vgic_its_write_entry_lock(its, ptr, val, dte_esz); }
/** @@ -2437,7 +2443,8 @@ static int vgic_its_save_cte(struct vgic_its *its, ((u64)collection->target_addr << KVM_ITS_CTE_RDBASE_SHIFT) | collection->collection_id); val = cpu_to_le64(val); - return vgic_write_guest_lock(its->dev->kvm, gpa, &val, esz); + + return vgic_its_write_entry_lock(its, gpa, val, esz); }
/* @@ -2453,8 +2460,7 @@ static int vgic_its_restore_cte(struct vgic_its *its, gpa_t gpa, int esz) u64 val; int ret;
- BUG_ON(esz > sizeof(val)); - ret = kvm_read_guest_lock(kvm, gpa, &val, esz); + ret = vgic_its_read_entry_lock(its, gpa, &val, esz); if (ret) return ret; val = le64_to_cpu(val); @@ -2492,7 +2498,6 @@ static int vgic_its_save_collection_table(struct vgic_its *its) u64 baser = its->baser_coll_table; gpa_t gpa = GITS_BASER_ADDR_48_to_52(baser); struct its_collection *collection; - u64 val; size_t max_size, filled = 0; int ret, cte_esz = abi->cte_esz;
@@ -2516,10 +2521,7 @@ static int vgic_its_save_collection_table(struct vgic_its *its) * table is not fully filled, add a last dummy element * with valid bit unset */ - val = 0; - BUG_ON(cte_esz > sizeof(val)); - ret = vgic_write_guest_lock(its->dev->kvm, gpa, &val, cte_esz); - return ret; + return vgic_its_write_entry_lock(its, gpa, 0, cte_esz); }
/* diff --git a/arch/arm64/kvm/vgic/vgic-mmio-v3.c b/arch/arm64/kvm/vgic/vgic-mmio-v3.c index 9e50928f5d7d..70a44852cbaf 100644 --- a/arch/arm64/kvm/vgic/vgic-mmio-v3.c +++ b/arch/arm64/kvm/vgic/vgic-mmio-v3.c @@ -530,6 +530,7 @@ static void vgic_mmio_write_invlpi(struct kvm_vcpu *vcpu, unsigned long val) { struct vgic_irq *irq; + u32 intid;
/* * If the guest wrote only to the upper 32bit part of the @@ -541,9 +542,13 @@ static void vgic_mmio_write_invlpi(struct kvm_vcpu *vcpu, if ((addr & 4) || !vgic_lpis_enabled(vcpu)) return;
+ intid = lower_32_bits(val); + if (intid < VGIC_MIN_LPI) + return; + vgic_set_rdist_busy(vcpu, true);
- irq = vgic_get_irq(vcpu->kvm, NULL, lower_32_bits(val)); + irq = vgic_get_irq(vcpu->kvm, NULL, intid); if (irq) { vgic_its_inv_lpi(vcpu->kvm, irq); vgic_put_irq(vcpu->kvm, irq); diff --git a/arch/arm64/kvm/vgic/vgic.h b/arch/arm64/kvm/vgic/vgic.h index f2486b4d9f95..309295f5e1b0 100644 --- a/arch/arm64/kvm/vgic/vgic.h +++ b/arch/arm64/kvm/vgic/vgic.h @@ -146,6 +146,29 @@ static inline int vgic_write_guest_lock(struct kvm *kvm, gpa_t gpa, return ret; }
+static inline int vgic_its_read_entry_lock(struct vgic_its *its, gpa_t eaddr, + u64 *eval, unsigned long esize) +{ + struct kvm *kvm = its->dev->kvm; + + if (KVM_BUG_ON(esize != sizeof(*eval), kvm)) + return -EINVAL; + + return kvm_read_guest_lock(kvm, eaddr, eval, esize); + +} + +static inline int vgic_its_write_entry_lock(struct vgic_its *its, gpa_t eaddr, + u64 eval, unsigned long esize) +{ + struct kvm *kvm = its->dev->kvm; + + if (KVM_BUG_ON(esize != sizeof(eval), kvm)) + return -EINVAL; + + return vgic_write_guest_lock(kvm, eaddr, &eval, esize); +} + /* * This struct provides an intermediate representation of the fields contained * in the GICH_VMCR and ICH_VMCR registers, such that code exporting the GIC diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c index 5db82bfc9dc1..27ef366363e4 100644 --- a/arch/arm64/net/bpf_jit_comp.c +++ b/arch/arm64/net/bpf_jit_comp.c @@ -2094,6 +2094,12 @@ static void restore_args(struct jit_ctx *ctx, int args_off, int nregs) } }
+static bool is_struct_ops_tramp(const struct bpf_tramp_links *fentry_links) +{ + return fentry_links->nr_links == 1 && + fentry_links->links[0]->link.type == BPF_LINK_TYPE_STRUCT_OPS; +} + /* Based on the x86's implementation of arch_prepare_bpf_trampoline(). * * bpf prog and function entry before bpf trampoline hooked: @@ -2123,6 +2129,7 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im, struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN]; bool save_ret; __le32 **branches = NULL; + bool is_struct_ops = is_struct_ops_tramp(fentry);
/* trampoline stack layout: * [ parent ip ] @@ -2191,11 +2198,14 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im, */ emit_bti(A64_BTI_JC, ctx);
- /* frame for parent function */ - emit(A64_PUSH(A64_FP, A64_R(9), A64_SP), ctx); - emit(A64_MOV(1, A64_FP, A64_SP), ctx); + /* x9 is not set for struct_ops */ + if (!is_struct_ops) { + /* frame for parent function */ + emit(A64_PUSH(A64_FP, A64_R(9), A64_SP), ctx); + emit(A64_MOV(1, A64_FP, A64_SP), ctx); + }
- /* frame for patched function */ + /* frame for patched function for tracing, or caller for struct_ops */ emit(A64_PUSH(A64_FP, A64_LR, A64_SP), ctx); emit(A64_MOV(1, A64_FP, A64_SP), ctx);
@@ -2289,19 +2299,24 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im, /* reset SP */ emit(A64_MOV(1, A64_SP, A64_FP), ctx);
- /* pop frames */ - emit(A64_POP(A64_FP, A64_LR, A64_SP), ctx); - emit(A64_POP(A64_FP, A64_R(9), A64_SP), ctx); - - if (flags & BPF_TRAMP_F_SKIP_FRAME) { - /* skip patched function, return to parent */ - emit(A64_MOV(1, A64_LR, A64_R(9)), ctx); - emit(A64_RET(A64_R(9)), ctx); + if (is_struct_ops) { + emit(A64_POP(A64_FP, A64_LR, A64_SP), ctx); + emit(A64_RET(A64_LR), ctx); } else { - /* return to patched function */ - emit(A64_MOV(1, A64_R(10), A64_LR), ctx); - emit(A64_MOV(1, A64_LR, A64_R(9)), ctx); - emit(A64_RET(A64_R(10)), ctx); + /* pop frames */ + emit(A64_POP(A64_FP, A64_LR, A64_SP), ctx); + emit(A64_POP(A64_FP, A64_R(9), A64_SP), ctx); + + if (flags & BPF_TRAMP_F_SKIP_FRAME) { + /* skip patched function, return to parent */ + emit(A64_MOV(1, A64_LR, A64_R(9)), ctx); + emit(A64_RET(A64_R(9)), ctx); + } else { + /* return to patched function */ + emit(A64_MOV(1, A64_R(10), A64_LR), ctx); + emit(A64_MOV(1, A64_LR, A64_R(9)), ctx); + emit(A64_RET(A64_R(10)), ctx); + } }
kfree(branches); diff --git a/arch/csky/kernel/setup.c b/arch/csky/kernel/setup.c index 51012e90780d..fe715b707fd0 100644 --- a/arch/csky/kernel/setup.c +++ b/arch/csky/kernel/setup.c @@ -112,9 +112,9 @@ asmlinkage __visible void __init csky_start(unsigned int unused, pre_trap_init();
if (dtb_start == NULL) - early_init_dt_scan(__dtb_start); + early_init_dt_scan(__dtb_start, __pa(dtb_start)); else - early_init_dt_scan(dtb_start); + early_init_dt_scan(dtb_start, __pa(dtb_start));
start_kernel();
diff --git a/arch/loongarch/Makefile b/arch/loongarch/Makefile index ae3f80622f4c..567bd122a9ee 100644 --- a/arch/loongarch/Makefile +++ b/arch/loongarch/Makefile @@ -59,7 +59,7 @@ endif
ifdef CONFIG_64BIT ld-emul = $(64bit-emul) -cflags-y += -mabi=lp64s +cflags-y += -mabi=lp64s -mcmodel=normal endif
cflags-y += -pipe $(CC_FLAGS_NO_FPU) @@ -104,7 +104,7 @@ ifdef CONFIG_OBJTOOL KBUILD_CFLAGS += -fno-jump-tables endif
-KBUILD_RUSTFLAGS += --target=loongarch64-unknown-none-softfloat +KBUILD_RUSTFLAGS += --target=loongarch64-unknown-none-softfloat -Ccode-model=small KBUILD_RUSTFLAGS_KERNEL += -Zdirect-access-external-data=yes KBUILD_RUSTFLAGS_MODULE += -Zdirect-access-external-data=no
diff --git a/arch/loongarch/kernel/setup.c b/arch/loongarch/kernel/setup.c index cbd3c09a93c1..56934fe58170 100644 --- a/arch/loongarch/kernel/setup.c +++ b/arch/loongarch/kernel/setup.c @@ -291,7 +291,7 @@ static void __init fdt_setup(void) if (!fdt_pointer || fdt_check_header(fdt_pointer)) return;
- early_init_dt_scan(fdt_pointer); + early_init_dt_scan(fdt_pointer, __pa(fdt_pointer)); early_init_fdt_reserve_self();
max_low_pfn = PFN_PHYS(memblock_end_of_DRAM()); diff --git a/arch/loongarch/net/bpf_jit.c b/arch/loongarch/net/bpf_jit.c index 7dbefd4ba210..dd350cba1252 100644 --- a/arch/loongarch/net/bpf_jit.c +++ b/arch/loongarch/net/bpf_jit.c @@ -179,7 +179,7 @@ static void __build_epilogue(struct jit_ctx *ctx, bool is_tail_call)
if (!is_tail_call) { /* Set return value */ - move_reg(ctx, LOONGARCH_GPR_A0, regmap[BPF_REG_0]); + emit_insn(ctx, addiw, LOONGARCH_GPR_A0, regmap[BPF_REG_0], 0); /* Return to the caller */ emit_insn(ctx, jirl, LOONGARCH_GPR_RA, LOONGARCH_GPR_ZERO, 0); } else { diff --git a/arch/loongarch/vdso/Makefile b/arch/loongarch/vdso/Makefile index 40c1175823d6..fdde1bcd4e26 100644 --- a/arch/loongarch/vdso/Makefile +++ b/arch/loongarch/vdso/Makefile @@ -19,7 +19,7 @@ ccflags-vdso := \ cflags-vdso := $(ccflags-vdso) \ -isystem $(shell $(CC) -print-file-name=include) \ $(filter -W%,$(filter-out -Wa$(comma)%,$(KBUILD_CFLAGS))) \ - -O2 -g -fno-strict-aliasing -fno-common -fno-builtin \ + -std=gnu11 -O2 -g -fno-strict-aliasing -fno-common -fno-builtin \ -fno-stack-protector -fno-jump-tables -DDISABLE_BRANCH_PROFILING \ $(call cc-option, -fno-asynchronous-unwind-tables) \ $(call cc-option, -fno-stack-protector) diff --git a/arch/m68k/coldfire/device.c b/arch/m68k/coldfire/device.c index 7dab46728aed..b6958ec2a220 100644 --- a/arch/m68k/coldfire/device.c +++ b/arch/m68k/coldfire/device.c @@ -93,7 +93,7 @@ static struct platform_device mcf_uart = { .dev.platform_data = mcf_uart_platform_data, };
-#if IS_ENABLED(CONFIG_FEC) +#ifdef MCFFEC_BASE0
#ifdef CONFIG_M5441x #define FEC_NAME "enet-fec" @@ -145,6 +145,7 @@ static struct platform_device mcf_fec0 = { .platform_data = FEC_PDATA, } }; +#endif /* MCFFEC_BASE0 */
#ifdef MCFFEC_BASE1 static struct resource mcf_fec1_resources[] = { @@ -182,7 +183,6 @@ static struct platform_device mcf_fec1 = { } }; #endif /* MCFFEC_BASE1 */ -#endif /* CONFIG_FEC */
#if IS_ENABLED(CONFIG_SPI_COLDFIRE_QSPI) /* @@ -624,12 +624,12 @@ static struct platform_device mcf_flexcan0 = {
static struct platform_device *mcf_devices[] __initdata = { &mcf_uart, -#if IS_ENABLED(CONFIG_FEC) +#ifdef MCFFEC_BASE0 &mcf_fec0, +#endif #ifdef MCFFEC_BASE1 &mcf_fec1, #endif -#endif #if IS_ENABLED(CONFIG_SPI_COLDFIRE_QSPI) &mcf_qspi, #endif diff --git a/arch/m68k/include/asm/mcfgpio.h b/arch/m68k/include/asm/mcfgpio.h index 019f24439546..9c91ecdafc45 100644 --- a/arch/m68k/include/asm/mcfgpio.h +++ b/arch/m68k/include/asm/mcfgpio.h @@ -136,7 +136,7 @@ static inline void gpio_free(unsigned gpio) * read-modify-write as well as those controlled by the EPORT and GPIO modules. */ #define MCFGPIO_SCR_START 40 -#elif defined(CONFIGM5441x) +#elif defined(CONFIG_M5441x) /* The m5441x EPORT doesn't have its own GPIO port, uses PORT C */ #define MCFGPIO_SCR_START 0 #else diff --git a/arch/m68k/include/asm/mvme147hw.h b/arch/m68k/include/asm/mvme147hw.h index e28eb1c0e0bf..dbf88059e47a 100644 --- a/arch/m68k/include/asm/mvme147hw.h +++ b/arch/m68k/include/asm/mvme147hw.h @@ -93,8 +93,8 @@ struct pcc_regs { #define M147_SCC_B_ADDR 0xfffe3000 #define M147_SCC_PCLK 5000000
-#define MVME147_IRQ_SCSI_PORT (IRQ_USER+0x45) -#define MVME147_IRQ_SCSI_DMA (IRQ_USER+0x46) +#define MVME147_IRQ_SCSI_PORT (IRQ_USER + 5) +#define MVME147_IRQ_SCSI_DMA (IRQ_USER + 6)
/* SCC interrupts, for MVME147 */
diff --git a/arch/m68k/kernel/early_printk.c b/arch/m68k/kernel/early_printk.c index 3cc944df04f6..f11ef9f1f56f 100644 --- a/arch/m68k/kernel/early_printk.c +++ b/arch/m68k/kernel/early_printk.c @@ -13,6 +13,7 @@ #include <asm/setup.h>
+#include "../mvme147/mvme147.h" #include "../mvme16x/mvme16x.h"
asmlinkage void __init debug_cons_nputs(const char *s, unsigned n); @@ -22,7 +23,9 @@ static void __ref debug_cons_write(struct console *c, { #if !(defined(CONFIG_SUN3) || defined(CONFIG_M68000) || \ defined(CONFIG_COLDFIRE)) - if (MACH_IS_MVME16x) + if (MACH_IS_MVME147) + mvme147_scc_write(c, s, n); + else if (MACH_IS_MVME16x) mvme16x_cons_write(c, s, n); else debug_cons_nputs(s, n); diff --git a/arch/m68k/mvme147/config.c b/arch/m68k/mvme147/config.c index 8b5dc07f0811..cc2fb0a83cf0 100644 --- a/arch/m68k/mvme147/config.c +++ b/arch/m68k/mvme147/config.c @@ -32,6 +32,7 @@ #include <asm/mvme147hw.h> #include <asm/config.h>
+#include "mvme147.h"
static void mvme147_get_model(char *model); extern void mvme147_sched_init(void); @@ -185,3 +186,32 @@ int mvme147_hwclk(int op, struct rtc_time *t) } return 0; } + +static void scc_delay(void) +{ + __asm__ __volatile__ ("nop; nop;"); +} + +static void scc_write(char ch) +{ + do { + scc_delay(); + } while (!(in_8(M147_SCC_A_ADDR) & BIT(2))); + scc_delay(); + out_8(M147_SCC_A_ADDR, 8); + scc_delay(); + out_8(M147_SCC_A_ADDR, ch); +} + +void mvme147_scc_write(struct console *co, const char *str, unsigned int count) +{ + unsigned long flags; + + local_irq_save(flags); + while (count--) { + if (*str == '\n') + scc_write('\r'); + scc_write(*str++); + } + local_irq_restore(flags); +} diff --git a/arch/m68k/mvme147/mvme147.h b/arch/m68k/mvme147/mvme147.h new file mode 100644 index 000000000000..140bc98b0102 --- /dev/null +++ b/arch/m68k/mvme147/mvme147.h @@ -0,0 +1,6 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ + +struct console; + +/* config.c */ +void mvme147_scc_write(struct console *co, const char *str, unsigned int count); diff --git a/arch/microblaze/kernel/microblaze_ksyms.c b/arch/microblaze/kernel/microblaze_ksyms.c index c892e173ec99..a8553f54152b 100644 --- a/arch/microblaze/kernel/microblaze_ksyms.c +++ b/arch/microblaze/kernel/microblaze_ksyms.c @@ -16,6 +16,7 @@ #include <asm/page.h> #include <linux/ftrace.h> #include <linux/uaccess.h> +#include <asm/xilinx_mb_manager.h>
#ifdef CONFIG_FUNCTION_TRACER extern void _mcount(void); @@ -46,3 +47,12 @@ extern void __udivsi3(void); EXPORT_SYMBOL(__udivsi3); extern void __umodsi3(void); EXPORT_SYMBOL(__umodsi3); + +#ifdef CONFIG_MB_MANAGER +extern void xmb_manager_register(uintptr_t phys_baseaddr, u32 cr_val, + void (*callback)(void *data), + void *priv, void (*reset_callback)(void *data)); +EXPORT_SYMBOL(xmb_manager_register); +extern asmlinkage void xmb_inject_err(void); +EXPORT_SYMBOL(xmb_inject_err); +#endif diff --git a/arch/microblaze/kernel/prom.c b/arch/microblaze/kernel/prom.c index e424c796e297..76ac4cfdfb42 100644 --- a/arch/microblaze/kernel/prom.c +++ b/arch/microblaze/kernel/prom.c @@ -18,7 +18,7 @@ void __init early_init_devtree(void *params) { pr_debug(" -> early_init_devtree(%p)\n", params);
- early_init_dt_scan(params); + early_init_dt_scan(params, __pa(params)); if (!strlen(boot_command_line)) strscpy(boot_command_line, cmd_line, COMMAND_LINE_SIZE);
diff --git a/arch/mips/include/asm/switch_to.h b/arch/mips/include/asm/switch_to.h index a4374b4cb88f..d6ccd5344021 100644 --- a/arch/mips/include/asm/switch_to.h +++ b/arch/mips/include/asm/switch_to.h @@ -97,7 +97,7 @@ do { \ } \ } while (0) #else -# define __sanitize_fcr31(next) +# define __sanitize_fcr31(next) do { (void) (next); } while (0) #endif
/* diff --git a/arch/mips/kernel/prom.c b/arch/mips/kernel/prom.c index 6062e6fa589a..4fd6da0a06c3 100644 --- a/arch/mips/kernel/prom.c +++ b/arch/mips/kernel/prom.c @@ -41,7 +41,7 @@ char *mips_get_machine_name(void)
void __init __dt_setup_arch(void *bph) { - if (!early_init_dt_scan(bph)) + if (!early_init_dt_scan(bph, __pa(bph))) return;
mips_set_machine_name(of_flat_dt_get_machine_name()); diff --git a/arch/mips/kernel/relocate.c b/arch/mips/kernel/relocate.c index 7eeeaf1ff95d..cda7983e7c18 100644 --- a/arch/mips/kernel/relocate.c +++ b/arch/mips/kernel/relocate.c @@ -337,7 +337,7 @@ void *__init relocate_kernel(void) #if defined(CONFIG_USE_OF) /* Deal with the device tree */ fdt = plat_get_fdt(); - early_init_dt_scan(fdt); + early_init_dt_scan(fdt, __pa(fdt)); if (boot_command_line[0]) { /* Boot command line was passed in device tree */ strscpy(arcs_cmdline, boot_command_line, COMMAND_LINE_SIZE); diff --git a/arch/nios2/kernel/prom.c b/arch/nios2/kernel/prom.c index 9a8393e6b4a8..db049249766f 100644 --- a/arch/nios2/kernel/prom.c +++ b/arch/nios2/kernel/prom.c @@ -27,7 +27,7 @@ void __init early_init_devtree(void *params) if (be32_to_cpup((__be32 *)CONFIG_NIOS2_DTB_PHYS_ADDR) == OF_DT_HEADER) { params = (void *)CONFIG_NIOS2_DTB_PHYS_ADDR; - early_init_dt_scan(params); + early_init_dt_scan(params, __pa(params)); return; } #endif @@ -37,5 +37,5 @@ void __init early_init_devtree(void *params) params = (void *)__dtb_start; #endif
- early_init_dt_scan(params); + early_init_dt_scan(params, __pa(params)); } diff --git a/arch/openrisc/Kconfig b/arch/openrisc/Kconfig index 69c0258700b2..3279ef457c57 100644 --- a/arch/openrisc/Kconfig +++ b/arch/openrisc/Kconfig @@ -65,6 +65,9 @@ config STACKTRACE_SUPPORT config LOCKDEP_SUPPORT def_bool y
+config FIX_EARLYCON_MEM + def_bool y + menu "Processor type and features"
choice diff --git a/arch/openrisc/include/asm/fixmap.h b/arch/openrisc/include/asm/fixmap.h index ecdb98a5839f..aaa6a26a3e92 100644 --- a/arch/openrisc/include/asm/fixmap.h +++ b/arch/openrisc/include/asm/fixmap.h @@ -26,29 +26,18 @@ #include <linux/bug.h> #include <asm/page.h>
-/* - * On OpenRISC we use these special fixed_addresses for doing ioremap - * early in the boot process before memory initialization is complete. - * This is used, in particular, by the early serial console code. - * - * It's not really 'fixmap', per se, but fits loosely into the same - * paradigm. - */ enum fixed_addresses { - /* - * FIX_IOREMAP entries are useful for mapping physical address - * space before ioremap() is useable, e.g. really early in boot - * before kmalloc() is working. - */ -#define FIX_N_IOREMAPS 32 - FIX_IOREMAP_BEGIN, - FIX_IOREMAP_END = FIX_IOREMAP_BEGIN + FIX_N_IOREMAPS - 1, + FIX_EARLYCON_MEM_BASE, __end_of_fixed_addresses };
#define FIXADDR_SIZE (__end_of_fixed_addresses << PAGE_SHIFT) /* FIXADDR_BOTTOM might be a better name here... */ #define FIXADDR_START (FIXADDR_TOP - FIXADDR_SIZE) +#define FIXMAP_PAGE_IO PAGE_KERNEL_NOCACHE + +extern void __set_fixmap(enum fixed_addresses idx, + phys_addr_t phys, pgprot_t flags);
#include <asm-generic/fixmap.h>
diff --git a/arch/openrisc/kernel/prom.c b/arch/openrisc/kernel/prom.c index 19e6008bf114..e424e9bd12a7 100644 --- a/arch/openrisc/kernel/prom.c +++ b/arch/openrisc/kernel/prom.c @@ -22,6 +22,6 @@
void __init early_init_devtree(void *params) { - early_init_dt_scan(params); + early_init_dt_scan(params, __pa(params)); memblock_allow_resize(); } diff --git a/arch/openrisc/mm/init.c b/arch/openrisc/mm/init.c index 1dcd78c8f0e9..d0cb1a0126f9 100644 --- a/arch/openrisc/mm/init.c +++ b/arch/openrisc/mm/init.c @@ -207,6 +207,43 @@ void __init mem_init(void) return; }
+static int __init map_page(unsigned long va, phys_addr_t pa, pgprot_t prot) +{ + p4d_t *p4d; + pud_t *pud; + pmd_t *pmd; + pte_t *pte; + + p4d = p4d_offset(pgd_offset_k(va), va); + pud = pud_offset(p4d, va); + pmd = pmd_offset(pud, va); + pte = pte_alloc_kernel(pmd, va); + + if (pte == NULL) + return -ENOMEM; + + if (pgprot_val(prot)) + set_pte_at(&init_mm, va, pte, pfn_pte(pa >> PAGE_SHIFT, prot)); + else + pte_clear(&init_mm, va, pte); + + local_flush_tlb_page(NULL, va); + return 0; +} + +void __init __set_fixmap(enum fixed_addresses idx, + phys_addr_t phys, pgprot_t prot) +{ + unsigned long address = __fix_to_virt(idx); + + if (idx >= __end_of_fixed_addresses) { + BUG(); + return; + } + + map_page(address, phys, prot); +} + static const pgprot_t protection_map[16] = { [VM_NONE] = PAGE_NONE, [VM_READ] = PAGE_READONLY_X, diff --git a/arch/parisc/kernel/ftrace.c b/arch/parisc/kernel/ftrace.c index c91f9c2e61ed..f8d08eab7db8 100644 --- a/arch/parisc/kernel/ftrace.c +++ b/arch/parisc/kernel/ftrace.c @@ -87,7 +87,7 @@ int ftrace_enable_ftrace_graph_caller(void)
int ftrace_disable_ftrace_graph_caller(void) { - static_key_enable(&ftrace_graph_enable.key); + static_key_disable(&ftrace_graph_enable.key); return 0; } #endif diff --git a/arch/powerpc/include/asm/dtl.h b/arch/powerpc/include/asm/dtl.h index d6f43d149f8d..a5c21bc623cb 100644 --- a/arch/powerpc/include/asm/dtl.h +++ b/arch/powerpc/include/asm/dtl.h @@ -1,8 +1,8 @@ #ifndef _ASM_POWERPC_DTL_H #define _ASM_POWERPC_DTL_H
+#include <linux/rwsem.h> #include <asm/lppaca.h> -#include <linux/spinlock_types.h>
/* * Layout of entries in the hypervisor's dispatch trace log buffer. @@ -35,7 +35,7 @@ struct dtl_entry { #define DTL_LOG_ALL (DTL_LOG_CEDE | DTL_LOG_PREEMPT | DTL_LOG_FAULT)
extern struct kmem_cache *dtl_cache; -extern rwlock_t dtl_access_lock; +extern struct rw_semaphore dtl_access_lock;
extern void register_dtl_buffer(int cpu); extern void alloc_dtl_buffers(unsigned long *time_limit); diff --git a/arch/powerpc/include/asm/fadump.h b/arch/powerpc/include/asm/fadump.h index ef40c9b6972a..a48f54dde4f6 100644 --- a/arch/powerpc/include/asm/fadump.h +++ b/arch/powerpc/include/asm/fadump.h @@ -19,6 +19,7 @@ extern int is_fadump_active(void); extern int should_fadump_crash(void); extern void crash_fadump(struct pt_regs *, const char *); extern void fadump_cleanup(void); +void fadump_setup_param_area(void); extern void fadump_append_bootargs(void);
#else /* CONFIG_FA_DUMP */ @@ -26,6 +27,7 @@ static inline int is_fadump_active(void) { return 0; } static inline int should_fadump_crash(void) { return 0; } static inline void crash_fadump(struct pt_regs *regs, const char *str) { } static inline void fadump_cleanup(void) { } +static inline void fadump_setup_param_area(void) { } static inline void fadump_append_bootargs(void) { } #endif /* !CONFIG_FA_DUMP */
@@ -34,4 +36,11 @@ extern int early_init_dt_scan_fw_dump(unsigned long node, const char *uname, int depth, void *data); extern int fadump_reserve_mem(void); #endif + +#if defined(CONFIG_FA_DUMP) && defined(CONFIG_CMA) +void fadump_cma_init(void); +#else +static inline void fadump_cma_init(void) { } +#endif + #endif /* _ASM_POWERPC_FADUMP_H */ diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h index 2ef9a5f4e5d1..11065313d4c1 100644 --- a/arch/powerpc/include/asm/kvm_book3s_64.h +++ b/arch/powerpc/include/asm/kvm_book3s_64.h @@ -684,8 +684,8 @@ int kvmhv_nestedv2_set_ptbl_entry(unsigned long lpid, u64 dw0, u64 dw1); int kvmhv_nestedv2_parse_output(struct kvm_vcpu *vcpu); int kvmhv_nestedv2_set_vpa(struct kvm_vcpu *vcpu, unsigned long vpa);
-int kmvhv_counters_tracepoint_regfunc(void); -void kmvhv_counters_tracepoint_unregfunc(void); +int kvmhv_counters_tracepoint_regfunc(void); +void kvmhv_counters_tracepoint_unregfunc(void); int kvmhv_get_l2_counters_status(void); void kvmhv_set_l2_counters_status(int cpu, bool status);
diff --git a/arch/powerpc/include/asm/sstep.h b/arch/powerpc/include/asm/sstep.h index 50950deedb87..e3d0e714ff28 100644 --- a/arch/powerpc/include/asm/sstep.h +++ b/arch/powerpc/include/asm/sstep.h @@ -173,9 +173,4 @@ int emulate_step(struct pt_regs *regs, ppc_inst_t instr); */ extern int emulate_loadstore(struct pt_regs *regs, struct instruction_op *op);
-extern void emulate_vsx_load(struct instruction_op *op, union vsx_reg *reg, - const void *mem, bool cross_endian); -extern void emulate_vsx_store(struct instruction_op *op, - const union vsx_reg *reg, void *mem, - bool cross_endian); extern int emulate_dcbz(unsigned long ea, struct pt_regs *regs); diff --git a/arch/powerpc/include/asm/vdso.h b/arch/powerpc/include/asm/vdso.h index 7650b6ce14c8..8d972bc98b55 100644 --- a/arch/powerpc/include/asm/vdso.h +++ b/arch/powerpc/include/asm/vdso.h @@ -25,6 +25,7 @@ int vdso_getcpu_init(void); #ifdef __VDSO64__ #define V_FUNCTION_BEGIN(name) \ .globl name; \ + .type name,@function; \ name: \
#define V_FUNCTION_END(name) \ diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c b/arch/powerpc/kernel/dt_cpu_ftrs.c index af4263594eb2..1bee15c013e7 100644 --- a/arch/powerpc/kernel/dt_cpu_ftrs.c +++ b/arch/powerpc/kernel/dt_cpu_ftrs.c @@ -867,7 +867,7 @@ bool __init dt_cpu_ftrs_init(void *fdt) using_dt_cpu_ftrs = false;
/* Setup and verify the FDT, if it fails we just bail */ - if (!early_init_dt_verify(fdt)) + if (!early_init_dt_verify(fdt, __pa(fdt))) return false;
if (!of_scan_flat_dt(fdt_find_cpu_features, NULL)) diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c index a612e7513a4f..4641de75f7fc 100644 --- a/arch/powerpc/kernel/fadump.c +++ b/arch/powerpc/kernel/fadump.c @@ -78,27 +78,23 @@ static struct cma *fadump_cma; * But for some reason even if it fails we still have the memory reservation * with us and we can still continue doing fadump. */ -static int __init fadump_cma_init(void) +void __init fadump_cma_init(void) { unsigned long long base, size; int rc;
- if (!fw_dump.fadump_enabled) - return 0; - + if (!fw_dump.fadump_supported || !fw_dump.fadump_enabled || + fw_dump.dump_active) + return; /* * Do not use CMA if user has provided fadump=nocma kernel parameter. - * Return 1 to continue with fadump old behaviour. */ - if (fw_dump.nocma) - return 1; + if (fw_dump.nocma || !fw_dump.boot_memory_size) + return;
base = fw_dump.reserve_dump_area_start; size = fw_dump.boot_memory_size;
- if (!size) - return 0; - rc = cma_init_reserved_mem(base, size, 0, "fadump_cma", &fadump_cma); if (rc) { pr_err("Failed to init cma area for firmware-assisted dump,%d\n", rc); @@ -108,7 +104,7 @@ static int __init fadump_cma_init(void) * blocked from production system usage. Hence return 1, * so that we can continue with fadump. */ - return 1; + return; }
/* @@ -125,10 +121,7 @@ static int __init fadump_cma_init(void) cma_get_size(fadump_cma), (unsigned long)cma_get_base(fadump_cma) >> 20, fw_dump.reserve_dump_area_size); - return 1; } -#else -static int __init fadump_cma_init(void) { return 1; } #endif /* CONFIG_CMA */
/* @@ -143,7 +136,7 @@ void __init fadump_append_bootargs(void) if (!fw_dump.dump_active || !fw_dump.param_area_supported || !fw_dump.param_area) return;
- if (fw_dump.param_area >= fw_dump.boot_mem_top) { + if (fw_dump.param_area < fw_dump.boot_mem_top) { if (memblock_reserve(fw_dump.param_area, COMMAND_LINE_SIZE)) { pr_warn("WARNING: Can't use additional parameters area!\n"); fw_dump.param_area = 0; @@ -637,8 +630,6 @@ int __init fadump_reserve_mem(void)
pr_info("Reserved %lldMB of memory at %#016llx (System RAM: %lldMB)\n", (size >> 20), base, (memblock_phys_mem_size() >> 20)); - - ret = fadump_cma_init(); }
return ret; @@ -1586,6 +1577,12 @@ static void __init fadump_init_files(void) return; }
+ if (fw_dump.param_area) { + rc = sysfs_create_file(fadump_kobj, &bootargs_append_attr.attr); + if (rc) + pr_err("unable to create bootargs_append sysfs file (%d)\n", rc); + } + debugfs_create_file("fadump_region", 0444, arch_debugfs_dir, NULL, &fadump_region_fops);
@@ -1740,7 +1737,7 @@ static void __init fadump_process(void) * Reserve memory to store additional parameters to be passed * for fadump/capture kernel. */ -static void __init fadump_setup_param_area(void) +void __init fadump_setup_param_area(void) { phys_addr_t range_start, range_end;
@@ -1748,7 +1745,7 @@ static void __init fadump_setup_param_area(void) return;
/* This memory can't be used by PFW or bootloader as it is shared across kernels */ - if (radix_enabled()) { + if (early_radix_enabled()) { /* * Anywhere in the upper half should be good enough as all memory * is accessible in real mode. @@ -1776,12 +1773,12 @@ static void __init fadump_setup_param_area(void) COMMAND_LINE_SIZE, range_start, range_end); - if (!fw_dump.param_area || sysfs_create_file(fadump_kobj, &bootargs_append_attr.attr)) { + if (!fw_dump.param_area) { pr_warn("WARNING: Could not setup area to pass additional parameters!\n"); return; }
- memset(phys_to_virt(fw_dump.param_area), 0, COMMAND_LINE_SIZE); + memset((void *)fw_dump.param_area, 0, COMMAND_LINE_SIZE); }
/* @@ -1807,7 +1804,6 @@ int __init setup_fadump(void) } /* Initialize the kernel dump memory structure and register with f/w */ else if (fw_dump.reserve_dump_area_size) { - fadump_setup_param_area(); fw_dump.ops->fadump_init_mem_struct(&fw_dump); register_fadump(); } diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c index 0be07ed407c7..e0059842a1c6 100644 --- a/arch/powerpc/kernel/prom.c +++ b/arch/powerpc/kernel/prom.c @@ -791,7 +791,7 @@ void __init early_init_devtree(void *params) DBG(" -> early_init_devtree(%px)\n", params);
/* Too early to BUG_ON(), do it by hand */ - if (!early_init_dt_verify(params)) + if (!early_init_dt_verify(params, __pa(params))) panic("BUG: Failed verifying flat device tree, bad version?");
of_scan_flat_dt(early_init_dt_scan_model, NULL); @@ -908,6 +908,9 @@ void __init early_init_devtree(void *params)
mmu_early_init_devtree();
+ /* Setup param area for passing additional parameters to fadump capture kernel. */ + fadump_setup_param_area(); + #ifdef CONFIG_PPC_POWERNV /* Scan and build the list of machine check recoverable ranges */ of_scan_flat_dt(early_init_dt_scan_recoverable_ranges, NULL); diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c index 943430077375..b6b01502e504 100644 --- a/arch/powerpc/kernel/setup-common.c +++ b/arch/powerpc/kernel/setup-common.c @@ -997,9 +997,11 @@ void __init setup_arch(char **cmdline_p) initmem_init();
/* - * Reserve large chunks of memory for use by CMA for KVM and hugetlb. These must - * be called after initmem_init(), so that pageblock_order is initialised. + * Reserve large chunks of memory for use by CMA for fadump, KVM and + * hugetlb. These must be called after initmem_init(), so that + * pageblock_order is initialised. */ + fadump_cma_init(); kvm_cma_reserve(); gigantic_hugetlb_cma_reserve();
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c index 22f83fbbc762..1edc7cd68c10 100644 --- a/arch/powerpc/kernel/setup_64.c +++ b/arch/powerpc/kernel/setup_64.c @@ -920,6 +920,7 @@ static int __init disable_hardlockup_detector(void) hardlockup_detector_disable(); #else if (firmware_has_feature(FW_FEATURE_LPAR)) { + check_kvm_guest(); if (is_kvm_guest()) hardlockup_detector_disable(); } diff --git a/arch/powerpc/kexec/file_load_64.c b/arch/powerpc/kexec/file_load_64.c index 9738adabeb1f..dc65c1391157 100644 --- a/arch/powerpc/kexec/file_load_64.c +++ b/arch/powerpc/kexec/file_load_64.c @@ -736,13 +736,18 @@ int setup_purgatory_ppc64(struct kimage *image, const void *slave_code, if (dn) { u64 val;
- of_property_read_u64(dn, "opal-base-address", &val); + ret = of_property_read_u64(dn, "opal-base-address", &val); + if (ret) + goto out; + ret = kexec_purgatory_get_set_symbol(image, "opal_base", &val, sizeof(val), false); if (ret) goto out;
- of_property_read_u64(dn, "opal-entry-address", &val); + ret = of_property_read_u64(dn, "opal-entry-address", &val); + if (ret) + goto out; ret = kexec_purgatory_get_set_symbol(image, "opal_entry", &val, sizeof(val), false); } diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index ad8dc4ccdaab..57b6c1ba84d4 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4154,7 +4154,7 @@ void kvmhv_set_l2_counters_status(int cpu, bool status) lppaca_of(cpu).l2_counters_enable = 0; }
-int kmvhv_counters_tracepoint_regfunc(void) +int kvmhv_counters_tracepoint_regfunc(void) { int cpu;
@@ -4164,7 +4164,7 @@ int kmvhv_counters_tracepoint_regfunc(void) return 0; }
-void kmvhv_counters_tracepoint_unregfunc(void) +void kvmhv_counters_tracepoint_unregfunc(void) { int cpu;
@@ -4309,6 +4309,15 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns } hvregs.hdec_expiry = time_limit;
+ /* + * hvregs has the doorbell status, so zero it here which + * enables us to receive doorbells when H_ENTER_NESTED is + * in progress for this vCPU + */ + + if (vcpu->arch.doorbell_request) + vcpu->arch.doorbell_request = 0; + /* * When setting DEC, we must always deal with irq_work_raise * via NMI vs setting DEC. The problem occurs right as we @@ -4912,7 +4921,6 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, lpcr &= ~LPCR_MER; } } else if (vcpu->arch.pending_exceptions || - vcpu->arch.doorbell_request || xive_interrupt_pending(vcpu)) { vcpu->arch.ret = RESUME_HOST; goto out; diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c index 05f5220960c6..125440a606ee 100644 --- a/arch/powerpc/kvm/book3s_hv_nested.c +++ b/arch/powerpc/kvm/book3s_hv_nested.c @@ -32,7 +32,7 @@ void kvmhv_save_hv_regs(struct kvm_vcpu *vcpu, struct hv_guest_state *hr) struct kvmppc_vcore *vc = vcpu->arch.vcore;
hr->pcr = vc->pcr | PCR_MASK; - hr->dpdes = vc->dpdes; + hr->dpdes = vcpu->arch.doorbell_request; hr->hfscr = vcpu->arch.hfscr; hr->tb_offset = vc->tb_offset; hr->dawr0 = vcpu->arch.dawr0; @@ -105,7 +105,7 @@ static void save_hv_return_state(struct kvm_vcpu *vcpu, { struct kvmppc_vcore *vc = vcpu->arch.vcore;
- hr->dpdes = vc->dpdes; + hr->dpdes = vcpu->arch.doorbell_request; hr->purr = vcpu->arch.purr; hr->spurr = vcpu->arch.spurr; hr->ic = vcpu->arch.ic; @@ -143,7 +143,7 @@ static void restore_hv_regs(struct kvm_vcpu *vcpu, const struct hv_guest_state * struct kvmppc_vcore *vc = vcpu->arch.vcore;
vc->pcr = hr->pcr | PCR_MASK; - vc->dpdes = hr->dpdes; + vcpu->arch.doorbell_request = hr->dpdes; vcpu->arch.hfscr = hr->hfscr; vcpu->arch.dawr0 = hr->dawr0; vcpu->arch.dawrx0 = hr->dawrx0; @@ -170,7 +170,13 @@ void kvmhv_restore_hv_return_state(struct kvm_vcpu *vcpu, { struct kvmppc_vcore *vc = vcpu->arch.vcore;
- vc->dpdes = hr->dpdes; + /* + * This L2 vCPU might have received a doorbell while H_ENTER_NESTED was being handled. + * Make sure we preserve the doorbell if it was either: + * a) Sent after H_ENTER_NESTED was called on this vCPU (arch.doorbell_request would be 1) + * b) Doorbell was not handled and L2 exited for some other reason (hr->dpdes would be 1) + */ + vcpu->arch.doorbell_request = vcpu->arch.doorbell_request | hr->dpdes; vcpu->arch.hfscr = hr->hfscr; vcpu->arch.purr = hr->purr; vcpu->arch.spurr = hr->spurr; diff --git a/arch/powerpc/kvm/trace_hv.h b/arch/powerpc/kvm/trace_hv.h index 77ebc724e6cd..35fccaa575cc 100644 --- a/arch/powerpc/kvm/trace_hv.h +++ b/arch/powerpc/kvm/trace_hv.h @@ -538,7 +538,7 @@ TRACE_EVENT_FN_COND(kvmppc_vcpu_stats, TP_printk("VCPU %d: l1_to_l2_cs_time=%llu ns l2_to_l1_cs_time=%llu ns l2_runtime=%llu ns", __entry->vcpu_id, __entry->l1_to_l2_cs, __entry->l2_to_l1_cs, __entry->l2_runtime), - kmvhv_counters_tracepoint_regfunc, kmvhv_counters_tracepoint_unregfunc + kvmhv_counters_tracepoint_regfunc, kvmhv_counters_tracepoint_unregfunc ); #endif #endif /* _TRACE_KVM_HV_H */ diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c index e65f3fb68d06..ac3ee19531d8 100644 --- a/arch/powerpc/lib/sstep.c +++ b/arch/powerpc/lib/sstep.c @@ -780,8 +780,8 @@ static nokprobe_inline int emulate_stq(struct pt_regs *regs, unsigned long ea, #endif /* __powerpc64 */
#ifdef CONFIG_VSX -void emulate_vsx_load(struct instruction_op *op, union vsx_reg *reg, - const void *mem, bool rev) +static nokprobe_inline void emulate_vsx_load(struct instruction_op *op, union vsx_reg *reg, + const void *mem, bool rev) { int size, read_size; int i, j; @@ -863,11 +863,9 @@ void emulate_vsx_load(struct instruction_op *op, union vsx_reg *reg, break; } } -EXPORT_SYMBOL_GPL(emulate_vsx_load); -NOKPROBE_SYMBOL(emulate_vsx_load);
-void emulate_vsx_store(struct instruction_op *op, const union vsx_reg *reg, - void *mem, bool rev) +static nokprobe_inline void emulate_vsx_store(struct instruction_op *op, const union vsx_reg *reg, + void *mem, bool rev) { int size, write_size; int i, j; @@ -955,8 +953,6 @@ void emulate_vsx_store(struct instruction_op *op, const union vsx_reg *reg, break; } } -EXPORT_SYMBOL_GPL(emulate_vsx_store); -NOKPROBE_SYMBOL(emulate_vsx_store);
static nokprobe_inline int do_vsx_load(struct instruction_op *op, unsigned long ea, struct pt_regs *regs, diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c index 81c77ddce2e3..c156fe0d53c3 100644 --- a/arch/powerpc/mm/fault.c +++ b/arch/powerpc/mm/fault.c @@ -439,10 +439,16 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address, /* * The kernel should never take an execute fault nor should it * take a page fault to a kernel address or a page fault to a user - * address outside of dedicated places + * address outside of dedicated places. + * + * Rather than kfence directly reporting false negatives, search whether + * the NIP belongs to the fixup table for cases where fault could come + * from functions like copy_from_kernel_nofault(). */ if (unlikely(!is_user && bad_kernel_fault(regs, error_code, address, is_write))) { - if (kfence_handle_page_fault(address, is_write, regs)) + if (is_kfence_address((void *)address) && + !search_exception_tables(instruction_pointer(regs)) && + kfence_handle_page_fault(address, is_write, regs)) return 0;
return SIGSEGV; diff --git a/arch/powerpc/platforms/pseries/dtl.c b/arch/powerpc/platforms/pseries/dtl.c index 8cb9d36ea491..f293588b8c7b 100644 --- a/arch/powerpc/platforms/pseries/dtl.c +++ b/arch/powerpc/platforms/pseries/dtl.c @@ -191,7 +191,7 @@ static int dtl_enable(struct dtl *dtl) return -EBUSY;
/* ensure there are no other conflicting dtl users */ - if (!read_trylock(&dtl_access_lock)) + if (!down_read_trylock(&dtl_access_lock)) return -EBUSY;
n_entries = dtl_buf_entries; @@ -199,7 +199,7 @@ static int dtl_enable(struct dtl *dtl) if (!buf) { printk(KERN_WARNING "%s: buffer alloc failed for cpu %d\n", __func__, dtl->cpu); - read_unlock(&dtl_access_lock); + up_read(&dtl_access_lock); return -ENOMEM; }
@@ -217,7 +217,7 @@ static int dtl_enable(struct dtl *dtl) spin_unlock(&dtl->lock);
if (rc) { - read_unlock(&dtl_access_lock); + up_read(&dtl_access_lock); kmem_cache_free(dtl_cache, buf); }
@@ -232,7 +232,7 @@ static void dtl_disable(struct dtl *dtl) dtl->buf = NULL; dtl->buf_entries = 0; spin_unlock(&dtl->lock); - read_unlock(&dtl_access_lock); + up_read(&dtl_access_lock); }
/* file interface */ diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c index c1d8bee8f701..bb09990eec30 100644 --- a/arch/powerpc/platforms/pseries/lpar.c +++ b/arch/powerpc/platforms/pseries/lpar.c @@ -169,7 +169,7 @@ struct vcpu_dispatch_data { */ #define NR_CPUS_H NR_CPUS
-DEFINE_RWLOCK(dtl_access_lock); +DECLARE_RWSEM(dtl_access_lock); static DEFINE_PER_CPU(struct vcpu_dispatch_data, vcpu_disp_data); static DEFINE_PER_CPU(u64, dtl_entry_ridx); static DEFINE_PER_CPU(struct dtl_worker, dtl_workers); @@ -463,7 +463,7 @@ static int dtl_worker_enable(unsigned long *time_limit) { int rc = 0, state;
- if (!write_trylock(&dtl_access_lock)) { + if (!down_write_trylock(&dtl_access_lock)) { rc = -EBUSY; goto out; } @@ -479,7 +479,7 @@ static int dtl_worker_enable(unsigned long *time_limit) pr_err("vcpudispatch_stats: unable to setup workqueue for DTL processing\n"); free_dtl_buffers(time_limit); reset_global_dtl_mask(); - write_unlock(&dtl_access_lock); + up_write(&dtl_access_lock); rc = -EINVAL; goto out; } @@ -494,7 +494,7 @@ static void dtl_worker_disable(unsigned long *time_limit) cpuhp_remove_state(dtl_worker_state); free_dtl_buffers(time_limit); reset_global_dtl_mask(); - write_unlock(&dtl_access_lock); + up_write(&dtl_access_lock); }
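[Side note, not part of the patch: dtl_access_lock becomes a read-write semaphore here because the paths holding it can sleep (they allocate memory), which a spinning rwlock does not allow; the try-acquire pairing itself is unchanged. A rough userspace analog of that reader/writer exclusion using POSIX rwlocks, which mirrors only the trylock pattern, not the sleep-versus-spin distinction:

#include <pthread.h>
#include <stdio.h>
#include <errno.h>

static pthread_rwlock_t access_lock = PTHREAD_RWLOCK_INITIALIZER;

/* Per-CPU style consumer: shared access, bail out if an exclusive user holds it. */
static int consumer_enable(void)
{
	if (pthread_rwlock_tryrdlock(&access_lock) != 0)
		return -EBUSY;
	/* ... buffer setup that may block ... */
	pthread_rwlock_unlock(&access_lock);
	return 0;
}

/* Global-stats style user: exclusive access, bail out if anyone else is active. */
static int global_enable(void)
{
	if (pthread_rwlock_trywrlock(&access_lock) != 0)
		return -EBUSY;
	/* ... set up per-CPU workers ... */
	pthread_rwlock_unlock(&access_lock);
	return 0;
}

int main(void)
{
	printf("consumer: %d, global: %d\n", consumer_enable(), global_enable());
	return 0;
}
]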
static ssize_t vcpudispatch_stats_write(struct file *file, const char __user *p, diff --git a/arch/powerpc/platforms/pseries/plpks.c b/arch/powerpc/platforms/pseries/plpks.c index 4a595493d28a..b1667ed05f98 100644 --- a/arch/powerpc/platforms/pseries/plpks.c +++ b/arch/powerpc/platforms/pseries/plpks.c @@ -683,7 +683,7 @@ void __init plpks_early_init_devtree(void) out: fdt_nop_property(fdt, chosen_node, "ibm,plpks-pw"); // Since we've cleared the password, we must update the FDT checksum - early_init_dt_verify(fdt); + early_init_dt_verify(fdt, __pa(fdt)); }
static __init int pseries_plpks_init(void) diff --git a/arch/riscv/include/asm/cpufeature.h b/arch/riscv/include/asm/cpufeature.h index 45f9c1171a48..dfa5cdddd367 100644 --- a/arch/riscv/include/asm/cpufeature.h +++ b/arch/riscv/include/asm/cpufeature.h @@ -8,6 +8,7 @@
#include <linux/bitmap.h> #include <linux/jump_label.h> +#include <linux/workqueue.h> #include <asm/hwcap.h> #include <asm/alternative-macros.h> #include <asm/errno.h> @@ -60,6 +61,7 @@ void riscv_user_isa_enable(void);
#if defined(CONFIG_RISCV_MISALIGNED) bool check_unaligned_access_emulated_all_cpus(void); +void check_unaligned_access_emulated(struct work_struct *work __always_unused); void unaligned_emulation_finish(void); bool unaligned_ctl_available(void); DECLARE_PER_CPU(long, misaligned_access_speed); diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c index a2cde65b69e9..26c886db4fb3 100644 --- a/arch/riscv/kernel/setup.c +++ b/arch/riscv/kernel/setup.c @@ -227,7 +227,7 @@ static void __init init_resources(void) static void __init parse_dtb(void) { /* Early scan of device tree from init memory */ - if (early_init_dt_scan(dtb_early_va)) { + if (early_init_dt_scan(dtb_early_va, __pa(dtb_early_va))) { const char *name = of_flat_dt_get_machine_name();
if (name) { diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c index 1b9867136b61..9a80a12f6b48 100644 --- a/arch/riscv/kernel/traps_misaligned.c +++ b/arch/riscv/kernel/traps_misaligned.c @@ -524,11 +524,11 @@ int handle_misaligned_store(struct pt_regs *regs) return 0; }
-static bool check_unaligned_access_emulated(int cpu) +void check_unaligned_access_emulated(struct work_struct *work __always_unused) { + int cpu = smp_processor_id(); long *mas_ptr = per_cpu_ptr(&misaligned_access_speed, cpu); unsigned long tmp_var, tmp_val; - bool misaligned_emu_detected;
*mas_ptr = RISCV_HWPROBE_MISALIGNED_SCALAR_UNKNOWN;
@@ -536,19 +536,16 @@ static bool check_unaligned_access_emulated(int cpu) " "REG_L" %[tmp], 1(%[ptr])\n" : [tmp] "=r" (tmp_val) : [ptr] "r" (&tmp_var) : "memory");
- misaligned_emu_detected = (*mas_ptr == RISCV_HWPROBE_MISALIGNED_SCALAR_EMULATED); /* * If unaligned_ctl is already set, this means that we detected that all * CPUS uses emulated misaligned access at boot time. If that changed * when hotplugging the new cpu, this is something we don't handle. */ - if (unlikely(unaligned_ctl && !misaligned_emu_detected)) { + if (unlikely(unaligned_ctl && (*mas_ptr != RISCV_HWPROBE_MISALIGNED_SCALAR_EMULATED))) { pr_crit("CPU misaligned accesses non homogeneous (expected all emulated)\n"); while (true) cpu_relax(); } - - return misaligned_emu_detected; }
bool check_unaligned_access_emulated_all_cpus(void) @@ -560,8 +557,11 @@ bool check_unaligned_access_emulated_all_cpus(void) * accesses emulated since tasks requesting such control can run on any * CPU. */ + schedule_on_each_cpu(check_unaligned_access_emulated); + for_each_online_cpu(cpu) - if (!check_unaligned_access_emulated(cpu)) + if (per_cpu(misaligned_access_speed, cpu) + != RISCV_HWPROBE_MISALIGNED_SCALAR_EMULATED) return false;
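[Side note, not part of the patch: the rework above runs the emulation probe on each CPU via schedule_on_each_cpu() and afterwards inspects the recorded per-CPU result, instead of returning a value from the probe itself. As a loose illustration only (POSIX threads, not the kernel API), the same "probe on every worker, then verify the recorded results" shape looks like this:

#include <pthread.h>
#include <stdio.h>

#define NWORKERS 4
#define RESULT_EMULATED 1

static int result[NWORKERS];

/* Probe runs in the context of each worker and records only its own slot. */
static void *probe(void *arg)
{
	int id = (int)(long)arg;

	result[id] = RESULT_EMULATED;	/* pretend the probe detected emulation */
	return NULL;
}

int main(void)
{
	pthread_t t[NWORKERS];
	int i, all_emulated = 1;

	for (i = 0; i < NWORKERS; i++)
		pthread_create(&t[i], NULL, probe, (void *)(long)i);
	for (i = 0; i < NWORKERS; i++)
		pthread_join(t[i], NULL);

	/* Aggregate afterwards, as the *_all_cpus() caller now does. */
	for (i = 0; i < NWORKERS; i++)
		if (result[i] != RESULT_EMULATED)
			all_emulated = 0;

	printf("all emulated: %d\n", all_emulated);
	return 0;
}
]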
unaligned_ctl = true; diff --git a/arch/riscv/kernel/unaligned_access_speed.c b/arch/riscv/kernel/unaligned_access_speed.c index 160628a2116d..f3508cc54f91 100644 --- a/arch/riscv/kernel/unaligned_access_speed.c +++ b/arch/riscv/kernel/unaligned_access_speed.c @@ -191,6 +191,7 @@ static int riscv_online_cpu(unsigned int cpu) if (per_cpu(misaligned_access_speed, cpu) != RISCV_HWPROBE_MISALIGNED_SCALAR_UNKNOWN) goto exit;
+ check_unaligned_access_emulated(NULL); buf = alloc_pages(GFP_KERNEL, MISALIGNED_BUFFER_ORDER); if (!buf) { pr_warn("Allocation failure, not measuring misaligned performance\n"); diff --git a/arch/riscv/kvm/aia_aplic.c b/arch/riscv/kvm/aia_aplic.c index da6ff1bade0d..f59d1c0c8c43 100644 --- a/arch/riscv/kvm/aia_aplic.c +++ b/arch/riscv/kvm/aia_aplic.c @@ -143,7 +143,7 @@ static void aplic_write_pending(struct aplic *aplic, u32 irq, bool pending) if (sm == APLIC_SOURCECFG_SM_LEVEL_HIGH || sm == APLIC_SOURCECFG_SM_LEVEL_LOW) { if (!pending) - goto skip_write_pending; + goto noskip_write_pending; if ((irqd->state & APLIC_IRQ_STATE_INPUT) && sm == APLIC_SOURCECFG_SM_LEVEL_LOW) goto skip_write_pending; @@ -152,6 +152,7 @@ static void aplic_write_pending(struct aplic *aplic, u32 irq, bool pending) goto skip_write_pending; }
+noskip_write_pending: if (pending) irqd->state |= APLIC_IRQ_STATE_PENDING; else diff --git a/arch/riscv/kvm/vcpu_sbi.c b/arch/riscv/kvm/vcpu_sbi.c index 7de128be8db9..6e704ed86a83 100644 --- a/arch/riscv/kvm/vcpu_sbi.c +++ b/arch/riscv/kvm/vcpu_sbi.c @@ -486,19 +486,22 @@ void kvm_riscv_vcpu_sbi_init(struct kvm_vcpu *vcpu) struct kvm_vcpu_sbi_context *scontext = &vcpu->arch.sbi_context; const struct kvm_riscv_sbi_extension_entry *entry; const struct kvm_vcpu_sbi_extension *ext; - int i; + int idx, i;
for (i = 0; i < ARRAY_SIZE(sbi_ext); i++) { entry = &sbi_ext[i]; ext = entry->ext_ptr; + idx = entry->ext_idx; + + if (idx < 0 || idx >= ARRAY_SIZE(scontext->ext_status)) + continue;
if (ext->probe && !ext->probe(vcpu)) { - scontext->ext_status[entry->ext_idx] = - KVM_RISCV_SBI_EXT_STATUS_UNAVAILABLE; + scontext->ext_status[idx] = KVM_RISCV_SBI_EXT_STATUS_UNAVAILABLE; continue; }
- scontext->ext_status[entry->ext_idx] = ext->default_disabled ? + scontext->ext_status[idx] = ext->default_disabled ? KVM_RISCV_SBI_EXT_STATUS_DISABLED : KVM_RISCV_SBI_EXT_STATUS_ENABLED; } diff --git a/arch/s390/include/asm/facility.h b/arch/s390/include/asm/facility.h index 715bcf8fb69a..5f5b1aa6c233 100644 --- a/arch/s390/include/asm/facility.h +++ b/arch/s390/include/asm/facility.h @@ -88,7 +88,7 @@ static __always_inline bool test_facility(unsigned long nr) return __test_facility(nr, &stfle_fac_list); }
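[Side note, not part of the patch: the vcpu_sbi.c change above validates the table-supplied index before using it to index ext_status, so a sentinel or out-of-range entry can no longer write past the array. A minimal sketch of that defensive pattern; the array size and sentinel value below are made up for illustration:

#include <stdio.h>

#define NSTATUS 8

static int status[NSTATUS];

struct entry {
	int idx;	/* may be negative or too large for unsupported entries */
	int value;
};

static void apply(const struct entry *e)
{
	/* Skip entries whose index does not map to a status slot. */
	if (e->idx < 0 || e->idx >= NSTATUS)
		return;
	status[e->idx] = e->value;
}

int main(void)
{
	struct entry good = { .idx = 3, .value = 1 };
	struct entry bad  = { .idx = 42, .value = 1 };	/* silently ignored */

	apply(&good);
	apply(&bad);
	printf("status[3] = %d\n", status[3]);
	return 0;
}
]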
-static inline unsigned long __stfle_asm(u64 *stfle_fac_list, int size) +static inline unsigned long __stfle_asm(u64 *fac_list, int size) { unsigned long reg0 = size - 1;
@@ -96,7 +96,7 @@ static inline unsigned long __stfle_asm(u64 *stfle_fac_list, int size) " lgr 0,%[reg0]\n" " .insn s,0xb2b00000,%[list]\n" /* stfle */ " lgr %[reg0],0\n" - : [reg0] "+&d" (reg0), [list] "+Q" (*stfle_fac_list) + : [reg0] "+&d" (reg0), [list] "+Q" (*fac_list) : : "memory", "cc", "0"); return reg0; @@ -104,10 +104,10 @@ static inline unsigned long __stfle_asm(u64 *stfle_fac_list, int size)
/** * stfle - Store facility list extended - * @stfle_fac_list: array where facility list can be stored + * @fac_list: array where facility list can be stored * @size: size of passed in array in double words */ -static inline void __stfle(u64 *stfle_fac_list, int size) +static inline void __stfle(u64 *fac_list, int size) { unsigned long nr; u32 stfl_fac_list; @@ -116,20 +116,20 @@ static inline void __stfle(u64 *stfle_fac_list, int size) " stfl 0(0)\n" : "=m" (get_lowcore()->stfl_fac_list)); stfl_fac_list = get_lowcore()->stfl_fac_list; - memcpy(stfle_fac_list, &stfl_fac_list, 4); + memcpy(fac_list, &stfl_fac_list, 4); nr = 4; /* bytes stored by stfl */ if (stfl_fac_list & 0x01000000) { /* More facility bits available with stfle */ - nr = __stfle_asm(stfle_fac_list, size); + nr = __stfle_asm(fac_list, size); nr = min_t(unsigned long, (nr + 1) * 8, size * 8); } - memset((char *) stfle_fac_list + nr, 0, size * 8 - nr); + memset((char *)fac_list + nr, 0, size * 8 - nr); }
-static inline void stfle(u64 *stfle_fac_list, int size) +static inline void stfle(u64 *fac_list, int size) { preempt_disable(); - __stfle(stfle_fac_list, size); + __stfle(fac_list, size); preempt_enable(); }
diff --git a/arch/s390/include/asm/pci.h b/arch/s390/include/asm/pci.h index 9d920ced6047..30b20ce9a700 100644 --- a/arch/s390/include/asm/pci.h +++ b/arch/s390/include/asm/pci.h @@ -96,7 +96,6 @@ struct zpci_bar_struct { u8 size; /* order 2 exponent */ };
-struct s390_domain; struct kvm_zdev;
#define ZPCI_FUNCTIONS_PER_BUS 256 @@ -181,9 +180,10 @@ struct zpci_dev { struct dentry *debugfs_dev;
/* IOMMU and passthrough */ - struct s390_domain *s390_domain; /* s390 IOMMU domain data */ + struct iommu_domain *s390_domain; /* attached IOMMU domain */ struct kvm_zdev *kzdev; struct mutex kzdev_lock; + spinlock_t dom_lock; /* protect s390_domain change */ };
static inline bool zdev_enabled(struct zpci_dev *zdev) diff --git a/arch/s390/include/asm/set_memory.h b/arch/s390/include/asm/set_memory.h index 06fbabe2f66c..cb4cc0f59012 100644 --- a/arch/s390/include/asm/set_memory.h +++ b/arch/s390/include/asm/set_memory.h @@ -62,5 +62,6 @@ __SET_MEMORY_FUNC(set_memory_4k, SET_MEMORY_4K)
int set_direct_map_invalid_noflush(struct page *page); int set_direct_map_default_noflush(struct page *page); +bool kernel_page_present(struct page *page);
#endif diff --git a/arch/s390/kernel/perf_cpum_sf.c b/arch/s390/kernel/perf_cpum_sf.c index 5b765e3ccf0c..3317f4878eaa 100644 --- a/arch/s390/kernel/perf_cpum_sf.c +++ b/arch/s390/kernel/perf_cpum_sf.c @@ -759,7 +759,6 @@ static int __hw_perf_event_init(struct perf_event *event) reserve_pmc_hardware(); refcount_set(&num_events, 1); } - mutex_unlock(&pmc_reserve_mutex); event->destroy = hw_perf_event_destroy;
/* Access per-CPU sampling information (query sampling info) */ @@ -848,6 +847,7 @@ static int __hw_perf_event_init(struct perf_event *event) if (is_default_overflow_handler(event)) event->overflow_handler = cpumsf_output_event_pid; out: + mutex_unlock(&pmc_reserve_mutex); return err; }
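[Side note, not part of the patch: the cpum_sf change above keeps pmc_reserve_mutex held across the remainder of __hw_perf_event_init() and releases it once on the common "out:" exit, so the later setup stays serialized instead of running after an early unlock. A small sketch of that single-exit unlock shape, using a POSIX mutex and invented failure steps:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t reserve_mutex = PTHREAD_MUTEX_INITIALIZER;

static int init_event(int fail_step)
{
	int err = 0;

	pthread_mutex_lock(&reserve_mutex);

	if (fail_step == 1) {
		err = -1;
		goto out;	/* every error path funnels through the unlock */
	}

	/* ... more setup that must stay serialized ... */
	if (fail_step == 2) {
		err = -2;
		goto out;
	}

out:
	pthread_mutex_unlock(&reserve_mutex);
	return err;
}

int main(void)
{
	printf("%d %d %d\n", init_event(0), init_event(1), init_event(2));
	return 0;
}
]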
diff --git a/arch/s390/kernel/syscalls/Makefile b/arch/s390/kernel/syscalls/Makefile index 1bb78b9468e8..e85c14f9058b 100644 --- a/arch/s390/kernel/syscalls/Makefile +++ b/arch/s390/kernel/syscalls/Makefile @@ -12,7 +12,7 @@ kapi-hdrs-y := $(kapi)/unistd_nr.h uapi-hdrs-y := $(uapi)/unistd_32.h uapi-hdrs-y += $(uapi)/unistd_64.h
-targets += $(addprefix ../../../,$(gen-y) $(kapi-hdrs-y) $(uapi-hdrs-y)) +targets += $(addprefix ../../../../,$(gen-y) $(kapi-hdrs-y) $(uapi-hdrs-y))
PHONY += kapi uapi
diff --git a/arch/s390/mm/pageattr.c b/arch/s390/mm/pageattr.c index 5f805ad42d4c..aec9eb16b6f7 100644 --- a/arch/s390/mm/pageattr.c +++ b/arch/s390/mm/pageattr.c @@ -406,6 +406,21 @@ int set_direct_map_default_noflush(struct page *page) return __set_memory((unsigned long)page_to_virt(page), 1, SET_MEMORY_DEF); }
+bool kernel_page_present(struct page *page) +{ + unsigned long addr; + unsigned int cc; + + addr = (unsigned long)page_address(page); + asm volatile( + " lra %[addr],0(%[addr])\n" + " ipm %[cc]\n" + : [cc] "=d" (cc), [addr] "+a" (addr) + : + : "cc"); + return (cc >> 28) == 0; +} + #if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_KFENCE)
static void ipte_range(pte_t *pte, unsigned long address, int nr) diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c index bd9624c20b80..635fd8f2acba 100644 --- a/arch/s390/pci/pci.c +++ b/arch/s390/pci/pci.c @@ -160,6 +160,7 @@ int zpci_fmb_enable_device(struct zpci_dev *zdev) u64 req = ZPCI_CREATE_REQ(zdev->fh, 0, ZPCI_MOD_FC_SET_MEASURE); struct zpci_iommu_ctrs *ctrs; struct zpci_fib fib = {0}; + unsigned long flags; u8 cc, status;
if (zdev->fmb || sizeof(*zdev->fmb) < zdev->fmb_length) @@ -171,6 +172,7 @@ int zpci_fmb_enable_device(struct zpci_dev *zdev) WARN_ON((u64) zdev->fmb & 0xf);
/* reset software counters */ + spin_lock_irqsave(&zdev->dom_lock, flags); ctrs = zpci_get_iommu_ctrs(zdev); if (ctrs) { atomic64_set(&ctrs->mapped_pages, 0); @@ -179,6 +181,7 @@ int zpci_fmb_enable_device(struct zpci_dev *zdev) atomic64_set(&ctrs->sync_map_rpcits, 0); atomic64_set(&ctrs->sync_rpcits, 0); } + spin_unlock_irqrestore(&zdev->dom_lock, flags);
fib.fmb_addr = virt_to_phys(zdev->fmb); @@ -914,10 +917,8 @@ void zpci_device_reserved(struct zpci_dev *zdev) void zpci_release_device(struct kref *kref) { struct zpci_dev *zdev = container_of(kref, struct zpci_dev, kref); - int ret;
- if (zdev->has_hp_slot) - zpci_exit_slot(zdev); + WARN_ON(zdev->state != ZPCI_FN_STATE_RESERVED);
if (zdev->zbus->bus) zpci_bus_remove_device(zdev, false); @@ -925,28 +926,14 @@ void zpci_release_device(struct kref *kref) if (zdev_enabled(zdev)) zpci_disable_device(zdev);
- switch (zdev->state) { - case ZPCI_FN_STATE_CONFIGURED: - ret = sclp_pci_deconfigure(zdev->fid); - zpci_dbg(3, "deconf fid:%x, rc:%d\n", zdev->fid, ret); - fallthrough; - case ZPCI_FN_STATE_STANDBY: - if (zdev->has_hp_slot) - zpci_exit_slot(zdev); - spin_lock(&zpci_list_lock); - list_del(&zdev->entry); - spin_unlock(&zpci_list_lock); - zpci_dbg(3, "rsv fid:%x\n", zdev->fid); - fallthrough; - case ZPCI_FN_STATE_RESERVED: - if (zdev->has_resources) - zpci_cleanup_bus_resources(zdev); - zpci_bus_device_unregister(zdev); - zpci_destroy_iommu(zdev); - fallthrough; - default: - break; - } + if (zdev->has_hp_slot) + zpci_exit_slot(zdev); + + if (zdev->has_resources) + zpci_cleanup_bus_resources(zdev); + + zpci_bus_device_unregister(zdev); + zpci_destroy_iommu(zdev); zpci_dbg(3, "rem fid:%x\n", zdev->fid); kfree_rcu(zdev, rcu); } diff --git a/arch/s390/pci/pci_debug.c b/arch/s390/pci/pci_debug.c index 2cb5043a997d..38014206c16b 100644 --- a/arch/s390/pci/pci_debug.c +++ b/arch/s390/pci/pci_debug.c @@ -71,17 +71,23 @@ static void pci_fmb_show(struct seq_file *m, char *name[], int length,
static void pci_sw_counter_show(struct seq_file *m) { - struct zpci_iommu_ctrs *ctrs = zpci_get_iommu_ctrs(m->private); + struct zpci_dev *zdev = m->private; + struct zpci_iommu_ctrs *ctrs; atomic64_t *counter; + unsigned long flags; int i;
+ spin_lock_irqsave(&zdev->dom_lock, flags); + ctrs = zpci_get_iommu_ctrs(m->private); if (!ctrs) - return; + goto unlock;
counter = &ctrs->mapped_pages; for (i = 0; i < ARRAY_SIZE(pci_sw_names); i++, counter++) seq_printf(m, "%26s:\t%llu\n", pci_sw_names[i], atomic64_read(counter)); +unlock: + spin_unlock_irqrestore(&zdev->dom_lock, flags); }
static int pci_perf_show(struct seq_file *m, void *v) diff --git a/arch/sh/kernel/cpu/proc.c b/arch/sh/kernel/cpu/proc.c index a306bcd6b341..5f6d0e827bae 100644 --- a/arch/sh/kernel/cpu/proc.c +++ b/arch/sh/kernel/cpu/proc.c @@ -132,7 +132,7 @@ static int show_cpuinfo(struct seq_file *m, void *v)
static void *c_start(struct seq_file *m, loff_t *pos) { - return *pos < NR_CPUS ? cpu_data + *pos : NULL; + return *pos < nr_cpu_ids ? cpu_data + *pos : NULL; } static void *c_next(struct seq_file *m, void *v, loff_t *pos) { diff --git a/arch/sh/kernel/setup.c b/arch/sh/kernel/setup.c index 620e5cf8ae1e..f2b6f16a46b8 100644 --- a/arch/sh/kernel/setup.c +++ b/arch/sh/kernel/setup.c @@ -255,7 +255,7 @@ void __ref sh_fdt_init(phys_addr_t dt_phys) dt_virt = phys_to_virt(dt_phys); #endif
- if (!dt_virt || !early_init_dt_scan(dt_virt)) { + if (!dt_virt || !early_init_dt_scan(dt_virt, __pa(dt_virt))) { pr_crit("Error: invalid device tree blob" " at physical address %p\n", (void *)dt_phys);
diff --git a/arch/um/drivers/net_kern.c b/arch/um/drivers/net_kern.c index 77c4afb8ab90..75d04fb4994a 100644 --- a/arch/um/drivers/net_kern.c +++ b/arch/um/drivers/net_kern.c @@ -336,7 +336,7 @@ static struct platform_driver uml_net_driver = {
static void net_device_release(struct device *dev) { - struct uml_net *device = dev_get_drvdata(dev); + struct uml_net *device = container_of(dev, struct uml_net, pdev.dev); struct net_device *netdev = device->dev; struct uml_net_private *lp = netdev_priv(netdev);
diff --git a/arch/um/drivers/ubd_kern.c b/arch/um/drivers/ubd_kern.c index 7f28ec1929dc..2bfb17373244 100644 --- a/arch/um/drivers/ubd_kern.c +++ b/arch/um/drivers/ubd_kern.c @@ -779,7 +779,7 @@ static int ubd_open_dev(struct ubd *ubd_dev)
static void ubd_device_release(struct device *dev) { - struct ubd *ubd_dev = dev_get_drvdata(dev); + struct ubd *ubd_dev = container_of(dev, struct ubd, pdev.dev);
blk_mq_free_tag_set(&ubd_dev->tag_set); *ubd_dev = ((struct ubd) DEFAULT_UBD); @@ -898,6 +898,8 @@ static int ubd_add(int n, char **error_out) if (err) goto out_cleanup_disk;
+ ubd_dev->disk = disk; + return 0;
out_cleanup_disk: diff --git a/arch/um/drivers/vector_kern.c b/arch/um/drivers/vector_kern.c index c992da83268d..64c09db392c1 100644 --- a/arch/um/drivers/vector_kern.c +++ b/arch/um/drivers/vector_kern.c @@ -815,7 +815,8 @@ static struct platform_driver uml_net_driver = {
static void vector_device_release(struct device *dev) { - struct vector_device *device = dev_get_drvdata(dev); + struct vector_device *device = + container_of(dev, struct vector_device, pdev.dev); struct net_device *netdev = device->dev;
list_del(&device->list); diff --git a/arch/um/kernel/dtb.c b/arch/um/kernel/dtb.c index 4954188a6a09..8d78ced9e08f 100644 --- a/arch/um/kernel/dtb.c +++ b/arch/um/kernel/dtb.c @@ -17,7 +17,7 @@ void uml_dtb_init(void)
area = uml_load_file(dtb, &size); if (area) { - if (!early_init_dt_scan(area)) { + if (!early_init_dt_scan(area, __pa(area))) { pr_err("invalid DTB %s\n", dtb); memblock_free(area, size); return; diff --git a/arch/um/kernel/physmem.c b/arch/um/kernel/physmem.c index fb2adfb49945..ee693e0b2b58 100644 --- a/arch/um/kernel/physmem.c +++ b/arch/um/kernel/physmem.c @@ -81,10 +81,10 @@ void __init setup_physmem(unsigned long start, unsigned long reserve_end, unsigned long len, unsigned long long highmem) { unsigned long reserve = reserve_end - start; - long map_size = len - reserve; + unsigned long map_size = len - reserve; int err;
- if(map_size <= 0) { + if (len <= reserve) { os_warn("Too few physical memory! Needed=%lu, given=%lu\n", reserve, len); exit(1); @@ -95,7 +95,7 @@ void __init setup_physmem(unsigned long start, unsigned long reserve_end, err = os_map_memory((void *) reserve_end, physmem_fd, reserve, map_size, 1, 1, 1); if (err < 0) { - os_warn("setup_physmem - mapping %ld bytes of memory at 0x%p " + os_warn("setup_physmem - mapping %lu bytes of memory at 0x%p " "failed - errno = %d\n", map_size, (void *) reserve_end, err); exit(1); diff --git a/arch/um/kernel/process.c b/arch/um/kernel/process.c index be2856af6d4c..9c6cf03ed02b 100644 --- a/arch/um/kernel/process.c +++ b/arch/um/kernel/process.c @@ -292,6 +292,6 @@ int elf_core_copy_task_fpregs(struct task_struct *t, elf_fpregset_t *fpu) { int cpu = current_thread_info()->cpu;
- return save_i387_registers(userspace_pid[cpu], (unsigned long *) fpu); + return save_i387_registers(userspace_pid[cpu], (unsigned long *) fpu) == 0; }
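[Side note, not part of the patch: two small UML fixes end here. setup_physmem() now compares len against reserve before subtracting, because with an unsigned map_size the old "map_size <= 0" test can never trigger on underflow, and elf_core_copy_task_fpregs() converts the 0-on-success result into the boolean its caller expects. The underflow point in standalone form:

#include <stdio.h>

/* Returns 1 if the request is too small, mirroring the fixed check:
 * compare before subtracting, because an unsigned (len - reserve)
 * wraps to a huge value instead of going negative. */
static int too_small(unsigned long len, unsigned long reserve)
{
	return len <= reserve;
}

int main(void)
{
	unsigned long reserve = 64UL << 20;	/* 64 MiB needed */
	unsigned long len = 32UL << 20;		/* only 32 MiB given */
	unsigned long wrapped = len - reserve;

	printf("len - reserve wraps to %lu\n", wrapped);
	printf("too_small = %d\n", too_small(len, reserve));
	return 0;
}
]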
diff --git a/arch/um/kernel/sysrq.c b/arch/um/kernel/sysrq.c index 4bb8622dc512..e3b6a2fd75d9 100644 --- a/arch/um/kernel/sysrq.c +++ b/arch/um/kernel/sysrq.c @@ -52,5 +52,5 @@ void show_stack(struct task_struct *task, unsigned long *stack, }
printk("%sCall Trace:\n", loglvl); - dump_trace(current, &stackops, (void *)loglvl); + dump_trace(task ?: current, &stackops, (void *)loglvl); } diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c index 327c45c5013f..2f85ed005c42 100644 --- a/arch/x86/coco/tdx/tdx.c +++ b/arch/x86/coco/tdx/tdx.c @@ -78,6 +78,32 @@ static inline void tdcall(u64 fn, struct tdx_module_args *args) panic("TDCALL %lld failed (Buggy TDX module!)\n", fn); }
+/* Read TD-scoped metadata */ +static inline u64 tdg_vm_rd(u64 field, u64 *value) +{ + struct tdx_module_args args = { + .rdx = field, + }; + u64 ret; + + ret = __tdcall_ret(TDG_VM_RD, &args); + *value = args.r8; + + return ret; +} + +/* Write TD-scoped metadata */ +static inline u64 tdg_vm_wr(u64 field, u64 value, u64 mask) +{ + struct tdx_module_args args = { + .rdx = field, + .r8 = value, + .r9 = mask, + }; + + return __tdcall(TDG_VM_WR, &args); +} + /** * tdx_mcall_get_report0() - Wrapper to get TDREPORT0 (a.k.a. TDREPORT * subtype 0) using TDG.MR.REPORT TDCALL. @@ -168,7 +194,61 @@ static void __noreturn tdx_panic(const char *msg) __tdx_hypercall(&args); }
-static void tdx_parse_tdinfo(u64 *cc_mask) +/* + * The kernel cannot handle #VEs when accessing normal kernel memory. Ensure + * that no #VE will be delivered for accesses to TD-private memory. + * + * TDX 1.0 does not allow the guest to disable SEPT #VE on its own. The VMM + * controls if the guest will receive such #VE with TD attribute + * ATTR_SEPT_VE_DISABLE. + * + * Newer TDX modules allow the guest to control if it wants to receive SEPT + * violation #VEs. + * + * Check if the feature is available and disable SEPT #VE if possible. + * + * If the TD is allowed to disable/enable SEPT #VEs, the ATTR_SEPT_VE_DISABLE + * attribute is no longer reliable. It reflects the initial state of the + * control for the TD, but it will not be updated if someone (e.g. bootloader) + * changes it before the kernel starts. Kernel must check TDCS_TD_CTLS bit to + * determine if SEPT #VEs are enabled or disabled. + */ +static void disable_sept_ve(u64 td_attr) +{ + const char *msg = "TD misconfiguration: SEPT #VE has to be disabled"; + bool debug = td_attr & ATTR_DEBUG; + u64 config, controls; + + /* Is this TD allowed to disable SEPT #VE */ + tdg_vm_rd(TDCS_CONFIG_FLAGS, &config); + if (!(config & TDCS_CONFIG_FLEXIBLE_PENDING_VE)) { + /* No SEPT #VE controls for the guest: check the attribute */ + if (td_attr & ATTR_SEPT_VE_DISABLE) + return; + + /* Relax SEPT_VE_DISABLE check for debug TD for backtraces */ + if (debug) + pr_warn("%s\n", msg); + else + tdx_panic(msg); + return; + } + + /* Check if SEPT #VE has been disabled before us */ + tdg_vm_rd(TDCS_TD_CTLS, &controls); + if (controls & TD_CTLS_PENDING_VE_DISABLE) + return; + + /* Keep #VEs enabled for splats in debugging environments */ + if (debug) + return; + + /* Disable SEPT #VEs */ + tdg_vm_wr(TDCS_TD_CTLS, TD_CTLS_PENDING_VE_DISABLE, + TD_CTLS_PENDING_VE_DISABLE); +} + +static void tdx_setup(u64 *cc_mask) { struct tdx_module_args args = {}; unsigned int gpa_width; @@ -193,21 +273,12 @@ static void tdx_parse_tdinfo(u64 *cc_mask) gpa_width = args.rcx & GENMASK(5, 0); *cc_mask = BIT_ULL(gpa_width - 1);
- /* - * The kernel can not handle #VE's when accessing normal kernel - * memory. Ensure that no #VE will be delivered for accesses to - * TD-private memory. Only VMM-shared memory (MMIO) will #VE. - */ td_attr = args.rdx; - if (!(td_attr & ATTR_SEPT_VE_DISABLE)) { - const char *msg = "TD misconfiguration: SEPT_VE_DISABLE attribute must be set.";
- /* Relax SEPT_VE_DISABLE check for debug TD. */ - if (td_attr & ATTR_DEBUG) - pr_warn("%s\n", msg); - else - tdx_panic(msg); - } + /* Kernel does not use NOTIFY_ENABLES and does not need random #VEs */ + tdg_vm_wr(TDCS_NOTIFY_ENABLES, 0, -1ULL); + + disable_sept_ve(td_attr); }
/* @@ -929,10 +1000,6 @@ static void tdx_kexec_finish(void)
void __init tdx_early_init(void) { - struct tdx_module_args args = { - .rdx = TDCS_NOTIFY_ENABLES, - .r9 = -1ULL, - }; u64 cc_mask; u32 eax, sig[3];
@@ -947,11 +1014,11 @@ void __init tdx_early_init(void) setup_force_cpu_cap(X86_FEATURE_TSC_RELIABLE);
cc_vendor = CC_VENDOR_INTEL; - tdx_parse_tdinfo(&cc_mask); - cc_set_mask(cc_mask);
- /* Kernel does not use NOTIFY_ENABLES and does not need random #VEs */ - tdcall(TDG_VM_WR, &args); + /* Configure the TD */ + tdx_setup(&cc_mask); + + cc_set_mask(cc_mask);
/* * All bits above GPA width are reserved and kernel treats shared bit diff --git a/arch/x86/crypto/aegis128-aesni-asm.S b/arch/x86/crypto/aegis128-aesni-asm.S index ad7f4c891625..2de859173940 100644 --- a/arch/x86/crypto/aegis128-aesni-asm.S +++ b/arch/x86/crypto/aegis128-aesni-asm.S @@ -21,7 +21,7 @@ #define T1 %xmm7
#define STATEP %rdi -#define LEN %rsi +#define LEN %esi #define SRC %rdx #define DST %rcx
@@ -76,32 +76,32 @@ SYM_FUNC_START_LOCAL(__load_partial) xor %r9d, %r9d pxor MSG, MSG
- mov LEN, %r8 + mov LEN, %r8d and $0x1, %r8 jz .Lld_partial_1
- mov LEN, %r8 + mov LEN, %r8d and $0x1E, %r8 add SRC, %r8 mov (%r8), %r9b
.Lld_partial_1: - mov LEN, %r8 + mov LEN, %r8d and $0x2, %r8 jz .Lld_partial_2
- mov LEN, %r8 + mov LEN, %r8d and $0x1C, %r8 add SRC, %r8 shl $0x10, %r9 mov (%r8), %r9w
.Lld_partial_2: - mov LEN, %r8 + mov LEN, %r8d and $0x4, %r8 jz .Lld_partial_4
- mov LEN, %r8 + mov LEN, %r8d and $0x18, %r8 add SRC, %r8 shl $32, %r9 @@ -111,11 +111,11 @@ SYM_FUNC_START_LOCAL(__load_partial) .Lld_partial_4: movq %r9, MSG
- mov LEN, %r8 + mov LEN, %r8d and $0x8, %r8 jz .Lld_partial_8
- mov LEN, %r8 + mov LEN, %r8d and $0x10, %r8 add SRC, %r8 pslldq $8, MSG @@ -139,7 +139,7 @@ SYM_FUNC_END(__load_partial) * %r10 */ SYM_FUNC_START_LOCAL(__store_partial) - mov LEN, %r8 + mov LEN, %r8d mov DST, %r9
movq T0, %r10 @@ -677,7 +677,7 @@ SYM_TYPED_FUNC_START(crypto_aegis128_aesni_dec_tail) call __store_partial
/* mask with byte count: */ - movq LEN, T0 + movd LEN, T0 punpcklbw T0, T0 punpcklbw T0, T0 punpcklbw T0, T0 @@ -702,7 +702,8 @@ SYM_FUNC_END(crypto_aegis128_aesni_dec_tail)
/* * void crypto_aegis128_aesni_final(void *state, void *tag_xor, - * u64 assoclen, u64 cryptlen); + * unsigned int assoclen, + * unsigned int cryptlen); */ SYM_FUNC_START(crypto_aegis128_aesni_final) FRAME_BEGIN @@ -715,8 +716,8 @@ SYM_FUNC_START(crypto_aegis128_aesni_final) movdqu 0x40(STATEP), STATE4
/* prepare length block: */ - movq %rdx, MSG - movq %rcx, T0 + movd %edx, MSG + movd %ecx, T0 pslldq $8, T0 pxor T0, MSG psllq $3, MSG /* multiply by 8 (to get bit count) */ diff --git a/arch/x86/events/intel/pt.c b/arch/x86/events/intel/pt.c index fd4670a6694e..a087bc0c5498 100644 --- a/arch/x86/events/intel/pt.c +++ b/arch/x86/events/intel/pt.c @@ -828,11 +828,13 @@ static void pt_buffer_advance(struct pt_buffer *buf) buf->cur_idx++;
if (buf->cur_idx == buf->cur->last) { - if (buf->cur == buf->last) + if (buf->cur == buf->last) { buf->cur = buf->first; - else + buf->wrapped = true; + } else { buf->cur = list_entry(buf->cur->list.next, struct topa, list); + } buf->cur_idx = 0; } } @@ -846,8 +848,11 @@ static void pt_buffer_advance(struct pt_buffer *buf) static void pt_update_head(struct pt *pt) { struct pt_buffer *buf = perf_get_aux(&pt->handle); + bool wrapped = buf->wrapped; u64 topa_idx, base, old;
+ buf->wrapped = false; + if (buf->single) { local_set(&buf->data_size, buf->output_off); return; @@ -865,7 +870,7 @@ static void pt_update_head(struct pt *pt) } else { old = (local64_xchg(&buf->head, base) & ((buf->nr_pages << PAGE_SHIFT) - 1)); - if (base < old) + if (base < old || (base == old && wrapped)) base += buf->nr_pages << PAGE_SHIFT;
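[Side note, not part of the patch: the PT fix above records when pt_buffer_advance() wraps back to the first ToPA table, so pt_update_head() can tell "no progress" apart from "advanced by exactly one full buffer" when the new offset equals the old one. The arithmetic reduced to a plain helper, with illustrative offsets and buffer size:

#include <stdio.h>

/* Bytes of new data between old and base offsets in a circular buffer of
 * buf_size bytes; 'wrapped' says the producer went all the way around. */
static unsigned long new_data(unsigned long old, unsigned long base,
			      unsigned long buf_size, int wrapped)
{
	if (base < old || (base == old && wrapped))
		base += buf_size;
	return base - old;
}

int main(void)
{
	unsigned long size = 1UL << 20;

	printf("%lu\n", new_data(4096, 8192, size, 0));	/* normal advance */
	printf("%lu\n", new_data(8192, 4096, size, 0));	/* wrapped past start */
	printf("%lu\n", new_data(4096, 4096, size, 1));	/* exactly one full buffer */
	printf("%lu\n", new_data(4096, 4096, size, 0));	/* no progress */
	return 0;
}
]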
local_add(base - old, &buf->data_size); diff --git a/arch/x86/events/intel/pt.h b/arch/x86/events/intel/pt.h index f5e46c04c145..a1b6c04b7f68 100644 --- a/arch/x86/events/intel/pt.h +++ b/arch/x86/events/intel/pt.h @@ -65,6 +65,7 @@ struct pt_pmu { * @head: logical write offset inside the buffer * @snapshot: if this is for a snapshot/overwrite counter * @single: use Single Range Output instead of ToPA + * @wrapped: buffer advance wrapped back to the first topa table * @stop_pos: STOP topa entry index * @intr_pos: INT topa entry index * @stop_te: STOP topa entry pointer @@ -82,6 +83,7 @@ struct pt_buffer { local64_t head; bool snapshot; bool single; + bool wrapped; long stop_pos, intr_pos; struct topa_entry *stop_te, *intr_te; void **data_pages; diff --git a/arch/x86/include/asm/atomic64_32.h b/arch/x86/include/asm/atomic64_32.h index 1f650b4dde50..6c6e9b9f98a4 100644 --- a/arch/x86/include/asm/atomic64_32.h +++ b/arch/x86/include/asm/atomic64_32.h @@ -51,7 +51,8 @@ static __always_inline s64 arch_atomic64_read_nonatomic(const atomic64_t *v) #ifdef CONFIG_X86_CMPXCHG64 #define __alternative_atomic64(f, g, out, in...) \ asm volatile("call %c[func]" \ - : out : [func] "i" (atomic64_##g##_cx8), ## in) + : ALT_OUTPUT_SP(out) \ + : [func] "i" (atomic64_##g##_cx8), ## in)
#define ATOMIC64_DECL(sym) ATOMIC64_DECL_ONE(sym##_cx8) #else diff --git a/arch/x86/include/asm/cmpxchg_32.h b/arch/x86/include/asm/cmpxchg_32.h index 62cef2113ca7..fd1282a783dd 100644 --- a/arch/x86/include/asm/cmpxchg_32.h +++ b/arch/x86/include/asm/cmpxchg_32.h @@ -94,7 +94,7 @@ static __always_inline bool __try_cmpxchg64_local(volatile u64 *ptr, u64 *oldp, asm volatile(ALTERNATIVE(_lock_loc \ "call cmpxchg8b_emu", \ _lock "cmpxchg8b %a[ptr]", X86_FEATURE_CX8) \ - : "+a" (o.low), "+d" (o.high) \ + : ALT_OUTPUT_SP("+a" (o.low), "+d" (o.high)) \ : "b" (n.low), "c" (n.high), [ptr] "S" (_ptr) \ : "memory"); \ \ @@ -123,8 +123,8 @@ static __always_inline u64 arch_cmpxchg64_local(volatile u64 *ptr, u64 old, u64 "call cmpxchg8b_emu", \ _lock "cmpxchg8b %a[ptr]", X86_FEATURE_CX8) \ CC_SET(e) \ - : CC_OUT(e) (ret), \ - "+a" (o.low), "+d" (o.high) \ + : ALT_OUTPUT_SP(CC_OUT(e) (ret), \ + "+a" (o.low), "+d" (o.high)) \ : "b" (n.low), "c" (n.high), [ptr] "S" (_ptr) \ : "memory"); \ \ diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 6d9f763a7bb9..427d1daf06d0 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -26,6 +26,7 @@ #include <linux/irqbypass.h> #include <linux/hyperv.h> #include <linux/kfifo.h> +#include <linux/sched/vhost_task.h>
#include <asm/apic.h> #include <asm/pvclock-abi.h> @@ -1443,7 +1444,8 @@ struct kvm_arch { bool sgx_provisioning_allowed;
struct kvm_x86_pmu_event_filter __rcu *pmu_event_filter; - struct task_struct *nx_huge_page_recovery_thread; + struct vhost_task *nx_huge_page_recovery_thread; + u64 nx_huge_page_last;
#ifdef CONFIG_X86_64 /* The number of TDP MMU pages across all roots. */ diff --git a/arch/x86/include/asm/shared/tdx.h b/arch/x86/include/asm/shared/tdx.h index fdfd41511b02..fecb2a6e864b 100644 --- a/arch/x86/include/asm/shared/tdx.h +++ b/arch/x86/include/asm/shared/tdx.h @@ -16,11 +16,20 @@ #define TDG_VP_VEINFO_GET 3 #define TDG_MR_REPORT 4 #define TDG_MEM_PAGE_ACCEPT 6 +#define TDG_VM_RD 7 #define TDG_VM_WR 8
-/* TDCS fields. To be used by TDG.VM.WR and TDG.VM.RD module calls */ +/* TDX TD-Scope Metadata. To be used by TDG.VM.WR and TDG.VM.RD */ +#define TDCS_CONFIG_FLAGS 0x1110000300000016 +#define TDCS_TD_CTLS 0x1110000300000017 #define TDCS_NOTIFY_ENABLES 0x9100000000000010
+/* TDCS_CONFIG_FLAGS bits */ +#define TDCS_CONFIG_FLEXIBLE_PENDING_VE BIT_ULL(1) + +/* TDCS_TD_CTLS bits */ +#define TD_CTLS_PENDING_VE_DISABLE BIT_ULL(0) + /* TDX hypercall Leaf IDs */ #define TDVMCALL_MAP_GPA 0x10001 #define TDVMCALL_GET_QUOTE 0x10002 diff --git a/arch/x86/include/asm/tlb.h b/arch/x86/include/asm/tlb.h index 580636cdc257..4d3c9d00d6b6 100644 --- a/arch/x86/include/asm/tlb.h +++ b/arch/x86/include/asm/tlb.h @@ -34,4 +34,8 @@ static inline void __tlb_remove_table(void *table) free_page_and_swap_cache(table); }
+static inline void invlpg(unsigned long addr) +{ + asm volatile("invlpg (%0)" ::"r" (addr) : "memory"); +} #endif /* _ASM_X86_TLB_H */ diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c index 823f44f7bc94..d8408aafeed9 100644 --- a/arch/x86/kernel/cpu/amd.c +++ b/arch/x86/kernel/cpu/amd.c @@ -798,6 +798,7 @@ static void init_amd_bd(struct cpuinfo_x86 *c) static const struct x86_cpu_desc erratum_1386_microcode[] = { AMD_CPU_DESC(0x17, 0x1, 0x2, 0x0800126e), AMD_CPU_DESC(0x17, 0x31, 0x0, 0x08301052), + {}, };
static void fix_erratum_1386(struct cpuinfo_x86 *c) diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index f43bb974fc66..b17bcf9b67ee 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -2392,12 +2392,12 @@ void __init arch_cpu_finalize_init(void) alternative_instructions();
if (IS_ENABLED(CONFIG_X86_64)) { - unsigned long USER_PTR_MAX = TASK_SIZE_MAX-1; + unsigned long USER_PTR_MAX = TASK_SIZE_MAX;
/* * Enable this when LAM is gated on LASS support if (cpu_feature_enabled(X86_FEATURE_LAM)) - USER_PTR_MAX = (1ul << 63) - PAGE_SIZE - 1; + USER_PTR_MAX = (1ul << 63) - PAGE_SIZE; */ runtime_const_init(ptr, USER_PTR_MAX);
diff --git a/arch/x86/kernel/cpu/microcode/amd.c b/arch/x86/kernel/cpu/microcode/amd.c index 31a73715d755..fb5d0c67fbab 100644 --- a/arch/x86/kernel/cpu/microcode/amd.c +++ b/arch/x86/kernel/cpu/microcode/amd.c @@ -34,6 +34,7 @@ #include <asm/setup.h> #include <asm/cpu.h> #include <asm/msr.h> +#include <asm/tlb.h>
#include "internal.h"
@@ -483,11 +484,25 @@ static void scan_containers(u8 *ucode, size_t size, struct cont_desc *desc) } }
-static int __apply_microcode_amd(struct microcode_amd *mc) +static int __apply_microcode_amd(struct microcode_amd *mc, unsigned int psize) { + unsigned long p_addr = (unsigned long)&mc->hdr.data_code; u32 rev, dummy;
- native_wrmsrl(MSR_AMD64_PATCH_LOADER, (u64)(long)&mc->hdr.data_code); + native_wrmsrl(MSR_AMD64_PATCH_LOADER, p_addr); + + if (x86_family(bsp_cpuid_1_eax) == 0x17) { + unsigned long p_addr_end = p_addr + psize - 1; + + invlpg(p_addr); + + /* + * Flush next page too if patch image is crossing a page + * boundary. + */ + if (p_addr >> PAGE_SHIFT != p_addr_end >> PAGE_SHIFT) + invlpg(p_addr_end); + }
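[Side note, not part of the patch: the microcode change above flushes the patch buffer's TLB entry on family 0x17 parts and also flushes the following page when the image straddles a page boundary. The boundary test itself is just a shift comparison; a tiny standalone check, assuming 4 KiB pages:

#include <stdio.h>

#define PAGE_SHIFT 12

/* True if [addr, addr + size) touches more than one page. */
static int crosses_page(unsigned long addr, unsigned long size)
{
	unsigned long end = addr + size - 1;

	return (addr >> PAGE_SHIFT) != (end >> PAGE_SHIFT);
}

int main(void)
{
	printf("%d\n", crosses_page(0x1000, 0x800));	/* fits in one page: 0 */
	printf("%d\n", crosses_page(0x1800, 0x1000));	/* straddles a boundary: 1 */
	return 0;
}
]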
/* verify patch application was successful */ native_rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy); @@ -529,7 +544,7 @@ static bool early_apply_microcode(u32 old_rev, void *ucode, size_t size) if (old_rev > mc->hdr.patch_id) return ret;
- return !__apply_microcode_amd(mc); + return !__apply_microcode_amd(mc, desc.psize); }
static bool get_builtin_microcode(struct cpio_data *cp) @@ -745,7 +760,7 @@ void reload_ucode_amd(unsigned int cpu) rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
if (rev < mc->hdr.patch_id) { - if (!__apply_microcode_amd(mc)) + if (!__apply_microcode_amd(mc, p->size)) pr_info_once("reload revision: 0x%08x\n", mc->hdr.patch_id); } } @@ -798,7 +813,7 @@ static enum ucode_state apply_microcode_amd(int cpu) goto out; }
- if (__apply_microcode_amd(mc_amd)) { + if (__apply_microcode_amd(mc_amd, p->size)) { pr_err("CPU%d: update failed for patch_level=0x%08x\n", cpu, mc_amd->hdr.patch_id); return UCODE_ERROR; diff --git a/arch/x86/kernel/devicetree.c b/arch/x86/kernel/devicetree.c index 64280879c68c..59d23cdf4ed0 100644 --- a/arch/x86/kernel/devicetree.c +++ b/arch/x86/kernel/devicetree.c @@ -305,7 +305,7 @@ void __init x86_flattree_get_config(void) map_len = size; }
- early_init_dt_verify(dt); + early_init_dt_verify(dt, __pa(dt)); }
unflatten_and_copy_device_tree(); diff --git a/arch/x86/kernel/unwind_orc.c b/arch/x86/kernel/unwind_orc.c index d00c28aaa5be..d4705a348a80 100644 --- a/arch/x86/kernel/unwind_orc.c +++ b/arch/x86/kernel/unwind_orc.c @@ -723,7 +723,7 @@ void __unwind_start(struct unwind_state *state, struct task_struct *task, state->sp = task->thread.sp + sizeof(*frame); state->bp = READ_ONCE_NOCHECK(frame->bp); state->ip = READ_ONCE_NOCHECK(frame->ret_addr); - state->signal = (void *)state->ip == ret_from_fork; + state->signal = (void *)state->ip == ret_from_fork_asm; }
if (get_stack_info((unsigned long *)state->sp, state->task, diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig index f09f13c01c6b..d7f27a327654 100644 --- a/arch/x86/kvm/Kconfig +++ b/arch/x86/kvm/Kconfig @@ -18,8 +18,7 @@ menuconfig VIRTUALIZATION if VIRTUALIZATION
config KVM_X86 - def_tristate KVM if KVM_INTEL || KVM_AMD - depends on X86_LOCAL_APIC + def_tristate KVM if (KVM_INTEL != n || KVM_AMD != n) select KVM_COMMON select KVM_GENERIC_MMU_NOTIFIER select HAVE_KVM_IRQCHIP @@ -29,6 +28,7 @@ config KVM_X86 select HAVE_KVM_IRQ_BYPASS select HAVE_KVM_IRQ_ROUTING select HAVE_KVM_READONLY_MEM + select VHOST_TASK select KVM_ASYNC_PF select USER_RETURN_NOTIFIER select KVM_MMIO @@ -49,6 +49,7 @@ config KVM_X86
config KVM tristate "Kernel-based Virtual Machine (KVM) support" + depends on X86_LOCAL_APIC help Support hosting fully virtualized guest machines using hardware virtualization extensions. You will need a fairly recent diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 8e853a5fc867..3e353ed1f767 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -7281,7 +7281,7 @@ static int set_nx_huge_pages(const char *val, const struct kernel_param *kp) kvm_mmu_zap_all_fast(kvm); mutex_unlock(&kvm->slots_lock);
- wake_up_process(kvm->arch.nx_huge_page_recovery_thread); + vhost_task_wake(kvm->arch.nx_huge_page_recovery_thread); } mutex_unlock(&kvm_lock); } @@ -7427,7 +7427,7 @@ static int set_nx_huge_pages_recovery_param(const char *val, const struct kernel mutex_lock(&kvm_lock);
list_for_each_entry(kvm, &vm_list, vm_list) - wake_up_process(kvm->arch.nx_huge_page_recovery_thread); + vhost_task_wake(kvm->arch.nx_huge_page_recovery_thread);
mutex_unlock(&kvm_lock); } @@ -7530,62 +7530,56 @@ static void kvm_recover_nx_huge_pages(struct kvm *kvm) srcu_read_unlock(&kvm->srcu, rcu_idx); }
-static long get_nx_huge_page_recovery_timeout(u64 start_time) +static void kvm_nx_huge_page_recovery_worker_kill(void *data) { - bool enabled; - uint period; - - enabled = calc_nx_huge_pages_recovery_period(&period); - - return enabled ? start_time + msecs_to_jiffies(period) - get_jiffies_64() - : MAX_SCHEDULE_TIMEOUT; }
-static int kvm_nx_huge_page_recovery_worker(struct kvm *kvm, uintptr_t data) +static bool kvm_nx_huge_page_recovery_worker(void *data) { - u64 start_time; + struct kvm *kvm = data; + bool enabled; + uint period; long remaining_time;
- while (true) { - start_time = get_jiffies_64(); - remaining_time = get_nx_huge_page_recovery_timeout(start_time); - - set_current_state(TASK_INTERRUPTIBLE); - while (!kthread_should_stop() && remaining_time > 0) { - schedule_timeout(remaining_time); - remaining_time = get_nx_huge_page_recovery_timeout(start_time); - set_current_state(TASK_INTERRUPTIBLE); - } - - set_current_state(TASK_RUNNING); - - if (kthread_should_stop()) - return 0; + enabled = calc_nx_huge_pages_recovery_period(&period); + if (!enabled) + return false;
- kvm_recover_nx_huge_pages(kvm); + remaining_time = kvm->arch.nx_huge_page_last + msecs_to_jiffies(period) + - get_jiffies_64(); + if (remaining_time > 0) { + schedule_timeout(remaining_time); + /* check for signals and come back */ + return true; } + + __set_current_state(TASK_RUNNING); + kvm_recover_nx_huge_pages(kvm); + kvm->arch.nx_huge_page_last = get_jiffies_64(); + return true; }
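[Side note, not part of the patch: the converted recovery worker above no longer runs its own kthread loop; the vhost task calls it repeatedly, it sleeps if the period has not elapsed, and otherwise does the work and records the timestamp before returning true to be invoked again. In outline, with plain userspace timing rather than the kernel's jiffies/vhost_task API:

#include <stdio.h>
#include <time.h>
#include <unistd.h>

static time_t last_run;
static unsigned int period_sec = 2;

/* One invocation of the worker body: returns 1 to be called again, 0 to stop. */
static int recovery_worker(void)
{
	time_t now = time(NULL);
	long remaining = (long)(last_run + period_sec - now);

	if (remaining > 0) {
		sleep((unsigned int)remaining);	/* wait out the rest of the period */
		return 1;			/* come back and re-evaluate */
	}

	printf("recovering at %ld\n", (long)now);
	last_run = now;
	return 1;
}

int main(void)
{
	int i;

	last_run = time(NULL);
	for (i = 0; i < 3; i++)
		recovery_worker();
	return 0;
}
]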
int kvm_mmu_post_init_vm(struct kvm *kvm) { - int err; - if (nx_hugepage_mitigation_hard_disabled) return 0;
- err = kvm_vm_create_worker_thread(kvm, kvm_nx_huge_page_recovery_worker, 0, - "kvm-nx-lpage-recovery", - &kvm->arch.nx_huge_page_recovery_thread); - if (!err) - kthread_unpark(kvm->arch.nx_huge_page_recovery_thread); + kvm->arch.nx_huge_page_last = get_jiffies_64(); + kvm->arch.nx_huge_page_recovery_thread = vhost_task_create( + kvm_nx_huge_page_recovery_worker, kvm_nx_huge_page_recovery_worker_kill, + kvm, "kvm-nx-lpage-recovery");
- return err; + if (!kvm->arch.nx_huge_page_recovery_thread) + return -ENOMEM; + + vhost_task_start(kvm->arch.nx_huge_page_recovery_thread); + return 0; }
void kvm_mmu_pre_destroy_vm(struct kvm *kvm) { if (kvm->arch.nx_huge_page_recovery_thread) - kthread_stop(kvm->arch.nx_huge_page_recovery_thread); + vhost_task_stop(kvm->arch.nx_huge_page_recovery_thread); }
#ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c index 8f7eb3ad88fc..5521608077ec 100644 --- a/arch/x86/kvm/mmu/spte.c +++ b/arch/x86/kvm/mmu/spte.c @@ -226,12 +226,20 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, spte |= PT_WRITABLE_MASK | shadow_mmu_writable_mask;
/* - * Optimization: for pte sync, if spte was writable the hash - * lookup is unnecessary (and expensive). Write protection - * is responsibility of kvm_mmu_get_page / kvm_mmu_sync_roots. - * Same reasoning can be applied to dirty page accounting. + * When overwriting an existing leaf SPTE, and the old SPTE was + * writable, skip trying to unsync shadow pages as any relevant + * shadow pages must already be unsync, i.e. the hash lookup is + * unnecessary (and expensive). + * + * The same reasoning applies to dirty page/folio accounting; + * KVM will mark the folio dirty using the old SPTE, thus + * there's no need to immediately mark the new SPTE as dirty. + * + * Note, both cases rely on KVM not changing PFNs without first + * zapping the old SPTE, which is guaranteed by both the shadow + * MMU and the TDP MMU. */ - if (is_writable_pte(old_spte)) + if (is_last_spte(old_spte, level) && is_writable_pte(old_spte)) goto out;
/* diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index d28618e9277e..92fee5e8a3c7 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -2551,28 +2551,6 @@ static bool cpu_has_sgx(void) return cpuid_eax(0) >= 0x12 && (cpuid_eax(0x12) & BIT(0)); }
-/* - * Some cpus support VM_{ENTRY,EXIT}_IA32_PERF_GLOBAL_CTRL but they - * can't be used due to errata where VM Exit may incorrectly clear - * IA32_PERF_GLOBAL_CTRL[34:32]. Work around the errata by using the - * MSR load mechanism to switch IA32_PERF_GLOBAL_CTRL. - */ -static bool cpu_has_perf_global_ctrl_bug(void) -{ - switch (boot_cpu_data.x86_vfm) { - case INTEL_NEHALEM_EP: /* AAK155 */ - case INTEL_NEHALEM: /* AAP115 */ - case INTEL_WESTMERE: /* AAT100 */ - case INTEL_WESTMERE_EP: /* BC86,AAY89,BD102 */ - case INTEL_NEHALEM_EX: /* BA97 */ - return true; - default: - break; - } - - return false; -} - static int adjust_vmx_controls(u32 ctl_min, u32 ctl_opt, u32 msr, u32 *result) { u32 vmx_msr_low, vmx_msr_high; @@ -2732,6 +2710,27 @@ static int setup_vmcs_config(struct vmcs_config *vmcs_conf, _vmexit_control &= ~x_ctrl; }
+ /* + * Some cpus support VM_{ENTRY,EXIT}_IA32_PERF_GLOBAL_CTRL but they + * can't be used due to an errata where VM Exit may incorrectly clear + * IA32_PERF_GLOBAL_CTRL[34:32]. Workaround the errata by using the + * MSR load mechanism to switch IA32_PERF_GLOBAL_CTRL. + */ + switch (boot_cpu_data.x86_vfm) { + case INTEL_NEHALEM_EP: /* AAK155 */ + case INTEL_NEHALEM: /* AAP115 */ + case INTEL_WESTMERE: /* AAT100 */ + case INTEL_WESTMERE_EP: /* BC86,AAY89,BD102 */ + case INTEL_NEHALEM_EX: /* BA97 */ + _vmentry_control &= ~VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL; + _vmexit_control &= ~VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL; + pr_warn_once("VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL " + "does not work properly. Using workaround\n"); + break; + default: + break; + } + rdmsrl(MSR_IA32_VMX_BASIC, basic_msr);
/* IA-32 SDM Vol 3B: VMCS size is never greater than 4kB. */ @@ -4422,9 +4421,6 @@ static u32 vmx_vmentry_ctrl(void) VM_ENTRY_LOAD_IA32_EFER | VM_ENTRY_IA32E_MODE);
- if (cpu_has_perf_global_ctrl_bug()) - vmentry_ctrl &= ~VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL; - return vmentry_ctrl; }
@@ -4442,10 +4438,6 @@ static u32 vmx_vmexit_ctrl(void) if (vmx_pt_mode_is_system()) vmexit_ctrl &= ~(VM_EXIT_PT_CONCEAL_PIP | VM_EXIT_CLEAR_IA32_RTIT_CTL); - - if (cpu_has_perf_global_ctrl_bug()) - vmexit_ctrl &= ~VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL; - /* Loading of EFER and PERF_GLOBAL_CTRL are toggled dynamically */ return vmexit_ctrl & ~(VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL | VM_EXIT_LOAD_IA32_EFER); @@ -8400,10 +8392,6 @@ __init int vmx_hardware_setup(void) if (setup_vmcs_config(&vmcs_config, &vmx_capability) < 0) return -EIO;
- if (cpu_has_perf_global_ctrl_bug()) - pr_warn_once("VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL " - "does not work properly. Using workaround\n"); - if (boot_cpu_has(X86_FEATURE_NX)) kvm_enable_efer_bits(EFER_NX);
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c index 86593d1b787d..b0678d59ebdb 100644 --- a/arch/x86/mm/tlb.c +++ b/arch/x86/mm/tlb.c @@ -20,6 +20,7 @@ #include <asm/cacheflush.h> #include <asm/apic.h> #include <asm/perf_event.h> +#include <asm/tlb.h>
#include "mm_internal.h"
@@ -1140,7 +1141,7 @@ STATIC_NOPV void native_flush_tlb_one_user(unsigned long addr) bool cpu_pcide;
/* Flush 'addr' from the kernel PCID: */ - asm volatile("invlpg (%0)" ::"r" (addr) : "memory"); + invlpg(addr);
/* If PTI is off there is no user PCID and nothing to flush. */ if (!static_cpu_has(X86_FEATURE_PTI)) diff --git a/arch/x86/platform/pvh/head.S b/arch/x86/platform/pvh/head.S index 64fca49cd88f..ce4fd8d33da4 100644 --- a/arch/x86/platform/pvh/head.S +++ b/arch/x86/platform/pvh/head.S @@ -172,7 +172,14 @@ SYM_CODE_START_LOCAL(pvh_start_xen) movq %rbp, %rbx subq $_pa(pvh_start_xen), %rbx movq %rbx, phys_base(%rip) - call xen_prepare_pvh + + /* Call xen_prepare_pvh() via the kernel virtual mapping */ + leaq xen_prepare_pvh(%rip), %rax + subq phys_base(%rip), %rax + addq $__START_KERNEL_map, %rax + ANNOTATE_RETPOLINE_SAFE + call *%rax + /* * Clear phys_base. __startup_64 will *add* to its value, * so reset to 0. diff --git a/arch/xtensa/kernel/setup.c b/arch/xtensa/kernel/setup.c index bdec4a773af0..e51f2060e830 100644 --- a/arch/xtensa/kernel/setup.c +++ b/arch/xtensa/kernel/setup.c @@ -216,7 +216,7 @@ static int __init xtensa_dt_io_area(unsigned long node, const char *uname,
void __init early_init_devtree(void *params) { - early_init_dt_scan(params); + early_init_dt_scan(params, __pa(params)); of_scan_flat_dt(xtensa_dt_io_area, NULL);
if (!command_line[0]) diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c index e831aedb4643..9fb9f3533150 100644 --- a/block/bfq-cgroup.c +++ b/block/bfq-cgroup.c @@ -736,6 +736,7 @@ static void bfq_sync_bfqq_move(struct bfq_data *bfqd, */ bfq_put_cooperator(sync_bfqq); bic_set_bfqq(bic, NULL, true, act_idx); + bfq_release_process_ref(bfqd, sync_bfqq); } }
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c index 0747d9d0e48c..95dd7b795935 100644 --- a/block/bfq-iosched.c +++ b/block/bfq-iosched.c @@ -582,23 +582,31 @@ static struct request *bfq_choose_req(struct bfq_data *bfqd, #define BFQ_LIMIT_INLINE_DEPTH 16
#ifdef CONFIG_BFQ_GROUP_IOSCHED -static bool bfqq_request_over_limit(struct bfq_queue *bfqq, int limit) +static bool bfqq_request_over_limit(struct bfq_data *bfqd, + struct bfq_io_cq *bic, blk_opf_t opf, + unsigned int act_idx, int limit) { - struct bfq_data *bfqd = bfqq->bfqd; - struct bfq_entity *entity = &bfqq->entity; struct bfq_entity *inline_entities[BFQ_LIMIT_INLINE_DEPTH]; struct bfq_entity **entities = inline_entities; - int depth, level, alloc_depth = BFQ_LIMIT_INLINE_DEPTH; - int class_idx = bfqq->ioprio_class - 1; + int alloc_depth = BFQ_LIMIT_INLINE_DEPTH; struct bfq_sched_data *sched_data; + struct bfq_entity *entity; + struct bfq_queue *bfqq; unsigned long wsum; bool ret = false; - - if (!entity->on_st_or_in_serv) - return false; + int depth; + int level;
retry: spin_lock_irq(&bfqd->lock); + bfqq = bic_to_bfqq(bic, op_is_sync(opf), act_idx); + if (!bfqq) + goto out; + + entity = &bfqq->entity; + if (!entity->on_st_or_in_serv) + goto out; + /* +1 for bfqq entity, root cgroup not included */ depth = bfqg_to_blkg(bfqq_group(bfqq))->blkcg->css.cgroup->level + 1; if (depth > alloc_depth) { @@ -643,7 +651,7 @@ static bool bfqq_request_over_limit(struct bfq_queue *bfqq, int limit) * class. */ wsum = 0; - for (i = 0; i <= class_idx; i++) { + for (i = 0; i <= bfqq->ioprio_class - 1; i++) { wsum = wsum * IOPRIO_BE_NR + sched_data->service_tree[i].wsum; } @@ -666,7 +674,9 @@ static bool bfqq_request_over_limit(struct bfq_queue *bfqq, int limit) return ret; } #else -static bool bfqq_request_over_limit(struct bfq_queue *bfqq, int limit) +static bool bfqq_request_over_limit(struct bfq_data *bfqd, + struct bfq_io_cq *bic, blk_opf_t opf, + unsigned int act_idx, int limit) { return false; } @@ -704,8 +714,9 @@ static void bfq_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data) }
for (act_idx = 0; bic && act_idx < bfqd->num_actuators; act_idx++) { - struct bfq_queue *bfqq = - bic_to_bfqq(bic, op_is_sync(opf), act_idx); + /* Fast path to check if bfqq is already allocated. */ + if (!bic_to_bfqq(bic, op_is_sync(opf), act_idx)) + continue;
/* * Does queue (or any parent entity) exceed number of @@ -713,7 +724,7 @@ static void bfq_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data) * limit depth so that it cannot consume more * available requests and thus starve other entities. */ - if (bfqq && bfqq_request_over_limit(bfqq, limit)) { + if (bfqq_request_over_limit(bfqd, bic, opf, act_idx, limit)) { depth = 1; break; } @@ -5434,8 +5445,6 @@ void bfq_put_cooperator(struct bfq_queue *bfqq) bfq_put_queue(__bfqq); __bfqq = next; } - - bfq_release_process_ref(bfqq->bfqd, bfqq); }
static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq) @@ -5448,6 +5457,8 @@ static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq) bfq_log_bfqq(bfqd, bfqq, "exit_bfqq: %p, %d", bfqq, bfqq->ref);
bfq_put_cooperator(bfqq); + + bfq_release_process_ref(bfqd, bfqq); }
static void bfq_exit_icq_bfqq(struct bfq_io_cq *bic, bool is_sync, @@ -6734,6 +6745,8 @@ bfq_split_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq) bic_set_bfqq(bic, NULL, true, bfqq->actuator_idx);
bfq_put_cooperator(bfqq); + + bfq_release_process_ref(bfqq->bfqd, bfqq); return NULL; }
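With this series, bfq_put_cooperator() only drops the references held on the cooperator chain; every caller that previously relied on it to also drop the process reference now calls bfq_release_process_ref() itself (the cgroup move path, bfq_exit_bfqq() and bfq_split_bfqq()). A minimal userspace sketch of that split of responsibilities, using hypothetical names:

/* Sketch: separate "drop chained refs" from "drop the caller's own ref".
 * Hypothetical names; not the kernel implementation. */
#include <stdio.h>
#include <stdlib.h>

struct node {
	int refs;
	struct node *next;	/* chained "cooperator" */
};

static void node_put(struct node *n)
{
	if (--n->refs == 0) {
		printf("freeing node %p\n", (void *)n);
		free(n);
	}
}

/* Drops only the references held on the chain, never the caller's own. */
static void put_cooperators(struct node *n)
{
	struct node *c = n->next;

	while (c) {
		struct node *next = c->next;

		node_put(c);
		c = next;
	}
	n->next = NULL;
}

int main(void)
{
	struct node *a = calloc(1, sizeof(*a));
	struct node *b = calloc(1, sizeof(*b));

	a->refs = 1;		/* caller's reference */
	b->refs = 1;		/* reference held via a->next */
	a->next = b;

	put_cooperators(a);	/* frees b */
	node_put(a);		/* caller drops its own reference, frees a */
	return 0;
}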
diff --git a/block/blk-core.c b/block/blk-core.c index bc5e8c5eaac9..4f791a3114a1 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -261,6 +261,8 @@ static void blk_free_queue(struct request_queue *q) blk_mq_release(q);
ida_free(&blk_queue_ida, q->id); + lockdep_unregister_key(&q->io_lock_cls_key); + lockdep_unregister_key(&q->q_lock_cls_key); call_rcu(&q->rcu_head, blk_free_queue_rcu); }
@@ -278,18 +280,20 @@ void blk_put_queue(struct request_queue *q) } EXPORT_SYMBOL(blk_put_queue);
-void blk_queue_start_drain(struct request_queue *q)
+bool blk_queue_start_drain(struct request_queue *q)
 {
 	/*
 	 * When queue DYING flag is set, we need to block new req
 	 * entering queue, so we call blk_freeze_queue_start() to
 	 * prevent I/O from crossing blk_queue_enter().
 	 */
-	blk_freeze_queue_start(q);
+	bool freeze = __blk_freeze_queue_start(q, current);
 	if (queue_is_mq(q))
 		blk_mq_wake_waiters(q);
 	/* Make blk_queue_enter() reexamine the DYING flag. */
 	wake_up_all(&q->mq_freeze_wq);
+
+	return freeze;
 }
/** @@ -321,6 +325,8 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags) return -ENODEV; }
+ rwsem_acquire_read(&q->q_lockdep_map, 0, 0, _RET_IP_); + rwsem_release(&q->q_lockdep_map, _RET_IP_); return 0; }
@@ -352,6 +358,8 @@ int __bio_queue_enter(struct request_queue *q, struct bio *bio) goto dead; }
+ rwsem_acquire_read(&q->io_lockdep_map, 0, 0, _RET_IP_); + rwsem_release(&q->io_lockdep_map, _RET_IP_); return 0; dead: bio_io_error(bio); @@ -441,6 +449,12 @@ struct request_queue *blk_alloc_queue(struct queue_limits *lim, int node_id) PERCPU_REF_INIT_ATOMIC, GFP_KERNEL); if (error) goto fail_stats; + lockdep_register_key(&q->io_lock_cls_key); + lockdep_register_key(&q->q_lock_cls_key); + lockdep_init_map(&q->io_lockdep_map, "&q->q_usage_counter(io)", + &q->io_lock_cls_key, 0); + lockdep_init_map(&q->q_lockdep_map, "&q->q_usage_counter(queue)", + &q->q_lock_cls_key, 0);
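The lockdep machinery registered above treats q->q_usage_counter like a pair of reader/writer locks: entering the queue for I/O is a shared acquire, freezing it is an exclusive one, and the momentary rwsem_acquire_read()/rwsem_release() in the enter paths records the dependency without actually holding anything. A userspace analogue of the ordering being modelled, with a pthread rwlock standing in for the percpu ref (illustrative names only):

/* Userspace analogue: queue "enter" as a shared lock, "freeze" as exclusive.
 * Illustrative only; the kernel only annotates this for lockdep. */
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t q_usage = PTHREAD_RWLOCK_INITIALIZER;

static void submit_io(int i)
{
	pthread_rwlock_rdlock(&q_usage);	/* like blk_queue_enter() */
	printf("I/O %d in flight\n", i);
	pthread_rwlock_unlock(&q_usage);	/* like blk_queue_exit() */
}

static void freeze_queue(void)
{
	pthread_rwlock_wrlock(&q_usage);	/* like blk_mq_freeze_queue(): waits for readers */
	printf("queue frozen\n");
	pthread_rwlock_unlock(&q_usage);	/* like blk_mq_unfreeze_queue() */
}

int main(void)
{
	submit_io(1);
	freeze_queue();
	submit_io(2);
	return 0;
}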
q->nr_requests = BLKDEV_DEFAULT_RQ;
diff --git a/block/blk-merge.c b/block/blk-merge.c index ad763ec313b6..5baa950f34fe 100644 --- a/block/blk-merge.c +++ b/block/blk-merge.c @@ -166,17 +166,6 @@ struct bio *bio_split_discard(struct bio *bio, const struct queue_limits *lim, return bio_submit_split(bio, split_sectors); }
-struct bio *bio_split_write_zeroes(struct bio *bio, - const struct queue_limits *lim, unsigned *nsegs) -{ - *nsegs = 0; - if (!lim->max_write_zeroes_sectors) - return bio; - if (bio_sectors(bio) <= lim->max_write_zeroes_sectors) - return bio; - return bio_submit_split(bio, lim->max_write_zeroes_sectors); -} - static inline unsigned int blk_boundary_sectors(const struct queue_limits *lim, bool is_atomic) { @@ -211,7 +200,9 @@ static inline unsigned get_max_io_size(struct bio *bio, * We ignore lim->max_sectors for atomic writes because it may less * than the actual bio size, which we cannot tolerate. */ - if (is_atomic) + if (bio_op(bio) == REQ_OP_WRITE_ZEROES) + max_sectors = lim->max_write_zeroes_sectors; + else if (is_atomic) max_sectors = lim->atomic_write_max_sectors; else max_sectors = lim->max_sectors; @@ -296,6 +287,14 @@ static bool bvec_split_segs(const struct queue_limits *lim, return len > 0 || bv->bv_len > max_len; }
+static unsigned int bio_split_alignment(struct bio *bio, + const struct queue_limits *lim) +{ + if (op_is_write(bio_op(bio)) && lim->zone_write_granularity) + return lim->zone_write_granularity; + return lim->logical_block_size; +} + /** * bio_split_rw_at - check if and where to split a read/write bio * @bio: [in] bio to be split @@ -358,7 +357,7 @@ int bio_split_rw_at(struct bio *bio, const struct queue_limits *lim, * split size so that each bio is properly block size aligned, even if * we do not use the full hardware limits. */ - bytes = ALIGN_DOWN(bytes, lim->logical_block_size); + bytes = ALIGN_DOWN(bytes, bio_split_alignment(bio, lim));
/* * Bio splitting may cause subtle trouble such as hang when doing sync @@ -398,6 +397,26 @@ struct bio *bio_split_zone_append(struct bio *bio, return bio_submit_split(bio, split_sectors); }
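The split path above now aligns the split size via bio_split_alignment(), i.e. to the zone write granularity for writes on zoned devices instead of only to the logical block size. A tiny standalone sketch of the rounding, with made-up limits:

/* Split-size rounding sketch. Example limits are made up. */
#include <stdio.h>

#define ALIGN_DOWN(x, a) ((x) / (a) * (a))

int main(void)
{
	unsigned int logical_block_size = 512;
	unsigned int zone_write_granularity = 4096;	/* zoned-device example */
	unsigned int bytes = 71000;			/* candidate split size */

	printf("plain rw split:    %u\n", ALIGN_DOWN(bytes, logical_block_size));	/* 70656 */
	printf("zoned write split: %u\n", ALIGN_DOWN(bytes, zone_write_granularity));	/* 69632 */
	return 0;
}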
+struct bio *bio_split_write_zeroes(struct bio *bio, + const struct queue_limits *lim, unsigned *nsegs) +{ + unsigned int max_sectors = get_max_io_size(bio, lim); + + *nsegs = 0; + + /* + * An unset limit should normally not happen, as bio submission is keyed + * off having a non-zero limit. But SCSI can clear the limit in the + * I/O completion handler, and we can race and see this. Splitting to a + * zero limit obviously doesn't make sense, so band-aid it here. + */ + if (!max_sectors) + return bio; + if (bio_sectors(bio) <= max_sectors) + return bio; + return bio_submit_split(bio, max_sectors); +} + /** * bio_split_to_limits - split a bio to fit the queue limits * @bio: bio to be split diff --git a/block/blk-mq.c b/block/blk-mq.c index cf626e061dd7..b4fba7b398e5 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c @@ -120,9 +120,59 @@ void blk_mq_in_flight_rw(struct request_queue *q, struct block_device *part, inflight[1] = mi.inflight[1]; }
-void blk_freeze_queue_start(struct request_queue *q) +#ifdef CONFIG_LOCKDEP +static bool blk_freeze_set_owner(struct request_queue *q, + struct task_struct *owner) +{ + if (!owner) + return false; + + if (!q->mq_freeze_depth) { + q->mq_freeze_owner = owner; + q->mq_freeze_owner_depth = 1; + return true; + } + + if (owner == q->mq_freeze_owner) + q->mq_freeze_owner_depth += 1; + return false; +} + +/* verify the last unfreeze in owner context */ +static bool blk_unfreeze_check_owner(struct request_queue *q) +{ + if (!q->mq_freeze_owner) + return false; + if (q->mq_freeze_owner != current) + return false; + if (--q->mq_freeze_owner_depth == 0) { + q->mq_freeze_owner = NULL; + return true; + } + return false; +} + +#else + +static bool blk_freeze_set_owner(struct request_queue *q, + struct task_struct *owner) +{ + return false; +} + +static bool blk_unfreeze_check_owner(struct request_queue *q) { + return false; +} +#endif + +bool __blk_freeze_queue_start(struct request_queue *q, + struct task_struct *owner) +{ + bool freeze; + mutex_lock(&q->mq_freeze_lock); + freeze = blk_freeze_set_owner(q, owner); if (++q->mq_freeze_depth == 1) { percpu_ref_kill(&q->q_usage_counter); mutex_unlock(&q->mq_freeze_lock); @@ -131,6 +181,14 @@ void blk_freeze_queue_start(struct request_queue *q) } else { mutex_unlock(&q->mq_freeze_lock); } + + return freeze; +} + +void blk_freeze_queue_start(struct request_queue *q) +{ + if (__blk_freeze_queue_start(q, current)) + blk_freeze_acquire_lock(q, false, false); } EXPORT_SYMBOL_GPL(blk_freeze_queue_start);
@@ -176,8 +234,10 @@ void blk_mq_freeze_queue(struct request_queue *q) } EXPORT_SYMBOL_GPL(blk_mq_freeze_queue);
-void __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic) +bool __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic) { + bool unfreeze; + mutex_lock(&q->mq_freeze_lock); if (force_atomic) q->q_usage_counter.data->force_atomic = true; @@ -187,15 +247,39 @@ void __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic) percpu_ref_resurrect(&q->q_usage_counter); wake_up_all(&q->mq_freeze_wq); } + unfreeze = blk_unfreeze_check_owner(q); mutex_unlock(&q->mq_freeze_lock); + + return unfreeze; }
void blk_mq_unfreeze_queue(struct request_queue *q) { - __blk_mq_unfreeze_queue(q, false); + if (__blk_mq_unfreeze_queue(q, false)) + blk_unfreeze_release_lock(q, false, false); } EXPORT_SYMBOL_GPL(blk_mq_unfreeze_queue);
+/* + * non_owner variant of blk_freeze_queue_start + * + * Unlike blk_freeze_queue_start, the queue doesn't need to be unfrozen + * by the same task. This is fragile and should not be used if at all + * possible. + */ +void blk_freeze_queue_start_non_owner(struct request_queue *q) +{ + __blk_freeze_queue_start(q, NULL); +} +EXPORT_SYMBOL_GPL(blk_freeze_queue_start_non_owner); + +/* non_owner variant of blk_mq_unfreeze_queue */ +void blk_mq_unfreeze_queue_non_owner(struct request_queue *q) +{ + __blk_mq_unfreeze_queue(q, false); +} +EXPORT_SYMBOL_GPL(blk_mq_unfreeze_queue_non_owner); + /* * FIXME: replace the scsi_internal_device_*block_nowait() calls in the * mpt3sas driver such that this function can be removed. @@ -283,8 +367,9 @@ void blk_mq_quiesce_tagset(struct blk_mq_tag_set *set) if (!blk_queue_skip_tagset_quiesce(q)) blk_mq_quiesce_queue_nowait(q); } - blk_mq_wait_quiesce_done(set); mutex_unlock(&set->tag_list_lock); + + blk_mq_wait_quiesce_done(set); } EXPORT_SYMBOL_GPL(blk_mq_quiesce_tagset);
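Under CONFIG_LOCKDEP the freeze path also remembers which task did the outermost freeze, and only that task's matching unfreeze hands the lockdep "lock" back; the *_non_owner variants exist for the rare paths where freeze and unfreeze legitimately run in different contexts. A small self-contained sketch of the owner/depth bookkeeping (hypothetical types, not the kernel code):

/* Owner/depth tracking sketch for a recursive "freeze". Illustrative only. */
#include <stdbool.h>
#include <stdio.h>

struct queue {
	int freeze_depth;
	int owner_depth;
	const char *owner;	/* stands in for a task pointer */
};

/* Returns true when this call is the outermost freeze by @who. */
static bool freeze(struct queue *q, const char *who)
{
	bool outermost = false;

	if (q->freeze_depth == 0) {
		q->owner = who;
		q->owner_depth = 1;
		outermost = true;
	} else if (q->owner && q->owner == who) {
		q->owner_depth++;
	}
	q->freeze_depth++;
	return outermost;
}

/* Returns true when the owner's last unfreeze releases its claim. */
static bool unfreeze(struct queue *q, const char *who)
{
	q->freeze_depth--;
	if (q->owner != who)
		return false;
	if (--q->owner_depth == 0) {
		q->owner = NULL;
		return true;
	}
	return false;
}

int main(void)
{
	struct queue q = { 0 };
	const char *taskA = "taskA";

	printf("%d\n", freeze(&q, taskA));	/* 1: outermost freeze */
	printf("%d\n", freeze(&q, taskA));	/* 0: nested */
	printf("%d\n", unfreeze(&q, taskA));	/* 0: still frozen once */
	printf("%d\n", unfreeze(&q, taskA));	/* 1: owner's last unfreeze */
	return 0;
}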
@@ -2200,6 +2285,24 @@ void blk_mq_delay_run_hw_queue(struct blk_mq_hw_ctx *hctx, unsigned long msecs) } EXPORT_SYMBOL(blk_mq_delay_run_hw_queue);
+static inline bool blk_mq_hw_queue_need_run(struct blk_mq_hw_ctx *hctx) +{ + bool need_run; + + /* + * When queue is quiesced, we may be switching io scheduler, or + * updating nr_hw_queues, or other things, and we can't run queue + * any more, even blk_mq_hctx_has_pending() can't be called safely. + * + * And queue will be rerun in blk_mq_unquiesce_queue() if it is + * quiesced. + */ + __blk_mq_run_dispatch_ops(hctx->queue, false, + need_run = !blk_queue_quiesced(hctx->queue) && + blk_mq_hctx_has_pending(hctx)); + return need_run; +} + /** * blk_mq_run_hw_queue - Start to run a hardware queue. * @hctx: Pointer to the hardware queue to run. @@ -2220,20 +2323,23 @@ void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
might_sleep_if(!async && hctx->flags & BLK_MQ_F_BLOCKING);
- /* - * When queue is quiesced, we may be switching io scheduler, or - * updating nr_hw_queues, or other things, and we can't run queue - * any more, even __blk_mq_hctx_has_pending() can't be called safely. - * - * And queue will be rerun in blk_mq_unquiesce_queue() if it is - * quiesced. - */ - __blk_mq_run_dispatch_ops(hctx->queue, false, - need_run = !blk_queue_quiesced(hctx->queue) && - blk_mq_hctx_has_pending(hctx)); + need_run = blk_mq_hw_queue_need_run(hctx); + if (!need_run) { + unsigned long flags;
-	if (!need_run)
-		return;
+		/*
+		 * Synchronize with blk_mq_unquiesce_queue(): because we check
+		 * whether the hw queue is quiesced locklessly above, we need to
+		 * use ->queue_lock to make sure we see the up-to-date status
+		 * and do not miss rerunning the hw queue.
+		 */
+		spin_lock_irqsave(&hctx->queue->queue_lock, flags);
+		need_run = blk_mq_hw_queue_need_run(hctx);
+		spin_unlock_irqrestore(&hctx->queue->queue_lock, flags);
+
+		if (!need_run)
+			return;
+	}
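The net effect is the usual lockless-check-then-confirm-under-the-lock shape: the quiesced test stays lock-free on the hot path, and only when it says "nothing to do" is the answer re-checked under ->queue_lock, the lock that blk_mq_unquiesce_queue() also takes. A userspace sketch of the same shape, with a pthread mutex standing in for ->queue_lock (names invented):

/* "Lockless fast path, locked re-check on the slow path" sketch. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_bool quiesced = false;
static atomic_bool has_work = true;

static bool need_run_lockless(void)
{
	return !atomic_load(&quiesced) && atomic_load(&has_work);
}

static void run_hw_queue(void)
{
	if (!need_run_lockless()) {
		bool need_run;

		/* Re-check under the lock that unquiesce also takes, so a
		 * concurrent unquiesce cannot be missed. */
		pthread_mutex_lock(&state_lock);
		need_run = need_run_lockless();
		pthread_mutex_unlock(&state_lock);

		if (!need_run)
			return;
	}
	printf("dispatching\n");
}

int main(void)
{
	run_hw_queue();
	return 0;
}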
if (async || !cpumask_test_cpu(raw_smp_processor_id(), hctx->cpumask)) { blk_mq_delay_run_hw_queue(hctx, 0); @@ -2390,6 +2496,12 @@ void blk_mq_start_stopped_hw_queue(struct blk_mq_hw_ctx *hctx, bool async) return;
clear_bit(BLK_MQ_S_STOPPED, &hctx->state); + /* + * Pairs with the smp_mb() in blk_mq_hctx_stopped() to order the + * clearing of BLK_MQ_S_STOPPED above and the checking of dispatch + * list in the subsequent routine. + */ + smp_mb__after_atomic(); blk_mq_run_hw_queue(hctx, async); } EXPORT_SYMBOL_GPL(blk_mq_start_stopped_hw_queue); @@ -2620,6 +2732,7 @@ static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(rq->q)) { blk_mq_insert_request(rq, 0); + blk_mq_run_hw_queue(hctx, false); return; }
@@ -2650,6 +2763,7 @@ static blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last)
if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(rq->q)) { blk_mq_insert_request(rq, 0); + blk_mq_run_hw_queue(hctx, false); return BLK_STS_OK; }
diff --git a/block/blk-mq.h b/block/blk-mq.h index 3bd43b10032f..f4ac1af77a26 100644 --- a/block/blk-mq.h +++ b/block/blk-mq.h @@ -230,6 +230,19 @@ static inline struct blk_mq_tags *blk_mq_tags_from_data(struct blk_mq_alloc_data
 static inline bool blk_mq_hctx_stopped(struct blk_mq_hw_ctx *hctx)
 {
+	/* Fast path: hardware queue is not stopped most of the time. */
+	if (likely(!test_bit(BLK_MQ_S_STOPPED, &hctx->state)))
+		return false;
+
+	/*
+	 * This barrier orders adding requests to the dispatch list before the
+	 * test of BLK_MQ_S_STOPPED below. It pairs with the memory barrier in
+	 * blk_mq_start_stopped_hw_queue() so that the dispatch code either
+	 * sees BLK_MQ_S_STOPPED cleared or sees a non-empty dispatch list,
+	 * and therefore never misses dispatching requests.
+	 */
+	smp_mb();
+
 	return test_bit(BLK_MQ_S_STOPPED, &hctx->state);
 }
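The two barriers form a standard publish/observe pairing: one side adds to the dispatch list and then re-reads BLK_MQ_S_STOPPED, the other clears BLK_MQ_S_STOPPED and then looks at the dispatch list, so at least one side must see the other's update. A compact C11 sketch of that pairing, with seq_cst fences standing in for the kernel barriers (names invented):

/* Two-sided barrier pairing sketch (C11 fences in place of smp_mb()). */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool stopped = true;
static atomic_int  pending = 0;

/* Submitter side: publish work, then re-check the stopped flag. */
static bool submit(void)
{
	atomic_fetch_add_explicit(&pending, 1, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);		/* like smp_mb() */
	return !atomic_load_explicit(&stopped, memory_order_relaxed);
}

/* Restart side: clear the flag, then look for already-published work. */
static bool restart(void)
{
	atomic_store_explicit(&stopped, false, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);		/* like smp_mb__after_atomic() */
	return atomic_load_explicit(&pending, memory_order_relaxed) > 0;
}

int main(void)
{
	/* With the fences, at least one side observes the other: either
	 * submit() sees stopped == false, or restart() sees pending > 0. */
	printf("restart sees work: %d\n", restart());
	printf("submit sees running: %d\n", submit());
	return 0;
}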
diff --git a/block/blk-settings.c b/block/blk-settings.c index a446654ddee5..7abf034089cd 100644 --- a/block/blk-settings.c +++ b/block/blk-settings.c @@ -249,6 +249,13 @@ static int blk_validate_limits(struct queue_limits *lim) if (lim->io_min < lim->physical_block_size) lim->io_min = lim->physical_block_size;
+	/*
+	 * The optimal I/O size may not be aligned to physical block size
+	 * (because it may be limited by dma engines which have no clue about
+	 * block size of the disks attached to them), so we round it down here.
+	 */
+	lim->io_opt = round_down(lim->io_opt, lim->physical_block_size);
+
 	/*
 	 * max_hw_sectors has a somewhat weird default for historical reason,
 	 * but driver really should set their own instead of relying on this
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index e85941bec857..207577145c54 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -794,10 +794,8 @@ int blk_register_queue(struct gendisk *disk)
 	 * faster to shut down and is made fully functional here as
 	 * request_queues for non-existent devices never get registered.
 	 */
-	if (!blk_queue_init_done(q)) {
-		blk_queue_flag_set(QUEUE_FLAG_INIT_DONE, q);
-		percpu_ref_switch_to_percpu(&q->q_usage_counter);
-	}
+	blk_queue_flag_set(QUEUE_FLAG_INIT_DONE, q);
+	percpu_ref_switch_to_percpu(&q->q_usage_counter);
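For the io_opt clamp above, the rounding is plain integer arithmetic: whatever optimal I/O size the device reports is rounded down to a multiple of the physical block size before being exposed. For example (made-up values):

/* io_opt rounding sketch with made-up values. */
#include <stdio.h>

static unsigned int round_down_to(unsigned int x, unsigned int a)
{
	return x / a * a;
}

int main(void)
{
	unsigned int physical_block_size = 4096;
	unsigned int io_opt = 65536 + 512;	/* as reported by a DMA-limited device */

	printf("io_opt after rounding: %u\n",
	       round_down_to(io_opt, physical_block_size));	/* 65536 */
	return 0;
}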
return ret;
diff --git a/block/blk-zoned.c b/block/blk-zoned.c index af19296fa50d..95e517723db3 100644 --- a/block/blk-zoned.c +++ b/block/blk-zoned.c @@ -1541,6 +1541,7 @@ static int disk_update_zone_resources(struct gendisk *disk, unsigned int nr_seq_zones, nr_conv_zones = 0; unsigned int pool_size; struct queue_limits lim; + int ret;
disk->nr_zones = args->nr_zones; disk->zone_capacity = args->zone_capacity; @@ -1593,7 +1594,11 @@ static int disk_update_zone_resources(struct gendisk *disk, }
commit: - return queue_limits_commit_update(q, &lim); + blk_mq_freeze_queue(q); + ret = queue_limits_commit_update(q, &lim); + blk_mq_unfreeze_queue(q); + + return ret; }
static int blk_revalidate_conv_zone(struct blk_zone *zone, unsigned int idx, @@ -1814,14 +1819,15 @@ int blk_revalidate_disk_zones(struct gendisk *disk) * Set the new disk zone parameters only once the queue is frozen and * all I/Os are completed. */ - blk_mq_freeze_queue(q); if (ret > 0) ret = disk_update_zone_resources(disk, &args); else pr_warn("%s: failed to revalidate zones\n", disk->disk_name); - if (ret) + if (ret) { + blk_mq_freeze_queue(q); disk_free_zone_resources(disk); - blk_mq_unfreeze_queue(q); + blk_mq_unfreeze_queue(q); + }
kfree(args.conv_zones_bitmap);
diff --git a/block/blk.h b/block/blk.h index c718e4291db0..88fab6a81701 100644 --- a/block/blk.h +++ b/block/blk.h @@ -4,6 +4,7 @@
#include <linux/bio-integrity.h> #include <linux/blk-crypto.h> +#include <linux/lockdep.h> #include <linux/memblock.h> /* for max_pfn/max_low_pfn */ #include <linux/sched/sysctl.h> #include <linux/timekeeping.h> @@ -35,8 +36,10 @@ struct blk_flush_queue *blk_alloc_flush_queue(int node, int cmd_size, void blk_free_flush_queue(struct blk_flush_queue *q);
void blk_freeze_queue(struct request_queue *q); -void __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic); -void blk_queue_start_drain(struct request_queue *q); +bool __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic); +bool blk_queue_start_drain(struct request_queue *q); +bool __blk_freeze_queue_start(struct request_queue *q, + struct task_struct *owner); int __bio_queue_enter(struct request_queue *q, struct bio *bio); void submit_bio_noacct_nocheck(struct bio *bio); void bio_await_chain(struct bio *bio); @@ -69,8 +72,11 @@ static inline int bio_queue_enter(struct bio *bio) { struct request_queue *q = bdev_get_queue(bio->bi_bdev);
- if (blk_try_enter_queue(q, false)) + if (blk_try_enter_queue(q, false)) { + rwsem_acquire_read(&q->io_lockdep_map, 0, 0, _RET_IP_); + rwsem_release(&q->io_lockdep_map, _RET_IP_); return 0; + } return __bio_queue_enter(q, bio); }
@@ -734,4 +740,22 @@ void blk_integrity_verify(struct bio *bio); void blk_integrity_prepare(struct request *rq); void blk_integrity_complete(struct request *rq, unsigned int nr_bytes);
+static inline void blk_freeze_acquire_lock(struct request_queue *q, bool + disk_dead, bool queue_dying) +{ + if (!disk_dead) + rwsem_acquire(&q->io_lockdep_map, 0, 1, _RET_IP_); + if (!queue_dying) + rwsem_acquire(&q->q_lockdep_map, 0, 1, _RET_IP_); +} + +static inline void blk_unfreeze_release_lock(struct request_queue *q, bool + disk_dead, bool queue_dying) +{ + if (!queue_dying) + rwsem_release(&q->q_lockdep_map, _RET_IP_); + if (!disk_dead) + rwsem_release(&q->io_lockdep_map, _RET_IP_); +} + #endif /* BLK_INTERNAL_H */ diff --git a/block/elevator.c b/block/elevator.c index 9430cde13d1a..43ba4ab1ada7 100644 --- a/block/elevator.c +++ b/block/elevator.c @@ -598,13 +598,19 @@ void elevator_init_mq(struct request_queue *q) * drain any dispatch activities originated from passthrough * requests, then no need to quiesce queue which may add long boot * latency, especially when lots of disks are involved. + * + * Disk isn't added yet, so verifying queue lock only manually. */ - blk_mq_freeze_queue(q); + blk_freeze_queue_start_non_owner(q); + blk_freeze_acquire_lock(q, true, false); + blk_mq_freeze_queue_wait(q); + blk_mq_cancel_work_sync(q);
err = blk_mq_init_sched(q, e);
- blk_mq_unfreeze_queue(q); + blk_unfreeze_release_lock(q, true, false); + blk_mq_unfreeze_queue_non_owner(q);
 	if (err) {
 		pr_warn("\"%s\" elevator initialization failed, "
diff --git a/block/fops.c b/block/fops.c
index e696ae53bf1e..13a67940d040 100644
--- a/block/fops.c
+++ b/block/fops.c
@@ -35,13 +35,10 @@ static blk_opf_t dio_bio_write_op(struct kiocb *iocb)
 	return opf;
 }
-static bool blkdev_dio_invalid(struct block_device *bdev, loff_t pos, - struct iov_iter *iter, bool is_atomic) +static bool blkdev_dio_invalid(struct block_device *bdev, struct kiocb *iocb, + struct iov_iter *iter) { - if (is_atomic && !generic_atomic_write_valid(iter, pos)) - return true; - - return pos & (bdev_logical_block_size(bdev) - 1) || + return iocb->ki_pos & (bdev_logical_block_size(bdev) - 1) || !bdev_iter_is_aligned(bdev, iter); }
@@ -368,13 +365,12 @@ static ssize_t __blkdev_direct_IO_async(struct kiocb *iocb, static ssize_t blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter) { struct block_device *bdev = I_BDEV(iocb->ki_filp->f_mapping->host); - bool is_atomic = iocb->ki_flags & IOCB_ATOMIC; unsigned int nr_pages;
if (!iov_iter_count(iter)) return 0;
- if (blkdev_dio_invalid(bdev, iocb->ki_pos, iter, is_atomic)) + if (blkdev_dio_invalid(bdev, iocb, iter)) return -EINVAL;
nr_pages = bio_iov_vecs_to_alloc(iter, BIO_MAX_VECS + 1); @@ -383,7 +379,7 @@ static ssize_t blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter) return __blkdev_direct_IO_simple(iocb, iter, bdev, nr_pages); return __blkdev_direct_IO_async(iocb, iter, bdev, nr_pages); - } else if (is_atomic) { + } else if (iocb->ki_flags & IOCB_ATOMIC) { return -EINVAL; } return __blkdev_direct_IO(iocb, iter, bdev, bio_max_segs(nr_pages)); @@ -625,7 +621,7 @@ static int blkdev_open(struct inode *inode, struct file *filp) if (!bdev) return -ENXIO;
- if (bdev_can_atomic_write(bdev) && filp->f_flags & O_DIRECT) + if (bdev_can_atomic_write(bdev)) filp->f_mode |= FMODE_CAN_ATOMIC_WRITE;
ret = bdev_open(bdev, mode, filp->private_data, NULL, filp); @@ -681,6 +677,7 @@ static ssize_t blkdev_write_iter(struct kiocb *iocb, struct iov_iter *from) struct file *file = iocb->ki_filp; struct inode *bd_inode = bdev_file_inode(file); struct block_device *bdev = I_BDEV(bd_inode); + bool atomic = iocb->ki_flags & IOCB_ATOMIC; loff_t size = bdev_nr_bytes(bdev); size_t shorted = 0; ssize_t ret; @@ -700,8 +697,16 @@ static ssize_t blkdev_write_iter(struct kiocb *iocb, struct iov_iter *from) if ((iocb->ki_flags & (IOCB_NOWAIT | IOCB_DIRECT)) == IOCB_NOWAIT) return -EOPNOTSUPP;
+ if (atomic) { + ret = generic_atomic_write_valid(iocb, from); + if (ret) + return ret; + } + size -= iocb->ki_pos; if (iov_iter_count(from) > size) { + if (atomic) + return -EINVAL; shorted = iov_iter_count(from) - size; iov_iter_truncate(from, size); } diff --git a/block/genhd.c b/block/genhd.c index 1c05dd4c6980..8645cf3b0816 100644 --- a/block/genhd.c +++ b/block/genhd.c @@ -581,13 +581,13 @@ static void blk_report_disk_dead(struct gendisk *disk, bool surprise) rcu_read_unlock(); }
-static void __blk_mark_disk_dead(struct gendisk *disk) +static bool __blk_mark_disk_dead(struct gendisk *disk) { /* * Fail any new I/O. */ if (test_and_set_bit(GD_DEAD, &disk->state)) - return; + return false;
if (test_bit(GD_OWNS_QUEUE, &disk->state)) blk_queue_flag_set(QUEUE_FLAG_DYING, disk->queue); @@ -600,7 +600,7 @@ static void __blk_mark_disk_dead(struct gendisk *disk) /* * Prevent new I/O from crossing bio_queue_enter(). */ - blk_queue_start_drain(disk->queue); + return blk_queue_start_drain(disk->queue); }
/** @@ -641,6 +641,7 @@ void del_gendisk(struct gendisk *disk) struct request_queue *q = disk->queue; struct block_device *part; unsigned long idx; + bool start_drain, queue_dying;
might_sleep();
@@ -668,7 +669,10 @@ void del_gendisk(struct gendisk *disk) * Drop all partitions now that the disk is marked dead. */ mutex_lock(&disk->open_mutex); - __blk_mark_disk_dead(disk); + start_drain = __blk_mark_disk_dead(disk); + queue_dying = blk_queue_dying(q); + if (start_drain) + blk_freeze_acquire_lock(q, true, queue_dying); xa_for_each_start(&disk->part_tbl, idx, part, 1) drop_partition(part); mutex_unlock(&disk->open_mutex); @@ -718,13 +722,13 @@ void del_gendisk(struct gendisk *disk) * If the disk does not own the queue, allow using passthrough requests * again. Else leave the queue frozen to fail all I/O. */ - if (!test_bit(GD_OWNS_QUEUE, &disk->state)) { - blk_queue_flag_clear(QUEUE_FLAG_INIT_DONE, q); + if (!test_bit(GD_OWNS_QUEUE, &disk->state)) __blk_mq_unfreeze_queue(q, true); - } else { - if (queue_is_mq(q)) - blk_mq_exit_queue(q); - } + else if (queue_is_mq(q)) + blk_mq_exit_queue(q); + + if (start_drain) + blk_unfreeze_release_lock(q, true, queue_dying); } EXPORT_SYMBOL(del_gendisk);
diff --git a/crypto/pcrypt.c b/crypto/pcrypt.c index d0d954fe9d54..7fc79e7dce44 100644 --- a/crypto/pcrypt.c +++ b/crypto/pcrypt.c @@ -117,8 +117,10 @@ static int pcrypt_aead_encrypt(struct aead_request *req) err = padata_do_parallel(ictx->psenc, padata, &ctx->cb_cpu); if (!err) return -EINPROGRESS; - if (err == -EBUSY) - return -EAGAIN; + if (err == -EBUSY) { + /* try non-parallel mode */ + return crypto_aead_encrypt(creq); + }
return err; } @@ -166,8 +168,10 @@ static int pcrypt_aead_decrypt(struct aead_request *req) err = padata_do_parallel(ictx->psdec, padata, &ctx->cb_cpu); if (!err) return -EINPROGRESS; - if (err == -EBUSY) - return -EAGAIN; + if (err == -EBUSY) { + /* try non-parallel mode */ + return crypto_aead_decrypt(creq); + }
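Rather than bubbling -EAGAIN up to callers that may not retry, pcrypt now treats -EBUSY from padata as "the parallel workers are saturated" and simply performs the request synchronously in the caller's context. The shape of that fallback, as a small standalone sketch with invented names:

/* "Offload if possible, fall back to doing it inline" sketch. */
#include <errno.h>
#include <stdio.h>

static int do_work_inline(int job)
{
	printf("job %d done inline\n", job);
	return 0;
}

/* Pretend the async engine is full for even job ids. */
static int queue_work_async(int job)
{
	if (job % 2 == 0)
		return -EBUSY;
	printf("job %d queued\n", job);
	return -EINPROGRESS;
}

static int submit(int job)
{
	int err = queue_work_async(job);

	if (err == -EINPROGRESS)
		return err;			/* completion reported later */
	if (err == -EBUSY)
		return do_work_inline(job);	/* try non-parallel mode */
	return err;
}

int main(void)
{
	submit(1);
	submit(2);
	return 0;
}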
return err; } diff --git a/drivers/accel/ivpu/ivpu_ipc.c b/drivers/accel/ivpu/ivpu_ipc.c index 78b32a823241..29b723039a34 100644 --- a/drivers/accel/ivpu/ivpu_ipc.c +++ b/drivers/accel/ivpu/ivpu_ipc.c @@ -291,15 +291,16 @@ int ivpu_ipc_receive(struct ivpu_device *vdev, struct ivpu_ipc_consumer *cons, return ret; }
-static int +int ivpu_ipc_send_receive_internal(struct ivpu_device *vdev, struct vpu_jsm_msg *req, enum vpu_ipc_msg_type expected_resp_type, - struct vpu_jsm_msg *resp, u32 channel, - unsigned long timeout_ms) + struct vpu_jsm_msg *resp, u32 channel, unsigned long timeout_ms) { struct ivpu_ipc_consumer cons; int ret;
+ drm_WARN_ON(&vdev->drm, pm_runtime_status_suspended(vdev->drm.dev)); + ivpu_ipc_consumer_add(vdev, &cons, channel, NULL);
ret = ivpu_ipc_send(vdev, &cons, req); @@ -325,19 +326,21 @@ ivpu_ipc_send_receive_internal(struct ivpu_device *vdev, struct vpu_jsm_msg *req return ret; }
-int ivpu_ipc_send_receive_active(struct ivpu_device *vdev, struct vpu_jsm_msg *req, - enum vpu_ipc_msg_type expected_resp, struct vpu_jsm_msg *resp, - u32 channel, unsigned long timeout_ms) +int ivpu_ipc_send_receive(struct ivpu_device *vdev, struct vpu_jsm_msg *req, + enum vpu_ipc_msg_type expected_resp, struct vpu_jsm_msg *resp, + u32 channel, unsigned long timeout_ms) { struct vpu_jsm_msg hb_req = { .type = VPU_JSM_MSG_QUERY_ENGINE_HB }; struct vpu_jsm_msg hb_resp; int ret, hb_ret;
- drm_WARN_ON(&vdev->drm, pm_runtime_status_suspended(vdev->drm.dev)); + ret = ivpu_rpm_get(vdev); + if (ret < 0) + return ret;
ret = ivpu_ipc_send_receive_internal(vdev, req, expected_resp, resp, channel, timeout_ms); if (ret != -ETIMEDOUT) - return ret; + goto rpm_put;
hb_ret = ivpu_ipc_send_receive_internal(vdev, &hb_req, VPU_JSM_MSG_QUERY_ENGINE_HB_DONE, &hb_resp, VPU_IPC_CHAN_ASYNC_CMD, @@ -345,21 +348,7 @@ int ivpu_ipc_send_receive_active(struct ivpu_device *vdev, struct vpu_jsm_msg *r if (hb_ret == -ETIMEDOUT) ivpu_pm_trigger_recovery(vdev, "IPC timeout");
- return ret; -} - -int ivpu_ipc_send_receive(struct ivpu_device *vdev, struct vpu_jsm_msg *req, - enum vpu_ipc_msg_type expected_resp, struct vpu_jsm_msg *resp, - u32 channel, unsigned long timeout_ms) -{ - int ret; - - ret = ivpu_rpm_get(vdev); - if (ret < 0) - return ret; - - ret = ivpu_ipc_send_receive_active(vdev, req, expected_resp, resp, channel, timeout_ms); - +rpm_put: ivpu_rpm_put(vdev); return ret; } diff --git a/drivers/accel/ivpu/ivpu_ipc.h b/drivers/accel/ivpu/ivpu_ipc.h index 4fe38141045e..fb4de7fb8210 100644 --- a/drivers/accel/ivpu/ivpu_ipc.h +++ b/drivers/accel/ivpu/ivpu_ipc.h @@ -101,10 +101,9 @@ int ivpu_ipc_send(struct ivpu_device *vdev, struct ivpu_ipc_consumer *cons, int ivpu_ipc_receive(struct ivpu_device *vdev, struct ivpu_ipc_consumer *cons, struct ivpu_ipc_hdr *ipc_buf, struct vpu_jsm_msg *jsm_msg, unsigned long timeout_ms); - -int ivpu_ipc_send_receive_active(struct ivpu_device *vdev, struct vpu_jsm_msg *req, - enum vpu_ipc_msg_type expected_resp, struct vpu_jsm_msg *resp, - u32 channel, unsigned long timeout_ms); +int ivpu_ipc_send_receive_internal(struct ivpu_device *vdev, struct vpu_jsm_msg *req, + enum vpu_ipc_msg_type expected_resp_type, + struct vpu_jsm_msg *resp, u32 channel, unsigned long timeout_ms); int ivpu_ipc_send_receive(struct ivpu_device *vdev, struct vpu_jsm_msg *req, enum vpu_ipc_msg_type expected_resp, struct vpu_jsm_msg *resp, u32 channel, unsigned long timeout_ms); diff --git a/drivers/accel/ivpu/ivpu_jsm_msg.c b/drivers/accel/ivpu/ivpu_jsm_msg.c index 46ef16c3c069..88105963c1b2 100644 --- a/drivers/accel/ivpu/ivpu_jsm_msg.c +++ b/drivers/accel/ivpu/ivpu_jsm_msg.c @@ -270,9 +270,8 @@ int ivpu_jsm_pwr_d0i3_enter(struct ivpu_device *vdev)
req.payload.pwr_d0i3_enter.send_response = 1;
- ret = ivpu_ipc_send_receive_active(vdev, &req, VPU_JSM_MSG_PWR_D0I3_ENTER_DONE, - &resp, VPU_IPC_CHAN_GEN_CMD, - vdev->timeout.d0i3_entry_msg); + ret = ivpu_ipc_send_receive_internal(vdev, &req, VPU_JSM_MSG_PWR_D0I3_ENTER_DONE, &resp, + VPU_IPC_CHAN_GEN_CMD, vdev->timeout.d0i3_entry_msg); if (ret) return ret;
@@ -430,8 +429,8 @@ int ivpu_jsm_hws_setup_priority_bands(struct ivpu_device *vdev)
req.payload.hws_priority_band_setup.normal_band_percentage = 10;
- ret = ivpu_ipc_send_receive_active(vdev, &req, VPU_JSM_MSG_SET_PRIORITY_BAND_SETUP_RSP, - &resp, VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm); + ret = ivpu_ipc_send_receive_internal(vdev, &req, VPU_JSM_MSG_SET_PRIORITY_BAND_SETUP_RSP, + &resp, VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm); if (ret) ivpu_warn_ratelimited(vdev, "Failed to set priority bands: %d\n", ret);
@@ -544,9 +543,8 @@ int ivpu_jsm_dct_enable(struct ivpu_device *vdev, u32 active_us, u32 inactive_us req.payload.pwr_dct_control.dct_active_us = active_us; req.payload.pwr_dct_control.dct_inactive_us = inactive_us;
- return ivpu_ipc_send_receive_active(vdev, &req, VPU_JSM_MSG_DCT_ENABLE_DONE, - &resp, VPU_IPC_CHAN_ASYNC_CMD, - vdev->timeout.jsm); + return ivpu_ipc_send_receive_internal(vdev, &req, VPU_JSM_MSG_DCT_ENABLE_DONE, &resp, + VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm); }
int ivpu_jsm_dct_disable(struct ivpu_device *vdev) @@ -554,7 +552,6 @@ int ivpu_jsm_dct_disable(struct ivpu_device *vdev) struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_DCT_DISABLE }; struct vpu_jsm_msg resp;
- return ivpu_ipc_send_receive_active(vdev, &req, VPU_JSM_MSG_DCT_DISABLE_DONE, - &resp, VPU_IPC_CHAN_ASYNC_CMD, - vdev->timeout.jsm); + return ivpu_ipc_send_receive_internal(vdev, &req, VPU_JSM_MSG_DCT_DISABLE_DONE, &resp, + VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm); } diff --git a/drivers/acpi/arm64/gtdt.c b/drivers/acpi/arm64/gtdt.c index c0e77c1c8e09..eb6c2d360387 100644 --- a/drivers/acpi/arm64/gtdt.c +++ b/drivers/acpi/arm64/gtdt.c @@ -283,7 +283,7 @@ static int __init gtdt_parse_timer_block(struct acpi_gtdt_timer_block *block, if (frame->virt_irq > 0) acpi_unregister_gsi(gtdt_frame->virtual_timer_interrupt); frame->virt_irq = 0; - } while (i-- >= 0 && gtdt_frame--); + } while (i-- > 0 && gtdt_frame--);
return -EINVAL; } diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c index 5c0cc7aae872..e78e3754d99e 100644 --- a/drivers/acpi/cppc_acpi.c +++ b/drivers/acpi/cppc_acpi.c @@ -1140,7 +1140,6 @@ static int cpc_write(int cpu, struct cpc_register_resource *reg_res, u64 val) return -EFAULT; } val = MASK_VAL_WRITE(reg, prev_val, val); - val |= prev_val; }
switch (size) { diff --git a/drivers/base/firmware_loader/main.c b/drivers/base/firmware_loader/main.c index 324a9a3c087a..c6664a787969 100644 --- a/drivers/base/firmware_loader/main.c +++ b/drivers/base/firmware_loader/main.c @@ -829,19 +829,18 @@ static void fw_log_firmware_info(const struct firmware *fw, const char *name, st shash->tfm = alg;
if (crypto_shash_digest(shash, fw->data, fw->size, sha256buf) < 0) - goto out_shash; + goto out_free;
for (int i = 0; i < SHA256_DIGEST_SIZE; i++) sprintf(&outbuf[i * 2], "%02x", sha256buf[i]); outbuf[SHA256_BLOCK_SIZE] = 0; dev_dbg(device, "Loaded FW: %s, sha256: %s\n", name, outbuf);
-out_shash: - crypto_free_shash(alg); out_free: kfree(shash); kfree(outbuf); kfree(sha256buf); + crypto_free_shash(alg); } #else static void fw_log_firmware_info(const struct firmware *fw, const char *name, diff --git a/drivers/base/regmap/regmap-irq.c b/drivers/base/regmap/regmap-irq.c index a750e48a26b8..6981e5f974e9 100644 --- a/drivers/base/regmap/regmap-irq.c +++ b/drivers/base/regmap/regmap-irq.c @@ -514,12 +514,16 @@ static irqreturn_t regmap_irq_thread(int irq, void *d) return IRQ_NONE; }
+static struct lock_class_key regmap_irq_lock_class; +static struct lock_class_key regmap_irq_request_class; + static int regmap_irq_map(struct irq_domain *h, unsigned int virq, irq_hw_number_t hw) { struct regmap_irq_chip_data *data = h->host_data;
irq_set_chip_data(virq, data); + irq_set_lockdep_class(virq, ®map_irq_lock_class, ®map_irq_request_class); irq_set_chip(virq, &data->irq_chip); irq_set_nested_thread(virq, 1); irq_set_parent(virq, data->irq); diff --git a/drivers/base/trace.h b/drivers/base/trace.h index e52b6eae060d..3b83b13a57ff 100644 --- a/drivers/base/trace.h +++ b/drivers/base/trace.h @@ -24,18 +24,18 @@ DECLARE_EVENT_CLASS(devres, __field(struct device *, dev) __field(const char *, op) __field(void *, node) - __field(const char *, name) + __string(name, name) __field(size_t, size) ), TP_fast_assign( __assign_str(devname); __entry->op = op; __entry->node = node; - __entry->name = name; + __assign_str(name); __entry->size = size; ), TP_printk("%s %3s %p %s (%zu bytes)", __get_str(devname), - __entry->op, __entry->node, __entry->name, __entry->size) + __entry->op, __entry->node, __get_str(name), __entry->size) );
DEFINE_EVENT(devres, devres_log, diff --git a/drivers/block/brd.c b/drivers/block/brd.c index 2fd1ed101748..292f127cae0a 100644 --- a/drivers/block/brd.c +++ b/drivers/block/brd.c @@ -231,8 +231,10 @@ static void brd_do_discard(struct brd_device *brd, sector_t sector, u32 size) xa_lock(&brd->brd_pages); while (size >= PAGE_SIZE && aligned_sector < rd_size * 2) { page = __xa_erase(&brd->brd_pages, aligned_sector >> PAGE_SECTORS_SHIFT); - if (page) + if (page) { __free_page(page); + brd->brd_nr_pages--; + } aligned_sector += PAGE_SECTORS; size -= PAGE_SIZE; } @@ -316,8 +318,40 @@ __setup("ramdisk_size=", ramdisk_size); * (should share code eventually). */ static LIST_HEAD(brd_devices); +static DEFINE_MUTEX(brd_devices_mutex); static struct dentry *brd_debugfs_dir;
+static struct brd_device *brd_find_or_alloc_device(int i) +{ + struct brd_device *brd; + + mutex_lock(&brd_devices_mutex); + list_for_each_entry(brd, &brd_devices, brd_list) { + if (brd->brd_number == i) { + mutex_unlock(&brd_devices_mutex); + return ERR_PTR(-EEXIST); + } + } + + brd = kzalloc(sizeof(*brd), GFP_KERNEL); + if (!brd) { + mutex_unlock(&brd_devices_mutex); + return ERR_PTR(-ENOMEM); + } + brd->brd_number = i; + list_add_tail(&brd->brd_list, &brd_devices); + mutex_unlock(&brd_devices_mutex); + return brd; +} + +static void brd_free_device(struct brd_device *brd) +{ + mutex_lock(&brd_devices_mutex); + list_del(&brd->brd_list); + mutex_unlock(&brd_devices_mutex); + kfree(brd); +} + static int brd_alloc(int i) { struct brd_device *brd; @@ -340,14 +374,9 @@ static int brd_alloc(int i) BLK_FEAT_NOWAIT, };
- list_for_each_entry(brd, &brd_devices, brd_list) - if (brd->brd_number == i) - return -EEXIST; - brd = kzalloc(sizeof(*brd), GFP_KERNEL); - if (!brd) - return -ENOMEM; - brd->brd_number = i; - list_add_tail(&brd->brd_list, &brd_devices); + brd = brd_find_or_alloc_device(i); + if (IS_ERR(brd)) + return PTR_ERR(brd);
xa_init(&brd->brd_pages);
@@ -378,8 +407,7 @@ static int brd_alloc(int i) out_cleanup_disk: put_disk(disk); out_free_dev: - list_del(&brd->brd_list); - kfree(brd); + brd_free_device(brd); return err; }
@@ -398,8 +426,7 @@ static void brd_cleanup(void) del_gendisk(brd->brd_disk); put_disk(brd->brd_disk); brd_free_pages(brd); - list_del(&brd->brd_list); - kfree(brd); + brd_free_device(brd); } }
@@ -426,16 +453,6 @@ static int __init brd_init(void) { int err, i;
- brd_check_and_reset_par(); - - brd_debugfs_dir = debugfs_create_dir("ramdisk_pages", NULL); - - for (i = 0; i < rd_nr; i++) { - err = brd_alloc(i); - if (err) - goto out_free; - } - /* * brd module now has a feature to instantiate underlying device * structure on-demand, provided that there is an access dev node. @@ -451,11 +468,18 @@ static int __init brd_init(void) * dynamically. */
+ brd_check_and_reset_par(); + + brd_debugfs_dir = debugfs_create_dir("ramdisk_pages", NULL); + if (__register_blkdev(RAMDISK_MAJOR, "ramdisk", brd_probe)) { err = -EIO; goto out_free; }
+ for (i = 0; i < rd_nr; i++) + brd_alloc(i); + pr_info("brd: module loaded\n"); return 0;
diff --git a/drivers/block/loop.c b/drivers/block/loop.c index 78a7bb28defe..86cc3b19faae 100644 --- a/drivers/block/loop.c +++ b/drivers/block/loop.c @@ -173,7 +173,7 @@ static loff_t get_loop_size(struct loop_device *lo, struct file *file) static bool lo_bdev_can_use_dio(struct loop_device *lo, struct block_device *backing_bdev) { - unsigned short sb_bsize = bdev_logical_block_size(backing_bdev); + unsigned int sb_bsize = bdev_logical_block_size(backing_bdev);
if (queue_logical_block_size(lo->lo_queue) < sb_bsize) return false; @@ -977,7 +977,7 @@ loop_set_status_from_info(struct loop_device *lo, return 0; }
-static unsigned short loop_default_blocksize(struct loop_device *lo, +static unsigned int loop_default_blocksize(struct loop_device *lo, struct block_device *backing_bdev) { /* In case of direct I/O, match underlying block size */ @@ -986,7 +986,7 @@ static unsigned short loop_default_blocksize(struct loop_device *lo, return SECTOR_SIZE; }
-static int loop_reconfigure_limits(struct loop_device *lo, unsigned short bsize) +static int loop_reconfigure_limits(struct loop_device *lo, unsigned int bsize) { struct file *file = lo->lo_backing_file; struct inode *inode = file->f_mapping->host; diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c index 6ba2c1dd1d87..90bc605ff6c2 100644 --- a/drivers/block/ublk_drv.c +++ b/drivers/block/ublk_drv.c @@ -664,12 +664,21 @@ static inline char *ublk_queue_cmd_buf(struct ublk_device *ub, int q_id) return ublk_get_queue(ub, q_id)->io_cmd_buf; }
+static inline int __ublk_queue_cmd_buf_size(int depth) +{ + return round_up(depth * sizeof(struct ublksrv_io_desc), PAGE_SIZE); +} + static inline int ublk_queue_cmd_buf_size(struct ublk_device *ub, int q_id) { struct ublk_queue *ubq = ublk_get_queue(ub, q_id);
- return round_up(ubq->q_depth * sizeof(struct ublksrv_io_desc), - PAGE_SIZE); + return __ublk_queue_cmd_buf_size(ubq->q_depth); +} + +static int ublk_max_cmd_buf_size(void) +{ + return __ublk_queue_cmd_buf_size(UBLK_MAX_QUEUE_DEPTH); }
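With the helper above, the mmap window and the per-queue allocation are guaranteed to use the same size: the descriptor array rounded up to a whole page. Simple arithmetic, assuming a 24-byte descriptor for illustration (the real sizeof(struct ublksrv_io_desc) may differ):

/* Command-buffer size sketch; the descriptor size is an assumption. */
#include <stdio.h>

#define PAGE_SIZE 4096u

static unsigned int round_up_to(unsigned int x, unsigned int a)
{
	return (x + a - 1) / a * a;
}

static unsigned int cmd_buf_size(unsigned int depth, unsigned int desc_size)
{
	return round_up_to(depth * desc_size, PAGE_SIZE);
}

int main(void)
{
	printf("%u\n", cmd_buf_size(128, 24));	/* 3072 rounds up to 4096 */
	printf("%u\n", cmd_buf_size(4096, 24));	/* 98304, already page aligned */
	return 0;
}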
static inline bool ublk_queue_can_use_recovery_reissue( @@ -1322,7 +1331,7 @@ static int ublk_ch_mmap(struct file *filp, struct vm_area_struct *vma) { struct ublk_device *ub = filp->private_data; size_t sz = vma->vm_end - vma->vm_start; - unsigned max_sz = UBLK_MAX_QUEUE_DEPTH * sizeof(struct ublksrv_io_desc); + unsigned max_sz = ublk_max_cmd_buf_size(); unsigned long pfn, end, phys_off = vma->vm_pgoff << PAGE_SHIFT; int q_id, ret = 0;
@@ -2965,7 +2974,7 @@ static int ublk_ctrl_uring_cmd(struct io_uring_cmd *cmd, ret = ublk_ctrl_end_recovery(ub, cmd); break; default: - ret = -ENOTSUPP; + ret = -EOPNOTSUPP; break; }
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c index 194417abc105..43c96b73a711 100644 --- a/drivers/block/virtio_blk.c +++ b/drivers/block/virtio_blk.c @@ -471,18 +471,18 @@ static bool virtblk_prep_rq_batch(struct request *req) return virtblk_prep_rq(req->mq_hctx, vblk, req, vbr) == BLK_STS_OK; }
-static bool virtblk_add_req_batch(struct virtio_blk_vq *vq, +static void virtblk_add_req_batch(struct virtio_blk_vq *vq, struct request **rqlist) { + struct request *req; unsigned long flags; - int err; bool kick;
spin_lock_irqsave(&vq->lock, flags);
- while (!rq_list_empty(*rqlist)) { - struct request *req = rq_list_pop(rqlist); + while ((req = rq_list_pop(rqlist))) { struct virtblk_req *vbr = blk_mq_rq_to_pdu(req); + int err;
err = virtblk_add_req(vq->vq, vbr); if (err) { @@ -495,37 +495,33 @@ static bool virtblk_add_req_batch(struct virtio_blk_vq *vq, kick = virtqueue_kick_prepare(vq->vq); spin_unlock_irqrestore(&vq->lock, flags);
- return kick; + if (kick) + virtqueue_notify(vq->vq); }
static void virtio_queue_rqs(struct request **rqlist) { - struct request *req, *next, *prev = NULL; + struct request *submit_list = NULL; struct request *requeue_list = NULL; + struct request **requeue_lastp = &requeue_list; + struct virtio_blk_vq *vq = NULL; + struct request *req;
- rq_list_for_each_safe(rqlist, req, next) { - struct virtio_blk_vq *vq = get_virtio_blk_vq(req->mq_hctx); - bool kick; - - if (!virtblk_prep_rq_batch(req)) { - rq_list_move(rqlist, &requeue_list, req, prev); - req = prev; - if (!req) - continue; - } + while ((req = rq_list_pop(rqlist))) { + struct virtio_blk_vq *this_vq = get_virtio_blk_vq(req->mq_hctx);
- if (!next || req->mq_hctx != next->mq_hctx) { - req->rq_next = NULL; - kick = virtblk_add_req_batch(vq, rqlist); - if (kick) - virtqueue_notify(vq->vq); + if (vq && vq != this_vq) + virtblk_add_req_batch(vq, &submit_list); + vq = this_vq;
- *rqlist = next; - prev = NULL; - } else - prev = req; + if (virtblk_prep_rq_batch(req)) + rq_list_add(&submit_list, req); /* reverse order */ + else + rq_list_add_tail(&requeue_lastp, req); }
+ if (vq) + virtblk_add_req_batch(vq, &submit_list); *rqlist = requeue_list; }
diff --git a/drivers/block/zram/Kconfig b/drivers/block/zram/Kconfig index 6aea609b795c..402b7b175863 100644 --- a/drivers/block/zram/Kconfig +++ b/drivers/block/zram/Kconfig @@ -94,6 +94,7 @@ endchoice
config ZRAM_DEF_COMP string + depends on ZRAM default "lzo-rle" if ZRAM_DEF_COMP_LZORLE default "lzo" if ZRAM_DEF_COMP_LZO default "lz4" if ZRAM_DEF_COMP_LZ4 diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c index ad9c9bc3ccfc..e682797cdee7 100644 --- a/drivers/block/zram/zram_drv.c +++ b/drivers/block/zram/zram_drv.c @@ -626,6 +626,12 @@ static ssize_t writeback_store(struct device *dev, goto release_init_lock; }
+ /* Do not permit concurrent post-processing actions. */ + if (atomic_xchg(&zram->pp_in_progress, 1)) { + up_read(&zram->init_lock); + return -EAGAIN; + } + if (!zram->backing_dev) { ret = -ENODEV; goto release_init_lock; @@ -752,6 +758,7 @@ static ssize_t writeback_store(struct device *dev, free_block_bdev(zram, blk_idx); __free_page(page); release_init_lock: + atomic_set(&zram->pp_in_progress, 0); up_read(&zram->init_lock);
return ret; @@ -1881,6 +1888,12 @@ static ssize_t recompress_store(struct device *dev, goto release_init_lock; }
+ /* Do not permit concurrent post-processing actions. */ + if (atomic_xchg(&zram->pp_in_progress, 1)) { + up_read(&zram->init_lock); + return -EAGAIN; + } + if (algo) { bool found = false;
@@ -1948,6 +1961,7 @@ static ssize_t recompress_store(struct device *dev, __free_page(page);
release_init_lock: + atomic_set(&zram->pp_in_progress, 0); up_read(&zram->init_lock); return ret; } @@ -2144,6 +2158,7 @@ static void zram_reset_device(struct zram *zram) zram->disksize = 0; zram_destroy_comps(zram); memset(&zram->stats, 0, sizeof(zram->stats)); + atomic_set(&zram->pp_in_progress, 0); reset_bdev(zram);
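Both writeback_store() and recompress_store() now claim a single pp_in_progress slot with atomic_xchg() and bail out with -EAGAIN when another post-processing action is already running, releasing the slot on the common exit path. A minimal sketch of that claim/release pattern in C11 atomics (names invented):

/* atomic_xchg()-style "only one post-processing action at a time" guard. */
#include <errno.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int pp_in_progress = 0;

static int writeback_store(void)
{
	/* Claim the slot; fail with -EAGAIN if someone else already holds it. */
	if (atomic_exchange(&pp_in_progress, 1))
		return -EAGAIN;

	printf("doing writeback\n");

	atomic_store(&pp_in_progress, 0);
	return 0;
}

int main(void)
{
	printf("%d\n", writeback_store());	/* 0 */
	return 0;
}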
comp_algorithm_set(zram, ZRAM_PRIMARY_COMP, default_compressor); @@ -2381,6 +2396,9 @@ static int zram_add(void) zram->disk->fops = &zram_devops; zram->disk->private_data = zram; snprintf(zram->disk->disk_name, 16, "zram%d", device_id); + atomic_set(&zram->pp_in_progress, 0); + zram_comp_params_reset(zram); + comp_algorithm_set(zram, ZRAM_PRIMARY_COMP, default_compressor);
/* Actual capacity set using sysfs (/sys/block/zram<id>/disksize */ set_capacity(zram->disk, 0); @@ -2388,9 +2406,6 @@ static int zram_add(void) if (ret) goto out_cleanup_disk;
- zram_comp_params_reset(zram); - comp_algorithm_set(zram, ZRAM_PRIMARY_COMP, default_compressor); - zram_debugfs_register(zram); pr_info("Added device: %s\n", zram->disk->disk_name); return device_id; diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h index cfc8c059db63..8acf9d2ee42b 100644 --- a/drivers/block/zram/zram_drv.h +++ b/drivers/block/zram/zram_drv.h @@ -139,5 +139,6 @@ struct zram { #ifdef CONFIG_ZRAM_MEMORY_TRACKING struct dentry *debugfs_dir; #endif + atomic_t pp_in_progress; }; #endif diff --git a/drivers/bluetooth/btbcm.c b/drivers/bluetooth/btbcm.c index eef00467905e..a1153ada74d2 100644 --- a/drivers/bluetooth/btbcm.c +++ b/drivers/bluetooth/btbcm.c @@ -541,11 +541,10 @@ static const struct bcm_subver_table bcm_usb_subver_table[] = { static const char *btbcm_get_board_name(struct device *dev) { #ifdef CONFIG_OF - struct device_node *root; + struct device_node *root __free(device_node) = of_find_node_by_path("/"); char *board_type; const char *tmp;
- root = of_find_node_by_path("/"); if (!root) return NULL;
@@ -555,7 +554,6 @@ static const char *btbcm_get_board_name(struct device *dev) /* get rid of any '/' in the compatible string */ board_type = devm_kstrdup(dev, tmp, GFP_KERNEL); strreplace(board_type, '/', '-'); - of_node_put(root);
return board_type; #else diff --git a/drivers/bluetooth/btintel.c b/drivers/bluetooth/btintel.c index 30a32ebbcc68..645047fb92fd 100644 --- a/drivers/bluetooth/btintel.c +++ b/drivers/bluetooth/btintel.c @@ -1841,6 +1841,37 @@ static int btintel_boot_wait(struct hci_dev *hdev, ktime_t calltime, int msec) return 0; }
+static int btintel_boot_wait_d0(struct hci_dev *hdev, ktime_t calltime, + int msec) +{ + ktime_t delta, rettime; + unsigned long long duration; + int err; + + bt_dev_info(hdev, "Waiting for device transition to d0"); + + err = btintel_wait_on_flag_timeout(hdev, INTEL_WAIT_FOR_D0, + TASK_INTERRUPTIBLE, + msecs_to_jiffies(msec)); + if (err == -EINTR) { + bt_dev_err(hdev, "Device d0 move interrupted"); + return -EINTR; + } + + if (err) { + bt_dev_err(hdev, "Device d0 move timeout"); + return -ETIMEDOUT; + } + + rettime = ktime_get(); + delta = ktime_sub(rettime, calltime); + duration = (unsigned long long)ktime_to_ns(delta) >> 10; + + bt_dev_info(hdev, "Device moved to D0 in %llu usecs", duration); + + return 0; +} + static int btintel_boot(struct hci_dev *hdev, u32 boot_addr) { ktime_t calltime; @@ -1849,6 +1880,7 @@ static int btintel_boot(struct hci_dev *hdev, u32 boot_addr) calltime = ktime_get();
btintel_set_flag(hdev, INTEL_BOOTING); + btintel_set_flag(hdev, INTEL_WAIT_FOR_D0);
err = btintel_send_intel_reset(hdev, boot_addr); if (err) { @@ -1861,13 +1893,28 @@ static int btintel_boot(struct hci_dev *hdev, u32 boot_addr) * is done by the operational firmware sending bootup notification. * * Booting into operational firmware should not take longer than - * 1 second. However if that happens, then just fail the setup + * 5 second. However if that happens, then just fail the setup * since something went wrong. */ - err = btintel_boot_wait(hdev, calltime, 1000); - if (err == -ETIMEDOUT) + err = btintel_boot_wait(hdev, calltime, 5000); + if (err == -ETIMEDOUT) { btintel_reset_to_bootloader(hdev); + goto exit_error; + }
+ if (hdev->bus == HCI_PCI) { + /* In case of PCIe, after receiving bootup event, driver performs + * D0 entry by writing 0 to sleep control register (check + * btintel_pcie_recv_event()) + * Firmware acks with alive interrupt indicating host is full ready to + * perform BT operation. Lets wait here till INTEL_WAIT_FOR_D0 + * bit is cleared. + */ + calltime = ktime_get(); + err = btintel_boot_wait_d0(hdev, calltime, 2000); + } + +exit_error: return err; }
@@ -3273,7 +3320,7 @@ int btintel_configure_setup(struct hci_dev *hdev, const char *driver_name) } EXPORT_SYMBOL_GPL(btintel_configure_setup);
-static int btintel_diagnostics(struct hci_dev *hdev, struct sk_buff *skb) +int btintel_diagnostics(struct hci_dev *hdev, struct sk_buff *skb) { struct intel_tlv *tlv = (void *)&skb->data[5];
@@ -3301,6 +3348,7 @@ static int btintel_diagnostics(struct hci_dev *hdev, struct sk_buff *skb) recv_frame: return hci_recv_frame(hdev, skb); } +EXPORT_SYMBOL_GPL(btintel_diagnostics);
int btintel_recv_event(struct hci_dev *hdev, struct sk_buff *skb) { @@ -3320,7 +3368,8 @@ int btintel_recv_event(struct hci_dev *hdev, struct sk_buff *skb) * indicating that the bootup completed. */ btintel_bootup(hdev, ptr, len); - break; + kfree_skb(skb); + return 0; case 0x06: /* When the firmware loading completes the * device sends out a vendor specific event @@ -3328,7 +3377,8 @@ int btintel_recv_event(struct hci_dev *hdev, struct sk_buff *skb) * loading. */ btintel_secure_send_result(hdev, ptr, len); - break; + kfree_skb(skb); + return 0; } }
diff --git a/drivers/bluetooth/btintel.h b/drivers/bluetooth/btintel.h index aa70e4c27416..b448c67e8ed9 100644 --- a/drivers/bluetooth/btintel.h +++ b/drivers/bluetooth/btintel.h @@ -178,6 +178,7 @@ enum { INTEL_ROM_LEGACY, INTEL_ROM_LEGACY_NO_WBS_SUPPORT, INTEL_ACPI_RESET_ACTIVE, + INTEL_WAIT_FOR_D0,
__INTEL_NUM_FLAGS, }; @@ -249,6 +250,7 @@ int btintel_bootloader_setup_tlv(struct hci_dev *hdev, int btintel_shutdown_combined(struct hci_dev *hdev); void btintel_hw_error(struct hci_dev *hdev, u8 code); void btintel_print_fseq_info(struct hci_dev *hdev); +int btintel_diagnostics(struct hci_dev *hdev, struct sk_buff *skb); #else
static inline int btintel_check_bdaddr(struct hci_dev *hdev) @@ -382,4 +384,9 @@ static inline void btintel_hw_error(struct hci_dev *hdev, u8 code) static inline void btintel_print_fseq_info(struct hci_dev *hdev) { } + +static inline int btintel_diagnostics(struct hci_dev *hdev, struct sk_buff *skb) +{ + return -EOPNOTSUPP; +} #endif diff --git a/drivers/bluetooth/btintel_pcie.c b/drivers/bluetooth/btintel_pcie.c index 5252125b003f..8bd663f4bac1 100644 --- a/drivers/bluetooth/btintel_pcie.c +++ b/drivers/bluetooth/btintel_pcie.c @@ -48,6 +48,17 @@ MODULE_DEVICE_TABLE(pci, btintel_pcie_table); #define BTINTEL_PCIE_HCI_EVT_PKT 0x00000004 #define BTINTEL_PCIE_HCI_ISO_PKT 0x00000005
+/* Alive interrupt context */ +enum { + BTINTEL_PCIE_ROM, + BTINTEL_PCIE_FW_DL, + BTINTEL_PCIE_HCI_RESET, + BTINTEL_PCIE_INTEL_HCI_RESET1, + BTINTEL_PCIE_INTEL_HCI_RESET2, + BTINTEL_PCIE_D0, + BTINTEL_PCIE_D3 +}; + static inline void ipc_print_ia_ring(struct hci_dev *hdev, struct ia *ia, u16 queue_num) { @@ -290,8 +301,9 @@ static int btintel_pcie_enable_bt(struct btintel_pcie_data *data) /* wait for interrupt from the device after booting up to primary * bootloader. */ + data->alive_intr_ctxt = BTINTEL_PCIE_ROM; err = wait_event_timeout(data->gp0_wait_q, data->gp0_received, - msecs_to_jiffies(BTINTEL_DEFAULT_INTR_TIMEOUT)); + msecs_to_jiffies(BTINTEL_DEFAULT_INTR_TIMEOUT_MS)); if (!err) return -ETIME;
@@ -302,12 +314,78 @@ static int btintel_pcie_enable_bt(struct btintel_pcie_data *data) return 0; }
+/* BIT(0) - ROM, BIT(1) - IML and BIT(3) - OP + * Sometimes during firmware image switching from ROM to IML or IML to OP image, + * the previous image bit is not cleared by firmware when alive interrupt is + * received. Driver needs to take care of these sticky bits when deciding the + * current image running on controller. + * Ex: 0x10 and 0x11 - both represents that controller is running IML + */ +static inline bool btintel_pcie_in_rom(struct btintel_pcie_data *data) +{ + return data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_ROM && + !(data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_IML) && + !(data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_OPFW); +} + +static inline bool btintel_pcie_in_op(struct btintel_pcie_data *data) +{ + return data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_OPFW; +} + +static inline bool btintel_pcie_in_iml(struct btintel_pcie_data *data) +{ + return data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_IML && + !(data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_OPFW); +} + +static inline bool btintel_pcie_in_d3(struct btintel_pcie_data *data) +{ + return data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_D3_STATE_READY; +} + +static inline bool btintel_pcie_in_d0(struct btintel_pcie_data *data) +{ + return !(data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_D3_STATE_READY); +} + +static void btintel_pcie_wr_sleep_cntrl(struct btintel_pcie_data *data, + u32 dxstate) +{ + bt_dev_dbg(data->hdev, "writing sleep_ctl_reg: 0x%8.8x", dxstate); + btintel_pcie_wr_reg32(data, BTINTEL_PCIE_CSR_IPC_SLEEP_CTL_REG, dxstate); +} + +static inline char *btintel_pcie_alivectxt_state2str(u32 alive_intr_ctxt) +{ + switch (alive_intr_ctxt) { + case BTINTEL_PCIE_ROM: + return "rom"; + case BTINTEL_PCIE_FW_DL: + return "fw_dl"; + case BTINTEL_PCIE_D0: + return "d0"; + case BTINTEL_PCIE_D3: + return "d3"; + case BTINTEL_PCIE_HCI_RESET: + return "hci_reset"; + case BTINTEL_PCIE_INTEL_HCI_RESET1: + return "intel_reset1"; + case BTINTEL_PCIE_INTEL_HCI_RESET2: + return "intel_reset2"; + default: + return "unknown"; + } + return "null"; +} + /* This function handles the MSI-X interrupt for gp0 cause (bit 0 in * BTINTEL_PCIE_CSR_MSIX_HW_INT_CAUSES) which is sent for boot stage and image response. */ static void btintel_pcie_msix_gp0_handler(struct btintel_pcie_data *data) { - u32 reg; + bool submit_rx, signal_waitq; + u32 reg, old_ctxt;
/* This interrupt is for three different causes and it is not easy to * know what causes the interrupt. So, it compares each register value @@ -317,20 +395,87 @@ static void btintel_pcie_msix_gp0_handler(struct btintel_pcie_data *data) if (reg != data->boot_stage_cache) data->boot_stage_cache = reg;
+ bt_dev_dbg(data->hdev, "Alive context: %s old_boot_stage: 0x%8.8x new_boot_stage: 0x%8.8x", + btintel_pcie_alivectxt_state2str(data->alive_intr_ctxt), + data->boot_stage_cache, reg); reg = btintel_pcie_rd_reg32(data, BTINTEL_PCIE_CSR_IMG_RESPONSE_REG); if (reg != data->img_resp_cache) data->img_resp_cache = reg;
data->gp0_received = true;
- /* If the boot stage is OP or IML, reset IA and start RX again */ - if (data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_OPFW || - data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_IML) { + old_ctxt = data->alive_intr_ctxt; + submit_rx = false; + signal_waitq = false; + + switch (data->alive_intr_ctxt) { + case BTINTEL_PCIE_ROM: + data->alive_intr_ctxt = BTINTEL_PCIE_FW_DL; + signal_waitq = true; + break; + case BTINTEL_PCIE_FW_DL: + /* Error case is already handled. Ideally control shall not + * reach here + */ + break; + case BTINTEL_PCIE_INTEL_HCI_RESET1: + if (btintel_pcie_in_op(data)) { + submit_rx = true; + break; + } + + if (btintel_pcie_in_iml(data)) { + submit_rx = true; + data->alive_intr_ctxt = BTINTEL_PCIE_FW_DL; + break; + } + break; + case BTINTEL_PCIE_INTEL_HCI_RESET2: + if (btintel_test_and_clear_flag(data->hdev, INTEL_WAIT_FOR_D0)) { + btintel_wake_up_flag(data->hdev, INTEL_WAIT_FOR_D0); + data->alive_intr_ctxt = BTINTEL_PCIE_D0; + } + break; + case BTINTEL_PCIE_D0: + if (btintel_pcie_in_d3(data)) { + data->alive_intr_ctxt = BTINTEL_PCIE_D3; + signal_waitq = true; + break; + } + break; + case BTINTEL_PCIE_D3: + if (btintel_pcie_in_d0(data)) { + data->alive_intr_ctxt = BTINTEL_PCIE_D0; + submit_rx = true; + signal_waitq = true; + break; + } + break; + case BTINTEL_PCIE_HCI_RESET: + data->alive_intr_ctxt = BTINTEL_PCIE_D0; + submit_rx = true; + signal_waitq = true; + break; + default: + bt_dev_err(data->hdev, "Unknown state: 0x%2.2x", + data->alive_intr_ctxt); + break; + } + + if (submit_rx) { btintel_pcie_reset_ia(data); btintel_pcie_start_rx(data); }
- wake_up(&data->gp0_wait_q); + if (signal_waitq) { + bt_dev_dbg(data->hdev, "wake up gp0 wait_q"); + wake_up(&data->gp0_wait_q); + } + + if (old_ctxt != data->alive_intr_ctxt) + bt_dev_dbg(data->hdev, "alive context changed: %s -> %s", + btintel_pcie_alivectxt_state2str(old_ctxt), + btintel_pcie_alivectxt_state2str(data->alive_intr_ctxt)); }
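The rewritten gp0 handler is effectively a small state machine: the driver records why it expects the next alive interrupt (ROM boot, firmware download, HCI reset, vendor reset, D0/D3 transition) and each interrupt advances the state, optionally restarting RX or waking the boot/D0 waiters. A reduced, standalone sketch of the transition logic (actions simplified; state names follow the patch, everything else is invented):

/* Reduced sketch of the alive-interrupt state machine. Actions simplified. */
#include <stdbool.h>
#include <stdio.h>

enum ctxt { ROM, FW_DL, HCI_RESET, INTEL_HCI_RESET1, INTEL_HCI_RESET2, D0, D3 };

struct result { enum ctxt next; bool submit_rx; bool wake; };

static struct result handle_alive(enum ctxt cur, bool in_op, bool in_iml, bool in_d3)
{
	struct result r = { cur, false, false };

	switch (cur) {
	case ROM:
		r.next = FW_DL; r.wake = true; break;
	case INTEL_HCI_RESET1:
		if (in_op)
			r.submit_rx = true;
		else if (in_iml) {
			r.next = FW_DL; r.submit_rx = true;
		}
		break;
	case INTEL_HCI_RESET2:
		r.next = D0; break;			/* also clears the D0 wait flag */
	case D0:
		if (in_d3) { r.next = D3; r.wake = true; }
		break;
	case D3:
		if (!in_d3) { r.next = D0; r.submit_rx = true; r.wake = true; }
		break;
	case HCI_RESET:
		r.next = D0; r.submit_rx = true; r.wake = true; break;
	default:
		break;					/* FW_DL: nothing to do */
	}
	return r;
}

int main(void)
{
	struct result r = handle_alive(ROM, false, false, false);

	printf("next=%d submit_rx=%d wake=%d\n", r.next, r.submit_rx, r.wake);
	return 0;
}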
/* This function handles the MSX-X interrupt for rx queue 0 which is for TX @@ -364,6 +509,83 @@ static void btintel_pcie_msix_tx_handle(struct btintel_pcie_data *data) } }
+static int btintel_pcie_recv_event(struct hci_dev *hdev, struct sk_buff *skb) +{ + struct hci_event_hdr *hdr = (void *)skb->data; + const char diagnostics_hdr[] = { 0x87, 0x80, 0x03 }; + struct btintel_pcie_data *data = hci_get_drvdata(hdev); + + if (skb->len > HCI_EVENT_HDR_SIZE && hdr->evt == 0xff && + hdr->plen > 0) { + const void *ptr = skb->data + HCI_EVENT_HDR_SIZE + 1; + unsigned int len = skb->len - HCI_EVENT_HDR_SIZE - 1; + + if (btintel_test_flag(hdev, INTEL_BOOTLOADER)) { + switch (skb->data[2]) { + case 0x02: + /* When switching to the operational firmware + * the device sends a vendor specific event + * indicating that the bootup completed. + */ + btintel_bootup(hdev, ptr, len); + + /* If bootup event is from operational image, + * driver needs to write sleep control register to + * move into D0 state + */ + if (btintel_pcie_in_op(data)) { + btintel_pcie_wr_sleep_cntrl(data, BTINTEL_PCIE_STATE_D0); + data->alive_intr_ctxt = BTINTEL_PCIE_INTEL_HCI_RESET2; + kfree_skb(skb); + return 0; + } + + if (btintel_pcie_in_iml(data)) { + /* In case of IML, there is no concept + * of D0 transition. Just mimic as if + * IML moved to D0 by clearing INTEL_WAIT_FOR_D0 + * bit and waking up the task waiting on + * INTEL_WAIT_FOR_D0. This is required + * as intel_boot() is common function for + * both IML and OP image loading. + */ + if (btintel_test_and_clear_flag(data->hdev, + INTEL_WAIT_FOR_D0)) + btintel_wake_up_flag(data->hdev, + INTEL_WAIT_FOR_D0); + } + kfree_skb(skb); + return 0; + case 0x06: + /* When the firmware loading completes the + * device sends out a vendor specific event + * indicating the result of the firmware + * loading. + */ + btintel_secure_send_result(hdev, ptr, len); + kfree_skb(skb); + return 0; + } + } + + /* Handle all diagnostics events separately. May still call + * hci_recv_frame. + */ + if (len >= sizeof(diagnostics_hdr) && + memcmp(&skb->data[2], diagnostics_hdr, + sizeof(diagnostics_hdr)) == 0) { + return btintel_diagnostics(hdev, skb); + } + + /* This is a debug event that comes from IML and OP image when it + * starts execution. There is no need pass this event to stack. + */ + if (skb->data[2] == 0x97) + return 0; + } + + return hci_recv_frame(hdev, skb); +} /* Process the received rx data * It check the frame header to identify the data type and create skb * and calling HCI API @@ -465,7 +687,7 @@ static int btintel_pcie_recv_frame(struct btintel_pcie_data *data, hdev->stat.byte_rx += plen;
if (pcie_pkt_type == BTINTEL_PCIE_HCI_EVT_PKT) - ret = btintel_recv_event(hdev, new_skb); + ret = btintel_pcie_recv_event(hdev, new_skb); else ret = hci_recv_frame(hdev, new_skb);
@@ -1053,8 +1275,11 @@ static int btintel_pcie_send_frame(struct hci_dev *hdev, struct sk_buff *skb) { struct btintel_pcie_data *data = hci_get_drvdata(hdev); + struct hci_command_hdr *cmd; + __u16 opcode = ~0; int ret; u32 type; + u32 old_ctxt;
/* Due to the fw limitation, the type header of the packet should be * 4 bytes unlike 1 byte for UART. In UART, the firmware can read @@ -1073,6 +1298,8 @@ static int btintel_pcie_send_frame(struct hci_dev *hdev, switch (hci_skb_pkt_type(skb)) { case HCI_COMMAND_PKT: type = BTINTEL_PCIE_HCI_CMD_PKT; + cmd = (void *)skb->data; + opcode = le16_to_cpu(cmd->opcode); if (btintel_test_flag(hdev, INTEL_BOOTLOADER)) { struct hci_command_hdr *cmd = (void *)skb->data; __u16 opcode = le16_to_cpu(cmd->opcode); @@ -1111,6 +1338,30 @@ static int btintel_pcie_send_frame(struct hci_dev *hdev, bt_dev_err(hdev, "Failed to send frame (%d)", ret); goto exit_error; } + + if (type == BTINTEL_PCIE_HCI_CMD_PKT && + (opcode == HCI_OP_RESET || opcode == 0xfc01)) { + old_ctxt = data->alive_intr_ctxt; + data->alive_intr_ctxt = + (opcode == 0xfc01 ? BTINTEL_PCIE_INTEL_HCI_RESET1 : + BTINTEL_PCIE_HCI_RESET); + bt_dev_dbg(data->hdev, "sent cmd: 0x%4.4x alive context changed: %s -> %s", + opcode, btintel_pcie_alivectxt_state2str(old_ctxt), + btintel_pcie_alivectxt_state2str(data->alive_intr_ctxt)); + if (opcode == HCI_OP_RESET) { + data->gp0_received = false; + ret = wait_event_timeout(data->gp0_wait_q, + data->gp0_received, + msecs_to_jiffies(BTINTEL_DEFAULT_INTR_TIMEOUT_MS)); + if (!ret) { + hdev->stat.err_tx++; + bt_dev_err(hdev, "No alive interrupt received for %s", + btintel_pcie_alivectxt_state2str(data->alive_intr_ctxt)); + ret = -ETIME; + goto exit_error; + } + } + } hdev->stat.byte_tx += skb->len; kfree_skb(skb);
diff --git a/drivers/bluetooth/btintel_pcie.h b/drivers/bluetooth/btintel_pcie.h index baaff70420f5..8b7824ad005a 100644 --- a/drivers/bluetooth/btintel_pcie.h +++ b/drivers/bluetooth/btintel_pcie.h @@ -12,6 +12,7 @@ #define BTINTEL_PCIE_CSR_HW_REV_REG (BTINTEL_PCIE_CSR_BASE + 0x028) #define BTINTEL_PCIE_CSR_RF_ID_REG (BTINTEL_PCIE_CSR_BASE + 0x09C) #define BTINTEL_PCIE_CSR_BOOT_STAGE_REG (BTINTEL_PCIE_CSR_BASE + 0x108) +#define BTINTEL_PCIE_CSR_IPC_SLEEP_CTL_REG (BTINTEL_PCIE_CSR_BASE + 0x114) #define BTINTEL_PCIE_CSR_CI_ADDR_LSB_REG (BTINTEL_PCIE_CSR_BASE + 0x118) #define BTINTEL_PCIE_CSR_CI_ADDR_MSB_REG (BTINTEL_PCIE_CSR_BASE + 0x11C) #define BTINTEL_PCIE_CSR_IMG_RESPONSE_REG (BTINTEL_PCIE_CSR_BASE + 0x12C) @@ -32,6 +33,7 @@ #define BTINTEL_PCIE_CSR_BOOT_STAGE_IML_LOCKDOWN (BIT(11)) #define BTINTEL_PCIE_CSR_BOOT_STAGE_MAC_ACCESS_ON (BIT(16)) #define BTINTEL_PCIE_CSR_BOOT_STAGE_ALIVE (BIT(23)) +#define BTINTEL_PCIE_CSR_BOOT_STAGE_D3_STATE_READY (BIT(24))
/* Registers for MSI-X */ #define BTINTEL_PCIE_CSR_MSIX_BASE (0x2000) @@ -55,6 +57,16 @@ enum msix_hw_int_causes { BTINTEL_PCIE_MSIX_HW_INT_CAUSES_GP0 = BIT(0), /* cause 32 */ };
+/* PCIe device states + * Host-Device interface is active + * Host-Device interface is inactive(as reflected by IPC_SLEEP_CONTROL_CSR_AD) + * Host-Device interface is inactive(as reflected by IPC_SLEEP_CONTROL_CSR_AD) + */ +enum { + BTINTEL_PCIE_STATE_D0 = 0, + BTINTEL_PCIE_STATE_D3_HOT = 2, + BTINTEL_PCIE_STATE_D3_COLD = 3, +}; #define BTINTEL_PCIE_MSIX_NON_AUTO_CLEAR_CAUSE BIT(7)
/* Minimum and Maximum number of MSI-X Vector @@ -67,7 +79,7 @@ enum msix_hw_int_causes { #define BTINTEL_DEFAULT_MAC_ACCESS_TIMEOUT_US 200000
/* Default interrupt timeout in msec */ -#define BTINTEL_DEFAULT_INTR_TIMEOUT 3000 +#define BTINTEL_DEFAULT_INTR_TIMEOUT_MS 3000
/* The number of descriptors in TX/RX queues */ #define BTINTEL_DESCS_COUNT 16 @@ -343,6 +355,7 @@ struct rxq { * @ia: Index Array struct * @txq: TX Queue struct * @rxq: RX Queue struct + * @alive_intr_ctxt: Alive interrupt context */ struct btintel_pcie_data { struct pci_dev *pdev; @@ -389,6 +402,7 @@ struct btintel_pcie_data { struct ia ia; struct txq txq; struct rxq rxq; + u32 alive_intr_ctxt; };
static inline u32 btintel_pcie_rd_reg32(struct btintel_pcie_data *data, diff --git a/drivers/bluetooth/btmtk.c b/drivers/bluetooth/btmtk.c index 9bbf20502163..480e4adba9fa 100644 --- a/drivers/bluetooth/btmtk.c +++ b/drivers/bluetooth/btmtk.c @@ -1215,7 +1215,6 @@ static int btmtk_usb_isointf_init(struct hci_dev *hdev) struct sk_buff *skb; int err;
- init_usb_anchor(&btmtk_data->isopkt_anchor); spin_lock_init(&btmtk_data->isorxlock);
__set_mtk_intr_interface(hdev); diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c index e9534fbc92e3..4ccaddb46ddd 100644 --- a/drivers/bluetooth/btusb.c +++ b/drivers/bluetooth/btusb.c @@ -2616,6 +2616,7 @@ static void btusb_mtk_claim_iso_intf(struct btusb_data *data) }
set_bit(BTMTK_ISOPKT_OVER_INTR, &btmtk_data->flags); + init_usb_anchor(&btmtk_data->isopkt_anchor); }
static void btusb_mtk_release_iso_intf(struct btusb_data *data) diff --git a/drivers/bus/mhi/host/trace.h b/drivers/bus/mhi/host/trace.h index 95613c8ebe06..3e0c41777429 100644 --- a/drivers/bus/mhi/host/trace.h +++ b/drivers/bus/mhi/host/trace.h @@ -9,6 +9,7 @@ #if !defined(_TRACE_EVENT_MHI_HOST_H) || defined(TRACE_HEADER_MULTI_READ) #define _TRACE_EVENT_MHI_HOST_H
+#include <linux/byteorder/generic.h> #include <linux/tracepoint.h> #include <linux/trace_seq.h> #include "../common.h" @@ -97,18 +98,18 @@ TRACE_EVENT(mhi_gen_tre, __string(name, mhi_cntrl->mhi_dev->name) __field(int, ch_num) __field(void *, wp) - __field(__le64, tre_ptr) - __field(__le32, dword0) - __field(__le32, dword1) + __field(uint64_t, tre_ptr) + __field(uint32_t, dword0) + __field(uint32_t, dword1) ),
TP_fast_assign( __assign_str(name); __entry->ch_num = mhi_chan->chan; __entry->wp = mhi_tre; - __entry->tre_ptr = mhi_tre->ptr; - __entry->dword0 = mhi_tre->dword[0]; - __entry->dword1 = mhi_tre->dword[1]; + __entry->tre_ptr = le64_to_cpu(mhi_tre->ptr); + __entry->dword0 = le32_to_cpu(mhi_tre->dword[0]); + __entry->dword1 = le32_to_cpu(mhi_tre->dword[1]); ),
TP_printk("%s: Chan: %d TRE: 0x%p TRE buf: 0x%llx DWORD0: 0x%08x DWORD1: 0x%08x\n", @@ -176,19 +177,19 @@ DECLARE_EVENT_CLASS(mhi_process_event_ring,
TP_STRUCT__entry( __string(name, mhi_cntrl->mhi_dev->name) - __field(__le32, dword0) - __field(__le32, dword1) + __field(uint32_t, dword0) + __field(uint32_t, dword1) __field(int, state) - __field(__le64, ptr) + __field(uint64_t, ptr) __field(void *, rp) ),
TP_fast_assign( __assign_str(name); __entry->rp = rp; - __entry->ptr = rp->ptr; - __entry->dword0 = rp->dword[0]; - __entry->dword1 = rp->dword[1]; + __entry->ptr = le64_to_cpu(rp->ptr); + __entry->dword0 = le32_to_cpu(rp->dword[0]); + __entry->dword1 = le32_to_cpu(rp->dword[1]); __entry->state = MHI_TRE_GET_EV_STATE(rp); ),
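As an aside on the mhi tracepoint change above: the TRE fields arrive little-endian from the device, and the fix converts them to host order once, when the trace entry is filled in, so the stored fields are plain integers. A small userspace sketch of that capture-time conversion, using glibc's endian.h rather than the kernel helpers:

/* Hypothetical userspace analogue of the tracepoint change: convert
 * little-endian wire fields to host order once, at capture time, so the
 * stored values are plain integers. Not the kernel API itself.
 */
#include <endian.h>
#include <stdint.h>
#include <stdio.h>

struct wire_tre {            /* little-endian, as the device writes it */
	uint64_t ptr_le;
	uint32_t dword_le[2];
};

struct trace_entry {         /* host-endian copy kept for later printing */
	uint64_t tre_ptr;
	uint32_t dword0;
	uint32_t dword1;
};

int main(void)
{
	struct wire_tre tre = {
		.ptr_le   = htole64(0x1122334455667788ULL),
		.dword_le = { htole32(0xdeadbeef), htole32(0x00c0ffee) },
	};
	struct trace_entry ent = {
		.tre_ptr = le64toh(tre.ptr_le),      /* convert at assign time */
		.dword0  = le32toh(tre.dword_le[0]),
		.dword1  = le32toh(tre.dword_le[1]),
	};

	printf("TRE buf: 0x%llx DWORD0: 0x%08x DWORD1: 0x%08x\n",
	       (unsigned long long)ent.tre_ptr, ent.dword0, ent.dword1);
	return 0;
}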
diff --git a/drivers/clk/.kunitconfig b/drivers/clk/.kunitconfig index 54ece9207055..08e26137f3d9 100644 --- a/drivers/clk/.kunitconfig +++ b/drivers/clk/.kunitconfig @@ -1,5 +1,6 @@ CONFIG_KUNIT=y CONFIG_OF=y +CONFIG_OF_OVERLAY=y CONFIG_COMMON_CLK=y CONFIG_CLK_KUNIT_TEST=y CONFIG_CLK_FIXED_RATE_KUNIT_TEST=y diff --git a/drivers/clk/Kconfig b/drivers/clk/Kconfig index 299bc678ed1b..0fe07a594b4e 100644 --- a/drivers/clk/Kconfig +++ b/drivers/clk/Kconfig @@ -517,7 +517,6 @@ config CLK_KUNIT_TEST tristate "Basic Clock Framework Kunit Tests" if !KUNIT_ALL_TESTS depends on KUNIT default KUNIT_ALL_TESTS - select OF_OVERLAY if OF select DTC help Kunit tests for the common clock framework. @@ -526,7 +525,6 @@ config CLK_FIXED_RATE_KUNIT_TEST tristate "Basic fixed rate clk type KUnit test" if !KUNIT_ALL_TESTS depends on KUNIT default KUNIT_ALL_TESTS - select OF_OVERLAY if OF select DTC help KUnit tests for the basic fixed rate clk type. diff --git a/drivers/clk/clk-apple-nco.c b/drivers/clk/clk-apple-nco.c index 39472a51530a..457a48d48941 100644 --- a/drivers/clk/clk-apple-nco.c +++ b/drivers/clk/clk-apple-nco.c @@ -297,6 +297,9 @@ static int applnco_probe(struct platform_device *pdev) memset(&init, 0, sizeof(init)); init.name = devm_kasprintf(&pdev->dev, GFP_KERNEL, "%s-%d", np->name, i); + if (!init.name) + return -ENOMEM; + init.ops = &applnco_ops; init.parent_data = &pdata; init.num_parents = 1; diff --git a/drivers/clk/clk-axi-clkgen.c b/drivers/clk/clk-axi-clkgen.c index bf4d8ddc93ae..934e53a96ddd 100644 --- a/drivers/clk/clk-axi-clkgen.c +++ b/drivers/clk/clk-axi-clkgen.c @@ -7,6 +7,7 @@ */
#include <linux/platform_device.h> +#include <linux/clk.h> #include <linux/clk-provider.h> #include <linux/slab.h> #include <linux/io.h> @@ -512,6 +513,7 @@ static int axi_clkgen_probe(struct platform_device *pdev) struct clk_init_data init; const char *parent_names[2]; const char *clk_name; + struct clk *axi_clk; unsigned int i; int ret;
@@ -528,8 +530,24 @@ static int axi_clkgen_probe(struct platform_device *pdev) return PTR_ERR(axi_clkgen->base);
init.num_parents = of_clk_get_parent_count(pdev->dev.of_node); - if (init.num_parents < 1 || init.num_parents > 2) - return -EINVAL; + + axi_clk = devm_clk_get_enabled(&pdev->dev, "s_axi_aclk"); + if (!IS_ERR(axi_clk)) { + if (init.num_parents < 2 || init.num_parents > 3) + return -EINVAL; + + init.num_parents -= 1; + } else { + /* + * Legacy... So that old DTs which do not have clock-names still + * work. In this case we don't explicitly enable the AXI bus + * clock. + */ + if (PTR_ERR(axi_clk) != -ENOENT) + return PTR_ERR(axi_clk); + if (init.num_parents < 1 || init.num_parents > 2) + return -EINVAL; + }
for (i = 0; i < init.num_parents; i++) { parent_names[i] = of_clk_get_parent_name(pdev->dev.of_node, i); diff --git a/drivers/clk/clk-en7523.c b/drivers/clk/clk-en7523.c index 22fbea61c3dc..fdd8ea989ed2 100644 --- a/drivers/clk/clk-en7523.c +++ b/drivers/clk/clk-en7523.c @@ -3,8 +3,10 @@ #include <linux/delay.h> #include <linux/clk-provider.h> #include <linux/io.h> +#include <linux/mfd/syscon.h> #include <linux/platform_device.h> #include <linux/property.h> +#include <linux/regmap.h> #include <linux/reset-controller.h> #include <dt-bindings/clock/en7523-clk.h> #include <dt-bindings/reset/airoha,en7581-reset.h> @@ -31,16 +33,11 @@ #define REG_RESET_CONTROL_PCIE1 BIT(27) #define REG_RESET_CONTROL_PCIE2 BIT(26) /* EN7581 */ -#define REG_PCIE0_MEM 0x00 -#define REG_PCIE0_MEM_MASK 0x04 -#define REG_PCIE1_MEM 0x08 -#define REG_PCIE1_MEM_MASK 0x0c -#define REG_PCIE2_MEM 0x10 -#define REG_PCIE2_MEM_MASK 0x14 #define REG_NP_SCU_PCIC 0x88 #define REG_NP_SCU_SSTR 0x9c #define REG_PCIE_XSI0_SEL_MASK GENMASK(14, 13) #define REG_PCIE_XSI1_SEL_MASK GENMASK(12, 11) +#define REG_CRYPTO_CLKSRC2 0x20c
#define REG_RST_CTRL2 0x00 #define REG_RST_CTRL1 0x04 @@ -84,7 +81,8 @@ struct en_clk_soc_data { const u16 *idx_map; u16 idx_map_nr; } reset; - int (*hw_init)(struct platform_device *pdev, void __iomem *np_base); + int (*hw_init)(struct platform_device *pdev, + struct clk_hw_onecell_data *clk_data); };
static const u32 gsw_base[] = { 400000000, 500000000 }; @@ -92,6 +90,10 @@ static const u32 emi_base[] = { 333000000, 400000000 }; static const u32 bus_base[] = { 500000000, 540000000 }; static const u32 slic_base[] = { 100000000, 3125000 }; static const u32 npu_base[] = { 333000000, 400000000, 500000000 }; +/* EN7581 */ +static const u32 emi7581_base[] = { 540000000, 480000000, 400000000, 300000000 }; +static const u32 npu7581_base[] = { 800000000, 750000000, 720000000, 600000000 }; +static const u32 crypto_base[] = { 540000000, 480000000 };
static const struct en_clk_desc en7523_base_clks[] = { { @@ -189,6 +191,102 @@ static const struct en_clk_desc en7523_base_clks[] = { } };
+static const struct en_clk_desc en7581_base_clks[] = { + { + .id = EN7523_CLK_GSW, + .name = "gsw", + + .base_reg = REG_GSW_CLK_DIV_SEL, + .base_bits = 1, + .base_shift = 8, + .base_values = gsw_base, + .n_base_values = ARRAY_SIZE(gsw_base), + + .div_bits = 3, + .div_shift = 0, + .div_step = 1, + .div_offset = 1, + }, { + .id = EN7523_CLK_EMI, + .name = "emi", + + .base_reg = REG_EMI_CLK_DIV_SEL, + .base_bits = 2, + .base_shift = 8, + .base_values = emi7581_base, + .n_base_values = ARRAY_SIZE(emi7581_base), + + .div_bits = 3, + .div_shift = 0, + .div_step = 1, + .div_offset = 1, + }, { + .id = EN7523_CLK_BUS, + .name = "bus", + + .base_reg = REG_BUS_CLK_DIV_SEL, + .base_bits = 1, + .base_shift = 8, + .base_values = bus_base, + .n_base_values = ARRAY_SIZE(bus_base), + + .div_bits = 3, + .div_shift = 0, + .div_step = 1, + .div_offset = 1, + }, { + .id = EN7523_CLK_SLIC, + .name = "slic", + + .base_reg = REG_SPI_CLK_FREQ_SEL, + .base_bits = 1, + .base_shift = 0, + .base_values = slic_base, + .n_base_values = ARRAY_SIZE(slic_base), + + .div_reg = REG_SPI_CLK_DIV_SEL, + .div_bits = 5, + .div_shift = 24, + .div_val0 = 20, + .div_step = 2, + }, { + .id = EN7523_CLK_SPI, + .name = "spi", + + .base_reg = REG_SPI_CLK_DIV_SEL, + + .base_value = 400000000, + + .div_bits = 5, + .div_shift = 8, + .div_val0 = 40, + .div_step = 2, + }, { + .id = EN7523_CLK_NPU, + .name = "npu", + + .base_reg = REG_NPU_CLK_DIV_SEL, + .base_bits = 2, + .base_shift = 8, + .base_values = npu7581_base, + .n_base_values = ARRAY_SIZE(npu7581_base), + + .div_bits = 3, + .div_shift = 0, + .div_step = 1, + .div_offset = 1, + }, { + .id = EN7523_CLK_CRYPTO, + .name = "crypto", + + .base_reg = REG_CRYPTO_CLKSRC2, + .base_bits = 1, + .base_shift = 0, + .base_values = crypto_base, + .n_base_values = ARRAY_SIZE(crypto_base), + } +}; + static const u16 en7581_rst_ofs[] = { REG_RST_CTRL2, REG_RST_CTRL1, @@ -252,15 +350,11 @@ static const u16 en7581_rst_map[] = { [EN7581_XPON_MAC_RST] = RST_NR_PER_BANK + 31, };
-static unsigned int en7523_get_base_rate(void __iomem *base, unsigned int i) +static u32 en7523_get_base_rate(const struct en_clk_desc *desc, u32 val) { - const struct en_clk_desc *desc = &en7523_base_clks[i]; - u32 val; - if (!desc->base_bits) return desc->base_value;
- val = readl(base + desc->base_reg); val >>= desc->base_shift; val &= (1 << desc->base_bits) - 1;
@@ -270,16 +364,11 @@ static unsigned int en7523_get_base_rate(void __iomem *base, unsigned int i) return desc->base_values[val]; }
-static u32 en7523_get_div(void __iomem *base, int i) +static u32 en7523_get_div(const struct en_clk_desc *desc, u32 val) { - const struct en_clk_desc *desc = &en7523_base_clks[i]; - u32 reg, val; - if (!desc->div_bits) return 1;
- reg = desc->div_reg ? desc->div_reg : desc->base_reg; - val = readl(base + reg); val >>= desc->div_shift; val &= (1 << desc->div_bits) - 1;
@@ -412,44 +501,83 @@ static void en7581_pci_disable(struct clk_hw *hw) usleep_range(1000, 2000); }
-static int en7581_clk_hw_init(struct platform_device *pdev, - void __iomem *np_base) +static void en7523_register_clocks(struct device *dev, struct clk_hw_onecell_data *clk_data, + void __iomem *base, void __iomem *np_base) { - void __iomem *pb_base; - u32 val; + struct clk_hw *hw; + u32 rate; + int i; + + for (i = 0; i < ARRAY_SIZE(en7523_base_clks); i++) { + const struct en_clk_desc *desc = &en7523_base_clks[i]; + u32 reg = desc->div_reg ? desc->div_reg : desc->base_reg; + u32 val = readl(base + desc->base_reg);
- pb_base = devm_platform_ioremap_resource(pdev, 3); - if (IS_ERR(pb_base)) - return PTR_ERR(pb_base); + rate = en7523_get_base_rate(desc, val); + val = readl(base + reg); + rate /= en7523_get_div(desc, val);
- val = readl(np_base + REG_NP_SCU_SSTR); - val &= ~(REG_PCIE_XSI0_SEL_MASK | REG_PCIE_XSI1_SEL_MASK); - writel(val, np_base + REG_NP_SCU_SSTR); - val = readl(np_base + REG_NP_SCU_PCIC); - writel(val | 3, np_base + REG_NP_SCU_PCIC); + hw = clk_hw_register_fixed_rate(dev, desc->name, NULL, 0, rate); + if (IS_ERR(hw)) { + pr_err("Failed to register clk %s: %ld\n", + desc->name, PTR_ERR(hw)); + continue; + } + + clk_data->hws[desc->id] = hw; + } + + hw = en7523_register_pcie_clk(dev, np_base); + clk_data->hws[EN7523_CLK_PCIE] = hw; + + clk_data->num = EN7523_NUM_CLOCKS; +} + +static int en7523_clk_hw_init(struct platform_device *pdev, + struct clk_hw_onecell_data *clk_data) +{ + void __iomem *base, *np_base; + + base = devm_platform_ioremap_resource(pdev, 0); + if (IS_ERR(base)) + return PTR_ERR(base); + + np_base = devm_platform_ioremap_resource(pdev, 1); + if (IS_ERR(np_base)) + return PTR_ERR(np_base);
- writel(0x20000000, pb_base + REG_PCIE0_MEM); - writel(0xfc000000, pb_base + REG_PCIE0_MEM_MASK); - writel(0x24000000, pb_base + REG_PCIE1_MEM); - writel(0xfc000000, pb_base + REG_PCIE1_MEM_MASK); - writel(0x28000000, pb_base + REG_PCIE2_MEM); - writel(0xfc000000, pb_base + REG_PCIE2_MEM_MASK); + en7523_register_clocks(&pdev->dev, clk_data, base, np_base);
return 0; }
-static void en7523_register_clocks(struct device *dev, struct clk_hw_onecell_data *clk_data, - void __iomem *base, void __iomem *np_base) +static void en7581_register_clocks(struct device *dev, struct clk_hw_onecell_data *clk_data, + struct regmap *map, void __iomem *base) { struct clk_hw *hw; u32 rate; int i;
- for (i = 0; i < ARRAY_SIZE(en7523_base_clks); i++) { - const struct en_clk_desc *desc = &en7523_base_clks[i]; + for (i = 0; i < ARRAY_SIZE(en7581_base_clks); i++) { + const struct en_clk_desc *desc = &en7581_base_clks[i]; + u32 val, reg = desc->div_reg ? desc->div_reg : desc->base_reg; + int err;
- rate = en7523_get_base_rate(base, i); - rate /= en7523_get_div(base, i); + err = regmap_read(map, desc->base_reg, &val); + if (err) { + pr_err("Failed reading fixed clk rate %s: %d\n", + desc->name, err); + continue; + } + rate = en7523_get_base_rate(desc, val); + + err = regmap_read(map, reg, &val); + if (err) { + pr_err("Failed reading fixed clk div %s: %d\n", + desc->name, err); + continue; + } + rate /= en7523_get_div(desc, val);
hw = clk_hw_register_fixed_rate(dev, desc->name, NULL, 0, rate); if (IS_ERR(hw)) { @@ -461,12 +589,38 @@ static void en7523_register_clocks(struct device *dev, struct clk_hw_onecell_dat clk_data->hws[desc->id] = hw; }
- hw = en7523_register_pcie_clk(dev, np_base); + hw = en7523_register_pcie_clk(dev, base); clk_data->hws[EN7523_CLK_PCIE] = hw;
clk_data->num = EN7523_NUM_CLOCKS; }
+static int en7581_clk_hw_init(struct platform_device *pdev, + struct clk_hw_onecell_data *clk_data) +{ + void __iomem *np_base; + struct regmap *map; + u32 val; + + map = syscon_regmap_lookup_by_compatible("airoha,en7581-chip-scu"); + if (IS_ERR(map)) + return PTR_ERR(map); + + np_base = devm_platform_ioremap_resource(pdev, 0); + if (IS_ERR(np_base)) + return PTR_ERR(np_base); + + en7581_register_clocks(&pdev->dev, clk_data, map, np_base); + + val = readl(np_base + REG_NP_SCU_SSTR); + val &= ~(REG_PCIE_XSI0_SEL_MASK | REG_PCIE_XSI1_SEL_MASK); + writel(val, np_base + REG_NP_SCU_SSTR); + val = readl(np_base + REG_NP_SCU_PCIC); + writel(val | 3, np_base + REG_NP_SCU_PCIC); + + return 0; +} + static int en7523_reset_update(struct reset_controller_dev *rcdev, unsigned long id, bool assert) { @@ -533,7 +687,7 @@ static int en7523_reset_register(struct platform_device *pdev, if (!soc_data->reset.idx_map_nr) return 0;
- base = devm_platform_ioremap_resource(pdev, 2); + base = devm_platform_ioremap_resource(pdev, 1); if (IS_ERR(base)) return PTR_ERR(base);
@@ -561,31 +715,18 @@ static int en7523_clk_probe(struct platform_device *pdev) struct device_node *node = pdev->dev.of_node; const struct en_clk_soc_data *soc_data; struct clk_hw_onecell_data *clk_data; - void __iomem *base, *np_base; int r;
- base = devm_platform_ioremap_resource(pdev, 0); - if (IS_ERR(base)) - return PTR_ERR(base); - - np_base = devm_platform_ioremap_resource(pdev, 1); - if (IS_ERR(np_base)) - return PTR_ERR(np_base); - - soc_data = device_get_match_data(&pdev->dev); - if (soc_data->hw_init) { - r = soc_data->hw_init(pdev, np_base); - if (r) - return r; - } - clk_data = devm_kzalloc(&pdev->dev, struct_size(clk_data, hws, EN7523_NUM_CLOCKS), GFP_KERNEL); if (!clk_data) return -ENOMEM;
- en7523_register_clocks(&pdev->dev, clk_data, base, np_base); + soc_data = device_get_match_data(&pdev->dev); + r = soc_data->hw_init(pdev, clk_data); + if (r) + return r;
r = of_clk_add_hw_provider(node, of_clk_hw_onecell_get, clk_data); if (r) @@ -608,6 +749,7 @@ static const struct en_clk_soc_data en7523_data = { .prepare = en7523_pci_prepare, .unprepare = en7523_pci_unprepare, }, + .hw_init = en7523_clk_hw_init, };
static const struct en_clk_soc_data en7581_data = { diff --git a/drivers/clk/clk-loongson2.c b/drivers/clk/clk-loongson2.c index 820bb1e9e3b7..7082b4309c6f 100644 --- a/drivers/clk/clk-loongson2.c +++ b/drivers/clk/clk-loongson2.c @@ -29,8 +29,10 @@ enum loongson2_clk_type { struct loongson2_clk_provider { void __iomem *base; struct device *dev; - struct clk_hw_onecell_data clk_data; spinlock_t clk_lock; /* protect access to DIV registers */ + + /* Must be last --ends in a flexible-array member. */ + struct clk_hw_onecell_data clk_data; };
struct loongson2_clk_data { @@ -304,7 +306,7 @@ static int loongson2_clk_probe(struct platform_device *pdev) return PTR_ERR(clp->base);
spin_lock_init(&clp->clk_lock); - clp->clk_data.num = clks_num + 1; + clp->clk_data.num = clks_num; clp->dev = dev;
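Two related details in the loongson2 hunks above: struct clk_hw_onecell_data ends in a flexible array, so it has to be the last member of the enclosing provider struct, and the element count stored in .num must match what was actually allocated. A standalone sketch of that layout; the struct and field names here are invented:

/* Minimal sketch of the "flexible array member must be last" rule the
 * loongson2 change enforces; the structs are made up for the example.
 */
#include <stdio.h>
#include <stdlib.h>

struct cell_data {
	unsigned int num;
	void *hws[];          /* flexible array member: must be last */
};

struct provider {
	void *base;
	int lock;
	/* Must be last -- ends in a flexible-array member. */
	struct cell_data clk_data;
};

int main(void)
{
	unsigned int clks_num = 8;
	/* allocate the header plus clks_num trailing pointer slots */
	struct provider *p = malloc(sizeof(*p) + clks_num * sizeof(void *));

	if (!p)
		return 1;
	p->clk_data.num = clks_num;       /* not clks_num + 1 */
	for (unsigned int i = 0; i < clks_num; i++)
		p->clk_data.hws[i] = NULL;
	printf("provider with %u clock slots\n", p->clk_data.num);
	free(p);
	return 0;
}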
for (i = 0; i < clks_num; i++) { diff --git a/drivers/clk/imx/clk-fracn-gppll.c b/drivers/clk/imx/clk-fracn-gppll.c index 591e0364ee5c..85771afd4698 100644 --- a/drivers/clk/imx/clk-fracn-gppll.c +++ b/drivers/clk/imx/clk-fracn-gppll.c @@ -254,9 +254,11 @@ static int clk_fracn_gppll_set_rate(struct clk_hw *hw, unsigned long drate, pll_div = FIELD_PREP(PLL_RDIV_MASK, rate->rdiv) | rate->odiv | FIELD_PREP(PLL_MFI_MASK, rate->mfi); writel_relaxed(pll_div, pll->base + PLL_DIV); + readl(pll->base + PLL_DIV); if (pll->flags & CLK_FRACN_GPPLL_FRACN) { writel_relaxed(rate->mfd, pll->base + PLL_DENOMINATOR); writel_relaxed(FIELD_PREP(PLL_MFN_MASK, rate->mfn), pll->base + PLL_NUMERATOR); + readl(pll->base + PLL_NUMERATOR); }
/* Wait for 5us according to fracn mode pll doc */ @@ -265,6 +267,7 @@ static int clk_fracn_gppll_set_rate(struct clk_hw *hw, unsigned long drate, /* Enable Powerup */ tmp |= POWERUP_MASK; writel_relaxed(tmp, pll->base + PLL_CTRL); + readl(pll->base + PLL_CTRL);
/* Wait Lock */ ret = clk_fracn_gppll_wait_lock(pll); @@ -302,14 +305,15 @@ static int clk_fracn_gppll_prepare(struct clk_hw *hw)
val |= POWERUP_MASK; writel_relaxed(val, pll->base + PLL_CTRL); - - val |= CLKMUX_EN; - writel_relaxed(val, pll->base + PLL_CTRL); + readl(pll->base + PLL_CTRL);
ret = clk_fracn_gppll_wait_lock(pll); if (ret) return ret;
+ val |= CLKMUX_EN; + writel_relaxed(val, pll->base + PLL_CTRL); + val &= ~CLKMUX_BYPASS; writel_relaxed(val, pll->base + PLL_CTRL);
diff --git a/drivers/clk/imx/clk-imx8-acm.c b/drivers/clk/imx/clk-imx8-acm.c index 6c351050b82a..c169fe53a35f 100644 --- a/drivers/clk/imx/clk-imx8-acm.c +++ b/drivers/clk/imx/clk-imx8-acm.c @@ -294,9 +294,9 @@ static int clk_imx_acm_attach_pm_domains(struct device *dev, DL_FLAG_STATELESS | DL_FLAG_PM_RUNTIME | DL_FLAG_RPM_ACTIVE); - if (IS_ERR(dev_pm->pd_dev_link[i])) { + if (!dev_pm->pd_dev_link[i]) { dev_pm_domain_detach(dev_pm->pd_dev[i], false); - ret = PTR_ERR(dev_pm->pd_dev_link[i]); + ret = -EINVAL; goto detach_pm; } } diff --git a/drivers/clk/imx/clk-lpcg-scu.c b/drivers/clk/imx/clk-lpcg-scu.c index dd5abd09f3e2..620afdf8dc03 100644 --- a/drivers/clk/imx/clk-lpcg-scu.c +++ b/drivers/clk/imx/clk-lpcg-scu.c @@ -6,10 +6,12 @@
#include <linux/bits.h> #include <linux/clk-provider.h> +#include <linux/delay.h> #include <linux/err.h> #include <linux/io.h> #include <linux/slab.h> #include <linux/spinlock.h> +#include <linux/units.h>
#include "clk-scu.h"
@@ -41,6 +43,29 @@ struct clk_lpcg_scu {
#define to_clk_lpcg_scu(_hw) container_of(_hw, struct clk_lpcg_scu, hw)
+/* e10858 -LPCG clock gating register synchronization errata */ +static void lpcg_e10858_writel(unsigned long rate, void __iomem *reg, u32 val) +{ + writel(val, reg); + + if (rate >= 24 * HZ_PER_MHZ || rate == 0) { + /* + * The time taken to access the LPCG registers from the AP core + * through the interconnect is longer than the minimum delay + * of 4 clock cycles required by the errata. + * Adding a readl will provide sufficient delay to prevent + * back-to-back writes. + */ + readl(reg); + } else { + /* + * For clocks running below 24MHz, wait a minimum of + * 4 clock cycles. + */ + ndelay(4 * (DIV_ROUND_UP(1000 * HZ_PER_MHZ, rate))); + } +} + static int clk_lpcg_scu_enable(struct clk_hw *hw) { struct clk_lpcg_scu *clk = to_clk_lpcg_scu(hw); @@ -57,7 +82,8 @@ static int clk_lpcg_scu_enable(struct clk_hw *hw) val |= CLK_GATE_SCU_LPCG_HW_SEL;
reg |= val << clk->bit_idx; - writel(reg, clk->reg); + + lpcg_e10858_writel(clk_hw_get_rate(hw), clk->reg, reg);
spin_unlock_irqrestore(&imx_lpcg_scu_lock, flags);
@@ -74,7 +100,7 @@ static void clk_lpcg_scu_disable(struct clk_hw *hw)
reg = readl_relaxed(clk->reg); reg &= ~(CLK_GATE_SCU_LPCG_MASK << clk->bit_idx); - writel(reg, clk->reg); + lpcg_e10858_writel(clk_hw_get_rate(hw), clk->reg, reg);
spin_unlock_irqrestore(&imx_lpcg_scu_lock, flags); } @@ -145,13 +171,8 @@ static int __maybe_unused imx_clk_lpcg_scu_resume(struct device *dev) { struct clk_lpcg_scu *clk = dev_get_drvdata(dev);
- /* - * FIXME: Sometimes writes don't work unless the CPU issues - * them twice - */ - - writel(clk->state, clk->reg); writel(clk->state, clk->reg); + lpcg_e10858_writel(0, clk->reg, clk->state); dev_dbg(dev, "restore lpcg state 0x%x\n", clk->state);
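The lpcg_e10858_writel() helper added above needs at least four clock cycles between back-to-back register writes; for clocks slower than 24 MHz it converts that into a nanosecond delay. The arithmetic on its own, with DIV_ROUND_UP and HZ_PER_MHZ open-coded so it builds in userspace:

/* Userspace sketch of the delay math used by the e10858 workaround:
 * wait at least four clock cycles, expressed in nanoseconds, when the
 * clock runs below 24 MHz.
 */
#include <stdio.h>

#define HZ_PER_MHZ 1000000UL
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

static unsigned long four_cycles_in_ns(unsigned long rate_hz)
{
	/* 1000 * HZ_PER_MHZ = 1e9, so this is ceil(ns per cycle) * 4 */
	return 4 * DIV_ROUND_UP(1000 * HZ_PER_MHZ, rate_hz);
}

int main(void)
{
	unsigned long rates[] = { 32768, 1000000, 12000000 };

	for (unsigned int i = 0; i < 3; i++)
		printf("%lu Hz -> wait %lu ns\n",
		       rates[i], four_cycles_in_ns(rates[i]));
	return 0;
}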
return 0; diff --git a/drivers/clk/imx/clk-scu.c b/drivers/clk/imx/clk-scu.c index b1dd0c08e091..b27186aaf2a1 100644 --- a/drivers/clk/imx/clk-scu.c +++ b/drivers/clk/imx/clk-scu.c @@ -596,7 +596,7 @@ static int __maybe_unused imx_clk_scu_suspend(struct device *dev) clk->rate = clk_scu_recalc_rate(&clk->hw, 0); else clk->rate = clk_hw_get_rate(&clk->hw); - clk->is_enabled = clk_hw_is_enabled(&clk->hw); + clk->is_enabled = clk_hw_is_prepared(&clk->hw);
if (clk->parent) dev_dbg(dev, "save parent %s idx %u\n", clk_hw_get_name(clk->parent), diff --git a/drivers/clk/mediatek/Kconfig b/drivers/clk/mediatek/Kconfig index 70a005e7e1b1..486401e1f2f1 100644 --- a/drivers/clk/mediatek/Kconfig +++ b/drivers/clk/mediatek/Kconfig @@ -887,13 +887,6 @@ config COMMON_CLK_MT8195_APUSYS help This driver supports MediaTek MT8195 AI Processor Unit System clocks.
-config COMMON_CLK_MT8195_AUDSYS - tristate "Clock driver for MediaTek MT8195 audsys" - depends on COMMON_CLK_MT8195 - default COMMON_CLK_MT8195 - help - This driver supports MediaTek MT8195 audsys clocks. - config COMMON_CLK_MT8195_IMP_IIC_WRAP tristate "Clock driver for MediaTek MT8195 imp_iic_wrap" depends on COMMON_CLK_MT8195 @@ -908,14 +901,6 @@ config COMMON_CLK_MT8195_MFGCFG help This driver supports MediaTek MT8195 mfgcfg clocks.
-config COMMON_CLK_MT8195_MSDC - tristate "Clock driver for MediaTek MT8195 msdc" - depends on COMMON_CLK_MT8195 - default COMMON_CLK_MT8195 - help - This driver supports MediaTek MT8195 MMC and SD Controller's - msdc and msdc_top clocks. - config COMMON_CLK_MT8195_SCP_ADSP tristate "Clock driver for MediaTek MT8195 scp_adsp" depends on COMMON_CLK_MT8195 diff --git a/drivers/clk/qcom/Kconfig b/drivers/clk/qcom/Kconfig index a3e2a09e2105..4444dafa4e3d 100644 --- a/drivers/clk/qcom/Kconfig +++ b/drivers/clk/qcom/Kconfig @@ -1230,11 +1230,11 @@ config SM_VIDEOCC_8350 config SM_VIDEOCC_8550 tristate "SM8550 Video Clock Controller" depends on ARM64 || COMPILE_TEST - select SM_GCC_8550 + depends on SM_GCC_8550 || SM_GCC_8650 select QCOM_GDSC help Support for the video clock controller on Qualcomm Technologies, Inc. - SM8550 devices. + SM8550 or SM8650 devices. Say Y if you want to support video devices and functionality such as video encode/decode.
diff --git a/drivers/clk/ralink/clk-mtmips.c b/drivers/clk/ralink/clk-mtmips.c index 50a443bf79ec..76285fbbdeaa 100644 --- a/drivers/clk/ralink/clk-mtmips.c +++ b/drivers/clk/ralink/clk-mtmips.c @@ -263,8 +263,9 @@ static int mtmips_register_pherip_clocks(struct device_node *np, .rate = _rate \ }
-static struct mtmips_clk_fixed rt305x_fixed_clocks[] = { - CLK_FIXED("xtal", NULL, 40000000) +static struct mtmips_clk_fixed rt3883_fixed_clocks[] = { + CLK_FIXED("xtal", NULL, 40000000), + CLK_FIXED("periph", "xtal", 40000000) };
static struct mtmips_clk_fixed rt3352_fixed_clocks[] = { @@ -366,6 +367,12 @@ static inline struct mtmips_clk *to_mtmips_clk(struct clk_hw *hw) return container_of(hw, struct mtmips_clk, hw); }
+static unsigned long rt2880_xtal_recalc_rate(struct clk_hw *hw, + unsigned long parent_rate) +{ + return 40000000; +} + static unsigned long rt5350_xtal_recalc_rate(struct clk_hw *hw, unsigned long parent_rate) { @@ -677,10 +684,12 @@ static unsigned long mt76x8_cpu_recalc_rate(struct clk_hw *hw, }
static struct mtmips_clk rt2880_clks_base[] = { + { CLK_BASE("xtal", NULL, rt2880_xtal_recalc_rate) }, { CLK_BASE("cpu", "xtal", rt2880_cpu_recalc_rate) } };
static struct mtmips_clk rt305x_clks_base[] = { + { CLK_BASE("xtal", NULL, rt2880_xtal_recalc_rate) }, { CLK_BASE("cpu", "xtal", rt305x_cpu_recalc_rate) } };
@@ -690,6 +699,7 @@ static struct mtmips_clk rt3352_clks_base[] = { };
static struct mtmips_clk rt3883_clks_base[] = { + { CLK_BASE("xtal", NULL, rt2880_xtal_recalc_rate) }, { CLK_BASE("cpu", "xtal", rt3883_cpu_recalc_rate) }, { CLK_BASE("bus", "cpu", rt3883_bus_recalc_rate) } }; @@ -746,8 +756,8 @@ static int mtmips_register_clocks(struct device_node *np, static const struct mtmips_clk_data rt2880_clk_data = { .clk_base = rt2880_clks_base, .num_clk_base = ARRAY_SIZE(rt2880_clks_base), - .clk_fixed = rt305x_fixed_clocks, - .num_clk_fixed = ARRAY_SIZE(rt305x_fixed_clocks), + .clk_fixed = NULL, + .num_clk_fixed = 0, .clk_factor = rt2880_factor_clocks, .num_clk_factor = ARRAY_SIZE(rt2880_factor_clocks), .clk_periph = rt2880_pherip_clks, @@ -757,8 +767,8 @@ static const struct mtmips_clk_data rt2880_clk_data = { static const struct mtmips_clk_data rt305x_clk_data = { .clk_base = rt305x_clks_base, .num_clk_base = ARRAY_SIZE(rt305x_clks_base), - .clk_fixed = rt305x_fixed_clocks, - .num_clk_fixed = ARRAY_SIZE(rt305x_fixed_clocks), + .clk_fixed = NULL, + .num_clk_fixed = 0, .clk_factor = rt305x_factor_clocks, .num_clk_factor = ARRAY_SIZE(rt305x_factor_clocks), .clk_periph = rt305x_pherip_clks, @@ -779,8 +789,8 @@ static const struct mtmips_clk_data rt3352_clk_data = { static const struct mtmips_clk_data rt3883_clk_data = { .clk_base = rt3883_clks_base, .num_clk_base = ARRAY_SIZE(rt3883_clks_base), - .clk_fixed = rt305x_fixed_clocks, - .num_clk_fixed = ARRAY_SIZE(rt305x_fixed_clocks), + .clk_fixed = rt3883_fixed_clocks, + .num_clk_fixed = ARRAY_SIZE(rt3883_fixed_clocks), .clk_factor = NULL, .num_clk_factor = 0, .clk_periph = rt5350_pherip_clks, diff --git a/drivers/clk/renesas/rzg2l-cpg.c b/drivers/clk/renesas/rzg2l-cpg.c index 88bf39e8c79c..b43b763dfe18 100644 --- a/drivers/clk/renesas/rzg2l-cpg.c +++ b/drivers/clk/renesas/rzg2l-cpg.c @@ -548,7 +548,7 @@ static unsigned long rzg2l_cpg_get_foutpostdiv_rate(struct rzg2l_pll5_param *params, unsigned long rate) { - unsigned long foutpostdiv_rate; + unsigned long foutpostdiv_rate, foutvco_rate;
params->pl5_intin = rate / MEGA; params->pl5_fracin = div_u64(((u64)rate % MEGA) << 24, MEGA); @@ -557,10 +557,11 @@ rzg2l_cpg_get_foutpostdiv_rate(struct rzg2l_pll5_param *params, params->pl5_postdiv2 = 1; params->pl5_spread = 0x16;
- foutpostdiv_rate = - EXTAL_FREQ_IN_MEGA_HZ * MEGA / params->pl5_refdiv * - ((((params->pl5_intin << 24) + params->pl5_fracin)) >> 24) / - (params->pl5_postdiv1 * params->pl5_postdiv2); + foutvco_rate = div_u64(mul_u32_u32(EXTAL_FREQ_IN_MEGA_HZ * MEGA, + (params->pl5_intin << 24) + params->pl5_fracin), + params->pl5_refdiv) >> 24; + foutpostdiv_rate = DIV_ROUND_CLOSEST_ULL(foutvco_rate, + params->pl5_postdiv1 * params->pl5_postdiv2);
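The reworked FOUTPOSTDIV calculation above keeps the 24-bit fractional multiplier in 64-bit arithmetic before shifting it out, then applies the post-dividers with round-to-closest. The same math in standalone form; the EXTAL frequency, divider values and target rate below are illustrative only, not taken from any particular board:

/* Standalone sketch of the fixed FOUTPOSTDIV arithmetic: multiply in
 * 64 bits, divide by refdiv, shift the 24-bit fraction out once, then
 * divide by the post-dividers with rounding.
 */
#include <stdint.h>
#include <stdio.h>

#define MEGA 1000000ULL

static uint64_t div_round_closest(uint64_t n, uint64_t d)
{
	return (n + d / 2) / d;
}

int main(void)
{
	uint64_t extal_hz = 24 * MEGA;       /* assumed EXTAL frequency */
	uint64_t target   = 148500000;       /* requested rate, example only */
	uint64_t refdiv = 2, postdiv1 = 1, postdiv2 = 1;

	uint64_t intin  = target / MEGA;
	uint64_t fracin = ((target % MEGA) << 24) / MEGA;

	/* FOUTVCO = EXTAL * (intin + fracin / 2^24) / refdiv */
	uint64_t foutvco = (extal_hz * ((intin << 24) + fracin) / refdiv) >> 24;
	uint64_t foutpostdiv = div_round_closest(foutvco, postdiv1 * postdiv2);

	printf("FOUTVCO %llu Hz, FOUTPOSTDIV %llu Hz\n",
	       (unsigned long long)foutvco, (unsigned long long)foutpostdiv);
	return 0;
}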
return foutpostdiv_rate; } diff --git a/drivers/clk/sophgo/clk-sg2042-pll.c b/drivers/clk/sophgo/clk-sg2042-pll.c index ff9deeef509b..1537f4f05860 100644 --- a/drivers/clk/sophgo/clk-sg2042-pll.c +++ b/drivers/clk/sophgo/clk-sg2042-pll.c @@ -153,7 +153,7 @@ static unsigned long sg2042_pll_recalc_rate(unsigned int reg_value,
sg2042_pll_ctrl_decode(reg_value, &ctrl_table);
- numerator = parent_rate * ctrl_table.fbdiv; + numerator = (u64)parent_rate * ctrl_table.fbdiv; denominator = ctrl_table.refdiv * ctrl_table.postdiv1 * ctrl_table.postdiv2; do_div(numerator, denominator); return numerator; diff --git a/drivers/clk/sunxi-ng/ccu-sun20i-d1.c b/drivers/clk/sunxi-ng/ccu-sun20i-d1.c index 9b5cfac2ee70..3f095515f54f 100644 --- a/drivers/clk/sunxi-ng/ccu-sun20i-d1.c +++ b/drivers/clk/sunxi-ng/ccu-sun20i-d1.c @@ -1371,7 +1371,7 @@ static int sun20i_d1_ccu_probe(struct platform_device *pdev)
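The sg2042 one-liner above is the classic "promote before you multiply" fix: with 32-bit operands the product parent_rate * fbdiv wraps before it ever reaches the 64-bit numerator. In miniature, with illustrative values:

/* A 32-bit multiply of 25 MHz * 200 wraps; widening one operand to
 * 64 bits before the multiply does not.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t parent_rate = 25000000;   /* 25 MHz reference */
	uint32_t fbdiv = 200;

	uint64_t wrong = (uint64_t)(parent_rate * fbdiv);   /* 32-bit multiply, widened too late */
	uint64_t right = (uint64_t)parent_rate * fbdiv;     /* widened before multiplying */

	printf("wrong: %llu\nright: %llu\n",
	       (unsigned long long)wrong, (unsigned long long)right);
	return 0;
}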
/* Enforce m1 = 0, m0 = 0 for PLL_AUDIO0 */ val = readl(reg + SUN20I_D1_PLL_AUDIO0_REG); - val &= ~BIT(1) | BIT(0); + val &= ~(BIT(1) | BIT(0)); writel(val, reg + SUN20I_D1_PLL_AUDIO0_REG);
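The sun20i-d1 fix right above is an operator-precedence repair: ~ binds tighter than |, so ~BIT(1) | BIT(0) builds a mask that clears only bit 1 and leaves bit 0 set, whereas the intended mask ~(BIT(1) | BIT(0)) clears both m1 and m0. A two-line demonstration:

/* Precedence pitfall fixed in the D1 CCU: the unparenthesized form does
 * not build the mask the comment promises.
 */
#include <stdio.h>

#define BIT(n) (1U << (n))

int main(void)
{
	unsigned int val = 0x00000007;    /* bits 0..2 set */

	unsigned int buggy = val & (~BIT(1) | BIT(0));  /* mask is 0xfffffffd */
	unsigned int fixed = val & ~(BIT(1) | BIT(0));  /* mask is 0xfffffffc */

	printf("buggy: 0x%08x fixed: 0x%08x\n", buggy, fixed);
	return 0;
}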
/* Force fanout-27M factor N to 0. */ diff --git a/drivers/clocksource/Kconfig b/drivers/clocksource/Kconfig index 95dd4660b5b6..d546903dba4f 100644 --- a/drivers/clocksource/Kconfig +++ b/drivers/clocksource/Kconfig @@ -400,7 +400,8 @@ config ARM_GT_INITIAL_PRESCALER_VAL This affects CPU_FREQ max delta from the initial frequency.
config ARM_TIMER_SP804 - bool "Support for Dual Timer SP804 module" if COMPILE_TEST + bool "Support for Dual Timer SP804 module" + depends on ARM || ARM64 || COMPILE_TEST depends on GENERIC_SCHED_CLOCK && HAVE_CLK select CLKSRC_MMIO select TIMER_OF if OF diff --git a/drivers/clocksource/timer-ti-dm-systimer.c b/drivers/clocksource/timer-ti-dm-systimer.c index c2dcd8d68e45..d1c144d6f328 100644 --- a/drivers/clocksource/timer-ti-dm-systimer.c +++ b/drivers/clocksource/timer-ti-dm-systimer.c @@ -686,9 +686,9 @@ subsys_initcall(dmtimer_percpu_timer_startup);
static int __init dmtimer_percpu_quirk_init(struct device_node *np, u32 pa) { - struct device_node *arm_timer; + struct device_node *arm_timer __free(device_node) = + of_find_compatible_node(NULL, NULL, "arm,armv7-timer");
- arm_timer = of_find_compatible_node(NULL, NULL, "arm,armv7-timer"); if (of_device_is_available(arm_timer)) { pr_warn_once("ARM architected timer wrap issue i940 detected\n"); return 0; diff --git a/drivers/comedi/comedi_fops.c b/drivers/comedi/comedi_fops.c index 1b481731df96..b9df9b19d4bd 100644 --- a/drivers/comedi/comedi_fops.c +++ b/drivers/comedi/comedi_fops.c @@ -2407,6 +2407,18 @@ static int comedi_mmap(struct file *file, struct vm_area_struct *vma)
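The timer-ti-dm change above switches to the kernel's scope-based cleanup annotation, __free(device_node), so the of_find_compatible_node() reference is dropped automatically on every return path. The same idea can be sketched in userspace with the compiler's cleanup attribute; the helper below is invented purely for the demo:

/* Userspace sketch of scope-based cleanup in the spirit of __free():
 * the cleanup attribute runs a handler when the variable leaves scope,
 * so no explicit "put" call is needed on any return path.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void free_str(char **p)
{
	if (*p) {
		printf("auto-freeing \"%s\"\n", *p);
		free(*p);
	}
}
#define __free_str __attribute__((cleanup(free_str)))

static int use_node(const char *compatible)
{
	char *node __free_str = strdup(compatible);  /* released automatically */

	if (!node)
		return -1;
	if (strcmp(node, "arm,armv7-timer") == 0)
		return 0;                            /* no leak on early return */
	return 1;
}

int main(void)
{
	printf("lookup: %d\n", use_node("arm,armv7-timer"));
	printf("lookup: %d\n", use_node("ti,omap-timer"));
	return 0;
}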
start += PAGE_SIZE; } + +#ifdef CONFIG_MMU + /* + * Leaving behind a partial mapping of a buffer we're about to + * drop is unsafe, see remap_pfn_range_notrack(). + * We need to zap the range here ourselves instead of relying + * on the automatic zapping in remap_pfn_range() because we call + * remap_pfn_range() in a loop. + */ + if (retval) + zap_vma_ptes(vma, vma->vm_start, size); +#endif }
if (retval == 0) { diff --git a/drivers/counter/stm32-timer-cnt.c b/drivers/counter/stm32-timer-cnt.c index 186e73d6ccb4..87b6ec567b54 100644 --- a/drivers/counter/stm32-timer-cnt.c +++ b/drivers/counter/stm32-timer-cnt.c @@ -214,11 +214,17 @@ static int stm32_count_enable_write(struct counter_device *counter, { struct stm32_timer_cnt *const priv = counter_priv(counter); u32 cr1; + int ret;
if (enable) { regmap_read(priv->regmap, TIM_CR1, &cr1); - if (!(cr1 & TIM_CR1_CEN)) - clk_enable(priv->clk); + if (!(cr1 & TIM_CR1_CEN)) { + ret = clk_enable(priv->clk); + if (ret) { + dev_err(counter->parent, "Cannot enable clock %d\n", ret); + return ret; + } + }
regmap_update_bits(priv->regmap, TIM_CR1, TIM_CR1_CEN, TIM_CR1_CEN); @@ -694,6 +700,7 @@ static int stm32_timer_cnt_probe_encoder(struct device *dev, }
ret = of_property_read_u32(tnode, "reg", &idx); + of_node_put(tnode); if (ret) { dev_err(dev, "Can't get index (%d)\n", ret); return ret; @@ -816,7 +823,11 @@ static int __maybe_unused stm32_timer_cnt_resume(struct device *dev) return ret;
if (priv->enabled) { - clk_enable(priv->clk); + ret = clk_enable(priv->clk); + if (ret) { + dev_err(dev, "Cannot enable clock %d\n", ret); + return ret; + }
/* Restore registers that may have been lost */ regmap_write(priv->regmap, TIM_SMCR, priv->bak.smcr); diff --git a/drivers/counter/ti-ecap-capture.c b/drivers/counter/ti-ecap-capture.c index 675447315caf..b119aeede693 100644 --- a/drivers/counter/ti-ecap-capture.c +++ b/drivers/counter/ti-ecap-capture.c @@ -574,8 +574,13 @@ static int ecap_cnt_resume(struct device *dev) { struct counter_device *counter_dev = dev_get_drvdata(dev); struct ecap_cnt_dev *ecap_dev = counter_priv(counter_dev); + int ret;
- clk_enable(ecap_dev->clk); + ret = clk_enable(ecap_dev->clk); + if (ret) { + dev_err(dev, "Cannot enable clock %d\n", ret); + return ret; + }
ecap_cnt_capture_set_evmode(counter_dev, ecap_dev->pm_ctx.ev_mode);
diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c index b63863f77c67..91d3c3b1c2d3 100644 --- a/drivers/cpufreq/amd-pstate.c +++ b/drivers/cpufreq/amd-pstate.c @@ -665,34 +665,12 @@ static void amd_pstate_adjust_perf(unsigned int cpu, static int amd_pstate_cpu_boost_update(struct cpufreq_policy *policy, bool on) { struct amd_cpudata *cpudata = policy->driver_data; - struct cppc_perf_ctrls perf_ctrls; - u32 highest_perf, nominal_perf, nominal_freq, max_freq; + u32 nominal_freq, max_freq; int ret = 0;
- highest_perf = READ_ONCE(cpudata->highest_perf); - nominal_perf = READ_ONCE(cpudata->nominal_perf); nominal_freq = READ_ONCE(cpudata->nominal_freq); max_freq = READ_ONCE(cpudata->max_freq);
- if (boot_cpu_has(X86_FEATURE_CPPC)) { - u64 value = READ_ONCE(cpudata->cppc_req_cached); - - value &= ~GENMASK_ULL(7, 0); - value |= on ? highest_perf : nominal_perf; - WRITE_ONCE(cpudata->cppc_req_cached, value); - - wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, value); - } else { - perf_ctrls.max_perf = on ? highest_perf : nominal_perf; - ret = cppc_set_perf(cpudata->cpu, &perf_ctrls); - if (ret) { - cpufreq_cpu_release(policy); - pr_debug("Failed to set max perf on CPU:%d. ret:%d\n", - cpudata->cpu, ret); - return ret; - } - } - if (on) policy->cpuinfo.max_freq = max_freq; else if (policy->cpuinfo.max_freq > nominal_freq * 1000) @@ -1535,7 +1513,7 @@ static int amd_pstate_epp_update_limit(struct cpufreq_policy *policy) value = READ_ONCE(cpudata->cppc_req_cached);
if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE) - min_perf = max_perf; + min_perf = min(cpudata->nominal_perf, max_perf);
/* Initial min/max values for CPPC Performance Controls Register */ value &= ~AMD_CPPC_MIN_PERF(~0L); diff --git a/drivers/cpufreq/cppc_cpufreq.c b/drivers/cpufreq/cppc_cpufreq.c index 2b8708475ac7..c1cdf0f4d0dd 100644 --- a/drivers/cpufreq/cppc_cpufreq.c +++ b/drivers/cpufreq/cppc_cpufreq.c @@ -118,6 +118,9 @@ static void cppc_scale_freq_workfn(struct kthread_work *work)
perf = cppc_perf_from_fbctrs(cpu_data, &cppc_fi->prev_perf_fb_ctrs, &fb_ctrs); + if (!perf) + return; + cppc_fi->prev_perf_fb_ctrs = fb_ctrs;
perf <<= SCHED_CAPACITY_SHIFT; @@ -420,6 +423,9 @@ static int cppc_get_cpu_power(struct device *cpu_dev, struct cppc_cpudata *cpu_data;
policy = cpufreq_cpu_get_raw(cpu_dev->id); + if (!policy) + return -EINVAL; + cpu_data = policy->driver_data; perf_caps = &cpu_data->perf_caps; max_cap = arch_scale_cpu_capacity(cpu_dev->id); @@ -487,6 +493,9 @@ static int cppc_get_cpu_cost(struct device *cpu_dev, unsigned long KHz, int step;
policy = cpufreq_cpu_get_raw(cpu_dev->id); + if (!policy) + return -EINVAL; + cpu_data = policy->driver_data; perf_caps = &cpu_data->perf_caps; max_cap = arch_scale_cpu_capacity(cpu_dev->id); @@ -724,13 +733,31 @@ static int cppc_perf_from_fbctrs(struct cppc_cpudata *cpu_data, delta_delivered = get_delta(fb_ctrs_t1->delivered, fb_ctrs_t0->delivered);
- /* Check to avoid divide-by zero and invalid delivered_perf */ + /* + * Avoid divide-by zero and unchanged feedback counters. + * Leave it for callers to handle. + */ if (!delta_reference || !delta_delivered) - return cpu_data->perf_ctrls.desired_perf; + return 0;
return (reference_perf * delta_delivered) / delta_reference; }
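For reference, the calculation that hunk guards: delivered performance is reference_perf scaled by the ratio of the delivered and reference counter deltas, and a zero delta now yields 0 so the caller can fall back to the desired-perf register instead of silently reusing a cached value. A standalone version of the arithmetic:

/* Standalone version of the CPPC delivered-performance calculation. The
 * delivered counter advances at the actual clock and the reference counter
 * at a known reference level, so their ratio scales reference_perf. A zero
 * delta (idle or stuck counters) is reported as 0 and left to the caller.
 */
#include <stdint.h>
#include <stdio.h>

static uint64_t perf_from_fbctrs(uint64_t reference_perf,
				 uint64_t ref_t0, uint64_t ref_t1,
				 uint64_t del_t0, uint64_t del_t1)
{
	uint64_t delta_reference = ref_t1 - ref_t0;
	uint64_t delta_delivered = del_t1 - del_t0;

	if (!delta_reference || !delta_delivered)
		return 0;        /* caller falls back to desired perf */

	return reference_perf * delta_delivered / delta_reference;
}

int main(void)
{
	/* reference runs at "perf 100"; delivered ticked 1.5x as fast */
	printf("delivered perf: %llu\n",
	       (unsigned long long)perf_from_fbctrs(100, 1000, 2000, 5000, 6500));
	/* idle CPU: counters unchanged, report 0 */
	printf("delivered perf: %llu\n",
	       (unsigned long long)perf_from_fbctrs(100, 1000, 1000, 5000, 5000));
	return 0;
}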
+static int cppc_get_perf_ctrs_sample(int cpu, + struct cppc_perf_fb_ctrs *fb_ctrs_t0, + struct cppc_perf_fb_ctrs *fb_ctrs_t1) +{ + int ret; + + ret = cppc_get_perf_ctrs(cpu, fb_ctrs_t0); + if (ret) + return ret; + + udelay(2); /* 2usec delay between sampling */ + + return cppc_get_perf_ctrs(cpu, fb_ctrs_t1); +} + static unsigned int cppc_cpufreq_get_rate(unsigned int cpu) { struct cppc_perf_fb_ctrs fb_ctrs_t0 = {0}, fb_ctrs_t1 = {0}; @@ -746,18 +773,32 @@ static unsigned int cppc_cpufreq_get_rate(unsigned int cpu)
cpufreq_cpu_put(policy);
- ret = cppc_get_perf_ctrs(cpu, &fb_ctrs_t0); - if (ret) - return 0; - - udelay(2); /* 2usec delay between sampling */ - - ret = cppc_get_perf_ctrs(cpu, &fb_ctrs_t1); - if (ret) - return 0; + ret = cppc_get_perf_ctrs_sample(cpu, &fb_ctrs_t0, &fb_ctrs_t1); + if (ret) { + if (ret == -EFAULT) + /* Any of the associated CPPC regs is 0. */ + goto out_invalid_counters; + else + return 0; + }
delivered_perf = cppc_perf_from_fbctrs(cpu_data, &fb_ctrs_t0, &fb_ctrs_t1); + if (!delivered_perf) + goto out_invalid_counters; + + return cppc_perf_to_khz(&cpu_data->perf_caps, delivered_perf); + +out_invalid_counters: + /* + * Feedback counters could be unchanged or 0 when a cpu enters a + * low-power idle state, e.g. clock-gated or power-gated. + * Use desired perf for reflecting frequency. Get the latest register + * value first as some platforms may update the actual delivered perf + * there; if failed, resort to the cached desired perf. + */ + if (cppc_get_desired_perf(cpu, &delivered_perf)) + delivered_perf = cpu_data->perf_ctrls.desired_perf;
return cppc_perf_to_khz(&cpu_data->perf_caps, delivered_perf); } diff --git a/drivers/cpufreq/loongson2_cpufreq.c b/drivers/cpufreq/loongson2_cpufreq.c index 6a8e97896d38..ed1a6dbad638 100644 --- a/drivers/cpufreq/loongson2_cpufreq.c +++ b/drivers/cpufreq/loongson2_cpufreq.c @@ -148,7 +148,9 @@ static int __init cpufreq_init(void)
ret = cpufreq_register_driver(&loongson2_cpufreq_driver);
- if (!ret && !nowait) { + if (ret) { + platform_driver_unregister(&platform_driver); + } else if (!nowait) { saved_cpu_wait = cpu_wait; cpu_wait = loongson2_cpu_wait; } diff --git a/drivers/cpufreq/loongson3_cpufreq.c b/drivers/cpufreq/loongson3_cpufreq.c index 6b5e6798d9a2..a923e196ec86 100644 --- a/drivers/cpufreq/loongson3_cpufreq.c +++ b/drivers/cpufreq/loongson3_cpufreq.c @@ -346,8 +346,11 @@ static int loongson3_cpufreq_probe(struct platform_device *pdev) { int i, ret;
- for (i = 0; i < MAX_PACKAGES; i++) - devm_mutex_init(&pdev->dev, &cpufreq_mutex[i]); + for (i = 0; i < MAX_PACKAGES; i++) { + ret = devm_mutex_init(&pdev->dev, &cpufreq_mutex[i]); + if (ret) + return ret; + }
ret = do_service_request(0, 0, CMD_GET_VERSION, 0, 0); if (ret <= 0) diff --git a/drivers/cpufreq/mediatek-cpufreq-hw.c b/drivers/cpufreq/mediatek-cpufreq-hw.c index 8925e096d5b9..aeb5e6304542 100644 --- a/drivers/cpufreq/mediatek-cpufreq-hw.c +++ b/drivers/cpufreq/mediatek-cpufreq-hw.c @@ -62,7 +62,7 @@ mtk_cpufreq_get_cpu_power(struct device *cpu_dev, unsigned long *uW,
policy = cpufreq_cpu_get_raw(cpu_dev->id); if (!policy) - return 0; + return -EINVAL;
data = policy->driver_data;
diff --git a/drivers/crypto/bcm/cipher.c b/drivers/crypto/bcm/cipher.c index 1a3ecd44cbaf..20f6453670aa 100644 --- a/drivers/crypto/bcm/cipher.c +++ b/drivers/crypto/bcm/cipher.c @@ -2415,6 +2415,7 @@ static int ahash_hmac_setkey(struct crypto_ahash *ahash, const u8 *key,
static int ahash_hmac_init(struct ahash_request *req) { + int ret; struct iproc_reqctx_s *rctx = ahash_request_ctx(req); struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); struct iproc_ctx_s *ctx = crypto_ahash_ctx(tfm); @@ -2424,7 +2425,9 @@ static int ahash_hmac_init(struct ahash_request *req) flow_log("ahash_hmac_init()\n");
/* init the context as a hash */ - ahash_init(req); + ret = ahash_init(req); + if (ret) + return ret;
if (!spu_no_incr_hash(ctx)) { /* SPU-M can do incr hashing but needs sw for outer HMAC */ diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c index 887a5f2fb927..cb001aa1de66 100644 --- a/drivers/crypto/caam/caampkc.c +++ b/drivers/crypto/caam/caampkc.c @@ -984,7 +984,7 @@ static int caam_rsa_set_pub_key(struct crypto_akcipher *tfm, const void *key, return -ENOMEM; }
-static void caam_rsa_set_priv_key_form(struct caam_rsa_ctx *ctx, +static int caam_rsa_set_priv_key_form(struct caam_rsa_ctx *ctx, struct rsa_key *raw_key) { struct caam_rsa_key *rsa_key = &ctx->key; @@ -994,7 +994,7 @@ static void caam_rsa_set_priv_key_form(struct caam_rsa_ctx *ctx,
rsa_key->p = caam_read_raw_data(raw_key->p, &p_sz); if (!rsa_key->p) - return; + return -ENOMEM; rsa_key->p_sz = p_sz;
rsa_key->q = caam_read_raw_data(raw_key->q, &q_sz); @@ -1029,7 +1029,7 @@ static void caam_rsa_set_priv_key_form(struct caam_rsa_ctx *ctx,
rsa_key->priv_form = FORM3;
- return; + return 0;
free_dq: kfree_sensitive(rsa_key->dq); @@ -1043,6 +1043,7 @@ static void caam_rsa_set_priv_key_form(struct caam_rsa_ctx *ctx, kfree_sensitive(rsa_key->q); free_p: kfree_sensitive(rsa_key->p); + return -ENOMEM; }
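The caampkc change above converts a void helper into one that reports -ENOMEM, unwinding the allocations it already made through the usual chain of goto labels. The shape of that pattern in isolation, with plain malloc/free standing in for the kernel helpers:

/* Sketch of the goto-unwind shape caam_rsa_set_priv_key_form() now follows:
 * each successful allocation adds one label to the failure path, and the
 * function reports an error instead of silently returning.
 */
#include <stdlib.h>

struct demo_key {
	void *p, *q, *dp;
};

static int demo_set_key(struct demo_key *k, size_t sz)
{
	k->p = malloc(sz);
	if (!k->p)
		return -1;

	k->q = malloc(sz);
	if (!k->q)
		goto free_p;

	k->dp = malloc(sz);
	if (!k->dp)
		goto free_q;

	return 0;

free_q:
	free(k->q);
free_p:
	free(k->p);
	return -1;          /* -ENOMEM in the driver */
}

int main(void)
{
	struct demo_key k;

	if (demo_set_key(&k, 32) == 0) {
		free(k.dp);
		free(k.q);
		free(k.p);
	}
	return 0;
}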
static int caam_rsa_set_priv_key(struct crypto_akcipher *tfm, const void *key, @@ -1088,7 +1089,9 @@ static int caam_rsa_set_priv_key(struct crypto_akcipher *tfm, const void *key, rsa_key->e_sz = raw_key.e_sz; rsa_key->n_sz = raw_key.n_sz;
- caam_rsa_set_priv_key_form(ctx, &raw_key); + ret = caam_rsa_set_priv_key_form(ctx, &raw_key); + if (ret) + goto err;
return 0;
diff --git a/drivers/crypto/caam/qi.c b/drivers/crypto/caam/qi.c index f6111ee9ed34..8ed2bb01a619 100644 --- a/drivers/crypto/caam/qi.c +++ b/drivers/crypto/caam/qi.c @@ -794,7 +794,7 @@ int caam_qi_init(struct platform_device *caam_pdev)
caam_debugfs_qi_init(ctrlpriv);
- err = devm_add_action_or_reset(qidev, caam_qi_shutdown, ctrlpriv); + err = devm_add_action_or_reset(qidev, caam_qi_shutdown, qidev); if (err) goto fail2;
diff --git a/drivers/crypto/cavium/cpt/cptpf_main.c b/drivers/crypto/cavium/cpt/cptpf_main.c index 6872ac344001..54de869e5374 100644 --- a/drivers/crypto/cavium/cpt/cptpf_main.c +++ b/drivers/crypto/cavium/cpt/cptpf_main.c @@ -44,7 +44,7 @@ static void cpt_disable_cores(struct cpt_device *cpt, u64 coremask, dev_err(dev, "Cores still busy %llx", coremask); grp = cpt_read_csr64(cpt->reg_base, CPTX_PF_EXEC_BUSY(0)); - if (timeout--) + if (!timeout--) break;
udelay(CSR_DELAY); @@ -302,6 +302,8 @@ static int cpt_ucode_load_fw(struct cpt_device *cpt, const u8 *fw, bool is_ae)
ret = do_cpt_init(cpt, mcode); if (ret) { + dma_free_coherent(&cpt->pdev->dev, mcode->code_size, + mcode->code, mcode->phys_base); dev_err(dev, "do_cpt_init failed with ret: %d\n", ret); goto fw_release; } @@ -394,7 +396,7 @@ static void cpt_disable_all_cores(struct cpt_device *cpt) dev_err(dev, "Cores still busy"); grp = cpt_read_csr64(cpt->reg_base, CPTX_PF_EXEC_BUSY(0)); - if (timeout--) + if (!timeout--) break;
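Both busy-wait loops in this driver had the test inverted: "if (timeout--)" bails out on the very first poll, while the corrected "if (!timeout--)" keeps polling until the budget is exhausted. A toy loop showing the corrected behaviour, with no hardware access:

/* The cavium/cpt busy-wait fix in isolation. */
#include <stdio.h>

static int poll_until_idle(int busy_iterations, int timeout)
{
	int polls = 0;

	while (busy_iterations-- > 0) {     /* pretend the core stays busy */
		polls++;
		if (!timeout--)             /* old code: if (timeout--) */
			break;              /* give up only once the budget is spent */
	}
	return polls;
}

int main(void)
{
	printf("polled %d times with a budget of 5\n", poll_until_idle(100, 5));
	return 0;
}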
udelay(CSR_DELAY); diff --git a/drivers/crypto/hisilicon/hpre/hpre_main.c b/drivers/crypto/hisilicon/hpre/hpre_main.c index 6b536ad2ada5..34d30b783813 100644 --- a/drivers/crypto/hisilicon/hpre/hpre_main.c +++ b/drivers/crypto/hisilicon/hpre/hpre_main.c @@ -1280,11 +1280,15 @@ static u32 hpre_get_hw_err_status(struct hisi_qm *qm)
static void hpre_clear_hw_err_status(struct hisi_qm *qm, u32 err_sts) { - u32 nfe; - writel(err_sts, qm->io_base + HPRE_HAC_SOURCE_INT); - nfe = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_NFE_MASK_CAP, qm->cap_ver); - writel(nfe, qm->io_base + HPRE_RAS_NFE_ENB); +} + +static void hpre_disable_error_report(struct hisi_qm *qm, u32 err_type) +{ + u32 nfe_mask; + + nfe_mask = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_NFE_MASK_CAP, qm->cap_ver); + writel(nfe_mask & (~err_type), qm->io_base + HPRE_RAS_NFE_ENB); }
static void hpre_open_axi_master_ooo(struct hisi_qm *qm) @@ -1298,6 +1302,27 @@ static void hpre_open_axi_master_ooo(struct hisi_qm *qm) qm->io_base + HPRE_AM_OOO_SHUTDOWN_ENB); }
+static enum acc_err_result hpre_get_err_result(struct hisi_qm *qm) +{ + u32 err_status; + + err_status = hpre_get_hw_err_status(qm); + if (err_status) { + if (err_status & qm->err_info.ecc_2bits_mask) + qm->err_status.is_dev_ecc_mbit = true; + hpre_log_hw_error(qm, err_status); + + if (err_status & qm->err_info.dev_reset_mask) { + /* Disable the same error reporting until device is recovered. */ + hpre_disable_error_report(qm, err_status); + return ACC_ERR_NEED_RESET; + } + hpre_clear_hw_err_status(qm, err_status); + } + + return ACC_ERR_RECOVERED; +} + static void hpre_err_info_init(struct hisi_qm *qm) { struct hisi_qm_err_info *err_info = &qm->err_info; @@ -1324,12 +1349,12 @@ static const struct hisi_qm_err_ini hpre_err_ini = { .hw_err_disable = hpre_hw_error_disable, .get_dev_hw_err_status = hpre_get_hw_err_status, .clear_dev_hw_err_status = hpre_clear_hw_err_status, - .log_dev_hw_err = hpre_log_hw_error, .open_axi_master_ooo = hpre_open_axi_master_ooo, .open_sva_prefetch = hpre_open_sva_prefetch, .close_sva_prefetch = hpre_close_sva_prefetch, .show_last_dfx_regs = hpre_show_last_dfx_regs, .err_info_init = hpre_err_info_init, + .get_err_result = hpre_get_err_result, };
static int hpre_pf_probe_init(struct hpre *hpre) diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c index 07983af9e3e2..b18692ee7fd5 100644 --- a/drivers/crypto/hisilicon/qm.c +++ b/drivers/crypto/hisilicon/qm.c @@ -271,12 +271,6 @@ enum vft_type { SHAPER_VFT, };
-enum acc_err_result { - ACC_ERR_NONE, - ACC_ERR_NEED_RESET, - ACC_ERR_RECOVERED, -}; - enum qm_alg_type { ALG_TYPE_0, ALG_TYPE_1, @@ -1425,22 +1419,25 @@ static void qm_log_hw_error(struct hisi_qm *qm, u32 error_status)
static enum acc_err_result qm_hw_error_handle_v2(struct hisi_qm *qm) { - u32 error_status, tmp; - - /* read err sts */ - tmp = readl(qm->io_base + QM_ABNORMAL_INT_STATUS); - error_status = qm->error_mask & tmp; + u32 error_status;
- if (error_status) { + error_status = qm_get_hw_error_status(qm); + if (error_status & qm->error_mask) { if (error_status & QM_ECC_MBIT) qm->err_status.is_qm_ecc_mbit = true;
qm_log_hw_error(qm, error_status); - if (error_status & qm->err_info.qm_reset_mask) + if (error_status & qm->err_info.qm_reset_mask) { + /* Disable the same error reporting until device is recovered. */ + writel(qm->err_info.nfe & (~error_status), + qm->io_base + QM_RAS_NFE_ENABLE); return ACC_ERR_NEED_RESET; + }
+ /* Clear error source if not need reset. */ writel(error_status, qm->io_base + QM_ABNORMAL_INT_SOURCE); writel(qm->err_info.nfe, qm->io_base + QM_RAS_NFE_ENABLE); + writel(qm->err_info.ce, qm->io_base + QM_RAS_CE_ENABLE); }
return ACC_ERR_RECOVERED; @@ -3861,30 +3858,12 @@ EXPORT_SYMBOL_GPL(hisi_qm_sriov_configure);
static enum acc_err_result qm_dev_err_handle(struct hisi_qm *qm) { - u32 err_sts; - - if (!qm->err_ini->get_dev_hw_err_status) { - dev_err(&qm->pdev->dev, "Device doesn't support get hw error status!\n"); + if (!qm->err_ini->get_err_result) { + dev_err(&qm->pdev->dev, "Device doesn't support reset!\n"); return ACC_ERR_NONE; }
- /* get device hardware error status */ - err_sts = qm->err_ini->get_dev_hw_err_status(qm); - if (err_sts) { - if (err_sts & qm->err_info.ecc_2bits_mask) - qm->err_status.is_dev_ecc_mbit = true; - - if (qm->err_ini->log_dev_hw_err) - qm->err_ini->log_dev_hw_err(qm, err_sts); - - if (err_sts & qm->err_info.dev_reset_mask) - return ACC_ERR_NEED_RESET; - - if (qm->err_ini->clear_dev_hw_err_status) - qm->err_ini->clear_dev_hw_err_status(qm, err_sts); - } - - return ACC_ERR_RECOVERED; + return qm->err_ini->get_err_result(qm); }
static enum acc_err_result qm_process_dev_error(struct hisi_qm *qm) diff --git a/drivers/crypto/hisilicon/sec2/sec_main.c b/drivers/crypto/hisilicon/sec2/sec_main.c index c35533d8930b..75c25f0d5f2b 100644 --- a/drivers/crypto/hisilicon/sec2/sec_main.c +++ b/drivers/crypto/hisilicon/sec2/sec_main.c @@ -1010,11 +1010,15 @@ static u32 sec_get_hw_err_status(struct hisi_qm *qm)
static void sec_clear_hw_err_status(struct hisi_qm *qm, u32 err_sts) { - u32 nfe; - writel(err_sts, qm->io_base + SEC_CORE_INT_SOURCE); - nfe = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_NFE_MASK_CAP, qm->cap_ver); - writel(nfe, qm->io_base + SEC_RAS_NFE_REG); +} + +static void sec_disable_error_report(struct hisi_qm *qm, u32 err_type) +{ + u32 nfe_mask; + + nfe_mask = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_NFE_MASK_CAP, qm->cap_ver); + writel(nfe_mask & (~err_type), qm->io_base + SEC_RAS_NFE_REG); }
static void sec_open_axi_master_ooo(struct hisi_qm *qm) @@ -1026,6 +1030,27 @@ static void sec_open_axi_master_ooo(struct hisi_qm *qm) writel(val | SEC_AXI_SHUTDOWN_ENABLE, qm->io_base + SEC_CONTROL_REG); }
+static enum acc_err_result sec_get_err_result(struct hisi_qm *qm) +{ + u32 err_status; + + err_status = sec_get_hw_err_status(qm); + if (err_status) { + if (err_status & qm->err_info.ecc_2bits_mask) + qm->err_status.is_dev_ecc_mbit = true; + sec_log_hw_error(qm, err_status); + + if (err_status & qm->err_info.dev_reset_mask) { + /* Disable the same error reporting until device is recovered. */ + sec_disable_error_report(qm, err_status); + return ACC_ERR_NEED_RESET; + } + sec_clear_hw_err_status(qm, err_status); + } + + return ACC_ERR_RECOVERED; +} + static void sec_err_info_init(struct hisi_qm *qm) { struct hisi_qm_err_info *err_info = &qm->err_info; @@ -1052,12 +1077,12 @@ static const struct hisi_qm_err_ini sec_err_ini = { .hw_err_disable = sec_hw_error_disable, .get_dev_hw_err_status = sec_get_hw_err_status, .clear_dev_hw_err_status = sec_clear_hw_err_status, - .log_dev_hw_err = sec_log_hw_error, .open_axi_master_ooo = sec_open_axi_master_ooo, .open_sva_prefetch = sec_open_sva_prefetch, .close_sva_prefetch = sec_close_sva_prefetch, .show_last_dfx_regs = sec_show_last_dfx_regs, .err_info_init = sec_err_info_init, + .get_err_result = sec_get_err_result, };
static int sec_pf_probe_init(struct sec_dev *sec) diff --git a/drivers/crypto/hisilicon/zip/zip_main.c b/drivers/crypto/hisilicon/zip/zip_main.c index d07e47b48be0..80c2fcb1d26d 100644 --- a/drivers/crypto/hisilicon/zip/zip_main.c +++ b/drivers/crypto/hisilicon/zip/zip_main.c @@ -1059,11 +1059,15 @@ static u32 hisi_zip_get_hw_err_status(struct hisi_qm *qm)
static void hisi_zip_clear_hw_err_status(struct hisi_qm *qm, u32 err_sts) { - u32 nfe; - writel(err_sts, qm->io_base + HZIP_CORE_INT_SOURCE); - nfe = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_NFE_MASK_CAP, qm->cap_ver); - writel(nfe, qm->io_base + HZIP_CORE_INT_RAS_NFE_ENB); +} + +static void hisi_zip_disable_error_report(struct hisi_qm *qm, u32 err_type) +{ + u32 nfe_mask; + + nfe_mask = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_NFE_MASK_CAP, qm->cap_ver); + writel(nfe_mask & (~err_type), qm->io_base + HZIP_CORE_INT_RAS_NFE_ENB); }
static void hisi_zip_open_axi_master_ooo(struct hisi_qm *qm) @@ -1093,6 +1097,27 @@ static void hisi_zip_close_axi_master_ooo(struct hisi_qm *qm) qm->io_base + HZIP_CORE_INT_SET); }
+static enum acc_err_result hisi_zip_get_err_result(struct hisi_qm *qm) +{ + u32 err_status; + + err_status = hisi_zip_get_hw_err_status(qm); + if (err_status) { + if (err_status & qm->err_info.ecc_2bits_mask) + qm->err_status.is_dev_ecc_mbit = true; + hisi_zip_log_hw_error(qm, err_status); + + if (err_status & qm->err_info.dev_reset_mask) { + /* Disable the same error reporting until device is recovered. */ + hisi_zip_disable_error_report(qm, err_status); + return ACC_ERR_NEED_RESET; + } + hisi_zip_clear_hw_err_status(qm, err_status); + } + + return ACC_ERR_RECOVERED; +} + static void hisi_zip_err_info_init(struct hisi_qm *qm) { struct hisi_qm_err_info *err_info = &qm->err_info; @@ -1120,13 +1145,13 @@ static const struct hisi_qm_err_ini hisi_zip_err_ini = { .hw_err_disable = hisi_zip_hw_error_disable, .get_dev_hw_err_status = hisi_zip_get_hw_err_status, .clear_dev_hw_err_status = hisi_zip_clear_hw_err_status, - .log_dev_hw_err = hisi_zip_log_hw_error, .open_axi_master_ooo = hisi_zip_open_axi_master_ooo, .close_axi_master_ooo = hisi_zip_close_axi_master_ooo, .open_sva_prefetch = hisi_zip_open_sva_prefetch, .close_sva_prefetch = hisi_zip_close_sva_prefetch, .show_last_dfx_regs = hisi_zip_show_last_dfx_regs, .err_info_init = hisi_zip_err_info_init, + .get_err_result = hisi_zip_get_err_result, };
static int hisi_zip_pf_probe_init(struct hisi_zip *hisi_zip) diff --git a/drivers/crypto/inside-secure/safexcel_hash.c b/drivers/crypto/inside-secure/safexcel_hash.c index e17577b785c3..f44c08f5f5ec 100644 --- a/drivers/crypto/inside-secure/safexcel_hash.c +++ b/drivers/crypto/inside-secure/safexcel_hash.c @@ -2093,7 +2093,7 @@ static int safexcel_xcbcmac_cra_init(struct crypto_tfm *tfm)
safexcel_ahash_cra_init(tfm); ctx->aes = kmalloc(sizeof(*ctx->aes), GFP_KERNEL); - return PTR_ERR_OR_ZERO(ctx->aes); + return ctx->aes == NULL ? -ENOMEM : 0; }
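The safexcel fix above corrects a mixed-up error convention: kmalloc() signals failure with NULL, not an ERR_PTR, so PTR_ERR_OR_ZERO() on its result returns 0 ("success") exactly when the allocation failed. A userspace mock of the two conventions, with IS_ERR and friends re-implemented locally just for the demo:

/* Why PTR_ERR_OR_ZERO() was wrong for kmalloc(): NULL is not an ERR_PTR. */
#include <stdio.h>
#include <stdlib.h>

#define MAX_ERRNO 4095
#define IS_ERR(p)  ((unsigned long)(p) >= (unsigned long)-MAX_ERRNO)
#define PTR_ERR(p) ((long)(p))
#define PTR_ERR_OR_ZERO(p) (IS_ERR(p) ? PTR_ERR(p) : 0)

int main(void)
{
	void *failed_alloc = NULL;          /* what kmalloc() returns on failure */

	long old_check = PTR_ERR_OR_ZERO(failed_alloc);      /* 0 == "success" */
	long new_check = (failed_alloc == NULL) ? -12 : 0;    /* -ENOMEM */

	printf("old check: %ld, new check: %ld\n", old_check, new_check);
	return 0;
}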
static void safexcel_xcbcmac_cra_exit(struct crypto_tfm *tfm) diff --git a/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c b/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c index 78f0ea49254d..9faef33e54bd 100644 --- a/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c +++ b/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c @@ -375,7 +375,7 @@ static const char *uof_get_name(struct adf_accel_dev *accel_dev, u32 obj_num, else id = -EINVAL;
- if (id < 0 || id > num_objs) + if (id < 0 || id >= num_objs) return NULL;
return fw_objs[id]; diff --git a/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c b/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c index 9fd7ec53b9f3..bbd92c017c28 100644 --- a/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c +++ b/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c @@ -334,7 +334,7 @@ static const char *uof_get_name(struct adf_accel_dev *accel_dev, u32 obj_num, else id = -EINVAL;
- if (id < 0 || id > num_objs) + if (id < 0 || id >= num_objs) return NULL;
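[Annotation] Both QAT uof_get_name() fixes above are the classic off-by-one in a bounds check: for a table of num_objs entries the last valid index is num_objs - 1, so the test must reject id == num_objs with ">=". Generic form of the check:

#include <stddef.h>

static const char *lookup(const char *const *table, size_t n, int id)
{
	/* id == n is already one past the end, hence ">=" rather than ">" */
	if (id < 0 || (size_t)id >= n)
		return NULL;
	return table[id];
}

int main(void)
{
	const char *const tbl[] = { "a", "b" };

	return lookup(tbl, 2, 2) == NULL ? 0 : 1;
}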
return fw_objs[id]; diff --git a/drivers/crypto/intel/qat/qat_common/adf_aer.c b/drivers/crypto/intel/qat/qat_common/adf_aer.c index ec7913ab00a2..4cb8bd83f570 100644 --- a/drivers/crypto/intel/qat/qat_common/adf_aer.c +++ b/drivers/crypto/intel/qat/qat_common/adf_aer.c @@ -281,8 +281,11 @@ int adf_init_aer(void) return -EFAULT;
device_sriov_wq = alloc_workqueue("qat_device_sriov_wq", 0, 0); - if (!device_sriov_wq) + if (!device_sriov_wq) { + destroy_workqueue(device_reset_wq); + device_reset_wq = NULL; return -EFAULT; + }
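[Annotation] adf_init_aer() now tears down the already-created reset workqueue when the second allocation fails, instead of leaking it. The shape of that unwind, as a stand-alone toy with plain allocations standing in for the workqueues:

#include <stdlib.h>

static void *reset_wq, *sriov_wq;

/* On a partial failure, undo what was already set up before returning. */
static int init_aer(void)
{
	reset_wq = malloc(64);
	if (!reset_wq)
		return -1;

	sriov_wq = malloc(64);
	if (!sriov_wq) {
		free(reset_wq);
		reset_wq = NULL;
		return -1;
	}
	return 0;
}

int main(void)
{
	return init_aer() ? 1 : 0;
}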
return 0; } diff --git a/drivers/crypto/intel/qat/qat_common/adf_dbgfs.c b/drivers/crypto/intel/qat/qat_common/adf_dbgfs.c index c42f5c25aabd..4c11ad1ebcf0 100644 --- a/drivers/crypto/intel/qat/qat_common/adf_dbgfs.c +++ b/drivers/crypto/intel/qat/qat_common/adf_dbgfs.c @@ -22,18 +22,13 @@ void adf_dbgfs_init(struct adf_accel_dev *accel_dev) { char name[ADF_DEVICE_NAME_LENGTH]; - void *ret;
/* Create dev top level debugfs entry */ snprintf(name, sizeof(name), "%s%s_%s", ADF_DEVICE_NAME_PREFIX, accel_dev->hw_device->dev_class->name, pci_name(accel_dev->accel_pci_dev.pci_dev));
- ret = debugfs_create_dir(name, NULL); - if (IS_ERR_OR_NULL(ret)) - return; - - accel_dev->debugfs_dir = ret; + accel_dev->debugfs_dir = debugfs_create_dir(name, NULL);
adf_cfg_dev_dbgfs_add(accel_dev); } @@ -59,9 +54,6 @@ EXPORT_SYMBOL_GPL(adf_dbgfs_exit); */ void adf_dbgfs_add(struct adf_accel_dev *accel_dev) { - if (!accel_dev->debugfs_dir) - return; - if (!accel_dev->is_vf) { adf_fw_counters_dbgfs_add(accel_dev); adf_heartbeat_dbgfs_add(accel_dev); @@ -77,9 +69,6 @@ void adf_dbgfs_add(struct adf_accel_dev *accel_dev) */ void adf_dbgfs_rm(struct adf_accel_dev *accel_dev) { - if (!accel_dev->debugfs_dir) - return; - if (!accel_dev->is_vf) { adf_tl_dbgfs_rm(accel_dev); adf_cnv_dbgfs_rm(accel_dev); diff --git a/drivers/crypto/intel/qat/qat_common/adf_hw_arbiter.c b/drivers/crypto/intel/qat/qat_common/adf_hw_arbiter.c index 65bd26b25abc..f93d9cca70ce 100644 --- a/drivers/crypto/intel/qat/qat_common/adf_hw_arbiter.c +++ b/drivers/crypto/intel/qat/qat_common/adf_hw_arbiter.c @@ -90,10 +90,6 @@ void adf_exit_arb(struct adf_accel_dev *accel_dev)
hw_data->get_arb_info(&info);
- /* Reset arbiter configuration */ - for (i = 0; i < ADF_ARB_NUM; i++) - WRITE_CSR_ARB_SARCONFIG(csr, arb_off, i, 0); - /* Unmap worker threads to service arbiters */ for (i = 0; i < hw_data->num_engines; i++) WRITE_CSR_ARB_WT2SAM(csr, arb_off, wt_off, i, 0); diff --git a/drivers/crypto/mxs-dcp.c b/drivers/crypto/mxs-dcp.c index c82775dbb557..77a6301f37f0 100644 --- a/drivers/crypto/mxs-dcp.c +++ b/drivers/crypto/mxs-dcp.c @@ -225,21 +225,22 @@ static int mxs_dcp_start_dma(struct dcp_async_ctx *actx) static int mxs_dcp_run_aes(struct dcp_async_ctx *actx, struct skcipher_request *req, int init) { - dma_addr_t key_phys = 0; - dma_addr_t src_phys, dst_phys; + dma_addr_t key_phys, src_phys, dst_phys; struct dcp *sdcp = global_sdcp; struct dcp_dma_desc *desc = &sdcp->coh->desc[actx->chan]; struct dcp_aes_req_ctx *rctx = skcipher_request_ctx(req); bool key_referenced = actx->key_referenced; int ret;
- if (!key_referenced) { + if (key_referenced) + key_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_key + AES_KEYSIZE_128, + AES_KEYSIZE_128, DMA_TO_DEVICE); + else key_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_key, 2 * AES_KEYSIZE_128, DMA_TO_DEVICE); - ret = dma_mapping_error(sdcp->dev, key_phys); - if (ret) - return ret; - } + ret = dma_mapping_error(sdcp->dev, key_phys); + if (ret) + return ret;
src_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_in_buf, DCP_BUF_SZ, DMA_TO_DEVICE); @@ -300,7 +301,10 @@ static int mxs_dcp_run_aes(struct dcp_async_ctx *actx, err_dst: dma_unmap_single(sdcp->dev, src_phys, DCP_BUF_SZ, DMA_TO_DEVICE); err_src: - if (!key_referenced) + if (key_referenced) + dma_unmap_single(sdcp->dev, key_phys, AES_KEYSIZE_128, + DMA_TO_DEVICE); + else dma_unmap_single(sdcp->dev, key_phys, 2 * AES_KEYSIZE_128, DMA_TO_DEVICE); return ret; diff --git a/drivers/dax/pmem/Makefile b/drivers/dax/pmem/Makefile deleted file mode 100644 index 191c31f0d4f0..000000000000 --- a/drivers/dax/pmem/Makefile +++ /dev/null @@ -1,7 +0,0 @@ -# SPDX-License-Identifier: GPL-2.0-only -obj-$(CONFIG_DEV_DAX_PMEM) += dax_pmem.o -obj-$(CONFIG_DEV_DAX_PMEM) += dax_pmem_core.o - -dax_pmem-y := pmem.o -dax_pmem_core-y := core.o -dax_pmem_compat-y := compat.o diff --git a/drivers/dax/pmem/pmem.c b/drivers/dax/pmem/pmem.c deleted file mode 100644 index dfe91a2990fe..000000000000 --- a/drivers/dax/pmem/pmem.c +++ /dev/null @@ -1,10 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -/* Copyright(c) 2016 - 2018 Intel Corporation. All rights reserved. */ -#include <linux/percpu-refcount.h> -#include <linux/memremap.h> -#include <linux/module.h> -#include <linux/pfn_t.h> -#include <linux/nd.h> -#include "../bus.h" - - diff --git a/drivers/dma-buf/Kconfig b/drivers/dma-buf/Kconfig index b46eb8a552d7..fee04fdb0822 100644 --- a/drivers/dma-buf/Kconfig +++ b/drivers/dma-buf/Kconfig @@ -36,6 +36,7 @@ config UDMABUF depends on DMA_SHARED_BUFFER depends on MEMFD_CREATE || COMPILE_TEST depends on MMU + select VMAP_PFN help A driver to let userspace turn memfd regions into dma-bufs. Qemu can use this to create host dmabufs for guest framebuffers. diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c index 047c3cd2ceff..a3638ccc15f5 100644 --- a/drivers/dma-buf/udmabuf.c +++ b/drivers/dma-buf/udmabuf.c @@ -74,21 +74,29 @@ static int mmap_udmabuf(struct dma_buf *buf, struct vm_area_struct *vma) static int vmap_udmabuf(struct dma_buf *buf, struct iosys_map *map) { struct udmabuf *ubuf = buf->priv; - struct page **pages; + unsigned long *pfns; void *vaddr; pgoff_t pg;
dma_resv_assert_held(buf->resv);
- pages = kmalloc_array(ubuf->pagecount, sizeof(*pages), GFP_KERNEL); - if (!pages) + /** + * HVO may free tail pages, so just use pfn to map each folio + * into vmalloc area. + */ + pfns = kvmalloc_array(ubuf->pagecount, sizeof(*pfns), GFP_KERNEL); + if (!pfns) return -ENOMEM;
- for (pg = 0; pg < ubuf->pagecount; pg++) - pages[pg] = &ubuf->folios[pg]->page; + for (pg = 0; pg < ubuf->pagecount; pg++) { + unsigned long pfn = folio_pfn(ubuf->folios[pg]);
- vaddr = vm_map_ram(pages, ubuf->pagecount, -1); - kfree(pages); + pfn += ubuf->offsets[pg] >> PAGE_SHIFT; + pfns[pg] = pfn; + } + + vaddr = vmap_pfn(pfns, ubuf->pagecount, PAGE_KERNEL); + kvfree(pfns); if (!vaddr) return -EINVAL;
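[Annotation] The udmabuf switch from vm_map_ram() to vmap_pfn() (together with the new "select VMAP_PFN" in Kconfig) exists because HVO may free a folio's tail struct pages, so taking &folio->page for every chunk is no longer safe; an array of pfns avoids touching struct page entirely. A hedged kernel-style sketch of the vmap_pfn() pattern for a simple contiguous run (not the udmabuf code itself, which derives each pfn from a folio plus its offset):

#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

static void *map_pfns(unsigned long first_pfn, unsigned int count)
{
	unsigned long *pfns;
	unsigned int i;
	void *vaddr;

	pfns = kvmalloc_array(count, sizeof(*pfns), GFP_KERNEL);
	if (!pfns)
		return NULL;

	for (i = 0; i < count; i++)
		pfns[i] = first_pfn + i;

	vaddr = vmap_pfn(pfns, count, PAGE_KERNEL);
	kvfree(pfns);	/* the mapping does not reference the array afterwards */
	return vaddr;
}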
@@ -196,8 +204,8 @@ static void release_udmabuf(struct dma_buf *buf) put_sg_table(dev, ubuf->sg, DMA_BIDIRECTIONAL);
unpin_all_folios(&ubuf->unpin_list); - kfree(ubuf->offsets); - kfree(ubuf->folios); + kvfree(ubuf->offsets); + kvfree(ubuf->folios); kfree(ubuf); }
@@ -322,14 +330,14 @@ static long udmabuf_create(struct miscdevice *device, if (!ubuf->pagecount) goto err;
- ubuf->folios = kmalloc_array(ubuf->pagecount, sizeof(*ubuf->folios), - GFP_KERNEL); + ubuf->folios = kvmalloc_array(ubuf->pagecount, sizeof(*ubuf->folios), + GFP_KERNEL); if (!ubuf->folios) { ret = -ENOMEM; goto err; } - ubuf->offsets = kcalloc(ubuf->pagecount, sizeof(*ubuf->offsets), - GFP_KERNEL); + ubuf->offsets = kvcalloc(ubuf->pagecount, sizeof(*ubuf->offsets), + GFP_KERNEL); if (!ubuf->offsets) { ret = -ENOMEM; goto err; @@ -343,7 +351,7 @@ static long udmabuf_create(struct miscdevice *device, goto err;
pgcnt = list[i].size >> PAGE_SHIFT; - folios = kmalloc_array(pgcnt, sizeof(*folios), GFP_KERNEL); + folios = kvmalloc_array(pgcnt, sizeof(*folios), GFP_KERNEL); if (!folios) { ret = -ENOMEM; goto err; @@ -353,7 +361,7 @@ static long udmabuf_create(struct miscdevice *device, ret = memfd_pin_folios(memfd, list[i].offset, end, folios, pgcnt, &pgoff); if (ret <= 0) { - kfree(folios); + kvfree(folios); if (!ret) ret = -EINVAL; goto err; @@ -382,7 +390,7 @@ static long udmabuf_create(struct miscdevice *device, } }
- kfree(folios); + kvfree(folios); fput(memfd); memfd = NULL; } @@ -398,8 +406,8 @@ static long udmabuf_create(struct miscdevice *device, if (memfd) fput(memfd); unpin_all_folios(&ubuf->unpin_list); - kfree(ubuf->offsets); - kfree(ubuf->folios); + kvfree(ubuf->offsets); + kvfree(ubuf->folios); kfree(ubuf); return ret; } diff --git a/drivers/edac/bluefield_edac.c b/drivers/edac/bluefield_edac.c index 5b3164560648..0e539c107351 100644 --- a/drivers/edac/bluefield_edac.c +++ b/drivers/edac/bluefield_edac.c @@ -180,7 +180,7 @@ static void bluefield_edac_check(struct mem_ctl_info *mci) static void bluefield_edac_init_dimms(struct mem_ctl_info *mci) { struct bluefield_edac_priv *priv = mci->pvt_info; - int mem_ctrl_idx = mci->mc_idx; + u64 mem_ctrl_idx = mci->mc_idx; struct dimm_info *dimm; u64 smc_info, smc_arg; int is_empty = 1, i; diff --git a/drivers/edac/fsl_ddr_edac.c b/drivers/edac/fsl_ddr_edac.c index d148d262d0d4..339d94b3d04c 100644 --- a/drivers/edac/fsl_ddr_edac.c +++ b/drivers/edac/fsl_ddr_edac.c @@ -328,21 +328,25 @@ static void fsl_mc_check(struct mem_ctl_info *mci) * TODO: Add support for 32-bit wide buses */ if ((err_detect & DDR_EDE_SBE) && (bus_width == 64)) { + u64 cap = (u64)cap_high << 32 | cap_low; + u32 s = syndrome; + sbe_ecc_decode(cap_high, cap_low, syndrome, &bad_data_bit, &bad_ecc_bit);
- if (bad_data_bit != -1) - fsl_mc_printk(mci, KERN_ERR, - "Faulty Data bit: %d\n", bad_data_bit); - if (bad_ecc_bit != -1) - fsl_mc_printk(mci, KERN_ERR, - "Faulty ECC bit: %d\n", bad_ecc_bit); + if (bad_data_bit >= 0) { + fsl_mc_printk(mci, KERN_ERR, "Faulty Data bit: %d\n", bad_data_bit); + cap ^= 1ULL << bad_data_bit; + } + + if (bad_ecc_bit >= 0) { + fsl_mc_printk(mci, KERN_ERR, "Faulty ECC bit: %d\n", bad_ecc_bit); + s ^= 1 << bad_ecc_bit; + }
fsl_mc_printk(mci, KERN_ERR, "Expected Data / ECC:\t%#8.8x_%08x / %#2.2x\n", - cap_high ^ (1 << (bad_data_bit - 32)), - cap_low ^ (1 << bad_data_bit), - syndrome ^ (1 << bad_ecc_bit)); + upper_32_bits(cap), lower_32_bits(cap), s); }
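[Annotation] The fsl_ddr_edac change reconstructs the captured data as one 64-bit value before flipping the faulty bit; the old code XOR-ed (1 << (bad_data_bit - 32)) and (1 << bad_data_bit) into the two 32-bit halves, which misfires (or is undefined) whenever the bad bit lives in the other half. A tiny demonstration of the corrected arithmetic:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t cap_high = 0x00000001, cap_low = 0x00000000;
	int bad_bit = 32;	/* a fault in the upper word */

	/* Work on the full 64-bit word so the XOR is well defined for bits 0..63. */
	uint64_t cap = (uint64_t)cap_high << 32 | cap_low;

	cap ^= 1ULL << bad_bit;
	printf("expected data %08x_%08x\n",
	       (unsigned)(cap >> 32), (unsigned)cap);
	return 0;
}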
fsl_mc_printk(mci, KERN_ERR, diff --git a/drivers/edac/i10nm_base.c b/drivers/edac/i10nm_base.c index e2a954de913b..51556c72a967 100644 --- a/drivers/edac/i10nm_base.c +++ b/drivers/edac/i10nm_base.c @@ -1036,6 +1036,7 @@ static int __init i10nm_init(void) return -ENODEV;
cfg = (struct res_config *)id->driver_data; + skx_set_res_cfg(cfg); res_cfg = cfg;
rc = skx_get_hi_lo(0x09a2, off, &tolm, &tohm); diff --git a/drivers/edac/igen6_edac.c b/drivers/edac/igen6_edac.c index 189a2fc29e74..07dacf8c10be 100644 --- a/drivers/edac/igen6_edac.c +++ b/drivers/edac/igen6_edac.c @@ -1245,6 +1245,7 @@ static int igen6_register_mci(int mc, u64 mchbar, struct pci_dev *pdev) imc->mci = mci; return 0; fail3: + mci->pvt_info = NULL; kfree(mci->ctl_name); fail2: edac_mc_free(mci); @@ -1269,6 +1270,7 @@ static void igen6_unregister_mcis(void)
edac_mc_del_mc(mci->pdev); kfree(mci->ctl_name); + mci->pvt_info = NULL; edac_mc_free(mci); iounmap(imc->window); } diff --git a/drivers/edac/skx_common.c b/drivers/edac/skx_common.c index 85713646957b..6cf17af7d911 100644 --- a/drivers/edac/skx_common.c +++ b/drivers/edac/skx_common.c @@ -47,6 +47,7 @@ static skx_show_retry_log_f skx_show_retry_rd_err_log; static u64 skx_tolm, skx_tohm; static LIST_HEAD(dev_edac_list); static bool skx_mem_cfg_2lm; +static struct res_config *skx_res_cfg;
int skx_adxl_get(void) { @@ -119,7 +120,7 @@ void skx_adxl_put(void) } EXPORT_SYMBOL_GPL(skx_adxl_put);
-static bool skx_adxl_decode(struct decoded_addr *res, bool error_in_1st_level_mem) +static bool skx_adxl_decode(struct decoded_addr *res, enum error_source err_src) { struct skx_dev *d; int i, len = 0; @@ -135,8 +136,24 @@ static bool skx_adxl_decode(struct decoded_addr *res, bool error_in_1st_level_me return false; }
+ /* + * GNR with a Flat2LM memory configuration may mistakenly classify + * a near-memory error(DDR5) as a far-memory error(CXL), resulting + * in the incorrect selection of decoded ADXL components. + * To address this, prefetch the decoded far-memory controller ID + * and adjust the error source to near-memory if the far-memory + * controller ID is invalid. + */ + if (skx_res_cfg && skx_res_cfg->type == GNR && err_src == ERR_SRC_2LM_FM) { + res->imc = (int)adxl_values[component_indices[INDEX_MEMCTRL]]; + if (res->imc == -1) { + err_src = ERR_SRC_2LM_NM; + edac_dbg(0, "Adjust the error source to near-memory.\n"); + } + } + res->socket = (int)adxl_values[component_indices[INDEX_SOCKET]]; - if (error_in_1st_level_mem) { + if (err_src == ERR_SRC_2LM_NM) { res->imc = (adxl_nm_bitmap & BIT_NM_MEMCTRL) ? (int)adxl_values[component_indices[INDEX_NM_MEMCTRL]] : -1; res->channel = (adxl_nm_bitmap & BIT_NM_CHANNEL) ? @@ -191,6 +208,12 @@ void skx_set_mem_cfg(bool mem_cfg_2lm) } EXPORT_SYMBOL_GPL(skx_set_mem_cfg);
+void skx_set_res_cfg(struct res_config *cfg) +{ + skx_res_cfg = cfg; +} +EXPORT_SYMBOL_GPL(skx_set_res_cfg); + void skx_set_decode(skx_decode_f decode, skx_show_retry_log_f show_retry_log) { driver_decode = decode; @@ -620,31 +643,27 @@ static void skx_mce_output_error(struct mem_ctl_info *mci, optype, skx_msg); }
-static bool skx_error_in_1st_level_mem(const struct mce *m) +static enum error_source skx_error_source(const struct mce *m) { - u32 errcode; + u32 errcode = GET_BITFIELD(m->status, 0, 15) & MCACOD_MEM_ERR_MASK;
- if (!skx_mem_cfg_2lm) - return false; - - errcode = GET_BITFIELD(m->status, 0, 15) & MCACOD_MEM_ERR_MASK; - - return errcode == MCACOD_EXT_MEM_ERR; -} + if (errcode != MCACOD_MEM_CTL_ERR && errcode != MCACOD_EXT_MEM_ERR) + return ERR_SRC_NOT_MEMORY;
-static bool skx_error_in_mem(const struct mce *m) -{ - u32 errcode; + if (!skx_mem_cfg_2lm) + return ERR_SRC_1LM;
- errcode = GET_BITFIELD(m->status, 0, 15) & MCACOD_MEM_ERR_MASK; + if (errcode == MCACOD_EXT_MEM_ERR) + return ERR_SRC_2LM_NM;
- return (errcode == MCACOD_MEM_CTL_ERR || errcode == MCACOD_EXT_MEM_ERR); + return ERR_SRC_2LM_FM; }
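[Annotation] skx_error_source() replaces the two boolean helpers with a single classification of the MCA error code: not a memory error, 1LM, or near/far memory in a 2LM configuration, which the GNR workaround above then refines. A self-contained sketch of that classification; the constants here are only illustrative, the real definitions live in skx_common.h:

#include <stdio.h>

enum error_source { ERR_SRC_1LM, ERR_SRC_2LM_NM, ERR_SRC_2LM_FM, ERR_SRC_NOT_MEMORY };

#define MCACOD_MEM_ERR_MASK	0xef80
#define MCACOD_MEM_CTL_ERR	0x0080
#define MCACOD_EXT_MEM_ERR	0x0280

static enum error_source classify(unsigned int status_low16, int mem_cfg_2lm)
{
	unsigned int errcode = status_low16 & MCACOD_MEM_ERR_MASK;

	if (errcode != MCACOD_MEM_CTL_ERR && errcode != MCACOD_EXT_MEM_ERR)
		return ERR_SRC_NOT_MEMORY;
	if (!mem_cfg_2lm)
		return ERR_SRC_1LM;
	/* In 2LM mode the extended code points at the near-memory (DDR cache) side. */
	return errcode == MCACOD_EXT_MEM_ERR ? ERR_SRC_2LM_NM : ERR_SRC_2LM_FM;
}

int main(void)
{
	printf("%d\n", classify(0x0090, 0));	/* memory-controller error, 1LM -> 0 */
	return 0;
}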
int skx_mce_check_error(struct notifier_block *nb, unsigned long val, void *data) { struct mce *mce = (struct mce *)data; + enum error_source err_src; struct decoded_addr res; struct mem_ctl_info *mci; char *type; @@ -652,8 +671,10 @@ int skx_mce_check_error(struct notifier_block *nb, unsigned long val, if (mce->kflags & MCE_HANDLED_CEC) return NOTIFY_DONE;
+ err_src = skx_error_source(mce); + /* Ignore unless this is memory related with an address */ - if (!skx_error_in_mem(mce) || !(mce->status & MCI_STATUS_ADDRV)) + if (err_src == ERR_SRC_NOT_MEMORY || !(mce->status & MCI_STATUS_ADDRV)) return NOTIFY_DONE;
memset(&res, 0, sizeof(res)); @@ -667,7 +688,7 @@ int skx_mce_check_error(struct notifier_block *nb, unsigned long val, /* Try driver decoder first */ if (!(driver_decode && driver_decode(&res))) { /* Then try firmware decoder (ACPI DSM methods) */ - if (!(adxl_component_count && skx_adxl_decode(&res, skx_error_in_1st_level_mem(mce)))) + if (!(adxl_component_count && skx_adxl_decode(&res, err_src))) return NOTIFY_DONE; }
diff --git a/drivers/edac/skx_common.h b/drivers/edac/skx_common.h index f945c1bf5ca4..54bba8a62f72 100644 --- a/drivers/edac/skx_common.h +++ b/drivers/edac/skx_common.h @@ -146,6 +146,13 @@ enum { INDEX_MAX };
+enum error_source { + ERR_SRC_1LM, + ERR_SRC_2LM_NM, + ERR_SRC_2LM_FM, + ERR_SRC_NOT_MEMORY, +}; + #define BIT_NM_MEMCTRL BIT_ULL(INDEX_NM_MEMCTRL) #define BIT_NM_CHANNEL BIT_ULL(INDEX_NM_CHANNEL) #define BIT_NM_DIMM BIT_ULL(INDEX_NM_DIMM) @@ -234,6 +241,7 @@ int skx_adxl_get(void); void skx_adxl_put(void); void skx_set_decode(skx_decode_f decode, skx_show_retry_log_f show_retry_log); void skx_set_mem_cfg(bool mem_cfg_2lm); +void skx_set_res_cfg(struct res_config *cfg);
int skx_get_src_id(struct skx_dev *d, int off, u8 *id); int skx_get_node_id(struct skx_dev *d, u8 *id); diff --git a/drivers/firmware/arm_scpi.c b/drivers/firmware/arm_scpi.c index 94a6b4e667de..f4d47577f83e 100644 --- a/drivers/firmware/arm_scpi.c +++ b/drivers/firmware/arm_scpi.c @@ -630,6 +630,9 @@ static struct scpi_dvfs_info *scpi_dvfs_get_info(u8 domain) if (ret) return ERR_PTR(ret);
+ if (!buf.opp_count) + return ERR_PTR(-ENOENT); + info = kmalloc(sizeof(*info), GFP_KERNEL); if (!info) return ERR_PTR(-ENOMEM); diff --git a/drivers/firmware/efi/libstub/efi-stub.c b/drivers/firmware/efi/libstub/efi-stub.c index 958a680e0660..2a1b43f9e0fa 100644 --- a/drivers/firmware/efi/libstub/efi-stub.c +++ b/drivers/firmware/efi/libstub/efi-stub.c @@ -129,7 +129,7 @@ efi_status_t efi_handle_cmdline(efi_loaded_image_t *image, char **cmdline_ptr)
if (IS_ENABLED(CONFIG_CMDLINE_EXTEND) || IS_ENABLED(CONFIG_CMDLINE_FORCE) || - cmdline_size == 0) { + cmdline[0] == 0) { status = efi_parse_options(CONFIG_CMDLINE); if (status != EFI_SUCCESS) { efi_err("Failed to parse options\n"); diff --git a/drivers/firmware/efi/tpm.c b/drivers/firmware/efi/tpm.c index e8d69bd548f3..9c3613e6af15 100644 --- a/drivers/firmware/efi/tpm.c +++ b/drivers/firmware/efi/tpm.c @@ -40,7 +40,8 @@ int __init efi_tpm_eventlog_init(void) { struct linux_efi_tpm_eventlog *log_tbl; struct efi_tcg2_final_events_table *final_tbl; - int tbl_size; + unsigned int tbl_size; + int final_tbl_size; int ret = 0;
if (efi.tpm_log == EFI_INVALID_TABLE_ADDR) { @@ -80,26 +81,26 @@ int __init efi_tpm_eventlog_init(void) goto out; }
- tbl_size = 0; + final_tbl_size = 0; if (final_tbl->nr_events != 0) { void *events = (void *)efi.tpm_final_log + sizeof(final_tbl->version) + sizeof(final_tbl->nr_events);
- tbl_size = tpm2_calc_event_log_size(events, - final_tbl->nr_events, - log_tbl->log); + final_tbl_size = tpm2_calc_event_log_size(events, + final_tbl->nr_events, + log_tbl->log); }
- if (tbl_size < 0) { + if (final_tbl_size < 0) { pr_err(FW_BUG "Failed to parse event in TPM Final Events Log\n"); ret = -EINVAL; goto out_calc; }
memblock_reserve(efi.tpm_final_log, - tbl_size + sizeof(*final_tbl)); - efi_tpm_final_log_size = tbl_size; + final_tbl_size + sizeof(*final_tbl)); + efi_tpm_final_log_size = final_tbl_size;
out_calc: early_memunmap(final_tbl, sizeof(*final_tbl)); diff --git a/drivers/firmware/google/gsmi.c b/drivers/firmware/google/gsmi.c index d304913314e4..24e666d5c3d1 100644 --- a/drivers/firmware/google/gsmi.c +++ b/drivers/firmware/google/gsmi.c @@ -918,7 +918,8 @@ static __init int gsmi_init(void) gsmi_dev.pdev = platform_device_register_full(&gsmi_dev_info); if (IS_ERR(gsmi_dev.pdev)) { printk(KERN_ERR "gsmi: unable to register platform device\n"); - return PTR_ERR(gsmi_dev.pdev); + ret = PTR_ERR(gsmi_dev.pdev); + goto out_unregister; }
/* SMI access needs to be serialized */ @@ -1056,10 +1057,11 @@ static __init int gsmi_init(void) gsmi_buf_free(gsmi_dev.name_buf); kmem_cache_destroy(gsmi_dev.mem_pool); platform_device_unregister(gsmi_dev.pdev); - pr_info("gsmi: failed to load: %d\n", ret); +out_unregister: #ifdef CONFIG_PM platform_driver_unregister(&gsmi_driver_info); #endif + pr_info("gsmi: failed to load: %d\n", ret); return ret; }
diff --git a/drivers/gpio/gpio-exar.c b/drivers/gpio/gpio-exar.c index 5170fe7599cd..d5909a4f0433 100644 --- a/drivers/gpio/gpio-exar.c +++ b/drivers/gpio/gpio-exar.c @@ -99,11 +99,13 @@ static void exar_set_value(struct gpio_chip *chip, unsigned int offset, struct exar_gpio_chip *exar_gpio = gpiochip_get_data(chip); unsigned int addr = exar_offset_to_lvl_addr(exar_gpio, offset); unsigned int bit = exar_offset_to_bit(exar_gpio, offset); + unsigned int bit_value = value ? BIT(bit) : 0;
- if (value) - regmap_set_bits(exar_gpio->regmap, addr, BIT(bit)); - else - regmap_clear_bits(exar_gpio->regmap, addr, BIT(bit)); + /* + * regmap_write_bits() forces value to be written when an external + * pull up/down might otherwise indicate value was already set. + */ + regmap_write_bits(exar_gpio->regmap, addr, BIT(bit), bit_value); }
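[Annotation] gpio-exar now uses regmap_write_bits(), which always performs the register write, instead of regmap_set_bits()/regmap_clear_bits(); the latter behave like an update and may skip the write when the read-back already shows the requested level, for example because an external pull resistor holds the line. A userspace toy model of the difference between the two behaviours:

#include <stdio.h>
#include <stdint.h>

static uint32_t reg;	/* pretend hardware register */
static int writes;

/* update_bits-style helper: skips the bus write when nothing appears to change. */
static void update_bits(uint32_t mask, uint32_t val)
{
	uint32_t next = (reg & ~mask) | (val & mask);

	if (next == reg)
		return;		/* no write issued */
	reg = next;
	writes++;
}

/* write_bits-style helper: always issues the write, even if the read-back
 * value already matches. */
static void write_bits(uint32_t mask, uint32_t val)
{
	reg = (reg & ~mask) | (val & mask);
	writes++;
}

int main(void)
{
	reg = 0x1;			/* line reads high because of a pull-up */
	update_bits(0x1, 0x1);		/* skipped: looks like a no-op */
	write_bits(0x1, 0x1);		/* still written, as the GPIO fix requires */
	printf("writes issued: %d\n", writes);
	return 0;
}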
static int exar_direction_output(struct gpio_chip *chip, unsigned int offset, diff --git a/drivers/gpio/gpio-zevio.c b/drivers/gpio/gpio-zevio.c index 2de61337ad3b..d7230fd83f5d 100644 --- a/drivers/gpio/gpio-zevio.c +++ b/drivers/gpio/gpio-zevio.c @@ -11,6 +11,7 @@ #include <linux/io.h> #include <linux/mod_devicetable.h> #include <linux/platform_device.h> +#include <linux/property.h> #include <linux/slab.h> #include <linux/spinlock.h>
@@ -169,6 +170,7 @@ static const struct gpio_chip zevio_gpio_chip = { /* Initialization */ static int zevio_gpio_probe(struct platform_device *pdev) { + struct device *dev = &pdev->dev; struct zevio_gpio *controller; int status, i;
@@ -180,6 +182,10 @@ static int zevio_gpio_probe(struct platform_device *pdev) controller->chip = zevio_gpio_chip; controller->chip.parent = &pdev->dev;
+ controller->chip.label = devm_kasprintf(dev, GFP_KERNEL, "%pfw", dev_fwnode(dev)); + if (!controller->chip.label) + return -ENOMEM; + controller->regs = devm_platform_ioremap_resource(pdev, 0); if (IS_ERR(controller->regs)) return dev_err_probe(&pdev->dev, PTR_ERR(controller->regs), diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig index 1cb5a4f19293..cf5bc77e2362 100644 --- a/drivers/gpu/drm/Kconfig +++ b/drivers/gpu/drm/Kconfig @@ -152,6 +152,7 @@ config DRM_PANIC_SCREEN config DRM_PANIC_SCREEN_QR_CODE bool "Add a panic screen with a QR code" depends on DRM_PANIC && RUST + select ZLIB_DEFLATE help This option adds a QR code generator, and a panic screen with a QR code. The QR code will contain the last lines of kmsg and other debug diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_aca.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_aca.c index 2ca127173135..9d6345146495 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_aca.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_aca.c @@ -158,7 +158,7 @@ static int aca_smu_get_valid_aca_banks(struct amdgpu_device *adev, enum aca_smu_ return -EINVAL; }
- if (start + count >= max_count) + if (start + count > max_count) return -EINVAL;
count = min_t(int, count, max_count); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c index 4f08b153cb66..e41318bfbf45 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c @@ -834,6 +834,9 @@ int amdgpu_amdkfd_unmap_hiq(struct amdgpu_device *adev, u32 doorbell_off, if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues) return -EINVAL;
+ if (!kiq_ring->sched.ready || adev->job_hang) + return 0; + ring_funcs = kzalloc(sizeof(*ring_funcs), GFP_KERNEL); if (!ring_funcs) return -ENOMEM; @@ -858,8 +861,14 @@ int amdgpu_amdkfd_unmap_hiq(struct amdgpu_device *adev, u32 doorbell_off,
kiq->pmf->kiq_unmap_queues(kiq_ring, ring, RESET_QUEUES, 0, 0);
- if (kiq_ring->sched.ready && !adev->job_hang) - r = amdgpu_ring_test_helper(kiq_ring); + /* Submit unmap queue packet */ + amdgpu_ring_commit(kiq_ring); + /* + * Ring test will do a basic scratch register change check. Just run + * this to ensure that unmap queues that is submitted before got + * processed successfully before returning. + */ + r = amdgpu_ring_test_helper(kiq_ring);
spin_unlock(&kiq->ring_lock);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c index 4bd61c169ca8..ca8091fd3a24 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c @@ -1757,11 +1757,13 @@ int amdgpu_discovery_get_nps_info(struct amdgpu_device *adev,
switch (le16_to_cpu(nps_info->v1.header.version_major)) { case 1: + mem_ranges = kvcalloc(nps_info->v1.count, + sizeof(*mem_ranges), + GFP_KERNEL); + if (!mem_ranges) + return -ENOMEM; *nps_type = nps_info->v1.nps_type; *range_cnt = nps_info->v1.count; - mem_ranges = kvzalloc( - *range_cnt * sizeof(struct amdgpu_gmc_memrange), - GFP_KERNEL); for (i = 0; i < *range_cnt; i++) { mem_ranges[i].base_address = nps_info->v1.instance_info[i].base_address; diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c index f1ffab5a1eae..156abd2ba5a6 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c @@ -525,6 +525,17 @@ int amdgpu_gfx_disable_kcq(struct amdgpu_device *adev, int xcc_id) if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues) return -EINVAL;
+ if (!kiq_ring->sched.ready || adev->job_hang) + return 0; + /** + * This is workaround: only skip kiq_ring test + * during ras recovery in suspend stage for gfx9.4.3 + */ + if ((amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 4, 3) || + amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 4, 4)) && + amdgpu_ras_in_recovery(adev)) + return 0; + spin_lock(&kiq->ring_lock); if (amdgpu_ring_alloc(kiq_ring, kiq->pmf->unmap_queues_size * adev->gfx.num_compute_rings)) { @@ -538,20 +549,15 @@ int amdgpu_gfx_disable_kcq(struct amdgpu_device *adev, int xcc_id) &adev->gfx.compute_ring[j], RESET_QUEUES, 0, 0); } - - /** - * This is workaround: only skip kiq_ring test - * during ras recovery in suspend stage for gfx9.4.3 + /* Submit unmap queue packet */ + amdgpu_ring_commit(kiq_ring); + /* + * Ring test will do a basic scratch register change check. Just run + * this to ensure that unmap queues that is submitted before got + * processed successfully before returning. */ - if ((amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 4, 3) || - amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 4, 4)) && - amdgpu_ras_in_recovery(adev)) { - spin_unlock(&kiq->ring_lock); - return 0; - } + r = amdgpu_ring_test_helper(kiq_ring);
- if (kiq_ring->sched.ready && !adev->job_hang) - r = amdgpu_ring_test_helper(kiq_ring); spin_unlock(&kiq->ring_lock);
return r; @@ -579,8 +585,11 @@ int amdgpu_gfx_disable_kgq(struct amdgpu_device *adev, int xcc_id) if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues) return -EINVAL;
- spin_lock(&kiq->ring_lock); + if (!adev->gfx.kiq[0].ring.sched.ready || adev->job_hang) + return 0; + if (amdgpu_gfx_is_master_xcc(adev, xcc_id)) { + spin_lock(&kiq->ring_lock); if (amdgpu_ring_alloc(kiq_ring, kiq->pmf->unmap_queues_size * adev->gfx.num_gfx_rings)) { spin_unlock(&kiq->ring_lock); @@ -593,11 +602,17 @@ int amdgpu_gfx_disable_kgq(struct amdgpu_device *adev, int xcc_id) &adev->gfx.gfx_ring[j], PREEMPT_QUEUES, 0, 0); } - } + /* Submit unmap queue packet */ + amdgpu_ring_commit(kiq_ring);
- if (adev->gfx.kiq[0].ring.sched.ready && !adev->job_hang) + /* + * Ring test will do a basic scratch register change check. + * Just run this to ensure that unmap queues that is submitted + * before got processed successfully before returning. + */ r = amdgpu_ring_test_helper(kiq_ring); - spin_unlock(&kiq->ring_lock); + spin_unlock(&kiq->ring_lock); + }
return r; } @@ -702,7 +717,13 @@ int amdgpu_gfx_enable_kcq(struct amdgpu_device *adev, int xcc_id) kiq->pmf->kiq_map_queues(kiq_ring, &adev->gfx.compute_ring[j]); } - + /* Submit map queue packet */ + amdgpu_ring_commit(kiq_ring); + /* + * Ring test will do a basic scratch register change check. Just run + * this to ensure that map queues that is submitted before got + * processed successfully before returning. + */ r = amdgpu_ring_test_helper(kiq_ring); spin_unlock(&kiq->ring_lock); if (r) @@ -753,7 +774,13 @@ int amdgpu_gfx_enable_kgq(struct amdgpu_device *adev, int xcc_id) &adev->gfx.gfx_ring[j]); } } - + /* Submit map queue packet */ + amdgpu_ring_commit(kiq_ring); + /* + * Ring test will do a basic scratch register change check. Just run + * this to ensure that map queues that is submitted before got + * processed successfully before returning. + */ r = amdgpu_ring_test_helper(kiq_ring); spin_unlock(&kiq->ring_lock); if (r) diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c index bc8295812cc8..9d741695ca07 100644 --- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c @@ -4823,6 +4823,13 @@ static int gfx_v8_0_kcq_disable(struct amdgpu_device *adev) amdgpu_ring_write(kiq_ring, 0); amdgpu_ring_write(kiq_ring, 0); } + /* Submit unmap queue packet */ + amdgpu_ring_commit(kiq_ring); + /* + * Ring test will do a basic scratch register change check. Just run + * this to ensure that unmap queues that is submitted before got + * processed successfully before returning. + */ r = amdgpu_ring_test_helper(kiq_ring); if (r) DRM_ERROR("KCQ disable failed\n"); diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c index 23f0573ae47b..785a343a95f0 100644 --- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c @@ -2418,6 +2418,8 @@ static int gfx_v9_0_sw_fini(void *handle) amdgpu_gfx_kiq_free_ring(&adev->gfx.kiq[0].ring); amdgpu_gfx_kiq_fini(adev, 0);
+ amdgpu_gfx_cleaner_shader_sw_fini(adev); + gfx_v9_0_mec_fini(adev); amdgpu_bo_free_kernel(&adev->gfx.rlc.clear_state_obj, &adev->gfx.rlc.clear_state_gpu_addr, diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c index 86958cb2c2ab..aa5815bd633e 100644 --- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c +++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c @@ -674,11 +674,12 @@ void jpeg_v4_0_3_dec_ring_insert_start(struct amdgpu_ring *ring) amdgpu_ring_write(ring, PACKETJ(regUVD_JRBC_EXTERNAL_REG_INTERNAL_OFFSET, 0, 0, PACKETJ_TYPE0)); amdgpu_ring_write(ring, 0x62a04); /* PCTL0_MMHUB_DEEPSLEEP_IB */ - }
- amdgpu_ring_write(ring, PACKETJ(JRBC_DEC_EXTERNAL_REG_WRITE_ADDR, - 0, 0, PACKETJ_TYPE0)); - amdgpu_ring_write(ring, 0x80004000); + amdgpu_ring_write(ring, + PACKETJ(JRBC_DEC_EXTERNAL_REG_WRITE_ADDR, 0, + 0, PACKETJ_TYPE0)); + amdgpu_ring_write(ring, 0x80004000); + } }
/** @@ -694,11 +695,12 @@ void jpeg_v4_0_3_dec_ring_insert_end(struct amdgpu_ring *ring) amdgpu_ring_write(ring, PACKETJ(regUVD_JRBC_EXTERNAL_REG_INTERNAL_OFFSET, 0, 0, PACKETJ_TYPE0)); amdgpu_ring_write(ring, 0x62a04); - }
- amdgpu_ring_write(ring, PACKETJ(JRBC_DEC_EXTERNAL_REG_WRITE_ADDR, - 0, 0, PACKETJ_TYPE0)); - amdgpu_ring_write(ring, 0x00004000); + amdgpu_ring_write(ring, + PACKETJ(JRBC_DEC_EXTERNAL_REG_WRITE_ADDR, 0, + 0, PACKETJ_TYPE0)); + amdgpu_ring_write(ring, 0x00004000); + } }
/** diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd/amdkfd/kfd_process.c index d4aa843aacfd..ff34bb1ac9db 100644 --- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c +++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c @@ -271,11 +271,9 @@ static int kfd_get_cu_occupancy(struct attribute *attr, char *buffer) struct kfd_process *proc = NULL; struct kfd_process_device *pdd = NULL; int i; - struct kfd_cu_occupancy cu_occupancy[AMDGPU_MAX_QUEUES]; + struct kfd_cu_occupancy *cu_occupancy; u32 queue_format;
- memset(cu_occupancy, 0x0, sizeof(cu_occupancy)); - pdd = container_of(attr, struct kfd_process_device, attr_cu_occupancy); dev = pdd->dev; if (dev->kfd2kgd->get_cu_occupancy == NULL) @@ -293,6 +291,10 @@ static int kfd_get_cu_occupancy(struct attribute *attr, char *buffer) wave_cnt = 0; max_waves_per_cu = 0;
+ cu_occupancy = kcalloc(AMDGPU_MAX_QUEUES, sizeof(*cu_occupancy), GFP_KERNEL); + if (!cu_occupancy) + return -ENOMEM; + /* * For GFX 9.4.3, fetch the CU occupancy from the first XCC in the partition. * For AQL queues, because of cooperative dispatch we multiply the wave count @@ -318,6 +320,7 @@ static int kfd_get_cu_occupancy(struct attribute *attr, char *buffer)
/* Translate wave count to number of compute units */ cu_cnt = (wave_cnt + (max_waves_per_cu - 1)) / max_waves_per_cu; + kfree(cu_occupancy); return snprintf(buffer, PAGE_SIZE, "%d\n", cu_cnt); }
@@ -338,8 +341,8 @@ static ssize_t kfd_procfs_show(struct kobject *kobj, struct attribute *attr, attr_sdma); struct kfd_sdma_activity_handler_workarea sdma_activity_work_handler;
- INIT_WORK(&sdma_activity_work_handler.sdma_activity_work, - kfd_sdma_activity_worker); + INIT_WORK_ONSTACK(&sdma_activity_work_handler.sdma_activity_work, + kfd_sdma_activity_worker);
sdma_activity_work_handler.pdd = pdd; sdma_activity_work_handler.sdma_activity_counter = 0; @@ -347,6 +350,7 @@ static ssize_t kfd_procfs_show(struct kobject *kobj, struct attribute *attr, schedule_work(&sdma_activity_work_handler.sdma_activity_work);
flush_work(&sdma_activity_work_handler.sdma_activity_work); + destroy_work_on_stack(&sdma_activity_work_handler.sdma_activity_work);
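[Annotation] kfd_procfs_show() declares its work item on the stack, so it has to be initialized with INIT_WORK_ONSTACK() and torn down with destroy_work_on_stack() before the frame goes away, otherwise CONFIG_DEBUG_OBJECTS_WORK flags the object. The general pattern, as a kernel-style sketch:

#include <linux/workqueue.h>

static void do_sample(struct work_struct *work)
{
	/* sampling happens here in the real driver */
}

static void run_on_stack_work(void)
{
	struct work_struct w;

	INIT_WORK_ONSTACK(&w, do_sample);
	schedule_work(&w);
	flush_work(&w);			/* must complete before the stack frame dies */
	destroy_work_on_stack(&w);
}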
return snprintf(buffer, PAGE_SIZE, "%llu\n", (sdma_activity_work_handler.sdma_activity_counter)/ diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c index 8d97f17ffe66..24fbde7dd1c4 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c @@ -1696,6 +1696,26 @@ dm_allocate_gpu_mem( return da->cpu_ptr; }
+void +dm_free_gpu_mem( + struct amdgpu_device *adev, + enum dc_gpu_mem_alloc_type type, + void *pvMem) +{ + struct dal_allocation *da; + + /* walk the da list in DM */ + list_for_each_entry(da, &adev->dm.da_list, list) { + if (pvMem == da->cpu_ptr) { + amdgpu_bo_free_kernel(&da->bo, &da->gpu_addr, &da->cpu_ptr); + list_del(&da->list); + kfree(da); + break; + } + } + +} + static enum dmub_status dm_dmub_send_vbios_gpint_command(struct amdgpu_device *adev, enum dmub_gpint_command command_code, @@ -1762,16 +1782,20 @@ static struct dml2_soc_bb *dm_dmub_get_vbios_bounding_box(struct amdgpu_device * /* Send the chunk */ ret = dm_dmub_send_vbios_gpint_command(adev, send_addrs[i], chunk, 30000); if (ret != DMUB_STATUS_OK) - /* No need to free bb here since it shall be done in dm_sw_fini() */ - return NULL; + goto free_bb; }
/* Now ask DMUB to copy the bb */ ret = dm_dmub_send_vbios_gpint_command(adev, DMUB_GPINT__BB_COPY, 1, 200000); if (ret != DMUB_STATUS_OK) - return NULL; + goto free_bb;
return bb; + +free_bb: + dm_free_gpu_mem(adev, DC_MEM_ALLOC_TYPE_GART, (void *) bb); + return NULL; + }
static enum dmub_ips_disable_type dm_get_default_ips_mode( @@ -2541,11 +2565,11 @@ static int dm_sw_fini(void *handle) amdgpu_bo_free_kernel(&da->bo, &da->gpu_addr, &da->cpu_ptr); list_del(&da->list); kfree(da); + adev->dm.bb_from_dmub = NULL; break; } }
- adev->dm.bb_from_dmub = NULL;
kfree(adev->dm.dmub_fb_info); adev->dm.dmub_fb_info = NULL; diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h index 90dfffec33cf..a0bc2c0ac04d 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h @@ -1004,6 +1004,9 @@ void *dm_allocate_gpu_mem(struct amdgpu_device *adev, enum dc_gpu_mem_alloc_type type, size_t size, long long *addr); +void dm_free_gpu_mem(struct amdgpu_device *adev, + enum dc_gpu_mem_alloc_type type, + void *addr);
bool amdgpu_dm_is_headless(struct amdgpu_device *adev);
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c index 288be19db7c1..9be87b532517 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c @@ -35,8 +35,8 @@ #include "amdgpu_dm_trace.h" #include "amdgpu_dm_debugfs.h"
-#define HPD_DETECTION_PERIOD_uS 5000000 -#define HPD_DETECTION_TIME_uS 1000 +#define HPD_DETECTION_PERIOD_uS 2000000 +#define HPD_DETECTION_TIME_uS 100000
void amdgpu_dm_crtc_handle_vblank(struct amdgpu_crtc *acrtc) { diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c index eea317dcbe8c..9752548cc5b2 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c @@ -1055,17 +1055,8 @@ void dm_helpers_free_gpu_mem( void *pvMem) { struct amdgpu_device *adev = ctx->driver_context; - struct dal_allocation *da; - - /* walk the da list in DM */ - list_for_each_entry(da, &adev->dm.da_list, list) { - if (pvMem == da->cpu_ptr) { - amdgpu_bo_free_kernel(&da->bo, &da->gpu_addr, &da->cpu_ptr); - list_del(&da->list); - kfree(da); - break; - } - } + + dm_free_gpu_mem(adev, type, pvMem); }
bool dm_helpers_dmub_outbox_interrupt_control(struct dc_context *ctx, bool enable) diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c index a08e8a0b696c..32b025c92c63 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c @@ -1120,6 +1120,7 @@ static int compute_mst_dsc_configs_for_link(struct drm_atomic_state *state, int i, k, ret; bool debugfs_overwrite = false; uint16_t fec_overhead_multiplier_x1000 = get_fec_overhead_multiplier(dc_link); + struct drm_connector_state *new_conn_state;
memset(params, 0, sizeof(params));
@@ -1127,7 +1128,7 @@ static int compute_mst_dsc_configs_for_link(struct drm_atomic_state *state, return PTR_ERR(mst_state);
/* Set up params */ - DRM_DEBUG_DRIVER("%s: MST_DSC Set up params for %d streams\n", __func__, dc_state->stream_count); + DRM_DEBUG_DRIVER("%s: MST_DSC Try to set up params from %d streams\n", __func__, dc_state->stream_count); for (i = 0; i < dc_state->stream_count; i++) { struct dc_dsc_policy dsc_policy = {0};
@@ -1143,6 +1144,14 @@ static int compute_mst_dsc_configs_for_link(struct drm_atomic_state *state, if (!aconnector->mst_output_port) continue;
+ new_conn_state = drm_atomic_get_new_connector_state(state, &aconnector->base); + + if (!new_conn_state) { + DRM_DEBUG_DRIVER("%s:%d MST_DSC Skip the stream 0x%p with invalid new_conn_state\n", + __func__, __LINE__, stream); + continue; + } + stream->timing.flags.DSC = 0;
params[count].timing = &stream->timing; @@ -1175,6 +1184,8 @@ static int compute_mst_dsc_configs_for_link(struct drm_atomic_state *state, count++; }
+ DRM_DEBUG_DRIVER("%s: MST_DSC Params set up for %d streams\n", __func__, count); + if (count == 0) { ASSERT(0); return 0; @@ -1302,7 +1313,7 @@ static bool is_dsc_need_re_compute( continue;
aconnector = (struct amdgpu_dm_connector *) stream->dm_stream_context; - if (!aconnector || !aconnector->dsc_aux) + if (!aconnector) continue;
stream_on_link[new_stream_on_link_num] = aconnector; diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c index 7ee2be8f82c4..bb766c2a7417 100644 --- a/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c +++ b/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c @@ -881,6 +881,9 @@ void hwss_setup_dpp(union block_sequence_params *params) struct dpp *dpp = pipe_ctx->plane_res.dpp; struct dc_plane_state *plane_state = pipe_ctx->plane_state;
+ if (!plane_state) + return; + if (dpp && dpp->funcs->dpp_setup) { // program the input csc dpp->funcs->dpp_setup(dpp, diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c index a80c08582932..36d12db8d022 100644 --- a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c +++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c @@ -1923,9 +1923,9 @@ static void dcn20_program_pipe( dc->res_pool->hubbub, pipe_ctx->plane_res.hubp->inst, pipe_ctx->hubp_regs.det_size); }
- if (pipe_ctx->update_flags.raw || - (pipe_ctx->plane_state && pipe_ctx->plane_state->update_flags.raw) || - pipe_ctx->stream->update_flags.raw) + if (pipe_ctx->plane_state && (pipe_ctx->update_flags.raw || + pipe_ctx->plane_state->update_flags.raw || + pipe_ctx->stream->update_flags.raw)) dcn20_update_dchubp_dpp(dc, pipe_ctx, context);
if (pipe_ctx->plane_state && (pipe_ctx->update_flags.bits.enable || diff --git a/drivers/gpu/drm/bridge/analogix/anx7625.c b/drivers/gpu/drm/bridge/analogix/anx7625.c index a2e9bb485c36..a2675b121fe4 100644 --- a/drivers/gpu/drm/bridge/analogix/anx7625.c +++ b/drivers/gpu/drm/bridge/analogix/anx7625.c @@ -2551,6 +2551,8 @@ static int __maybe_unused anx7625_runtime_pm_suspend(struct device *dev) mutex_lock(&ctx->lock);
anx7625_stop_dp_work(ctx); + if (!ctx->pdata.panel_bridge) + anx7625_remove_edid(ctx); anx7625_power_standby(ctx);
mutex_unlock(&ctx->lock); diff --git a/drivers/gpu/drm/bridge/ite-it6505.c b/drivers/gpu/drm/bridge/ite-it6505.c index 87b8545fccc0..e3a9832c742c 100644 --- a/drivers/gpu/drm/bridge/ite-it6505.c +++ b/drivers/gpu/drm/bridge/ite-it6505.c @@ -3107,6 +3107,8 @@ static __maybe_unused int it6505_bridge_suspend(struct device *dev) { struct it6505 *it6505 = dev_get_drvdata(dev);
+ it6505_remove_edid(it6505); + return it6505_poweroff(it6505); }
diff --git a/drivers/gpu/drm/bridge/tc358767.c b/drivers/gpu/drm/bridge/tc358767.c index f3afdab55c11..47189587643a 100644 --- a/drivers/gpu/drm/bridge/tc358767.c +++ b/drivers/gpu/drm/bridge/tc358767.c @@ -1714,6 +1714,13 @@ static const struct drm_edid *tc_edid_read(struct drm_bridge *bridge, struct drm_connector *connector) { struct tc_data *tc = bridge_to_tc(bridge); + int ret; + + ret = tc_get_display_props(tc); + if (ret < 0) { + dev_err(tc->dev, "failed to read display props: %d\n", ret); + return 0; + }
return drm_edid_read_ddc(connector, &tc->aux.ddc); } diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c index ad1dc638c83b..ce82c9451dfe 100644 --- a/drivers/gpu/drm/drm_file.c +++ b/drivers/gpu/drm/drm_file.c @@ -129,7 +129,7 @@ bool drm_dev_needs_global_mutex(struct drm_device *dev) */ struct drm_file *drm_file_alloc(struct drm_minor *minor) { - static atomic64_t ident = ATOMIC_INIT(0); + static atomic64_t ident = ATOMIC64_INIT(0); struct drm_device *dev = minor->dev; struct drm_file *file; int ret; diff --git a/drivers/gpu/drm/drm_mm.c b/drivers/gpu/drm/drm_mm.c index 5ace481c1901..1ed68d3cd80b 100644 --- a/drivers/gpu/drm/drm_mm.c +++ b/drivers/gpu/drm/drm_mm.c @@ -151,7 +151,7 @@ static void show_leaks(struct drm_mm *mm) { }
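[Annotation] The drm_file change just above is a type/initializer match-up: a static atomic64_t should be initialized with ATOMIC64_INIT(), not ATOMIC_INIT(). Kernel-style sketch of the corrected idiom:

#include <linux/atomic.h>
#include <linux/types.h>

/* Match the initializer to the type: atomic64_t wants ATOMIC64_INIT(). */
static atomic64_t ident = ATOMIC64_INIT(0);

static u64 next_ident(void)
{
	return atomic64_inc_return(&ident);
}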
INTERVAL_TREE_DEFINE(struct drm_mm_node, rb, u64, __subtree_last, - START, LAST, static inline, drm_mm_interval_tree) + START, LAST, static inline __maybe_unused, drm_mm_interval_tree)
struct drm_mm_node * __drm_mm_interval_first(const struct drm_mm *mm, u64 start, u64 last) diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.c b/drivers/gpu/drm/etnaviv/etnaviv_drv.c index 6500f3999c5f..19ec67a5a918 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_drv.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.c @@ -538,6 +538,16 @@ static int etnaviv_bind(struct device *dev) priv->num_gpus = 0; priv->shm_gfp_mask = GFP_HIGHUSER | __GFP_RETRY_MAYFAIL | __GFP_NOWARN;
+ /* + * If the GPU is part of a system with DMA addressing limitations, + * request pages for our SHM backend buffers from the DMA32 zone to + * hopefully avoid performance killing SWIOTLB bounce buffering. + */ + if (dma_addressing_limited(dev)) { + priv->shm_gfp_mask |= GFP_DMA32; + priv->shm_gfp_mask &= ~__GFP_HIGHMEM; + } + priv->cmdbuf_suballoc = etnaviv_cmdbuf_suballoc_new(drm->dev); if (IS_ERR(priv->cmdbuf_suballoc)) { dev_err(drm->dev, "Failed to create cmdbuf suballocator\n"); diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c index 7c7f97793ddd..df0bc828a234 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c @@ -839,14 +839,6 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu) if (ret) goto fail;
- /* - * If the GPU is part of a system with DMA addressing limitations, - * request pages for our SHM backend buffers from the DMA32 zone to - * hopefully avoid performance killing SWIOTLB bounce buffering. - */ - if (dma_addressing_limited(gpu->dev)) - priv->shm_gfp_mask |= GFP_DMA32; - /* Create buffer: */ ret = etnaviv_cmdbuf_init(priv->cmdbuf_suballoc, &gpu->buffer, PAGE_SIZE); @@ -1330,6 +1322,8 @@ static void sync_point_perfmon_sample_pre(struct etnaviv_gpu *gpu, { u32 val;
+ mutex_lock(&gpu->lock); + /* disable clock gating */ val = gpu_read_power(gpu, VIVS_PM_POWER_CONTROLS); val &= ~VIVS_PM_POWER_CONTROLS_ENABLE_MODULE_CLOCK_GATING; @@ -1341,6 +1335,8 @@ static void sync_point_perfmon_sample_pre(struct etnaviv_gpu *gpu, gpu_write(gpu, VIVS_HI_CLOCK_CONTROL, val);
sync_point_perfmon_sample(gpu, event, ETNA_PM_PROCESS_PRE); + + mutex_unlock(&gpu->lock); }
static void sync_point_perfmon_sample_post(struct etnaviv_gpu *gpu, @@ -1350,13 +1346,9 @@ static void sync_point_perfmon_sample_post(struct etnaviv_gpu *gpu, unsigned int i; u32 val;
- sync_point_perfmon_sample(gpu, event, ETNA_PM_PROCESS_POST); - - for (i = 0; i < submit->nr_pmrs; i++) { - const struct etnaviv_perfmon_request *pmr = submit->pmrs + i; + mutex_lock(&gpu->lock);
- *pmr->bo_vma = pmr->sequence; - } + sync_point_perfmon_sample(gpu, event, ETNA_PM_PROCESS_POST);
/* disable debug register */ val = gpu_read(gpu, VIVS_HI_CLOCK_CONTROL); @@ -1367,6 +1359,14 @@ static void sync_point_perfmon_sample_post(struct etnaviv_gpu *gpu, val = gpu_read_power(gpu, VIVS_PM_POWER_CONTROLS); val |= VIVS_PM_POWER_CONTROLS_ENABLE_MODULE_CLOCK_GATING; gpu_write_power(gpu, VIVS_PM_POWER_CONTROLS, val); + + mutex_unlock(&gpu->lock); + + for (i = 0; i < submit->nr_pmrs; i++) { + const struct etnaviv_perfmon_request *pmr = submit->pmrs + i; + + *pmr->bo_vma = pmr->sequence; + } }
diff --git a/drivers/gpu/drm/fsl-dcu/Kconfig b/drivers/gpu/drm/fsl-dcu/Kconfig index 5ca71ef87325..c9ee98693b48 100644 --- a/drivers/gpu/drm/fsl-dcu/Kconfig +++ b/drivers/gpu/drm/fsl-dcu/Kconfig @@ -8,6 +8,7 @@ config DRM_FSL_DCU select DRM_PANEL select REGMAP_MMIO select VIDEOMODE_HELPERS + select MFD_SYSCON if SOC_LS1021A help Choose this option if you have an Freescale DCU chipset. If M is selected the module will be called fsl-dcu-drm. diff --git a/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.c b/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.c index ab6c0c6cd0e2..c4c3d41ee530 100644 --- a/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.c +++ b/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.c @@ -100,6 +100,7 @@ static void fsl_dcu_irq_uninstall(struct drm_device *dev) static int fsl_dcu_load(struct drm_device *dev, unsigned long flags) { struct fsl_dcu_drm_device *fsl_dev = dev->dev_private; + struct regmap *scfg; int ret;
ret = fsl_dcu_drm_modeset_init(fsl_dev); @@ -108,6 +109,20 @@ static int fsl_dcu_load(struct drm_device *dev, unsigned long flags) return ret; }
+ scfg = syscon_regmap_lookup_by_compatible("fsl,ls1021a-scfg"); + if (PTR_ERR(scfg) != -ENODEV) { + /* + * For simplicity, enable the PIXCLK unconditionally, + * resulting in increased power consumption. Disabling + * the clock in PM or on unload could be implemented as + * a future improvement. + */ + ret = regmap_update_bits(scfg, SCFG_PIXCLKCR, SCFG_PIXCLKCR_PXCEN, + SCFG_PIXCLKCR_PXCEN); + if (ret < 0) + return dev_err_probe(dev->dev, ret, "failed to enable pixclk\n"); + } + ret = drm_vblank_init(dev, dev->mode_config.num_crtc); if (ret < 0) { dev_err(dev->dev, "failed to initialize vblank\n"); diff --git a/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.h b/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.h index e2049a0e8a92..566396013c04 100644 --- a/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.h +++ b/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.h @@ -160,6 +160,9 @@ #define FSL_DCU_ARGB4444 12 #define FSL_DCU_YUV422 14
+#define SCFG_PIXCLKCR 0x28 +#define SCFG_PIXCLKCR_PXCEN BIT(31) + #define VF610_LAYER_REG_NUM 9 #define LS1021A_LAYER_REG_NUM 10
diff --git a/drivers/gpu/drm/imagination/pvr_ccb.c b/drivers/gpu/drm/imagination/pvr_ccb.c index 4deeac7ed40a..2bbdc05a3b97 100644 --- a/drivers/gpu/drm/imagination/pvr_ccb.c +++ b/drivers/gpu/drm/imagination/pvr_ccb.c @@ -321,7 +321,7 @@ static int pvr_kccb_reserve_slot_sync(struct pvr_device *pvr_dev) bool reserved = false; u32 retries = 0;
- while ((jiffies - start_timestamp) < (u32)RESERVE_SLOT_TIMEOUT || + while (time_before(jiffies, start_timestamp + RESERVE_SLOT_TIMEOUT) || retries < RESERVE_SLOT_MIN_RETRIES) { reserved = pvr_kccb_try_reserve_slot(pvr_dev); if (reserved) diff --git a/drivers/gpu/drm/imagination/pvr_vm.c b/drivers/gpu/drm/imagination/pvr_vm.c index 7bd6ba4c6e8a..363f885a7098 100644 --- a/drivers/gpu/drm/imagination/pvr_vm.c +++ b/drivers/gpu/drm/imagination/pvr_vm.c @@ -654,9 +654,7 @@ pvr_vm_context_lookup(struct pvr_file *pvr_file, u32 handle)
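[Annotation] The pvr_ccb loop above now goes through time_before(), so the comparison survives jiffies wrap-around; the old open-coded subtraction truncated to u32 and misbehaved near the wrap point. Kernel-style sketch of the idiom:

#include <linux/jiffies.h>
#include <linux/types.h>

/* Jiffies comparisons should use time_before()/time_after(), which are
 * written to stay correct across counter wrap-around. */
static bool still_within(unsigned long start, unsigned long timeout)
{
	return time_before(jiffies, start + timeout);
}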
xa_lock(&pvr_file->vm_ctx_handles); vm_ctx = xa_load(&pvr_file->vm_ctx_handles, handle); - if (vm_ctx) - kref_get(&vm_ctx->ref_count); - + pvr_vm_context_get(vm_ctx); xa_unlock(&pvr_file->vm_ctx_handles);
return vm_ctx; diff --git a/drivers/gpu/drm/imx/dcss/dcss-crtc.c b/drivers/gpu/drm/imx/dcss/dcss-crtc.c index 31267c00782f..af91e45b5d13 100644 --- a/drivers/gpu/drm/imx/dcss/dcss-crtc.c +++ b/drivers/gpu/drm/imx/dcss/dcss-crtc.c @@ -206,15 +206,13 @@ int dcss_crtc_init(struct dcss_crtc *crtc, struct drm_device *drm) if (crtc->irq < 0) return crtc->irq;
- ret = request_irq(crtc->irq, dcss_crtc_irq_handler, - 0, "dcss_drm", crtc); + ret = request_irq(crtc->irq, dcss_crtc_irq_handler, IRQF_NO_AUTOEN, + "dcss_drm", crtc); if (ret) { dev_err(dcss->dev, "irq request failed with %d.\n", ret); return ret; }
- disable_irq(crtc->irq); - return 0; }
diff --git a/drivers/gpu/drm/imx/ipuv3/ipuv3-crtc.c b/drivers/gpu/drm/imx/ipuv3/ipuv3-crtc.c index ef29c9a61a46..99db53e167bd 100644 --- a/drivers/gpu/drm/imx/ipuv3/ipuv3-crtc.c +++ b/drivers/gpu/drm/imx/ipuv3/ipuv3-crtc.c @@ -410,14 +410,12 @@ static int ipu_drm_bind(struct device *dev, struct device *master, void *data) }
ipu_crtc->irq = ipu_plane_irq(ipu_crtc->plane[0]); - ret = devm_request_irq(ipu_crtc->dev, ipu_crtc->irq, ipu_irq_handler, 0, - "imx_drm", ipu_crtc); + ret = devm_request_irq(ipu_crtc->dev, ipu_crtc->irq, ipu_irq_handler, + IRQF_NO_AUTOEN, "imx_drm", ipu_crtc); if (ret < 0) { dev_err(ipu_crtc->dev, "irq request failed with %d.\n", ret); return ret; } - /* Only enable IRQ when we actually need it to trigger work. */ - disable_irq(ipu_crtc->irq);
return 0; } diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c index 37927bdd6fbe..14db7376c712 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c @@ -1522,15 +1522,13 @@ static int a6xx_gmu_get_irq(struct a6xx_gmu *gmu, struct platform_device *pdev,
irq = platform_get_irq_byname(pdev, name);
- ret = request_irq(irq, handler, IRQF_TRIGGER_HIGH, name, gmu); + ret = request_irq(irq, handler, IRQF_TRIGGER_HIGH | IRQF_NO_AUTOEN, name, gmu); if (ret) { DRM_DEV_ERROR(&pdev->dev, "Unable to get interrupt %s %d\n", name, ret); return ret; }
- disable_irq(irq); - return irq; }
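[Annotation] Several drivers in this batch (dcss, ipuv3, the a6xx GMU above) replace the request_irq() + disable_irq() pair with IRQF_NO_AUTOEN, which requests the line but leaves it disabled, closing the window in which the handler could fire before the driver is ready. Minimal kernel-style sketch:

#include <linux/interrupt.h>

static irqreturn_t demo_handler(int irq, void *data)
{
	return IRQ_HANDLED;
}

/* Request the line without enabling it; call enable_irq() later, once the
 * driver actually wants interrupts, instead of racing with disable_irq(). */
static int demo_request(int irq, void *data)
{
	return request_irq(irq, demo_handler, IRQF_NO_AUTOEN, "demo", data);
}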
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h index 1d3e9666c741..64c94e919a69 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h +++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h @@ -156,18 +156,6 @@ static const struct dpu_lm_cfg msm8998_lm[] = { .sblk = &msm8998_lm_sblk, .lm_pair = LM_5, .pingpong = PINGPONG_2, - }, { - .name = "lm_3", .id = LM_3, - .base = 0x47000, .len = 0x320, - .features = MIXER_MSM8998_MASK, - .sblk = &msm8998_lm_sblk, - .pingpong = PINGPONG_NONE, - }, { - .name = "lm_4", .id = LM_4, - .base = 0x48000, .len = 0x320, - .features = MIXER_MSM8998_MASK, - .sblk = &msm8998_lm_sblk, - .pingpong = PINGPONG_NONE, }, { .name = "lm_5", .id = LM_5, .base = 0x49000, .len = 0x320, diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h index 7a23389a5732..72bd4f7e9e50 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h +++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h @@ -155,19 +155,6 @@ static const struct dpu_lm_cfg sdm845_lm[] = { .lm_pair = LM_5, .pingpong = PINGPONG_2, .dspp = DSPP_2, - }, { - .name = "lm_3", .id = LM_3, - .base = 0x0, .len = 0x320, - .features = MIXER_SDM845_MASK, - .sblk = &sdm845_lm_sblk, - .pingpong = PINGPONG_NONE, - .dspp = DSPP_3, - }, { - .name = "lm_4", .id = LM_4, - .base = 0x0, .len = 0x320, - .features = MIXER_SDM845_MASK, - .sblk = &sdm845_lm_sblk, - .pingpong = PINGPONG_NONE, }, { .name = "lm_5", .id = LM_5, .base = 0x49000, .len = 0x320, @@ -175,6 +162,7 @@ static const struct dpu_lm_cfg sdm845_lm[] = { .sblk = &sdm845_lm_sblk, .lm_pair = LM_2, .pingpong = PINGPONG_3, + .dspp = DSPP_3, }, };
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c index 68fae048a9a8..260accc151d4 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c @@ -80,7 +80,7 @@ static u64 _dpu_core_perf_calc_clk(const struct dpu_perf_cfg *perf_cfg,
mode = &state->adjusted_mode;
- crtc_clk = mode->vtotal * mode->hdisplay * drm_mode_vrefresh(mode); + crtc_clk = (u64)mode->vtotal * mode->hdisplay * drm_mode_vrefresh(mode);
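[Annotation] The DPU clock fix casts the first operand to u64 so the whole product is computed in 64 bits; without the cast the factors are multiplied in 32-bit arithmetic and can wrap before the result is widened by the assignment. A small userspace demonstration:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t vtotal = 4500, hdisplay = 7680;
	int refresh = 144;

	uint32_t narrow = vtotal * hdisplay * refresh;		/* wraps past 32 bits */
	uint64_t wide = (uint64_t)vtotal * hdisplay * refresh;	/* widened before multiplying */

	printf("%u vs %llu\n", narrow, (unsigned long long)wide);
	return 0;
}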
drm_atomic_crtc_for_each_plane(plane, crtc) { pstate = to_dpu_plane_state(plane->state); diff --git a/drivers/gpu/drm/msm/msm_gpu_devfreq.c b/drivers/gpu/drm/msm/msm_gpu_devfreq.c index ea70c1c32d94..6970b0f7f457 100644 --- a/drivers/gpu/drm/msm/msm_gpu_devfreq.c +++ b/drivers/gpu/drm/msm/msm_gpu_devfreq.c @@ -140,6 +140,7 @@ void msm_devfreq_init(struct msm_gpu *gpu) { struct msm_gpu_devfreq *df = &gpu->devfreq; struct msm_drm_private *priv = gpu->dev->dev_private; + int ret;
/* We need target support to do devfreq */ if (!gpu->funcs->gpu_busy) @@ -156,8 +157,12 @@ void msm_devfreq_init(struct msm_gpu *gpu)
mutex_init(&df->lock);
- dev_pm_qos_add_request(&gpu->pdev->dev, &df->boost_freq, - DEV_PM_QOS_MIN_FREQUENCY, 0); + ret = dev_pm_qos_add_request(&gpu->pdev->dev, &df->boost_freq, + DEV_PM_QOS_MIN_FREQUENCY, 0); + if (ret < 0) { + DRM_DEV_ERROR(&gpu->pdev->dev, "Couldn't initialize QoS\n"); + return; + }
msm_devfreq_profile.initial_freq = gpu->fast_rate;
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c index 060c74a80eb1..3ea447f6a45b 100644 --- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c +++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c @@ -443,6 +443,7 @@ gf100_gr_chan_new(struct nvkm_gr *base, struct nvkm_chan *fifoch, ret = gf100_grctx_generate(gr, chan, fifoch->inst); if (ret) { nvkm_error(&base->engine.subdev, "failed to construct context\n"); + mutex_unlock(&gr->fecs.mutex); return ret; } } diff --git a/drivers/gpu/drm/omapdrm/dss/base.c b/drivers/gpu/drm/omapdrm/dss/base.c index 5f8002f6bb7a..a4ac113e1690 100644 --- a/drivers/gpu/drm/omapdrm/dss/base.c +++ b/drivers/gpu/drm/omapdrm/dss/base.c @@ -139,21 +139,13 @@ static bool omapdss_device_is_connected(struct omap_dss_device *dssdev) }
int omapdss_device_connect(struct dss_device *dss, - struct omap_dss_device *src, struct omap_dss_device *dst) { - dev_dbg(&dss->pdev->dev, "connect(%s, %s)\n", - src ? dev_name(src->dev) : "NULL", + dev_dbg(&dss->pdev->dev, "connect(%s)\n", dst ? dev_name(dst->dev) : "NULL");
- if (!dst) { - /* - * The destination is NULL when the source is connected to a - * bridge instead of a DSS device. Stop here, we will attach - * the bridge later when we will have a DRM encoder. - */ - return src && src->bridge ? 0 : -EINVAL; - } + if (!dst) + return -EINVAL;
if (omapdss_device_is_connected(dst)) return -EBUSY; @@ -163,19 +155,14 @@ int omapdss_device_connect(struct dss_device *dss, return 0; }
-void omapdss_device_disconnect(struct omap_dss_device *src, +void omapdss_device_disconnect(struct dss_device *dss, struct omap_dss_device *dst) { - struct dss_device *dss = src ? src->dss : dst->dss; - - dev_dbg(&dss->pdev->dev, "disconnect(%s, %s)\n", - src ? dev_name(src->dev) : "NULL", + dev_dbg(&dss->pdev->dev, "disconnect(%s)\n", dst ? dev_name(dst->dev) : "NULL");
- if (!dst) { - WARN_ON(!src->bridge); + if (WARN_ON(!dst)) return; - }
if (!dst->id && !omapdss_device_is_connected(dst)) { WARN_ON(1); diff --git a/drivers/gpu/drm/omapdrm/dss/omapdss.h b/drivers/gpu/drm/omapdrm/dss/omapdss.h index 040d5a3e33d6..4c22c09c93d5 100644 --- a/drivers/gpu/drm/omapdrm/dss/omapdss.h +++ b/drivers/gpu/drm/omapdrm/dss/omapdss.h @@ -242,9 +242,8 @@ struct omap_dss_device *omapdss_device_get(struct omap_dss_device *dssdev); void omapdss_device_put(struct omap_dss_device *dssdev); struct omap_dss_device *omapdss_find_device_by_node(struct device_node *node); int omapdss_device_connect(struct dss_device *dss, - struct omap_dss_device *src, struct omap_dss_device *dst); -void omapdss_device_disconnect(struct omap_dss_device *src, +void omapdss_device_disconnect(struct dss_device *dss, struct omap_dss_device *dst);
int omap_dss_get_num_overlay_managers(void); diff --git a/drivers/gpu/drm/omapdrm/omap_drv.c b/drivers/gpu/drm/omapdrm/omap_drv.c index d3eac4817d76..a982378aa141 100644 --- a/drivers/gpu/drm/omapdrm/omap_drv.c +++ b/drivers/gpu/drm/omapdrm/omap_drv.c @@ -307,7 +307,7 @@ static void omap_disconnect_pipelines(struct drm_device *ddev) for (i = 0; i < priv->num_pipes; i++) { struct omap_drm_pipeline *pipe = &priv->pipes[i];
- omapdss_device_disconnect(NULL, pipe->output); + omapdss_device_disconnect(priv->dss, pipe->output);
omapdss_device_put(pipe->output); pipe->output = NULL; @@ -325,7 +325,7 @@ static int omap_connect_pipelines(struct drm_device *ddev) int r;
for_each_dss_output(output) { - r = omapdss_device_connect(priv->dss, NULL, output); + r = omapdss_device_connect(priv->dss, output); if (r == -EPROBE_DEFER) { omapdss_device_put(output); return r; diff --git a/drivers/gpu/drm/omapdrm/omap_gem.c b/drivers/gpu/drm/omapdrm/omap_gem.c index fdae677558f3..b9c67e4ca360 100644 --- a/drivers/gpu/drm/omapdrm/omap_gem.c +++ b/drivers/gpu/drm/omapdrm/omap_gem.c @@ -1402,8 +1402,6 @@ struct drm_gem_object *omap_gem_new_dmabuf(struct drm_device *dev, size_t size,
omap_obj = to_omap_bo(obj);
- mutex_lock(&omap_obj->lock); - omap_obj->sgt = sgt;
if (omap_gem_sgt_is_contiguous(sgt, size)) { @@ -1418,21 +1416,17 @@ struct drm_gem_object *omap_gem_new_dmabuf(struct drm_device *dev, size_t size, pages = kcalloc(npages, sizeof(*pages), GFP_KERNEL); if (!pages) { omap_gem_free_object(obj); - obj = ERR_PTR(-ENOMEM); - goto done; + return ERR_PTR(-ENOMEM); }
omap_obj->pages = pages; ret = drm_prime_sg_to_page_array(sgt, pages, npages); if (ret) { omap_gem_free_object(obj); - obj = ERR_PTR(-ENOMEM); - goto done; + return ERR_PTR(-ENOMEM); } }
-done: - mutex_unlock(&omap_obj->lock); return obj; }
diff --git a/drivers/gpu/drm/panel/panel-newvision-nv3052c.c b/drivers/gpu/drm/panel/panel-newvision-nv3052c.c index d3baccfe6286..06e16a7c14a7 100644 --- a/drivers/gpu/drm/panel/panel-newvision-nv3052c.c +++ b/drivers/gpu/drm/panel/panel-newvision-nv3052c.c @@ -917,7 +917,7 @@ static const struct nv3052c_panel_info wl_355608_a8_panel_info = { static const struct spi_device_id nv3052c_ids[] = { { "ltk035c5444t", }, { "fs035vg158", }, - { "wl-355608-a8", }, + { "rg35xx-plus-panel", }, { /* sentinel */ } }; MODULE_DEVICE_TABLE(spi, nv3052c_ids); diff --git a/drivers/gpu/drm/panel/panel-novatek-nt35510.c b/drivers/gpu/drm/panel/panel-novatek-nt35510.c index 57686340de49..549b86f2cc28 100644 --- a/drivers/gpu/drm/panel/panel-novatek-nt35510.c +++ b/drivers/gpu/drm/panel/panel-novatek-nt35510.c @@ -38,6 +38,7 @@
#define NT35510_CMD_CORRECT_GAMMA BIT(0) #define NT35510_CMD_CONTROL_DISPLAY BIT(1) +#define NT35510_CMD_SETVCMOFF BIT(2)
#define MCS_CMD_MAUCCTR 0xF0 /* Manufacturer command enable */ #define MCS_CMD_READ_ID1 0xDA @@ -721,11 +722,13 @@ static int nt35510_setup_power(struct nt35510 *nt) if (ret) return ret;
- ret = nt35510_send_long(nt, dsi, NT35510_P1_SETVCMOFF, - NT35510_P1_VCMOFF_LEN, - nt->conf->vcmoff); - if (ret) - return ret; + if (nt->conf->cmds & NT35510_CMD_SETVCMOFF) { + ret = nt35510_send_long(nt, dsi, NT35510_P1_SETVCMOFF, + NT35510_P1_VCMOFF_LEN, + nt->conf->vcmoff); + if (ret) + return ret; + }
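The Novatek NT35510 change stops sending the SETVCMOFF sequence unconditionally: a new NT35510_CMD_SETVCMOFF bit in the per-panel cmds mask opts a panel into it, and the Frida FRD400B25025 config sets that bit to keep its behaviour. A rough sketch of this flag-gating pattern, with hypothetical names standing in for the driver's structures and helpers:

#include <linux/bits.h>
#include <linux/types.h>

#define EXAMPLE_CMD_SETVCMOFF	BIT(2)	/* mirrors NT35510_CMD_SETVCMOFF */

struct example_panel_config {
	u32 cmds;	/* bitmask of optional init sequences this panel needs */
};

/* Hypothetical helper that sends the VCMOFF programming sequence. */
int example_send_vcmoff(const struct example_panel_config *conf);

static int example_setup_power(const struct example_panel_config *conf)
{
	/* Panels that do not set the bit skip the sequence entirely. */
	if (conf->cmds & EXAMPLE_CMD_SETVCMOFF)
		return example_send_vcmoff(conf);

	return 0;
}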
/* Typically 10 ms */ usleep_range(10000, 20000); @@ -1319,7 +1322,7 @@ static const struct nt35510_config nt35510_frida_frd400b25025 = { }, .mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_BURST | MIPI_DSI_MODE_LPM, - .cmds = NT35510_CMD_CONTROL_DISPLAY, + .cmds = NT35510_CMD_CONTROL_DISPLAY | NT35510_CMD_SETVCMOFF, /* 0x03: AVDD = 6.2V */ .avdd = { 0x03, 0x03, 0x03 }, /* 0x46: PCK = 2 x Hsync, BTP = 2.5 x VDDB */ diff --git a/drivers/gpu/drm/panfrost/panfrost_devfreq.c b/drivers/gpu/drm/panfrost/panfrost_devfreq.c index 2d30da38c2c3..3385fd3ef41a 100644 --- a/drivers/gpu/drm/panfrost/panfrost_devfreq.c +++ b/drivers/gpu/drm/panfrost/panfrost_devfreq.c @@ -38,7 +38,7 @@ static int panfrost_devfreq_target(struct device *dev, unsigned long *freq, return PTR_ERR(opp); dev_pm_opp_put(opp);
- err = dev_pm_opp_set_rate(dev, *freq); + err = dev_pm_opp_set_rate(dev, *freq); if (!err) ptdev->pfdevfreq.current_frequency = *freq;
@@ -182,6 +182,7 @@ int panfrost_devfreq_init(struct panfrost_device *pfdev) * if any and will avoid a switch off by regulator_late_cleanup() */ ret = dev_pm_opp_set_opp(dev, opp); + dev_pm_opp_put(opp); if (ret) { DRM_DEV_ERROR(dev, "Couldn't set recommended OPP\n"); return ret; diff --git a/drivers/gpu/drm/panfrost/panfrost_gpu.c b/drivers/gpu/drm/panfrost/panfrost_gpu.c index fd8e44992184..b52dd510e036 100644 --- a/drivers/gpu/drm/panfrost/panfrost_gpu.c +++ b/drivers/gpu/drm/panfrost/panfrost_gpu.c @@ -177,7 +177,6 @@ static void panfrost_gpu_init_quirks(struct panfrost_device *pfdev) struct panfrost_model { const char *name; u32 id; - u32 id_mask; u64 features; u64 issues; struct { diff --git a/drivers/gpu/drm/panthor/panthor_devfreq.c b/drivers/gpu/drm/panthor/panthor_devfreq.c index c6d3c327cc24..ecc7a52bd688 100644 --- a/drivers/gpu/drm/panthor/panthor_devfreq.c +++ b/drivers/gpu/drm/panthor/panthor_devfreq.c @@ -62,14 +62,20 @@ static void panthor_devfreq_update_utilization(struct panthor_devfreq *pdevfreq) static int panthor_devfreq_target(struct device *dev, unsigned long *freq, u32 flags) { + struct panthor_device *ptdev = dev_get_drvdata(dev); struct dev_pm_opp *opp; + int err;
opp = devfreq_recommended_opp(dev, freq, flags); if (IS_ERR(opp)) return PTR_ERR(opp); dev_pm_opp_put(opp);
- return dev_pm_opp_set_rate(dev, *freq); + err = dev_pm_opp_set_rate(dev, *freq); + if (!err) + ptdev->current_frequency = *freq; + + return err; }
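panthor_devfreq_target() now follows the same shape as panfrost's: validate the requested frequency against the OPP table, program the rate, and only record it as the current frequency if the clock change actually succeeded. A condensed sketch of such a devfreq target callback (the drvdata layout is assumed for illustration):

#include <linux/device.h>
#include <linux/devfreq.h>
#include <linux/pm_opp.h>

struct example_gpu {
	unsigned long current_frequency;
};

static int example_devfreq_target(struct device *dev, unsigned long *freq,
				  u32 flags)
{
	struct example_gpu *gpu = dev_get_drvdata(dev);
	struct dev_pm_opp *opp;
	int err;

	/* Clamp *freq to an OPP the table actually defines. */
	opp = devfreq_recommended_opp(dev, freq, flags);
	if (IS_ERR(opp))
		return PTR_ERR(opp);
	dev_pm_opp_put(opp);

	err = dev_pm_opp_set_rate(dev, *freq);
	if (!err)
		gpu->current_frequency = *freq;	/* track only rates that took effect */

	return err;
}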
static void panthor_devfreq_reset(struct panthor_devfreq *pdevfreq) @@ -130,6 +136,7 @@ int panthor_devfreq_init(struct panthor_device *ptdev) struct panthor_devfreq *pdevfreq; struct dev_pm_opp *opp; unsigned long cur_freq; + unsigned long freq = ULONG_MAX; int ret;
pdevfreq = drmm_kzalloc(&ptdev->base, sizeof(*ptdev->devfreq), GFP_KERNEL); @@ -156,12 +163,6 @@ int panthor_devfreq_init(struct panthor_device *ptdev)
cur_freq = clk_get_rate(ptdev->clks.core);
- opp = devfreq_recommended_opp(dev, &cur_freq, 0); - if (IS_ERR(opp)) - return PTR_ERR(opp); - - panthor_devfreq_profile.initial_freq = cur_freq; - /* Regulator coupling only takes care of synchronizing/balancing voltage * updates, but the coupled regulator needs to be enabled manually. * @@ -192,16 +193,30 @@ int panthor_devfreq_init(struct panthor_device *ptdev) return ret; }
+ opp = devfreq_recommended_opp(dev, &cur_freq, 0); + if (IS_ERR(opp)) + return PTR_ERR(opp); + + panthor_devfreq_profile.initial_freq = cur_freq; + ptdev->current_frequency = cur_freq; + /* * Set the recommend OPP this will enable and configure the regulator * if any and will avoid a switch off by regulator_late_cleanup() */ ret = dev_pm_opp_set_opp(dev, opp); + dev_pm_opp_put(opp); if (ret) { DRM_DEV_ERROR(dev, "Couldn't set recommended OPP\n"); return ret; }
+ /* Find the fastest defined rate */ + opp = dev_pm_opp_find_freq_floor(dev, &freq); + if (IS_ERR(opp)) + return PTR_ERR(opp); + ptdev->fast_rate = freq; + dev_pm_opp_put(opp);
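The new fast_rate lookup leans on dev_pm_opp_find_freq_floor(): starting the search from ULONG_MAX returns the highest frequency the OPP table defines, and the call takes a reference on the OPP that must be dropped afterwards. The idiom in isolation (a sketch, not the driver code):

#include <linux/limits.h>
#include <linux/pm_opp.h>

static int example_find_fastest_rate(struct device *dev, unsigned long *fast_rate)
{
	unsigned long freq = ULONG_MAX;
	struct dev_pm_opp *opp;

	/* "Floor of ULONG_MAX" == fastest rate defined in the OPP table. */
	opp = dev_pm_opp_find_freq_floor(dev, &freq);
	if (IS_ERR(opp))
		return PTR_ERR(opp);

	*fast_rate = freq;
	dev_pm_opp_put(opp);	/* drop the reference taken by the find helper */

	return 0;
}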
/* diff --git a/drivers/gpu/drm/panthor/panthor_device.h b/drivers/gpu/drm/panthor/panthor_device.h index e388c0472ba7..2109905813e8 100644 --- a/drivers/gpu/drm/panthor/panthor_device.h +++ b/drivers/gpu/drm/panthor/panthor_device.h @@ -66,6 +66,25 @@ struct panthor_irq { atomic_t suspended; };
+/** + * enum panthor_device_profiling_mode - Profiling state + */ +enum panthor_device_profiling_flags { + /** @PANTHOR_DEVICE_PROFILING_DISABLED: Profiling is disabled. */ + PANTHOR_DEVICE_PROFILING_DISABLED = 0, + + /** @PANTHOR_DEVICE_PROFILING_CYCLES: Sampling job cycles. */ + PANTHOR_DEVICE_PROFILING_CYCLES = BIT(0), + + /** @PANTHOR_DEVICE_PROFILING_TIMESTAMP: Sampling job timestamp. */ + PANTHOR_DEVICE_PROFILING_TIMESTAMP = BIT(1), + + /** @PANTHOR_DEVICE_PROFILING_ALL: Sampling everything. */ + PANTHOR_DEVICE_PROFILING_ALL = + PANTHOR_DEVICE_PROFILING_CYCLES | + PANTHOR_DEVICE_PROFILING_TIMESTAMP, +}; + /** * struct panthor_device - Panthor device */ @@ -162,6 +181,15 @@ struct panthor_device { */ struct page *dummy_latest_flush; } pm; + + /** @profile_mask: User-set profiling flags for job accounting. */ + u32 profile_mask; + + /** @current_frequency: Device clock frequency at present. Set by DVFS*/ + unsigned long current_frequency; + + /** @fast_rate: Maximum device clock frequency. Set by DVFS */ + unsigned long fast_rate; };
/** diff --git a/drivers/gpu/drm/panthor/panthor_sched.c b/drivers/gpu/drm/panthor/panthor_sched.c index 9929e22f4d8d..20135a9bc026 100644 --- a/drivers/gpu/drm/panthor/panthor_sched.c +++ b/drivers/gpu/drm/panthor/panthor_sched.c @@ -93,6 +93,9 @@ #define MIN_CSGS 3 #define MAX_CSG_PRIO 0xf
+#define NUM_INSTRS_PER_CACHE_LINE (64 / sizeof(u64)) +#define MAX_INSTRS_PER_JOB 24 + struct panthor_group;
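NUM_INSTRS_PER_CACHE_LINE records how many 64-bit CS instructions fit in a 64-byte cache line (eight), and MAX_INSTRS_PER_JOB bounds the per-job sequence at 24, i.e. three cache lines. Later hunks pad every job's instruction count up to a whole number of cache lines; the arithmetic looks like this, under the same 64-byte assumption:

#include <linux/align.h>	/* ALIGN() */
#include <linux/types.h>

#define EXAMPLE_INSTRS_PER_CACHE_LINE	(64 / sizeof(u64))	/* 8 instructions */

/* Round an instruction count up to a whole number of cache lines:
 * 13 -> 16, 17 -> 24, 24 -> 24. */
static inline u32 example_padded_instr_count(u32 count)
{
	return ALIGN(count, EXAMPLE_INSTRS_PER_CACHE_LINE);
}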
/** @@ -476,6 +479,18 @@ struct panthor_queue { */ struct list_head in_flight_jobs; } fence_ctx; + + /** @profiling: Job profiling data slots and access information. */ + struct { + /** @slots: Kernel BO holding the slots. */ + struct panthor_kernel_bo *slots; + + /** @slot_count: Number of jobs ringbuffer can hold at once. */ + u32 slot_count; + + /** @seqno: Index of the next available profiling information slot. */ + u32 seqno; + } profiling; };
/** @@ -662,6 +677,18 @@ struct panthor_group { struct list_head wait_node; };
+struct panthor_job_profiling_data { + struct { + u64 before; + u64 after; + } cycles; + + struct { + u64 before; + u64 after; + } time; +}; + /** * group_queue_work() - Queue a group work * @group: Group to queue the work for. @@ -775,6 +802,15 @@ struct panthor_job {
/** @done_fence: Fence signaled when the job is finished or cancelled. */ struct dma_fence *done_fence; + + /** @profiling: Job profiling information. */ + struct { + /** @mask: Current device job profiling enablement bitmask. */ + u32 mask; + + /** @slot: Job index in the profiling slots BO. */ + u32 slot; + } profiling; };
static void @@ -839,6 +875,7 @@ static void group_free_queue(struct panthor_group *group, struct panthor_queue *
panthor_kernel_bo_destroy(queue->ringbuf); panthor_kernel_bo_destroy(queue->iface.mem); + panthor_kernel_bo_destroy(queue->profiling.slots);
/* Release the last_fence we were holding, if any. */ dma_fence_put(queue->fence_ctx.last_fence); @@ -1989,8 +2026,6 @@ tick_ctx_init(struct panthor_scheduler *sched, } }
-#define NUM_INSTRS_PER_SLOT 16 - static void group_term_post_processing(struct panthor_group *group) { @@ -2829,65 +2864,198 @@ static void group_sync_upd_work(struct work_struct *work) group_put(group); }
-static struct dma_fence * -queue_run_job(struct drm_sched_job *sched_job) +struct panthor_job_ringbuf_instrs { + u64 buffer[MAX_INSTRS_PER_JOB]; + u32 count; +}; + +struct panthor_job_instr { + u32 profile_mask; + u64 instr; +}; + +#define JOB_INSTR(__prof, __instr) \ + { \ + .profile_mask = __prof, \ + .instr = __instr, \ + } + +static void +copy_instrs_to_ringbuf(struct panthor_queue *queue, + struct panthor_job *job, + struct panthor_job_ringbuf_instrs *instrs) +{ + u64 ringbuf_size = panthor_kernel_bo_size(queue->ringbuf); + u64 start = job->ringbuf.start & (ringbuf_size - 1); + u64 size, written; + + /* + * We need to write a whole slot, including any trailing zeroes + * that may come at the end of it. Also, because instrs.buffer has + * been zero-initialised, there's no need to pad it with 0's + */ + instrs->count = ALIGN(instrs->count, NUM_INSTRS_PER_CACHE_LINE); + size = instrs->count * sizeof(u64); + WARN_ON(size > ringbuf_size); + written = min(ringbuf_size - start, size); + + memcpy(queue->ringbuf->kmap + start, instrs->buffer, written); + + if (written < size) + memcpy(queue->ringbuf->kmap, + &instrs->buffer[written / sizeof(u64)], + size - written); +} + +struct panthor_job_cs_params { + u32 profile_mask; + u64 addr_reg; u64 val_reg; + u64 cycle_reg; u64 time_reg; + u64 sync_addr; u64 times_addr; + u64 cs_start; u64 cs_size; + u32 last_flush; u32 waitall_mask; +}; + +static void +get_job_cs_params(struct panthor_job *job, struct panthor_job_cs_params *params) { - struct panthor_job *job = container_of(sched_job, struct panthor_job, base); struct panthor_group *group = job->group; struct panthor_queue *queue = group->queues[job->queue_idx]; struct panthor_device *ptdev = group->ptdev; struct panthor_scheduler *sched = ptdev->scheduler; - u32 ringbuf_size = panthor_kernel_bo_size(queue->ringbuf); - u32 ringbuf_insert = queue->iface.input->insert & (ringbuf_size - 1); - u64 addr_reg = ptdev->csif_info.cs_reg_count - - ptdev->csif_info.unpreserved_cs_reg_count; - u64 val_reg = addr_reg + 2; - u64 sync_addr = panthor_kernel_bo_gpuva(group->syncobjs) + - job->queue_idx * sizeof(struct panthor_syncobj_64b); - u32 waitall_mask = GENMASK(sched->sb_slot_count - 1, 0); - struct dma_fence *done_fence; - int ret;
- u64 call_instrs[NUM_INSTRS_PER_SLOT] = { - /* MOV32 rX+2, cs.latest_flush */ - (2ull << 56) | (val_reg << 48) | job->call_info.latest_flush, + params->addr_reg = ptdev->csif_info.cs_reg_count - + ptdev->csif_info.unpreserved_cs_reg_count; + params->val_reg = params->addr_reg + 2; + params->cycle_reg = params->addr_reg; + params->time_reg = params->val_reg;
- /* FLUSH_CACHE2.clean_inv_all.no_wait.signal(0) rX+2 */ - (36ull << 56) | (0ull << 48) | (val_reg << 40) | (0 << 16) | 0x233, + params->sync_addr = panthor_kernel_bo_gpuva(group->syncobjs) + + job->queue_idx * sizeof(struct panthor_syncobj_64b); + params->times_addr = panthor_kernel_bo_gpuva(queue->profiling.slots) + + (job->profiling.slot * sizeof(struct panthor_job_profiling_data)); + params->waitall_mask = GENMASK(sched->sb_slot_count - 1, 0);
- /* MOV48 rX:rX+1, cs.start */ - (1ull << 56) | (addr_reg << 48) | job->call_info.start, + params->cs_start = job->call_info.start; + params->cs_size = job->call_info.size; + params->last_flush = job->call_info.latest_flush;
- /* MOV32 rX+2, cs.size */ - (2ull << 56) | (val_reg << 48) | job->call_info.size, + params->profile_mask = job->profiling.mask; +}
- /* WAIT(0) => waits for FLUSH_CACHE2 instruction */ - (3ull << 56) | (1 << 16), +#define JOB_INSTR_ALWAYS(instr) \ + JOB_INSTR(PANTHOR_DEVICE_PROFILING_DISABLED, (instr)) +#define JOB_INSTR_TIMESTAMP(instr) \ + JOB_INSTR(PANTHOR_DEVICE_PROFILING_TIMESTAMP, (instr)) +#define JOB_INSTR_CYCLES(instr) \ + JOB_INSTR(PANTHOR_DEVICE_PROFILING_CYCLES, (instr))
+static void +prepare_job_instrs(const struct panthor_job_cs_params *params, + struct panthor_job_ringbuf_instrs *instrs) +{ + const struct panthor_job_instr instr_seq[] = { + /* MOV32 rX+2, cs.latest_flush */ + JOB_INSTR_ALWAYS((2ull << 56) | (params->val_reg << 48) | params->last_flush), + /* FLUSH_CACHE2.clean_inv_all.no_wait.signal(0) rX+2 */ + JOB_INSTR_ALWAYS((36ull << 56) | (0ull << 48) | (params->val_reg << 40) | + (0 << 16) | 0x233), + /* MOV48 rX:rX+1, cycles_offset */ + JOB_INSTR_CYCLES((1ull << 56) | (params->cycle_reg << 48) | + (params->times_addr + + offsetof(struct panthor_job_profiling_data, cycles.before))), + /* STORE_STATE cycles */ + JOB_INSTR_CYCLES((40ull << 56) | (params->cycle_reg << 40) | (1ll << 32)), + /* MOV48 rX:rX+1, time_offset */ + JOB_INSTR_TIMESTAMP((1ull << 56) | (params->time_reg << 48) | + (params->times_addr + + offsetof(struct panthor_job_profiling_data, time.before))), + /* STORE_STATE timer */ + JOB_INSTR_TIMESTAMP((40ull << 56) | (params->time_reg << 40) | (0ll << 32)), + /* MOV48 rX:rX+1, cs.start */ + JOB_INSTR_ALWAYS((1ull << 56) | (params->addr_reg << 48) | params->cs_start), + /* MOV32 rX+2, cs.size */ + JOB_INSTR_ALWAYS((2ull << 56) | (params->val_reg << 48) | params->cs_size), + /* WAIT(0) => waits for FLUSH_CACHE2 instruction */ + JOB_INSTR_ALWAYS((3ull << 56) | (1 << 16)), /* CALL rX:rX+1, rX+2 */ - (32ull << 56) | (addr_reg << 40) | (val_reg << 32), - + JOB_INSTR_ALWAYS((32ull << 56) | (params->addr_reg << 40) | + (params->val_reg << 32)), + /* MOV48 rX:rX+1, cycles_offset */ + JOB_INSTR_CYCLES((1ull << 56) | (params->cycle_reg << 48) | + (params->times_addr + + offsetof(struct panthor_job_profiling_data, cycles.after))), + /* STORE_STATE cycles */ + JOB_INSTR_CYCLES((40ull << 56) | (params->cycle_reg << 40) | (1ll << 32)), + /* MOV48 rX:rX+1, time_offset */ + JOB_INSTR_TIMESTAMP((1ull << 56) | (params->time_reg << 48) | + (params->times_addr + + offsetof(struct panthor_job_profiling_data, time.after))), + /* STORE_STATE timer */ + JOB_INSTR_TIMESTAMP((40ull << 56) | (params->time_reg << 40) | (0ll << 32)), /* MOV48 rX:rX+1, sync_addr */ - (1ull << 56) | (addr_reg << 48) | sync_addr, - + JOB_INSTR_ALWAYS((1ull << 56) | (params->addr_reg << 48) | params->sync_addr), /* MOV48 rX+2, #1 */ - (1ull << 56) | (val_reg << 48) | 1, - + JOB_INSTR_ALWAYS((1ull << 56) | (params->val_reg << 48) | 1), /* WAIT(all) */ - (3ull << 56) | (waitall_mask << 16), - + JOB_INSTR_ALWAYS((3ull << 56) | (params->waitall_mask << 16)), /* SYNC_ADD64.system_scope.propage_err.nowait rX:rX+1, rX+2*/ - (51ull << 56) | (0ull << 48) | (addr_reg << 40) | (val_reg << 32) | (0 << 16) | 1, + JOB_INSTR_ALWAYS((51ull << 56) | (0ull << 48) | (params->addr_reg << 40) | + (params->val_reg << 32) | (0 << 16) | 1), + /* ERROR_BARRIER, so we can recover from faults at job boundaries. */ + JOB_INSTR_ALWAYS((47ull << 56)), + }; + u32 pad;
- /* ERROR_BARRIER, so we can recover from faults at job - * boundaries. - */ - (47ull << 56), + instrs->count = 0; + + /* NEED to be cacheline aligned to please the prefetcher. */ + static_assert(sizeof(instrs->buffer) % 64 == 0, + "panthor_job_ringbuf_instrs::buffer is not aligned on a cacheline"); + + /* Make sure we have enough storage to store the whole sequence. */ + static_assert(ALIGN(ARRAY_SIZE(instr_seq), NUM_INSTRS_PER_CACHE_LINE) == + ARRAY_SIZE(instrs->buffer), + "instr_seq vs panthor_job_ringbuf_instrs::buffer size mismatch"); + + for (u32 i = 0; i < ARRAY_SIZE(instr_seq); i++) { + /* If the profile mask of this instruction is not enabled, skip it. */ + if (instr_seq[i].profile_mask && + !(instr_seq[i].profile_mask & params->profile_mask)) + continue; + + instrs->buffer[instrs->count++] = instr_seq[i].instr; + } + + pad = ALIGN(instrs->count, NUM_INSTRS_PER_CACHE_LINE); + memset(&instrs->buffer[instrs->count], 0, + (pad - instrs->count) * sizeof(instrs->buffer[0])); + instrs->count = pad; +} + +static u32 calc_job_credits(u32 profile_mask) +{ + struct panthor_job_ringbuf_instrs instrs; + struct panthor_job_cs_params params = { + .profile_mask = profile_mask, };
- /* Need to be cacheline aligned to please the prefetcher. */ - static_assert(sizeof(call_instrs) % 64 == 0, - "call_instrs is not aligned on a cacheline"); + prepare_job_instrs(¶ms, &instrs); + return instrs.count; +} + +static struct dma_fence * +queue_run_job(struct drm_sched_job *sched_job) +{ + struct panthor_job *job = container_of(sched_job, struct panthor_job, base); + struct panthor_group *group = job->group; + struct panthor_queue *queue = group->queues[job->queue_idx]; + struct panthor_device *ptdev = group->ptdev; + struct panthor_scheduler *sched = ptdev->scheduler; + struct panthor_job_ringbuf_instrs instrs; + struct panthor_job_cs_params cs_params; + struct dma_fence *done_fence; + int ret;
/* Stream size is zero, nothing to do except making sure all previously * submitted jobs are done before we signal the @@ -2914,17 +3082,23 @@ queue_run_job(struct drm_sched_job *sched_job) queue->fence_ctx.id, atomic64_inc_return(&queue->fence_ctx.seqno));
- memcpy(queue->ringbuf->kmap + ringbuf_insert, - call_instrs, sizeof(call_instrs)); + job->profiling.slot = queue->profiling.seqno++; + if (queue->profiling.seqno == queue->profiling.slot_count) + queue->profiling.seqno = 0; + + job->ringbuf.start = queue->iface.input->insert; + + get_job_cs_params(job, &cs_params); + prepare_job_instrs(&cs_params, &instrs); + copy_instrs_to_ringbuf(queue, job, &instrs); + + job->ringbuf.end = job->ringbuf.start + (instrs.count * sizeof(u64));
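queue_run_job() now records the ring-buffer insert point, builds the instruction list for this job's profiling mask, and copies it in with copy_instrs_to_ringbuf(). Because the ring size is a power of two, masking the insert pointer with (size - 1) yields the write offset, and a block that would run past the end is split into two memcpy() calls, the tail wrapping to offset 0. That wrap-around copy in isolation (buffer names are illustrative):

#include <linux/minmax.h>
#include <linux/string.h>
#include <linux/types.h>

/* Copy 'count' u64 instructions into a power-of-two sized ring buffer,
 * splitting the copy when it would cross the end of the buffer. */
static void example_copy_to_ring(void *ring, u64 ring_size, u64 insert,
				 const u64 *instrs, u32 count)
{
	u64 start = insert & (ring_size - 1);	/* ring_size must be a power of two */
	u64 size = (u64)count * sizeof(u64);
	u64 written = min(ring_size - start, size);

	memcpy(ring + start, instrs, written);
	if (written < size)	/* wrapped: finish at the start of the buffer */
		memcpy(ring, (const u8 *)instrs + written, size - written);
}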
panthor_job_get(&job->base); spin_lock(&queue->fence_ctx.lock); list_add_tail(&job->node, &queue->fence_ctx.in_flight_jobs); spin_unlock(&queue->fence_ctx.lock);
- job->ringbuf.start = queue->iface.input->insert; - job->ringbuf.end = job->ringbuf.start + sizeof(call_instrs); - /* Make sure the ring buffer is updated before the INSERT * register. */ @@ -3017,6 +3191,33 @@ static const struct drm_sched_backend_ops panthor_queue_sched_ops = { .free_job = queue_free_job, };
+static u32 calc_profiling_ringbuf_num_slots(struct panthor_device *ptdev, + u32 cs_ringbuf_size) +{ + u32 min_profiled_job_instrs = U32_MAX; + u32 last_flag = fls(PANTHOR_DEVICE_PROFILING_ALL); + + /* + * We want to calculate the minimum size of a profiled job's CS, + * because since they need additional instructions for the sampling + * of performance metrics, they might take up further slots in + * the queue's ringbuffer. This means we might not need as many job + * slots for keeping track of their profiling information. What we + * need is the maximum number of slots we should allocate to this end, + * which matches the maximum number of profiled jobs we can place + * simultaneously in the queue's ring buffer. + * That has to be calculated separately for every single job profiling + * flag, but not in the case job profiling is disabled, since unprofiled + * jobs don't need to keep track of this at all. + */ + for (u32 i = 0; i < last_flag; i++) { + min_profiled_job_instrs = + min(min_profiled_job_instrs, calc_job_credits(BIT(i))); + } + + return DIV_ROUND_UP(cs_ringbuf_size, min_profiled_job_instrs * sizeof(u64)); +} + static struct panthor_queue * group_create_queue(struct panthor_group *group, const struct drm_panthor_queue_create *args) @@ -3070,9 +3271,35 @@ group_create_queue(struct panthor_group *group, goto err_free_queue; }
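calc_profiling_ringbuf_num_slots() sizes the profiling-slot array from the smallest instruction footprint a profiled job can have: the smaller a profiled job's CS, the more such jobs can be resident in the ring buffer at once, so that minimum bounds the slot count. A simplified sketch of the computation, with a hypothetical per-mask instruction-count helper standing in for calc_job_credits():

#include <linux/bits.h>
#include <linux/kernel.h>	/* DIV_ROUND_UP(), min() */
#include <linux/limits.h>
#include <linux/types.h>

/* Hypothetical: number of CS instructions a job needs for a given profiling mask. */
u32 example_job_instr_count(u32 profile_mask);

static u32 example_profiling_slot_count(u32 ringbuf_size, u32 num_profile_flags)
{
	u32 min_instrs = U32_MAX;
	u32 i;

	/* The smallest profiled job is the worst case for slot demand. */
	for (i = 0; i < num_profile_flags; i++)
		min_instrs = min(min_instrs, example_job_instr_count(BIT(i)));

	return DIV_ROUND_UP(ringbuf_size, min_instrs * (u32)sizeof(u64));
}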
+ queue->profiling.slot_count = + calc_profiling_ringbuf_num_slots(group->ptdev, args->ringbuf_size); + + queue->profiling.slots = + panthor_kernel_bo_create(group->ptdev, group->vm, + queue->profiling.slot_count * + sizeof(struct panthor_job_profiling_data), + DRM_PANTHOR_BO_NO_MMAP, + DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC | + DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED, + PANTHOR_VM_KERNEL_AUTO_VA); + + if (IS_ERR(queue->profiling.slots)) { + ret = PTR_ERR(queue->profiling.slots); + goto err_free_queue; + } + + ret = panthor_kernel_bo_vmap(queue->profiling.slots); + if (ret) + goto err_free_queue; + + /* + * Credit limit argument tells us the total number of instructions + * across all CS slots in the ringbuffer, with some jobs requiring + * twice as many as others, depending on their profiling status. + */ ret = drm_sched_init(&queue->scheduler, &panthor_queue_sched_ops, group->ptdev->scheduler->wq, 1, - args->ringbuf_size / (NUM_INSTRS_PER_SLOT * sizeof(u64)), + args->ringbuf_size / sizeof(u64), 0, msecs_to_jiffies(JOB_TIMEOUT_MS), group->ptdev->reset.wq, NULL, "panthor-queue", group->ptdev->base.dev); @@ -3380,6 +3607,7 @@ panthor_job_create(struct panthor_file *pfile, { struct panthor_group_pool *gpool = pfile->groups; struct panthor_job *job; + u32 credits; int ret;
if (qsubmit->pad) @@ -3438,9 +3666,16 @@ panthor_job_create(struct panthor_file *pfile, } }
+ job->profiling.mask = pfile->ptdev->profile_mask; + credits = calc_job_credits(job->profiling.mask); + if (credits == 0) { + ret = -EINVAL; + goto err_put_job; + } + ret = drm_sched_job_init(&job->base, &job->group->queues[job->queue_idx]->entity, - 1, job->group); + credits, job->group); if (ret) goto err_put_job;
diff --git a/drivers/gpu/drm/radeon/radeon_audio.c b/drivers/gpu/drm/radeon/radeon_audio.c index 47aa06a9a942..5b69cc8011b4 100644 --- a/drivers/gpu/drm/radeon/radeon_audio.c +++ b/drivers/gpu/drm/radeon/radeon_audio.c @@ -760,16 +760,20 @@ static int radeon_audio_component_get_eld(struct device *kdev, int port, if (!rdev->audio.enabled || !rdev->mode_info.mode_config_initialized) return 0;
- list_for_each_entry(encoder, &rdev_to_drm(rdev)->mode_config.encoder_list, head) { + list_for_each_entry(connector, &dev->mode_config.connector_list, head) { + const struct drm_connector_helper_funcs *connector_funcs = + connector->helper_private; + encoder = connector_funcs->best_encoder(connector); + + if (!encoder) + continue; + if (!radeon_encoder_is_digital(encoder)) continue; radeon_encoder = to_radeon_encoder(encoder); dig = radeon_encoder->enc_priv; if (!dig->pin || dig->pin->id != port) continue; - connector = radeon_get_connector_for_encoder(encoder); - if (!connector) - continue; *enabled = true; ret = drm_eld_size(connector->eld); memcpy(buf, connector->eld, min(max_bytes, ret)); diff --git a/drivers/gpu/drm/v3d/v3d_drv.h b/drivers/gpu/drm/v3d/v3d_drv.h index cf4b23369dc4..75b4725d49c7 100644 --- a/drivers/gpu/drm/v3d/v3d_drv.h +++ b/drivers/gpu/drm/v3d/v3d_drv.h @@ -553,6 +553,7 @@ void v3d_irq_disable(struct v3d_dev *v3d); void v3d_irq_reset(struct v3d_dev *v3d);
/* v3d_mmu.c */ +int v3d_mmu_flush_all(struct v3d_dev *v3d); int v3d_mmu_set_page_table(struct v3d_dev *v3d); void v3d_mmu_insert_ptes(struct v3d_bo *bo); void v3d_mmu_remove_ptes(struct v3d_bo *bo); diff --git a/drivers/gpu/drm/v3d/v3d_irq.c b/drivers/gpu/drm/v3d/v3d_irq.c index d469bda52c1a..20bf33702c3c 100644 --- a/drivers/gpu/drm/v3d/v3d_irq.c +++ b/drivers/gpu/drm/v3d/v3d_irq.c @@ -70,6 +70,8 @@ v3d_overflow_mem_work(struct work_struct *work) list_add_tail(&bo->unref_head, &v3d->bin_job->render->unref_list); spin_unlock_irqrestore(&v3d->job_lock, irqflags);
+ v3d_mmu_flush_all(v3d); + V3D_CORE_WRITE(0, V3D_PTB_BPOA, bo->node.start << V3D_MMU_PAGE_SHIFT); V3D_CORE_WRITE(0, V3D_PTB_BPOS, obj->size);
diff --git a/drivers/gpu/drm/v3d/v3d_mmu.c b/drivers/gpu/drm/v3d/v3d_mmu.c index 14f3af40d6f6..5bb7821c0243 100644 --- a/drivers/gpu/drm/v3d/v3d_mmu.c +++ b/drivers/gpu/drm/v3d/v3d_mmu.c @@ -28,36 +28,27 @@ #define V3D_PTE_WRITEABLE BIT(29) #define V3D_PTE_VALID BIT(28)
-static int v3d_mmu_flush_all(struct v3d_dev *v3d) +int v3d_mmu_flush_all(struct v3d_dev *v3d) { int ret;
- /* Make sure that another flush isn't already running when we - * start this one. - */ - ret = wait_for(!(V3D_READ(V3D_MMU_CTL) & - V3D_MMU_CTL_TLB_CLEARING), 100); - if (ret) - dev_err(v3d->drm.dev, "TLB clear wait idle pre-wait failed\n"); - - V3D_WRITE(V3D_MMU_CTL, V3D_READ(V3D_MMU_CTL) | - V3D_MMU_CTL_TLB_CLEAR); - - V3D_WRITE(V3D_MMUC_CONTROL, - V3D_MMUC_CONTROL_FLUSH | + V3D_WRITE(V3D_MMUC_CONTROL, V3D_MMUC_CONTROL_FLUSH | V3D_MMUC_CONTROL_ENABLE);
- ret = wait_for(!(V3D_READ(V3D_MMU_CTL) & - V3D_MMU_CTL_TLB_CLEARING), 100); + ret = wait_for(!(V3D_READ(V3D_MMUC_CONTROL) & + V3D_MMUC_CONTROL_FLUSHING), 100); if (ret) { - dev_err(v3d->drm.dev, "TLB clear wait idle failed\n"); + dev_err(v3d->drm.dev, "MMUC flush wait idle failed\n"); return ret; }
- ret = wait_for(!(V3D_READ(V3D_MMUC_CONTROL) & - V3D_MMUC_CONTROL_FLUSHING), 100); + V3D_WRITE(V3D_MMU_CTL, V3D_READ(V3D_MMU_CTL) | + V3D_MMU_CTL_TLB_CLEAR); + + ret = wait_for(!(V3D_READ(V3D_MMU_CTL) & + V3D_MMU_CTL_TLB_CLEARING), 100); if (ret) - dev_err(v3d->drm.dev, "MMUC flush wait idle failed\n"); + dev_err(v3d->drm.dev, "MMU TLB clear wait idle failed\n");
return ret; } diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c index 08d2a2739582..4f935f1d50a9 100644 --- a/drivers/gpu/drm/v3d/v3d_sched.c +++ b/drivers/gpu/drm/v3d/v3d_sched.c @@ -135,8 +135,31 @@ v3d_job_start_stats(struct v3d_job *job, enum v3d_queue queue) struct v3d_stats *global_stats = &v3d->queue[queue].stats; struct v3d_stats *local_stats = &file->stats[queue]; u64 now = local_clock(); - - preempt_disable(); + unsigned long flags; + + /* + * We only need to disable local interrupts to appease lockdep who + * otherwise would think v3d_job_start_stats vs v3d_stats_update has an + * unsafe in-irq vs no-irq-off usage problem. This is a false positive + * because all the locks are per queue and stats type, and all jobs are + * completely one at a time serialised. More specifically: + * + * 1. Locks for GPU queues are updated from interrupt handlers under a + * spin lock and started here with preemption disabled. + * + * 2. Locks for CPU queues are updated from the worker with preemption + * disabled and equally started here with preemption disabled. + * + * Therefore both are consistent. + * + * 3. Because next job can only be queued after the previous one has + * been signaled, and locks are per queue, there is also no scope for + * the start part to race with the update part. + */ + if (IS_ENABLED(CONFIG_LOCKDEP)) + local_irq_save(flags); + else + preempt_disable();
write_seqcount_begin(&local_stats->lock); local_stats->start_ns = now; @@ -146,7 +169,10 @@ v3d_job_start_stats(struct v3d_job *job, enum v3d_queue queue) global_stats->start_ns = now; write_seqcount_end(&global_stats->lock);
- preempt_enable(); + if (IS_ENABLED(CONFIG_LOCKDEP)) + local_irq_restore(flags); + else + preempt_enable(); }
static void @@ -167,11 +193,21 @@ v3d_job_update_stats(struct v3d_job *job, enum v3d_queue queue) struct v3d_stats *global_stats = &v3d->queue[queue].stats; struct v3d_stats *local_stats = &file->stats[queue]; u64 now = local_clock(); + unsigned long flags; + + /* See comment in v3d_job_start_stats() */ + if (IS_ENABLED(CONFIG_LOCKDEP)) + local_irq_save(flags); + else + preempt_disable();
- preempt_disable(); v3d_stats_update(local_stats, now); v3d_stats_update(global_stats, now); - preempt_enable(); + + if (IS_ENABLED(CONFIG_LOCKDEP)) + local_irq_restore(flags); + else + preempt_enable(); }
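The v3d stats helpers keep preemption disabling as the real protection (updates are strictly serialised per queue), but when lockdep is built in they disable local interrupts instead, purely so the seqcount usage does not look like an in-irq versus no-irq-off inversion. The conditional in isolation (sketch only):

#include <linux/irqflags.h>
#include <linux/kconfig.h>
#include <linux/preempt.h>

static void example_stats_section(void)
{
	unsigned long flags;

	/* Lockdep builds: disable IRQs so the seqcount appears irq-safe.
	 * Everyone else: disabling preemption is sufficient. */
	if (IS_ENABLED(CONFIG_LOCKDEP))
		local_irq_save(flags);
	else
		preempt_disable();

	/* ... write_seqcount_begin()/write_seqcount_end() protected updates ... */

	if (IS_ENABLED(CONFIG_LOCKDEP))
		local_irq_restore(flags);
	else
		preempt_enable();
}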
static struct dma_fence *v3d_bin_job_run(struct drm_sched_job *sched_job) diff --git a/drivers/gpu/drm/vc4/tests/vc4_mock.c b/drivers/gpu/drm/vc4/tests/vc4_mock.c index 0731a7d85d7a..922849dd4b47 100644 --- a/drivers/gpu/drm/vc4/tests/vc4_mock.c +++ b/drivers/gpu/drm/vc4/tests/vc4_mock.c @@ -155,11 +155,11 @@ KUNIT_DEFINE_ACTION_WRAPPER(kunit_action_drm_dev_unregister, drm_dev_unregister, struct drm_device *);
-static struct vc4_dev *__mock_device(struct kunit *test, bool is_vc5) +static struct vc4_dev *__mock_device(struct kunit *test, enum vc4_gen gen) { struct drm_device *drm; - const struct drm_driver *drv = is_vc5 ? &vc5_drm_driver : &vc4_drm_driver; - const struct vc4_mock_desc *desc = is_vc5 ? &vc5_mock : &vc4_mock; + const struct drm_driver *drv = (gen == VC4_GEN_5) ? &vc5_drm_driver : &vc4_drm_driver; + const struct vc4_mock_desc *desc = (gen == VC4_GEN_5) ? &vc5_mock : &vc4_mock; struct vc4_dev *vc4; struct device *dev; int ret; @@ -173,7 +173,7 @@ static struct vc4_dev *__mock_device(struct kunit *test, bool is_vc5) KUNIT_ASSERT_NOT_ERR_OR_NULL(test, vc4);
vc4->dev = dev; - vc4->is_vc5 = is_vc5; + vc4->gen = gen;
vc4->hvs = __vc4_hvs_alloc(vc4, NULL); KUNIT_ASSERT_NOT_ERR_OR_NULL(test, vc4->hvs); @@ -198,10 +198,10 @@ static struct vc4_dev *__mock_device(struct kunit *test, bool is_vc5)
struct vc4_dev *vc4_mock_device(struct kunit *test) { - return __mock_device(test, false); + return __mock_device(test, VC4_GEN_4); }
struct vc4_dev *vc5_mock_device(struct kunit *test) { - return __mock_device(test, true); + return __mock_device(test, VC4_GEN_5); } diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c index 3f72be7490d5..2a85d08b1985 100644 --- a/drivers/gpu/drm/vc4/vc4_bo.c +++ b/drivers/gpu/drm/vc4/vc4_bo.c @@ -251,7 +251,7 @@ void vc4_bo_add_to_purgeable_pool(struct vc4_bo *bo) { struct vc4_dev *vc4 = to_vc4_dev(bo->base.base.dev);
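Across the vc4 patches the driver-wide is_vc5 boolean becomes an enum vc4_gen, so generation checks read as equality tests (gen == VC4_GEN_5) or ordered comparisons (the HDMI hunk below uses gen >= VC4_GEN_5, and vc4_kms uses gen > VC4_GEN_4). The conversion reduces to this shape, with illustrative names:

#include <linux/types.h>

enum example_gen {
	EXAMPLE_GEN_4,
	EXAMPLE_GEN_5,
};

struct example_dev {
	enum example_gen gen;	/* replaces: bool is_vc5 */
};

static bool example_uses_vc5_paths(const struct example_dev *dev)
{
	/* Ordered comparison keeps working if later generations are added. */
	return dev->gen >= EXAMPLE_GEN_5;
}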
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return;
mutex_lock(&vc4->purgeable.lock); @@ -265,7 +265,7 @@ static void vc4_bo_remove_from_purgeable_pool_locked(struct vc4_bo *bo) { struct vc4_dev *vc4 = to_vc4_dev(bo->base.base.dev);
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return;
/* list_del_init() is used here because the caller might release @@ -396,7 +396,7 @@ struct drm_gem_object *vc4_create_object(struct drm_device *dev, size_t size) struct vc4_dev *vc4 = to_vc4_dev(dev); struct vc4_bo *bo;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return ERR_PTR(-ENODEV);
bo = kzalloc(sizeof(*bo), GFP_KERNEL); @@ -427,7 +427,7 @@ struct vc4_bo *vc4_bo_create(struct drm_device *dev, size_t unaligned_size, struct drm_gem_dma_object *dma_obj; struct vc4_bo *bo;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return ERR_PTR(-ENODEV);
if (size == 0) @@ -496,7 +496,7 @@ int vc4_bo_dumb_create(struct drm_file *file_priv, struct vc4_bo *bo = NULL; int ret;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return -ENODEV;
ret = vc4_dumb_fixup_args(args); @@ -622,7 +622,7 @@ int vc4_bo_inc_usecnt(struct vc4_bo *bo) struct vc4_dev *vc4 = to_vc4_dev(bo->base.base.dev); int ret;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return -ENODEV;
/* Fast path: if the BO is already retained by someone, no need to @@ -661,7 +661,7 @@ void vc4_bo_dec_usecnt(struct vc4_bo *bo) { struct vc4_dev *vc4 = to_vc4_dev(bo->base.base.dev);
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return;
/* Fast path: if the BO is still retained by someone, no need to test @@ -783,7 +783,7 @@ int vc4_create_bo_ioctl(struct drm_device *dev, void *data, struct vc4_bo *bo = NULL; int ret;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return -ENODEV;
ret = vc4_grab_bin_bo(vc4, vc4file); @@ -813,7 +813,7 @@ int vc4_mmap_bo_ioctl(struct drm_device *dev, void *data, struct drm_vc4_mmap_bo *args = data; struct drm_gem_object *gem_obj;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return -ENODEV;
gem_obj = drm_gem_object_lookup(file_priv, args->handle); @@ -839,7 +839,7 @@ vc4_create_shader_bo_ioctl(struct drm_device *dev, void *data, struct vc4_bo *bo = NULL; int ret;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return -ENODEV;
if (args->size == 0) @@ -918,7 +918,7 @@ int vc4_set_tiling_ioctl(struct drm_device *dev, void *data, struct vc4_bo *bo; bool t_format;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return -ENODEV;
if (args->flags != 0) @@ -964,7 +964,7 @@ int vc4_get_tiling_ioctl(struct drm_device *dev, void *data, struct drm_gem_object *gem_obj; struct vc4_bo *bo;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return -ENODEV;
if (args->flags != 0 || args->modifier != 0) @@ -1007,7 +1007,7 @@ int vc4_bo_cache_init(struct drm_device *dev) int ret; int i;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return -ENODEV;
/* Create the initial set of BO labels that the kernel will @@ -1071,7 +1071,7 @@ int vc4_label_bo_ioctl(struct drm_device *dev, void *data, struct drm_gem_object *gem_obj; int ret = 0, label;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return -ENODEV;
if (!args->len) diff --git a/drivers/gpu/drm/vc4/vc4_crtc.c b/drivers/gpu/drm/vc4/vc4_crtc.c index 8b5a7e5eb146..26a7cf7f6465 100644 --- a/drivers/gpu/drm/vc4/vc4_crtc.c +++ b/drivers/gpu/drm/vc4/vc4_crtc.c @@ -263,7 +263,7 @@ static u32 vc4_get_fifo_full_level(struct vc4_crtc *vc4_crtc, u32 format) * Removing 1 from the FIFO full level however * seems to completely remove that issue. */ - if (!vc4->is_vc5) + if (vc4->gen == VC4_GEN_4) return fifo_len_bytes - 3 * HVS_FIFO_LATENCY_PIX - 1;
return fifo_len_bytes - 3 * HVS_FIFO_LATENCY_PIX; @@ -428,7 +428,7 @@ static void vc4_crtc_config_pv(struct drm_crtc *crtc, struct drm_encoder *encode if (is_dsi) CRTC_WRITE(PV_HACT_ACT, mode->hdisplay * pixel_rep);
- if (vc4->is_vc5) + if (vc4->gen == VC4_GEN_5) CRTC_WRITE(PV_MUX_CFG, VC4_SET_FIELD(PV_MUX_CFG_RGB_PIXEL_MUX_MODE_NO_SWAP, PV_MUX_CFG_RGB_PIXEL_MUX_MODE)); @@ -913,7 +913,7 @@ static int vc4_async_set_fence_cb(struct drm_device *dev, struct dma_fence *fence; int ret;
- if (!vc4->is_vc5) { + if (vc4->gen == VC4_GEN_4) { struct vc4_bo *bo = to_vc4_bo(&dma_bo->base);
return vc4_queue_seqno_cb(dev, &flip_state->cb.seqno, bo->seqno, @@ -1000,7 +1000,7 @@ static int vc4_async_page_flip(struct drm_crtc *crtc, struct vc4_bo *bo = to_vc4_bo(&dma_bo->base); int ret;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return -ENODEV;
/* @@ -1043,7 +1043,7 @@ int vc4_page_flip(struct drm_crtc *crtc, struct drm_device *dev = crtc->dev; struct vc4_dev *vc4 = to_vc4_dev(dev);
- if (vc4->is_vc5) + if (vc4->gen == VC4_GEN_5) return vc5_async_page_flip(crtc, fb, event, flags); else return vc4_async_page_flip(crtc, fb, event, flags); @@ -1338,9 +1338,8 @@ int __vc4_crtc_init(struct drm_device *drm,
drm_crtc_helper_add(crtc, crtc_helper_funcs);
- if (!vc4->is_vc5) { + if (vc4->gen == VC4_GEN_4) { drm_mode_crtc_set_gamma_size(crtc, ARRAY_SIZE(vc4_crtc->lut_r)); - drm_crtc_enable_color_mgmt(crtc, 0, false, crtc->gamma_size);
/* We support CTM, but only for one CRTC at a time. It's therefore diff --git a/drivers/gpu/drm/vc4/vc4_drv.c b/drivers/gpu/drm/vc4/vc4_drv.c index c133e96b8aca..550324819f37 100644 --- a/drivers/gpu/drm/vc4/vc4_drv.c +++ b/drivers/gpu/drm/vc4/vc4_drv.c @@ -98,7 +98,7 @@ static int vc4_get_param_ioctl(struct drm_device *dev, void *data, if (args->pad != 0) return -EINVAL;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return -ENODEV;
if (!vc4->v3d) @@ -147,7 +147,7 @@ static int vc4_open(struct drm_device *dev, struct drm_file *file) struct vc4_dev *vc4 = to_vc4_dev(dev); struct vc4_file *vc4file;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return -ENODEV;
vc4file = kzalloc(sizeof(*vc4file), GFP_KERNEL); @@ -165,7 +165,7 @@ static void vc4_close(struct drm_device *dev, struct drm_file *file) struct vc4_dev *vc4 = to_vc4_dev(dev); struct vc4_file *vc4file = file->driver_priv;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return;
if (vc4file->bin_bo_used) @@ -291,13 +291,17 @@ static int vc4_drm_bind(struct device *dev) struct vc4_dev *vc4; struct device_node *node; struct drm_crtc *crtc; - bool is_vc5; + enum vc4_gen gen; int ret = 0;
dev->coherent_dma_mask = DMA_BIT_MASK(32);
- is_vc5 = of_device_is_compatible(dev->of_node, "brcm,bcm2711-vc5"); - if (is_vc5) + if (of_device_is_compatible(dev->of_node, "brcm,bcm2711-vc5")) + gen = VC4_GEN_5; + else + gen = VC4_GEN_4; + + if (gen == VC4_GEN_5) driver = &vc5_drm_driver; else driver = &vc4_drm_driver; @@ -315,13 +319,13 @@ static int vc4_drm_bind(struct device *dev) vc4 = devm_drm_dev_alloc(dev, driver, struct vc4_dev, base); if (IS_ERR(vc4)) return PTR_ERR(vc4); - vc4->is_vc5 = is_vc5; + vc4->gen = gen; vc4->dev = dev;
drm = &vc4->base; platform_set_drvdata(pdev, drm);
- if (!is_vc5) { + if (gen == VC4_GEN_4) { ret = drmm_mutex_init(drm, &vc4->bin_bo_lock); if (ret) goto err; @@ -335,7 +339,7 @@ static int vc4_drm_bind(struct device *dev) if (ret) goto err;
- if (!is_vc5) { + if (gen == VC4_GEN_4) { ret = vc4_gem_init(drm); if (ret) goto err; diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h index 08e29fa82563..dd452e6a1143 100644 --- a/drivers/gpu/drm/vc4/vc4_drv.h +++ b/drivers/gpu/drm/vc4/vc4_drv.h @@ -80,11 +80,16 @@ struct vc4_perfmon { u64 counters[] __counted_by(ncounters); };
+enum vc4_gen { + VC4_GEN_4, + VC4_GEN_5, +}; + struct vc4_dev { struct drm_device base; struct device *dev;
- bool is_vc5; + enum vc4_gen gen;
unsigned int irq;
@@ -315,6 +320,7 @@ struct vc4_hvs { struct platform_device *pdev; void __iomem *regs; u32 __iomem *dlist; + unsigned int dlist_mem_size;
struct clk *core_clk;
diff --git a/drivers/gpu/drm/vc4/vc4_gem.c b/drivers/gpu/drm/vc4/vc4_gem.c index 24fb1b57e1dd..be9c0b72ebe8 100644 --- a/drivers/gpu/drm/vc4/vc4_gem.c +++ b/drivers/gpu/drm/vc4/vc4_gem.c @@ -76,7 +76,7 @@ vc4_get_hang_state_ioctl(struct drm_device *dev, void *data, u32 i; int ret = 0;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return -ENODEV;
if (!vc4->v3d) { @@ -389,7 +389,7 @@ vc4_wait_for_seqno(struct drm_device *dev, uint64_t seqno, uint64_t timeout_ns, unsigned long timeout_expire; DEFINE_WAIT(wait);
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return -ENODEV;
if (vc4->finished_seqno >= seqno) @@ -474,7 +474,7 @@ vc4_submit_next_bin_job(struct drm_device *dev) struct vc4_dev *vc4 = to_vc4_dev(dev); struct vc4_exec_info *exec;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return;
again: @@ -522,7 +522,7 @@ vc4_submit_next_render_job(struct drm_device *dev) if (!exec) return;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return;
/* A previous RCL may have written to one of our textures, and @@ -543,7 +543,7 @@ vc4_move_job_to_render(struct drm_device *dev, struct vc4_exec_info *exec) struct vc4_dev *vc4 = to_vc4_dev(dev); bool was_empty = list_empty(&vc4->render_job_list);
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return;
list_move_tail(&exec->head, &vc4->render_job_list); @@ -970,7 +970,7 @@ vc4_job_handle_completed(struct vc4_dev *vc4) unsigned long irqflags; struct vc4_seqno_cb *cb, *cb_temp;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return;
spin_lock_irqsave(&vc4->job_lock, irqflags); @@ -1009,7 +1009,7 @@ int vc4_queue_seqno_cb(struct drm_device *dev, struct vc4_dev *vc4 = to_vc4_dev(dev); unsigned long irqflags;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return -ENODEV;
cb->func = func; @@ -1065,7 +1065,7 @@ vc4_wait_seqno_ioctl(struct drm_device *dev, void *data, struct vc4_dev *vc4 = to_vc4_dev(dev); struct drm_vc4_wait_seqno *args = data;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return -ENODEV;
return vc4_wait_for_seqno_ioctl_helper(dev, args->seqno, @@ -1082,7 +1082,7 @@ vc4_wait_bo_ioctl(struct drm_device *dev, void *data, struct drm_gem_object *gem_obj; struct vc4_bo *bo;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return -ENODEV;
if (args->pad != 0) @@ -1131,7 +1131,7 @@ vc4_submit_cl_ioctl(struct drm_device *dev, void *data, args->shader_rec_size, args->bo_handle_count);
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return -ENODEV;
if (!vc4->v3d) { @@ -1267,7 +1267,7 @@ int vc4_gem_init(struct drm_device *dev) struct vc4_dev *vc4 = to_vc4_dev(dev); int ret;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return -ENODEV;
vc4->dma_fence_context = dma_fence_context_alloc(1); @@ -1326,7 +1326,7 @@ int vc4_gem_madvise_ioctl(struct drm_device *dev, void *data, struct vc4_bo *bo; int ret;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return -ENODEV;
switch (args->madv) { diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c index 6611ab7c26a6..2d7d3e90f3be 100644 --- a/drivers/gpu/drm/vc4/vc4_hdmi.c +++ b/drivers/gpu/drm/vc4/vc4_hdmi.c @@ -147,6 +147,8 @@ static int vc4_hdmi_debugfs_regs(struct seq_file *m, void *unused) if (!drm_dev_enter(drm, &idx)) return -ENODEV;
+ WARN_ON(pm_runtime_resume_and_get(&vc4_hdmi->pdev->dev)); + drm_print_regset32(&p, &vc4_hdmi->hdmi_regset); drm_print_regset32(&p, &vc4_hdmi->hd_regset); drm_print_regset32(&p, &vc4_hdmi->cec_regset); @@ -156,6 +158,8 @@ static int vc4_hdmi_debugfs_regs(struct seq_file *m, void *unused) drm_print_regset32(&p, &vc4_hdmi->ram_regset); drm_print_regset32(&p, &vc4_hdmi->rm_regset);
+ pm_runtime_put(&vc4_hdmi->pdev->dev); + drm_dev_exit(idx);
return 0; @@ -2047,6 +2051,7 @@ static int vc4_hdmi_audio_prepare(struct device *dev, void *data, struct vc4_hdmi *vc4_hdmi = dev_get_drvdata(dev); struct drm_device *drm = vc4_hdmi->connector.dev; struct drm_connector *connector = &vc4_hdmi->connector; + struct vc4_dev *vc4 = to_vc4_dev(drm); unsigned int sample_rate = params->sample_rate; unsigned int channels = params->channels; unsigned long flags; @@ -2104,11 +2109,18 @@ static int vc4_hdmi_audio_prepare(struct device *dev, void *data, VC4_HDMI_AUDIO_PACKET_CEA_MASK);
/* Set the MAI threshold */ - HDMI_WRITE(HDMI_MAI_THR, - VC4_SET_FIELD(0x08, VC4_HD_MAI_THR_PANICHIGH) | - VC4_SET_FIELD(0x08, VC4_HD_MAI_THR_PANICLOW) | - VC4_SET_FIELD(0x06, VC4_HD_MAI_THR_DREQHIGH) | - VC4_SET_FIELD(0x08, VC4_HD_MAI_THR_DREQLOW)); + if (vc4->gen >= VC4_GEN_5) + HDMI_WRITE(HDMI_MAI_THR, + VC4_SET_FIELD(0x10, VC4_HD_MAI_THR_PANICHIGH) | + VC4_SET_FIELD(0x10, VC4_HD_MAI_THR_PANICLOW) | + VC4_SET_FIELD(0x1c, VC4_HD_MAI_THR_DREQHIGH) | + VC4_SET_FIELD(0x1c, VC4_HD_MAI_THR_DREQLOW)); + else + HDMI_WRITE(HDMI_MAI_THR, + VC4_SET_FIELD(0x8, VC4_HD_MAI_THR_PANICHIGH) | + VC4_SET_FIELD(0x8, VC4_HD_MAI_THR_PANICLOW) | + VC4_SET_FIELD(0x6, VC4_HD_MAI_THR_DREQHIGH) | + VC4_SET_FIELD(0x8, VC4_HD_MAI_THR_DREQLOW));
HDMI_WRITE(HDMI_MAI_CONFIG, VC4_HDMI_MAI_CONFIG_BIT_REVERSE | diff --git a/drivers/gpu/drm/vc4/vc4_hvs.c b/drivers/gpu/drm/vc4/vc4_hvs.c index 2a835a5cff9d..863539e1f7e0 100644 --- a/drivers/gpu/drm/vc4/vc4_hvs.c +++ b/drivers/gpu/drm/vc4/vc4_hvs.c @@ -110,7 +110,8 @@ static int vc4_hvs_debugfs_dlist(struct seq_file *m, void *data) struct vc4_dev *vc4 = to_vc4_dev(dev); struct vc4_hvs *hvs = vc4->hvs; struct drm_printer p = drm_seq_file_printer(m); - unsigned int next_entry_start = 0; + unsigned int dlist_mem_size = hvs->dlist_mem_size; + unsigned int next_entry_start; unsigned int i, j; u32 dlist_word, dispstat;
@@ -124,8 +125,9 @@ static int vc4_hvs_debugfs_dlist(struct seq_file *m, void *data) }
drm_printf(&p, "HVS chan %u:\n", i); + next_entry_start = 0;
- for (j = HVS_READ(SCALER_DISPLISTX(i)); j < 256; j++) { + for (j = HVS_READ(SCALER_DISPLISTX(i)); j < dlist_mem_size; j++) { dlist_word = readl((u32 __iomem *)vc4->hvs->dlist + j); drm_printf(&p, "dlist: %02d: 0x%08x\n", j, dlist_word); @@ -222,6 +224,9 @@ static void vc4_hvs_lut_load(struct vc4_hvs *hvs, if (!drm_dev_enter(drm, &idx)) return;
+ if (hvs->vc4->gen != VC4_GEN_4) + goto exit; + /* The LUT memory is laid out with each HVS channel in order, * each of which takes 256 writes for R, 256 for G, then 256 * for B. @@ -237,6 +242,7 @@ static void vc4_hvs_lut_load(struct vc4_hvs *hvs, for (i = 0; i < crtc->gamma_size; i++) HVS_WRITE(SCALER_GAMDATA, vc4_crtc->lut_b[i]);
+exit: drm_dev_exit(idx); }
@@ -291,7 +297,7 @@ int vc4_hvs_get_fifo_from_output(struct vc4_hvs *hvs, unsigned int output) u32 reg; int ret;
- if (!vc4->is_vc5) + if (vc4->gen == VC4_GEN_4) return output;
/* @@ -372,7 +378,7 @@ static int vc4_hvs_init_channel(struct vc4_hvs *hvs, struct drm_crtc *crtc, dispctrl = SCALER_DISPCTRLX_ENABLE; dispbkgndx = HVS_READ(SCALER_DISPBKGNDX(chan));
- if (!vc4->is_vc5) { + if (vc4->gen == VC4_GEN_4) { dispctrl |= VC4_SET_FIELD(mode->hdisplay, SCALER_DISPCTRLX_WIDTH) | VC4_SET_FIELD(mode->vdisplay, @@ -394,7 +400,7 @@ static int vc4_hvs_init_channel(struct vc4_hvs *hvs, struct drm_crtc *crtc, dispbkgndx &= ~SCALER_DISPBKGND_INTERLACE;
HVS_WRITE(SCALER_DISPBKGNDX(chan), dispbkgndx | - ((!vc4->is_vc5) ? SCALER_DISPBKGND_GAMMA : 0) | + ((vc4->gen == VC4_GEN_4) ? SCALER_DISPBKGND_GAMMA : 0) | (interlace ? SCALER_DISPBKGND_INTERLACE : 0));
/* Reload the LUT, since the SRAMs would have been disabled if @@ -415,13 +421,11 @@ void vc4_hvs_stop_channel(struct vc4_hvs *hvs, unsigned int chan) if (!drm_dev_enter(drm, &idx)) return;
- if (HVS_READ(SCALER_DISPCTRLX(chan)) & SCALER_DISPCTRLX_ENABLE) + if (!(HVS_READ(SCALER_DISPCTRLX(chan)) & SCALER_DISPCTRLX_ENABLE)) goto out;
- HVS_WRITE(SCALER_DISPCTRLX(chan), - HVS_READ(SCALER_DISPCTRLX(chan)) | SCALER_DISPCTRLX_RESET); - HVS_WRITE(SCALER_DISPCTRLX(chan), - HVS_READ(SCALER_DISPCTRLX(chan)) & ~SCALER_DISPCTRLX_ENABLE); + HVS_WRITE(SCALER_DISPCTRLX(chan), SCALER_DISPCTRLX_RESET); + HVS_WRITE(SCALER_DISPCTRLX(chan), 0);
/* Once we leave, the scaler should be disabled and its fifo empty. */ WARN_ON_ONCE(HVS_READ(SCALER_DISPCTRLX(chan)) & SCALER_DISPCTRLX_RESET); @@ -580,7 +584,7 @@ void vc4_hvs_atomic_flush(struct drm_crtc *crtc, }
if (vc4_state->assigned_channel == VC4_HVS_CHANNEL_DISABLED) - return; + goto exit;
if (debug_dump_regs) { DRM_INFO("CRTC %d HVS before:\n", drm_crtc_index(crtc)); @@ -663,12 +667,14 @@ void vc4_hvs_atomic_flush(struct drm_crtc *crtc, vc4_hvs_dump_state(hvs); }
+exit: drm_dev_exit(idx); }
void vc4_hvs_mask_underrun(struct vc4_hvs *hvs, int channel) { - struct drm_device *drm = &hvs->vc4->base; + struct vc4_dev *vc4 = hvs->vc4; + struct drm_device *drm = &vc4->base; u32 dispctrl; int idx;
@@ -676,8 +682,9 @@ void vc4_hvs_mask_underrun(struct vc4_hvs *hvs, int channel) return;
dispctrl = HVS_READ(SCALER_DISPCTRL); - dispctrl &= ~(hvs->vc4->is_vc5 ? SCALER5_DISPCTRL_DSPEISLUR(channel) : - SCALER_DISPCTRL_DSPEISLUR(channel)); + dispctrl &= ~((vc4->gen == VC4_GEN_5) ? + SCALER5_DISPCTRL_DSPEISLUR(channel) : + SCALER_DISPCTRL_DSPEISLUR(channel));
HVS_WRITE(SCALER_DISPCTRL, dispctrl);
@@ -686,7 +693,8 @@ void vc4_hvs_mask_underrun(struct vc4_hvs *hvs, int channel)
void vc4_hvs_unmask_underrun(struct vc4_hvs *hvs, int channel) { - struct drm_device *drm = &hvs->vc4->base; + struct vc4_dev *vc4 = hvs->vc4; + struct drm_device *drm = &vc4->base; u32 dispctrl; int idx;
@@ -694,8 +702,9 @@ void vc4_hvs_unmask_underrun(struct vc4_hvs *hvs, int channel) return;
dispctrl = HVS_READ(SCALER_DISPCTRL); - dispctrl |= (hvs->vc4->is_vc5 ? SCALER5_DISPCTRL_DSPEISLUR(channel) : - SCALER_DISPCTRL_DSPEISLUR(channel)); + dispctrl |= ((vc4->gen == VC4_GEN_5) ? + SCALER5_DISPCTRL_DSPEISLUR(channel) : + SCALER_DISPCTRL_DSPEISLUR(channel));
HVS_WRITE(SCALER_DISPSTAT, SCALER_DISPSTAT_EUFLOW(channel)); @@ -738,8 +747,10 @@ static irqreturn_t vc4_hvs_irq_handler(int irq, void *data) control = HVS_READ(SCALER_DISPCTRL);
for (channel = 0; channel < SCALER_CHANNELS_COUNT; channel++) { - dspeislur = vc4->is_vc5 ? SCALER5_DISPCTRL_DSPEISLUR(channel) : - SCALER_DISPCTRL_DSPEISLUR(channel); + dspeislur = (vc4->gen == VC4_GEN_5) ? + SCALER5_DISPCTRL_DSPEISLUR(channel) : + SCALER_DISPCTRL_DSPEISLUR(channel); + /* Interrupt masking is not always honored, so check it here. */ if (status & SCALER_DISPSTAT_EUFLOW(channel) && control & dspeislur) { @@ -767,7 +778,7 @@ int vc4_hvs_debugfs_init(struct drm_minor *minor) if (!vc4->hvs) return -ENODEV;
- if (!vc4->is_vc5) + if (vc4->gen == VC4_GEN_4) debugfs_create_bool("hvs_load_tracker", S_IRUGO | S_IWUSR, minor->debugfs_root, &vc4->load_tracker_enabled); @@ -800,16 +811,17 @@ struct vc4_hvs *__vc4_hvs_alloc(struct vc4_dev *vc4, struct platform_device *pde * our 16K), since we don't want to scramble the screen when * transitioning from the firmware's boot setup to runtime. */ + hvs->dlist_mem_size = (SCALER_DLIST_SIZE >> 2) - HVS_BOOTLOADER_DLIST_END; drm_mm_init(&hvs->dlist_mm, HVS_BOOTLOADER_DLIST_END, - (SCALER_DLIST_SIZE >> 2) - HVS_BOOTLOADER_DLIST_END); + hvs->dlist_mem_size);
/* Set up the HVS LBM memory manager. We could have some more * complicated data structure that allowed reuse of LBM areas * between planes when they don't overlap on the screen, but * for now we just allocate globally. */ - if (!vc4->is_vc5) + if (vc4->gen == VC4_GEN_4) /* 48k words of 2x12-bit pixels */ drm_mm_init(&hvs->lbm_mm, 0, 48 * 1024); else @@ -843,7 +855,7 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data) hvs->regset.regs = hvs_regs; hvs->regset.nregs = ARRAY_SIZE(hvs_regs);
- if (vc4->is_vc5) { + if (vc4->gen == VC4_GEN_5) { struct rpi_firmware *firmware; struct device_node *node; unsigned int max_rate; @@ -881,7 +893,7 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data) } }
- if (!vc4->is_vc5) + if (vc4->gen == VC4_GEN_4) hvs->dlist = hvs->regs + SCALER_DLIST_START; else hvs->dlist = hvs->regs + SCALER5_DLIST_START; @@ -922,7 +934,7 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data) SCALER_DISPCTRL_DISPEIRQ(1) | SCALER_DISPCTRL_DISPEIRQ(2);
- if (!vc4->is_vc5) + if (vc4->gen == VC4_GEN_4) dispctrl &= ~(SCALER_DISPCTRL_DMAEIRQ | SCALER_DISPCTRL_SLVWREIRQ | SCALER_DISPCTRL_SLVRDEIRQ | @@ -966,7 +978,7 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
/* Recompute Composite Output Buffer (COB) allocations for the displays */ - if (!vc4->is_vc5) { + if (vc4->gen == VC4_GEN_4) { /* The COB is 20736 pixels, or just over 10 lines at 2048 wide. * The bottom 2048 pixels are full 32bpp RGBA (intended for the * TXP composing RGBA to memory), whilst the remainder are only diff --git a/drivers/gpu/drm/vc4/vc4_irq.c b/drivers/gpu/drm/vc4/vc4_irq.c index ef93d8e22a35..968356d1b91d 100644 --- a/drivers/gpu/drm/vc4/vc4_irq.c +++ b/drivers/gpu/drm/vc4/vc4_irq.c @@ -263,7 +263,7 @@ vc4_irq_enable(struct drm_device *dev) { struct vc4_dev *vc4 = to_vc4_dev(dev);
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return;
if (!vc4->v3d) @@ -280,7 +280,7 @@ vc4_irq_disable(struct drm_device *dev) { struct vc4_dev *vc4 = to_vc4_dev(dev);
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return;
if (!vc4->v3d) @@ -303,7 +303,7 @@ int vc4_irq_install(struct drm_device *dev, int irq) struct vc4_dev *vc4 = to_vc4_dev(dev); int ret;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return -ENODEV;
if (irq == IRQ_NOTCONNECTED) @@ -324,7 +324,7 @@ void vc4_irq_uninstall(struct drm_device *dev) { struct vc4_dev *vc4 = to_vc4_dev(dev);
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return;
vc4_irq_disable(dev); @@ -337,7 +337,7 @@ void vc4_irq_reset(struct drm_device *dev) struct vc4_dev *vc4 = to_vc4_dev(dev); unsigned long irqflags;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return;
/* Acknowledge any stale IRQs. */ diff --git a/drivers/gpu/drm/vc4/vc4_kms.c b/drivers/gpu/drm/vc4/vc4_kms.c index 5495f2a94fa9..bddfcad10950 100644 --- a/drivers/gpu/drm/vc4/vc4_kms.c +++ b/drivers/gpu/drm/vc4/vc4_kms.c @@ -369,7 +369,7 @@ static void vc4_atomic_commit_tail(struct drm_atomic_state *state) old_hvs_state->fifo_state[channel].pending_commit = NULL; }
- if (vc4->is_vc5) { + if (vc4->gen == VC4_GEN_5) { unsigned long state_rate = max(old_hvs_state->core_clock_rate, new_hvs_state->core_clock_rate); unsigned long core_rate = clamp_t(unsigned long, state_rate, @@ -388,7 +388,7 @@ static void vc4_atomic_commit_tail(struct drm_atomic_state *state)
vc4_ctm_commit(vc4, state);
- if (vc4->is_vc5) + if (vc4->gen == VC4_GEN_5) vc5_hvs_pv_muxing_commit(vc4, state); else vc4_hvs_pv_muxing_commit(vc4, state); @@ -406,7 +406,7 @@ static void vc4_atomic_commit_tail(struct drm_atomic_state *state)
drm_atomic_helper_cleanup_planes(dev, state);
- if (vc4->is_vc5) { + if (vc4->gen == VC4_GEN_5) { unsigned long core_rate = min_t(unsigned long, hvs->max_core_rate, new_hvs_state->core_clock_rate); @@ -461,7 +461,7 @@ static struct drm_framebuffer *vc4_fb_create(struct drm_device *dev, struct vc4_dev *vc4 = to_vc4_dev(dev); struct drm_mode_fb_cmd2 mode_cmd_local;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return ERR_PTR(-ENODEV);
/* If the user didn't specify a modifier, use the @@ -1040,7 +1040,7 @@ int vc4_kms_load(struct drm_device *dev) * the BCM2711, but the load tracker computations are used for * the core clock rate calculation. */ - if (!vc4->is_vc5) { + if (vc4->gen == VC4_GEN_4) { /* Start with the load tracker enabled. Can be * disabled through the debugfs load_tracker file. */ @@ -1056,7 +1056,7 @@ int vc4_kms_load(struct drm_device *dev) return ret; }
- if (vc4->is_vc5) { + if (vc4->gen == VC4_GEN_5) { dev->mode_config.max_width = 7680; dev->mode_config.max_height = 7680; } else { @@ -1064,7 +1064,7 @@ int vc4_kms_load(struct drm_device *dev) dev->mode_config.max_height = 2048; }
- dev->mode_config.funcs = vc4->is_vc5 ? &vc5_mode_funcs : &vc4_mode_funcs; + dev->mode_config.funcs = (vc4->gen > VC4_GEN_4) ? &vc5_mode_funcs : &vc4_mode_funcs; dev->mode_config.helper_private = &vc4_mode_config_helpers; dev->mode_config.preferred_depth = 24; dev->mode_config.async_page_flip = true; diff --git a/drivers/gpu/drm/vc4/vc4_perfmon.c b/drivers/gpu/drm/vc4/vc4_perfmon.c index c00a5cc2316d..e4fda72c19f9 100644 --- a/drivers/gpu/drm/vc4/vc4_perfmon.c +++ b/drivers/gpu/drm/vc4/vc4_perfmon.c @@ -23,7 +23,7 @@ void vc4_perfmon_get(struct vc4_perfmon *perfmon) return;
vc4 = perfmon->dev; - if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return;
refcount_inc(&perfmon->refcnt); @@ -37,7 +37,7 @@ void vc4_perfmon_put(struct vc4_perfmon *perfmon) return;
vc4 = perfmon->dev; - if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return;
if (refcount_dec_and_test(&perfmon->refcnt)) @@ -49,7 +49,7 @@ void vc4_perfmon_start(struct vc4_dev *vc4, struct vc4_perfmon *perfmon) unsigned int i; u32 mask;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return;
if (WARN_ON_ONCE(!perfmon || vc4->active_perfmon)) @@ -69,7 +69,7 @@ void vc4_perfmon_stop(struct vc4_dev *vc4, struct vc4_perfmon *perfmon, { unsigned int i;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return;
if (WARN_ON_ONCE(!vc4->active_perfmon || @@ -90,7 +90,7 @@ struct vc4_perfmon *vc4_perfmon_find(struct vc4_file *vc4file, int id) struct vc4_dev *vc4 = vc4file->dev; struct vc4_perfmon *perfmon;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return NULL;
mutex_lock(&vc4file->perfmon.lock); @@ -105,7 +105,7 @@ void vc4_perfmon_open_file(struct vc4_file *vc4file) { struct vc4_dev *vc4 = vc4file->dev;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return;
mutex_init(&vc4file->perfmon.lock); @@ -131,7 +131,7 @@ void vc4_perfmon_close_file(struct vc4_file *vc4file) { struct vc4_dev *vc4 = vc4file->dev;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return;
mutex_lock(&vc4file->perfmon.lock); @@ -151,7 +151,7 @@ int vc4_perfmon_create_ioctl(struct drm_device *dev, void *data, unsigned int i; int ret;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return -ENODEV;
if (!vc4->v3d) { @@ -205,7 +205,7 @@ int vc4_perfmon_destroy_ioctl(struct drm_device *dev, void *data, struct drm_vc4_perfmon_destroy *req = data; struct vc4_perfmon *perfmon;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return -ENODEV;
if (!vc4->v3d) { @@ -233,7 +233,7 @@ int vc4_perfmon_get_values_ioctl(struct drm_device *dev, void *data, struct vc4_perfmon *perfmon; int ret;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return -ENODEV;
if (!vc4->v3d) { diff --git a/drivers/gpu/drm/vc4/vc4_plane.c b/drivers/gpu/drm/vc4/vc4_plane.c index 07caf2a47c6c..866bc46ee6d5 100644 --- a/drivers/gpu/drm/vc4/vc4_plane.c +++ b/drivers/gpu/drm/vc4/vc4_plane.c @@ -587,10 +587,10 @@ static u32 vc4_lbm_size(struct drm_plane_state *state) }
/* Align it to 64 or 128 (hvs5) bytes */ - lbm = roundup(lbm, vc4->is_vc5 ? 128 : 64); + lbm = roundup(lbm, vc4->gen == VC4_GEN_5 ? 128 : 64);
/* Each "word" of the LBM memory contains 2 or 4 (hvs5) pixels */ - lbm /= vc4->is_vc5 ? 4 : 2; + lbm /= vc4->gen == VC4_GEN_5 ? 4 : 2;
return lbm; } @@ -706,7 +706,7 @@ static int vc4_plane_allocate_lbm(struct drm_plane_state *state) ret = drm_mm_insert_node_generic(&vc4->hvs->lbm_mm, &vc4_state->lbm, lbm_size, - vc4->is_vc5 ? 64 : 32, + vc4->gen == VC4_GEN_5 ? 64 : 32, 0, 0); spin_unlock_irqrestore(&vc4->hvs->mm_lock, irqflags);
@@ -1057,7 +1057,7 @@ static int vc4_plane_mode_set(struct drm_plane *plane, mix_plane_alpha = state->alpha != DRM_BLEND_ALPHA_OPAQUE && fb->format->has_alpha;
- if (!vc4->is_vc5) { + if (vc4->gen == VC4_GEN_4) { /* Control word */ vc4_dlist_write(vc4_state, SCALER_CTL0_VALID | @@ -1632,7 +1632,7 @@ struct drm_plane *vc4_plane_init(struct drm_device *dev, };
for (i = 0; i < ARRAY_SIZE(hvs_formats); i++) { - if (!hvs_formats[i].hvs5_only || vc4->is_vc5) { + if (!hvs_formats[i].hvs5_only || vc4->gen == VC4_GEN_5) { formats[num_formats] = hvs_formats[i].drm; num_formats++; } @@ -1647,7 +1647,7 @@ struct drm_plane *vc4_plane_init(struct drm_device *dev, return ERR_CAST(vc4_plane); plane = &vc4_plane->base;
- if (vc4->is_vc5) + if (vc4->gen == VC4_GEN_5) drm_plane_helper_add(plane, &vc5_plane_helper_funcs); else drm_plane_helper_add(plane, &vc4_plane_helper_funcs); diff --git a/drivers/gpu/drm/vc4/vc4_render_cl.c b/drivers/gpu/drm/vc4/vc4_render_cl.c index 1bda5010f15a..ae4ad956f04f 100644 --- a/drivers/gpu/drm/vc4/vc4_render_cl.c +++ b/drivers/gpu/drm/vc4/vc4_render_cl.c @@ -599,7 +599,7 @@ int vc4_get_rcl(struct drm_device *dev, struct vc4_exec_info *exec) bool has_bin = args->bin_cl_size != 0; int ret;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return -ENODEV;
if (args->min_x_tile > args->max_x_tile || diff --git a/drivers/gpu/drm/vc4/vc4_v3d.c b/drivers/gpu/drm/vc4/vc4_v3d.c index bf5c4e36c94e..43f69d74e876 100644 --- a/drivers/gpu/drm/vc4/vc4_v3d.c +++ b/drivers/gpu/drm/vc4/vc4_v3d.c @@ -127,7 +127,7 @@ static int vc4_v3d_debugfs_ident(struct seq_file *m, void *unused) int vc4_v3d_pm_get(struct vc4_dev *vc4) { - if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return -ENODEV;
mutex_lock(&vc4->power_lock); @@ -148,7 +148,7 @@ vc4_v3d_pm_get(struct vc4_dev *vc4) void vc4_v3d_pm_put(struct vc4_dev *vc4) { - if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return;
mutex_lock(&vc4->power_lock); @@ -178,7 +178,7 @@ int vc4_v3d_get_bin_slot(struct vc4_dev *vc4) uint64_t seqno = 0; struct vc4_exec_info *exec;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return -ENODEV;
try_again: @@ -325,7 +325,7 @@ int vc4_v3d_bin_bo_get(struct vc4_dev *vc4, bool *used) { int ret = 0;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return -ENODEV;
mutex_lock(&vc4->bin_bo_lock); @@ -360,7 +360,7 @@ static void bin_bo_release(struct kref *ref)
void vc4_v3d_bin_bo_put(struct vc4_dev *vc4) { - if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return;
mutex_lock(&vc4->bin_bo_lock); diff --git a/drivers/gpu/drm/vc4/vc4_validate.c b/drivers/gpu/drm/vc4/vc4_validate.c index 0c17284bf6f5..f3d7fdbe9083 100644 --- a/drivers/gpu/drm/vc4/vc4_validate.c +++ b/drivers/gpu/drm/vc4/vc4_validate.c @@ -109,7 +109,7 @@ vc4_use_bo(struct vc4_exec_info *exec, uint32_t hindex) struct drm_gem_dma_object *obj; struct vc4_bo *bo;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return NULL;
if (hindex >= exec->bo_count) { @@ -169,7 +169,7 @@ vc4_check_tex_size(struct vc4_exec_info *exec, struct drm_gem_dma_object *fbo, uint32_t utile_w = utile_width(cpp); uint32_t utile_h = utile_height(cpp);
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return false;
/* The shaded vertex format stores signed 12.4 fixed point @@ -495,7 +495,7 @@ vc4_validate_bin_cl(struct drm_device *dev, uint32_t dst_offset = 0; uint32_t src_offset = 0;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return -ENODEV;
while (src_offset < len) { @@ -942,7 +942,7 @@ vc4_validate_shader_recs(struct drm_device *dev, uint32_t i; int ret = 0;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return -ENODEV;
for (i = 0; i < exec->shader_state_count; i++) { diff --git a/drivers/gpu/drm/vc4/vc4_validate_shaders.c b/drivers/gpu/drm/vc4/vc4_validate_shaders.c index 9745f8810eca..afb1a4d82684 100644 --- a/drivers/gpu/drm/vc4/vc4_validate_shaders.c +++ b/drivers/gpu/drm/vc4/vc4_validate_shaders.c @@ -786,7 +786,7 @@ vc4_validate_shader(struct drm_gem_dma_object *shader_obj) struct vc4_validated_shader_info *validated_shader = NULL; struct vc4_shader_validation_state validation_state;
- if (WARN_ON_ONCE(vc4->is_vc5)) + if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5)) return NULL;
memset(&validation_state, 0, sizeof(validation_state)); diff --git a/drivers/gpu/drm/vkms/vkms_output.c b/drivers/gpu/drm/vkms/vkms_output.c index 5ce70dd946aa..24589b947dea 100644 --- a/drivers/gpu/drm/vkms/vkms_output.c +++ b/drivers/gpu/drm/vkms/vkms_output.c @@ -84,7 +84,7 @@ int vkms_output_init(struct vkms_device *vkmsdev, int index) DRM_MODE_CONNECTOR_VIRTUAL); if (ret) { DRM_ERROR("Failed to init connector\n"); - goto err_connector; + return ret; }
drm_connector_helper_add(connector, &vkms_conn_helper_funcs); @@ -119,8 +119,5 @@ int vkms_output_init(struct vkms_device *vkmsdev, int index) err_encoder: drm_connector_cleanup(connector);
-err_connector: - drm_crtc_cleanup(crtc); - return ret; } diff --git a/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c b/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c index 6619a40aed15..f4332f06b6c8 100644 --- a/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c +++ b/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c @@ -42,7 +42,7 @@ bool intel_hdcp_gsc_check_status(struct xe_device *xe) struct xe_gsc *gsc = >->uc.gsc; bool ret = true;
- if (!gsc && !xe_uc_fw_is_enabled(&gsc->fw)) { + if (!gsc || !xe_uc_fw_is_enabled(&gsc->fw)) { drm_dbg_kms(&xe->drm, "GSC Components not ready for HDCP2.x\n"); return false; diff --git a/drivers/gpu/drm/xe/xe_sync.c b/drivers/gpu/drm/xe/xe_sync.c index 2e72c06fd40d..b0684e6d2047 100644 --- a/drivers/gpu/drm/xe/xe_sync.c +++ b/drivers/gpu/drm/xe/xe_sync.c @@ -85,8 +85,12 @@ static void user_fence_worker(struct work_struct *w) mmput(ufence->mm); }
- wake_up_all(&ufence->xe->ufence_wq); + /* + * Wake up waiters only after updating the ufence state, allowing the UMD + * to safely reuse the same ufence without encountering -EBUSY errors. + */ WRITE_ONCE(ufence->signalled, 1); + wake_up_all(&ufence->xe->ufence_wq); user_fence_put(ufence); }
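The xe_sync.c hunk above reorders the signalling path so the fence state is published before the wakeup. A condensed sketch of the waker/waiter pairing; the waiter line is illustrative only (the real waiter lives outside this hunk), assuming a wait_event()-style sleeper on ufence_wq:

/* waker: publish the new state, then wake */
WRITE_ONCE(ufence->signalled, 1);
wake_up_all(&ufence->xe->ufence_wq);

/* waiter (illustrative): re-checks the condition after every wakeup */
wait_event(ufence->xe->ufence_wq, READ_ONCE(ufence->signalled));

With the old order a waiter could be woken, still observe signalled == 0, and report the ufence as busy (-EBUSY) even though the UMD was about to reuse it.
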
diff --git a/drivers/gpu/drm/xlnx/zynqmp_disp.c b/drivers/gpu/drm/xlnx/zynqmp_disp.c index 9368acf56eaf..e4e0e299e8a7 100644 --- a/drivers/gpu/drm/xlnx/zynqmp_disp.c +++ b/drivers/gpu/drm/xlnx/zynqmp_disp.c @@ -1200,6 +1200,9 @@ static void zynqmp_disp_layer_release_dma(struct zynqmp_disp *disp, { unsigned int i;
+ if (!layer->info) + return; + for (i = 0; i < layer->info->num_channels; i++) { struct zynqmp_disp_layer_dma *dma = &layer->dmas[i];
diff --git a/drivers/gpu/drm/xlnx/zynqmp_kms.c b/drivers/gpu/drm/xlnx/zynqmp_kms.c index bd1368df7870..4556af2faa0f 100644 --- a/drivers/gpu/drm/xlnx/zynqmp_kms.c +++ b/drivers/gpu/drm/xlnx/zynqmp_kms.c @@ -536,7 +536,7 @@ void zynqmp_dpsub_drm_cleanup(struct zynqmp_dpsub *dpsub) { struct drm_device *drm = &dpsub->drm->dev;
- drm_dev_unregister(drm); + drm_dev_unplug(drm); drm_atomic_helper_shutdown(drm); drm_encoder_cleanup(&dpsub->drm->encoder); drm_kms_helper_poll_fini(drm); diff --git a/drivers/hid/hid-hyperv.c b/drivers/hid/hid-hyperv.c index f33485d83d24..0fb210e40a41 100644 --- a/drivers/hid/hid-hyperv.c +++ b/drivers/hid/hid-hyperv.c @@ -422,6 +422,25 @@ static int mousevsc_hid_raw_request(struct hid_device *hid, return 0; }
+static int mousevsc_hid_probe(struct hid_device *hid_dev, const struct hid_device_id *id) +{ + int ret; + + ret = hid_parse(hid_dev); + if (ret) { + hid_err(hid_dev, "parse failed\n"); + return ret; + } + + ret = hid_hw_start(hid_dev, HID_CONNECT_HIDINPUT | HID_CONNECT_HIDDEV); + if (ret) { + hid_err(hid_dev, "hw start failed\n"); + return ret; + } + + return 0; +} + static const struct hid_ll_driver mousevsc_ll_driver = { .parse = mousevsc_hid_parse, .open = mousevsc_hid_open, @@ -431,7 +450,16 @@ static const struct hid_ll_driver mousevsc_ll_driver = { .raw_request = mousevsc_hid_raw_request, };
-static struct hid_driver mousevsc_hid_driver; +static const struct hid_device_id mousevsc_devices[] = { + { HID_DEVICE(BUS_VIRTUAL, HID_GROUP_ANY, 0x045E, 0x0621) }, + { } +}; + +static struct hid_driver mousevsc_hid_driver = { + .name = "hid-hyperv", + .id_table = mousevsc_devices, + .probe = mousevsc_hid_probe, +};
static int mousevsc_probe(struct hv_device *device, const struct hv_vmbus_device_id *dev_id) @@ -473,7 +501,6 @@ static int mousevsc_probe(struct hv_device *device, }
hid_dev->ll_driver = &mousevsc_ll_driver; - hid_dev->driver = &mousevsc_hid_driver; hid_dev->bus = BUS_VIRTUAL; hid_dev->vendor = input_dev->hid_dev_info.vendor; hid_dev->product = input_dev->hid_dev_info.product; @@ -488,20 +515,6 @@ static int mousevsc_probe(struct hv_device *device, if (ret) goto probe_err2;
- - ret = hid_parse(hid_dev); - if (ret) { - hid_err(hid_dev, "parse failed\n"); - goto probe_err2; - } - - ret = hid_hw_start(hid_dev, HID_CONNECT_HIDINPUT | HID_CONNECT_HIDDEV); - - if (ret) { - hid_err(hid_dev, "hw start failed\n"); - goto probe_err2; - } - device_init_wakeup(&device->device, true);
input_dev->connected = true; @@ -579,12 +592,23 @@ static struct hv_driver mousevsc_drv = {
static int __init mousevsc_init(void) { - return vmbus_driver_register(&mousevsc_drv); + int ret; + + ret = hid_register_driver(&mousevsc_hid_driver); + if (ret) + return ret; + + ret = vmbus_driver_register(&mousevsc_drv); + if (ret) + hid_unregister_driver(&mousevsc_hid_driver); + + return ret; }
static void __exit mousevsc_exit(void) { vmbus_driver_unregister(&mousevsc_drv); + hid_unregister_driver(&mousevsc_hid_driver); }
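On the hid-hyperv rework above: the driver now registers a real hid_driver, so hid_parse()/hid_hw_start() run from a probe callback selected by ID matching rather than being open-coded in the VMBus probe. The HID core only calls mousevsc_hid_probe() when the bus/vendor/product set in mousevsc_probe() line up with an id_table entry; the table below is the one from the hunk, repeated here only to make that pairing explicit:

static const struct hid_device_id mousevsc_devices_sketch[] = {
	/* must agree with hid_dev->bus/vendor/product set in mousevsc_probe() */
	{ HID_DEVICE(BUS_VIRTUAL, HID_GROUP_ANY, 0x045E, 0x0621) },
	{ }
};

The init/exit hunks then register the HID driver before the VMBus driver and unwind in reverse order on failure.
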
MODULE_LICENSE("GPL"); diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c index 413606bdf476..5a599c90e7a2 100644 --- a/drivers/hid/wacom_wac.c +++ b/drivers/hid/wacom_wac.c @@ -1353,9 +1353,9 @@ static void wacom_intuos_pro2_bt_pen(struct wacom_wac *wacom) rotation -= 1800;
input_report_abs(pen_input, ABS_TILT_X, - (char)frame[7]); + (signed char)frame[7]); input_report_abs(pen_input, ABS_TILT_Y, - (char)frame[8]); + (signed char)frame[8]); input_report_abs(pen_input, ABS_Z, rotation); input_report_abs(pen_input, ABS_WHEEL, get_unaligned_le16(&frame[11])); diff --git a/drivers/hwmon/aquacomputer_d5next.c b/drivers/hwmon/aquacomputer_d5next.c index 34cac27e4dde..0dcb8a3a691d 100644 --- a/drivers/hwmon/aquacomputer_d5next.c +++ b/drivers/hwmon/aquacomputer_d5next.c @@ -597,7 +597,7 @@ struct aqc_data {
/* Sensor values */ s32 temp_input[20]; /* Max 4 physical and 16 virtual or 8 physical and 12 virtual */ - s32 speed_input[8]; + s32 speed_input[9]; u32 speed_input_min[1]; u32 speed_input_target[1]; u32 speed_input_max[1]; diff --git a/drivers/hwmon/nct6775-core.c b/drivers/hwmon/nct6775-core.c index 934fed3dd586..ee04795b98aa 100644 --- a/drivers/hwmon/nct6775-core.c +++ b/drivers/hwmon/nct6775-core.c @@ -2878,8 +2878,7 @@ store_target_temp(struct device *dev, struct device_attribute *attr, if (err < 0) return err;
- val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 0, - data->target_temp_mask); + val = DIV_ROUND_CLOSEST(clamp_val(val, 0, data->target_temp_mask * 1000), 1000);
mutex_lock(&data->update_lock); data->target_temp[nr] = val; @@ -2959,7 +2958,7 @@ store_temp_tolerance(struct device *dev, struct device_attribute *attr, return err;
/* Limit tolerance as needed */ - val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 0, data->tolerance_mask); + val = DIV_ROUND_CLOSEST(clamp_val(val, 0, data->tolerance_mask * 1000), 1000);
mutex_lock(&data->update_lock); data->temp_tolerance[index][nr] = val; @@ -3085,7 +3084,7 @@ store_weight_temp(struct device *dev, struct device_attribute *attr, if (err < 0) return err;
- val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 0, 255); + val = DIV_ROUND_CLOSEST(clamp_val(val, 0, 255000), 1000);
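The three nct6775 hunks above all move the clamp inside the division. A sketch of why the order matters, assuming val comes straight from a kstrto*() parse of a sysfs write and can therefore be extreme (constants are the ones from the last hunk):

/* old order: the rounding add/subtract inside DIV_ROUND_CLOSEST() can
 * overflow for extreme inputs, producing a bogus quotient that the
 * clamp afterwards happily accepts */
val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 0, 255);

/* new order: bound the raw millidegree value first, then scale down;
 * no intermediate can exceed 255000, so the rounding step cannot
 * overflow */
val = DIV_ROUND_CLOSEST(clamp_val(val, 0, 255000), 1000);
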
mutex_lock(&data->update_lock); data->weight_temp[index][nr] = val; diff --git a/drivers/hwmon/pmbus/pmbus_core.c b/drivers/hwmon/pmbus/pmbus_core.c index ce7fd4ca9d89..a68b0a98e8d4 100644 --- a/drivers/hwmon/pmbus/pmbus_core.c +++ b/drivers/hwmon/pmbus/pmbus_core.c @@ -3279,7 +3279,17 @@ static int pmbus_regulator_notify(struct pmbus_data *data, int page, int event)
static int pmbus_write_smbalert_mask(struct i2c_client *client, u8 page, u8 reg, u8 val) { - return _pmbus_write_word_data(client, page, PMBUS_SMBALERT_MASK, reg | (val << 8)); + int ret; + + ret = _pmbus_write_word_data(client, page, PMBUS_SMBALERT_MASK, reg | (val << 8)); + + /* + * Clear fault systematically in case writing PMBUS_SMBALERT_MASK + * is not supported by the chip. + */ + pmbus_clear_fault_page(client, page); + + return ret; }
static irqreturn_t pmbus_fault_handler(int irq, void *pdata) diff --git a/drivers/hwmon/tps23861.c b/drivers/hwmon/tps23861.c index dfcfb09d9f3c..80fb03f30c30 100644 --- a/drivers/hwmon/tps23861.c +++ b/drivers/hwmon/tps23861.c @@ -132,7 +132,7 @@ static int tps23861_read_temp(struct tps23861_data *data, long *val) if (err < 0) return err;
- *val = (regval * TEMPERATURE_LSB) - 20000; + *val = ((long)regval * TEMPERATURE_LSB) - 20000;
return 0; } diff --git a/drivers/i2c/i2c-dev.c b/drivers/i2c/i2c-dev.c index 61f7c4003d2f..e9577f920286 100644 --- a/drivers/i2c/i2c-dev.c +++ b/drivers/i2c/i2c-dev.c @@ -251,10 +251,8 @@ static noinline int i2cdev_ioctl_rdwr(struct i2c_client *client, return -EOPNOTSUPP;
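On the tps23861 change a few lines above: regmap_read() fills an unsigned int, so without the cast the whole expression is evaluated in unsigned arithmetic. A worked example, with 1000 standing in for TEMPERATURE_LSB (the real constant is defined in the driver and not shown here):

unsigned int regval = 10;			/* low ADC code, example value */

long bad  = (regval * 1000) - 20000;		/* unsigned wrap-around; stored in
						 * a 64-bit long it reads as a huge
						 * positive temperature */
long good = ((long)regval * 1000) - 20000;	/* -10000, negative as intended */
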
data_ptrs = kmalloc_array(nmsgs, sizeof(u8 __user *), GFP_KERNEL); - if (data_ptrs == NULL) { - kfree(msgs); + if (!data_ptrs) return -ENOMEM; - }
res = 0; for (i = 0; i < nmsgs; i++) { @@ -302,7 +300,6 @@ static noinline int i2cdev_ioctl_rdwr(struct i2c_client *client, for (j = 0; j < i; ++j) kfree(msgs[j].buf); kfree(data_ptrs); - kfree(msgs); return res; }
@@ -316,7 +313,6 @@ static noinline int i2cdev_ioctl_rdwr(struct i2c_client *client, kfree(msgs[i].buf); } kfree(data_ptrs); - kfree(msgs); return res; }
@@ -446,6 +442,7 @@ static long i2cdev_ioctl(struct file *file, unsigned int cmd, unsigned long arg) case I2C_RDWR: { struct i2c_rdwr_ioctl_data rdwr_arg; struct i2c_msg *rdwr_pa; + int res;
if (copy_from_user(&rdwr_arg, (struct i2c_rdwr_ioctl_data __user *)arg, @@ -467,7 +464,9 @@ static long i2cdev_ioctl(struct file *file, unsigned int cmd, unsigned long arg) if (IS_ERR(rdwr_pa)) return PTR_ERR(rdwr_pa);
- return i2cdev_ioctl_rdwr(client, rdwr_arg.nmsgs, rdwr_pa); + res = i2cdev_ioctl_rdwr(client, rdwr_arg.nmsgs, rdwr_pa); + kfree(rdwr_pa); + return res; }
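The i2c-dev hunks move ownership of the message array entirely to the ioctl callers: i2cdev_ioctl_rdwr() no longer frees its argument on any path, and the native caller above (and the compat caller just below) frees it exactly once after the call. A condensed sketch of the resulting ownership rule; the allocator name is hypothetical, the real code builds rdwr_pa from copy_from_user():

struct i2c_msg *rdwr_pa;
int res;

rdwr_pa = alloc_msgs_from_user(arg);	/* hypothetical: the caller allocates */
if (IS_ERR(rdwr_pa))
	return PTR_ERR(rdwr_pa);

res = i2cdev_ioctl_rdwr(client, nmsgs, rdwr_pa);	/* never frees rdwr_pa */
kfree(rdwr_pa);						/* one owner, one kfree */
return res;
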
case I2C_SMBUS: { @@ -540,7 +539,7 @@ static long compat_i2cdev_ioctl(struct file *file, unsigned int cmd, unsigned lo struct i2c_rdwr_ioctl_data32 rdwr_arg; struct i2c_msg32 __user *p; struct i2c_msg *rdwr_pa; - int i; + int i, res;
if (copy_from_user(&rdwr_arg, (struct i2c_rdwr_ioctl_data32 __user *)arg, @@ -573,7 +572,9 @@ static long compat_i2cdev_ioctl(struct file *file, unsigned int cmd, unsigned lo }; }
- return i2cdev_ioctl_rdwr(client, rdwr_arg.nmsgs, rdwr_pa); + res = i2cdev_ioctl_rdwr(client, rdwr_arg.nmsgs, rdwr_pa); + kfree(rdwr_pa); + return res; } case I2C_SMBUS: { struct i2c_smbus_ioctl_data32 data32; diff --git a/drivers/i3c/master.c b/drivers/i3c/master.c index 6f3eb710a75d..ffe99f0c6ace 100644 --- a/drivers/i3c/master.c +++ b/drivers/i3c/master.c @@ -2051,11 +2051,16 @@ int i3c_master_add_i3c_dev_locked(struct i3c_master_controller *master, ibireq.max_payload_len = olddev->ibi->max_payload_len; ibireq.num_slots = olddev->ibi->num_slots;
- if (olddev->ibi->enabled) { + if (olddev->ibi->enabled) enable_ibi = true; - i3c_dev_disable_ibi_locked(olddev); - } - + /* + * The olddev should not receive any commands on the + * i3c bus as it does not exist and has been assigned + * a new address. This will result in NACK or timeout. + * So, update the olddev->ibi->enabled flag to false + * to avoid DISEC with OldAddr. + */ + olddev->ibi->enabled = false; i3c_dev_free_ibi_locked(olddev); } mutex_unlock(&olddev->ibi_lock); diff --git a/drivers/iio/accel/adxl380.c b/drivers/iio/accel/adxl380.c index f80527d899be..b19ee37df7f1 100644 --- a/drivers/iio/accel/adxl380.c +++ b/drivers/iio/accel/adxl380.c @@ -1181,7 +1181,7 @@ static int adxl380_read_raw(struct iio_dev *indio_dev,
ret = adxl380_read_chn(st, chan->address); iio_device_release_direct_mode(indio_dev); - if (ret) + if (ret < 0) return ret;
*val = sign_extend32(ret >> chan->scan_type.shift, diff --git a/drivers/iio/adc/ad4000.c b/drivers/iio/adc/ad4000.c index 6ea491245084..b3b82535f5c1 100644 --- a/drivers/iio/adc/ad4000.c +++ b/drivers/iio/adc/ad4000.c @@ -344,6 +344,8 @@ static int ad4000_single_conversion(struct iio_dev *indio_dev,
if (chan->scan_type.sign == 's') *val = sign_extend32(sample, chan->scan_type.realbits - 1); + else + *val = sample;
return IIO_VAL_INT; } @@ -637,7 +639,9 @@ static int ad4000_probe(struct spi_device *spi) indio_dev->name = chip->dev_name; indio_dev->num_channels = 1;
- devm_mutex_init(dev, &st->lock); + ret = devm_mutex_init(dev, &st->lock); + if (ret) + return ret;
st->gain_milli = 1000; if (chip->has_hardware_gain) { diff --git a/drivers/iio/adc/pac1921.c b/drivers/iio/adc/pac1921.c index 36e813d9c73f..fe1d9e07fce2 100644 --- a/drivers/iio/adc/pac1921.c +++ b/drivers/iio/adc/pac1921.c @@ -1171,7 +1171,9 @@ static int pac1921_probe(struct i2c_client *client) return dev_err_probe(dev, (int)PTR_ERR(priv->regmap), "Cannot initialize register map\n");
- devm_mutex_init(dev, &priv->lock); + ret = devm_mutex_init(dev, &priv->lock); + if (ret) + return ret;
priv->dv_gain = PAC1921_DEFAULT_DV_GAIN; priv->di_gain = PAC1921_DEFAULT_DI_GAIN; diff --git a/drivers/iio/dac/adi-axi-dac.c b/drivers/iio/dac/adi-axi-dac.c index 0cb00f3bec04..b8b4171b8043 100644 --- a/drivers/iio/dac/adi-axi-dac.c +++ b/drivers/iio/dac/adi-axi-dac.c @@ -46,7 +46,7 @@ #define AXI_DAC_REG_CNTRL_1 0x0044 #define AXI_DAC_SYNC BIT(0) #define AXI_DAC_REG_CNTRL_2 0x0048 -#define ADI_DAC_R1_MODE BIT(4) +#define ADI_DAC_R1_MODE BIT(5) #define AXI_DAC_DRP_STATUS 0x0074 #define AXI_DAC_DRP_LOCKED BIT(17) /* DAC Channel controls */ diff --git a/drivers/iio/industrialio-backend.c b/drivers/iio/industrialio-backend.c index 20b3b5212da7..fb34a8e4d04e 100644 --- a/drivers/iio/industrialio-backend.c +++ b/drivers/iio/industrialio-backend.c @@ -737,8 +737,8 @@ static struct iio_backend *__devm_iio_backend_fwnode_get(struct device *dev, con }
fwnode_back = fwnode_find_reference(fwnode, "io-backends", index); - if (IS_ERR(fwnode)) - return dev_err_cast_probe(dev, fwnode, + if (IS_ERR(fwnode_back)) + return dev_err_cast_probe(dev, fwnode_back, "Cannot get Firmware reference\n");
guard(mutex)(&iio_back_lock); diff --git a/drivers/iio/industrialio-gts-helper.c b/drivers/iio/industrialio-gts-helper.c index 5f131bc1a01e..4ad949672210 100644 --- a/drivers/iio/industrialio-gts-helper.c +++ b/drivers/iio/industrialio-gts-helper.c @@ -167,7 +167,7 @@ static int iio_gts_gain_cmp(const void *a, const void *b)
static int gain_to_scaletables(struct iio_gts *gts, int **gains, int **scales) { - int ret, i, j, new_idx, time_idx; + int i, j, new_idx, time_idx, ret = 0; int *all_gains; size_t gain_bytes;
diff --git a/drivers/iio/light/al3010.c b/drivers/iio/light/al3010.c index 53569587ccb7..7cbb8b203300 100644 --- a/drivers/iio/light/al3010.c +++ b/drivers/iio/light/al3010.c @@ -87,7 +87,12 @@ static int al3010_init(struct al3010_data *data) int ret;
ret = al3010_set_pwr(data->client, true); + if (ret < 0) + return ret;
+ ret = devm_add_action_or_reset(&data->client->dev, + al3010_set_pwr_off, + data); if (ret < 0) return ret;
@@ -190,12 +195,6 @@ static int al3010_probe(struct i2c_client *client) return ret; }
- ret = devm_add_action_or_reset(&client->dev, - al3010_set_pwr_off, - data); - if (ret < 0) - return ret; - return devm_iio_device_register(&client->dev, indio_dev); }
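The al3010 change moves the devm cleanup registration so it directly follows the power-up it undoes. The ordering rule it relies on, sketched with the driver's own function names from the hunks above:

ret = al3010_set_pwr(data->client, true);	/* acquire: power the sensor up */
if (ret < 0)
	return ret;

/* register the undo immediately: any later probe failure, and a normal
 * unbind, now powers the sensor back off automatically */
ret = devm_add_action_or_reset(&data->client->dev, al3010_set_pwr_off, data);
if (ret < 0)
	return ret;	/* the _or_reset variant already ran al3010_set_pwr_off() */
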
diff --git a/drivers/infiniband/core/roce_gid_mgmt.c b/drivers/infiniband/core/roce_gid_mgmt.c index d5131b3ba8ab..a9f2c6b1b29e 100644 --- a/drivers/infiniband/core/roce_gid_mgmt.c +++ b/drivers/infiniband/core/roce_gid_mgmt.c @@ -515,6 +515,27 @@ void rdma_roce_rescan_device(struct ib_device *ib_dev) } EXPORT_SYMBOL(rdma_roce_rescan_device);
+/** + * rdma_roce_rescan_port - Rescan all of the network devices in the system + * and add their gids if relevant to the port of the RoCE device. + * + * @ib_dev: IB device + * @port: Port number + */ +void rdma_roce_rescan_port(struct ib_device *ib_dev, u32 port) +{ + struct net_device *ndev = NULL; + + if (rdma_protocol_roce(ib_dev, port)) { + ndev = ib_device_get_netdev(ib_dev, port); + if (!ndev) + return; + enum_all_gids_of_dev_cb(ib_dev, port, ndev, ndev); + dev_put(ndev); + } +} +EXPORT_SYMBOL(rdma_roce_rescan_port); + static void callback_for_addr_gid_device_scan(struct ib_device *device, u32 port, struct net_device *rdma_ndev, @@ -575,16 +596,17 @@ static void handle_netdev_upper(struct ib_device *ib_dev, u32 port, } }
-static void _roce_del_all_netdev_gids(struct ib_device *ib_dev, u32 port, - struct net_device *event_ndev) +void roce_del_all_netdev_gids(struct ib_device *ib_dev, + u32 port, struct net_device *ndev) { - ib_cache_gid_del_all_netdev_gids(ib_dev, port, event_ndev); + ib_cache_gid_del_all_netdev_gids(ib_dev, port, ndev); } +EXPORT_SYMBOL(roce_del_all_netdev_gids);
static void del_netdev_upper_ips(struct ib_device *ib_dev, u32 port, struct net_device *rdma_ndev, void *cookie) { - handle_netdev_upper(ib_dev, port, cookie, _roce_del_all_netdev_gids); + handle_netdev_upper(ib_dev, port, cookie, roce_del_all_netdev_gids); }
static void add_netdev_upper_ips(struct ib_device *ib_dev, u32 port, diff --git a/drivers/infiniband/core/uverbs.h b/drivers/infiniband/core/uverbs.h index 821d93c8f712..dfd2e5a86e6f 100644 --- a/drivers/infiniband/core/uverbs.h +++ b/drivers/infiniband/core/uverbs.h @@ -160,6 +160,8 @@ struct ib_uverbs_file { struct page *disassociate_page;
struct xarray idr; + + struct mutex disassociation_lock; };
struct ib_uverbs_event { diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c index 94454186ed81..85cfc790a7bb 100644 --- a/drivers/infiniband/core/uverbs_main.c +++ b/drivers/infiniband/core/uverbs_main.c @@ -76,6 +76,7 @@ static dev_t dynamic_uverbs_dev; static DEFINE_IDA(uverbs_ida); static int ib_uverbs_add_one(struct ib_device *device); static void ib_uverbs_remove_one(struct ib_device *device, void *client_data); +static struct ib_client uverbs_client;
static char *uverbs_devnode(const struct device *dev, umode_t *mode) { @@ -217,6 +218,7 @@ void ib_uverbs_release_file(struct kref *ref)
if (file->disassociate_page) __free_pages(file->disassociate_page, 0); + mutex_destroy(&file->disassociation_lock); mutex_destroy(&file->umap_lock); mutex_destroy(&file->ucontext_lock); kfree(file); @@ -698,8 +700,13 @@ static int ib_uverbs_mmap(struct file *filp, struct vm_area_struct *vma) ret = PTR_ERR(ucontext); goto out; } + + mutex_lock(&file->disassociation_lock); + vma->vm_ops = &rdma_umap_ops; ret = ucontext->device->ops.mmap(ucontext, vma); + + mutex_unlock(&file->disassociation_lock); out: srcu_read_unlock(&file->device->disassociate_srcu, srcu_key); return ret; @@ -721,6 +728,8 @@ static void rdma_umap_open(struct vm_area_struct *vma) /* We are racing with disassociation */ if (!down_read_trylock(&ufile->hw_destroy_rwsem)) goto out_zap; + mutex_lock(&ufile->disassociation_lock); + /* * Disassociation already completed, the VMA should already be zapped. */ @@ -732,10 +741,12 @@ static void rdma_umap_open(struct vm_area_struct *vma) goto out_unlock; rdma_umap_priv_init(priv, vma, opriv->entry);
+ mutex_unlock(&ufile->disassociation_lock); up_read(&ufile->hw_destroy_rwsem); return;
out_unlock: + mutex_unlock(&ufile->disassociation_lock); up_read(&ufile->hw_destroy_rwsem); out_zap: /* @@ -819,7 +830,7 @@ void uverbs_user_mmap_disassociate(struct ib_uverbs_file *ufile) { struct rdma_umap_priv *priv, *next_priv;
- lockdep_assert_held(&ufile->hw_destroy_rwsem); + mutex_lock(&ufile->disassociation_lock);
while (1) { struct mm_struct *mm = NULL; @@ -845,8 +856,10 @@ void uverbs_user_mmap_disassociate(struct ib_uverbs_file *ufile) break; } mutex_unlock(&ufile->umap_lock); - if (!mm) + if (!mm) { + mutex_unlock(&ufile->disassociation_lock); return; + }
/* * The umap_lock is nested under mmap_lock since it used within @@ -876,7 +889,31 @@ void uverbs_user_mmap_disassociate(struct ib_uverbs_file *ufile) mmap_read_unlock(mm); mmput(mm); } + + mutex_unlock(&ufile->disassociation_lock); +} + +/** + * rdma_user_mmap_disassociate() - Revoke mmaps for a device + * @device: device to revoke + * + * This function should be called by drivers that need to disable mmaps for the + * device, for instance because it is going to be reset. + */ +void rdma_user_mmap_disassociate(struct ib_device *device) +{ + struct ib_uverbs_device *uverbs_dev = + ib_get_client_data(device, &uverbs_client); + struct ib_uverbs_file *ufile; + + mutex_lock(&uverbs_dev->lists_mutex); + list_for_each_entry(ufile, &uverbs_dev->uverbs_file_list, list) { + if (ufile->ucontext) + uverbs_user_mmap_disassociate(ufile); + } + mutex_unlock(&uverbs_dev->lists_mutex); } +EXPORT_SYMBOL(rdma_user_mmap_disassociate);
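The uverbs changes pair every mmap with the new per-file disassociation_lock so a VMA can never be installed while rdma_user_mmap_disassociate() is revoking mappings. The locking pattern, condensed from the hunks above (SRCU and error handling omitted):

/* mmap path: install vm_ops and call the driver's mmap under the mutex */
mutex_lock(&file->disassociation_lock);
vma->vm_ops = &rdma_umap_ops;
ret = ucontext->device->ops.mmap(ucontext, vma);
mutex_unlock(&file->disassociation_lock);

/* disassociate path: take the same mutex while zapping every tracked
 * VMA, so the two paths cannot interleave */
mutex_lock(&ufile->disassociation_lock);
/* ... walk ufile->umaps and zap each mapping ... */
mutex_unlock(&ufile->disassociation_lock);
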
/* * ib_uverbs_open() does not need the BKL: @@ -947,6 +984,8 @@ static int ib_uverbs_open(struct inode *inode, struct file *filp) mutex_init(&file->umap_lock); INIT_LIST_HEAD(&file->umaps);
+ mutex_init(&file->disassociation_lock); + filp->private_data = file; list_add_tail(&file->list, &dev->uverbs_file_list); mutex_unlock(&dev->lists_mutex); diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c index e66ae9f22c71..160096792224 100644 --- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c +++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c @@ -3633,7 +3633,7 @@ static void bnxt_re_process_res_shadow_qp_wc(struct bnxt_re_qp *gsi_sqp, wc->byte_len = orig_cqe->length; wc->qp = &gsi_qp->ib_qp;
- wc->ex.imm_data = cpu_to_be32(le32_to_cpu(orig_cqe->immdata)); + wc->ex.imm_data = cpu_to_be32(orig_cqe->immdata); wc->src_qp = orig_cqe->src_qp; memcpy(wc->smac, orig_cqe->smac, ETH_ALEN); if (bnxt_re_is_vlan_pkt(orig_cqe, &vlan_id, &sl)) { @@ -3778,7 +3778,10 @@ int bnxt_re_poll_cq(struct ib_cq *ib_cq, int num_entries, struct ib_wc *wc) (unsigned long)(cqe->qp_handle), struct bnxt_re_qp, qplib_qp); wc->qp = &qp->ib_qp; - wc->ex.imm_data = cpu_to_be32(le32_to_cpu(cqe->immdata)); + if (cqe->flags & CQ_RES_RC_FLAGS_IMM) + wc->ex.imm_data = cpu_to_be32(cqe->immdata); + else + wc->ex.invalidate_rkey = cqe->invrkey; wc->src_qp = cqe->src_qp; memcpy(wc->smac, cqe->smac, ETH_ALEN); wc->port_num = 1; diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c index 9eb290ec71a8..2ac8ddbed576 100644 --- a/drivers/infiniband/hw/bnxt_re/main.c +++ b/drivers/infiniband/hw/bnxt_re/main.c @@ -2033,12 +2033,6 @@ static int bnxt_re_suspend(struct auxiliary_device *adev, pm_message_t state) rdev = en_info->rdev; en_dev = en_info->en_dev; mutex_lock(&bnxt_re_mutex); - /* L2 driver may invoke this callback during device error/crash or device - * reset. Current RoCE driver doesn't recover the device in case of - * error. Handle the error by dispatching fatal events to all qps - * ie. by calling bnxt_re_dev_stop and release the MSIx vectors as - * L2 driver want to modify the MSIx table. - */
ibdev_info(&rdev->ibdev, "Handle device suspend call"); /* Check the current device state from bnxt_en_dev and move the @@ -2046,17 +2040,12 @@ static int bnxt_re_suspend(struct auxiliary_device *adev, pm_message_t state) * This prevents more commands to HW during clean-up, * in case the device is already in error. */ - if (test_bit(BNXT_STATE_FW_FATAL_COND, &rdev->en_dev->en_state)) + if (test_bit(BNXT_STATE_FW_FATAL_COND, &rdev->en_dev->en_state)) { set_bit(ERR_DEVICE_DETACHED, &rdev->rcfw.cmdq.flags); - - bnxt_re_dev_stop(rdev); - bnxt_re_stop_irq(adev); - /* Move the device states to detached and avoid sending any more - * commands to HW - */ - set_bit(BNXT_RE_FLAG_ERR_DEVICE_DETACHED, &rdev->flags); - set_bit(ERR_DEVICE_DETACHED, &rdev->rcfw.cmdq.flags); - wake_up_all(&rdev->rcfw.cmdq.waitq); + set_bit(BNXT_RE_FLAG_ERR_DEVICE_DETACHED, &rdev->flags); + wake_up_all(&rdev->rcfw.cmdq.waitq); + bnxt_re_dev_stop(rdev); + }
if (rdev->pacing.dbr_pacing) bnxt_re_set_pacing_dev_state(rdev); @@ -2075,13 +2064,6 @@ static int bnxt_re_resume(struct auxiliary_device *adev) struct bnxt_re_dev *rdev;
mutex_lock(&bnxt_re_mutex); - /* L2 driver may invoke this callback during device recovery, resume. - * reset. Current RoCE driver doesn't recover the device in case of - * error. Handle the error by dispatching fatal events to all qps - * ie. by calling bnxt_re_dev_stop and release the MSIx vectors as - * L2 driver want to modify the MSIx table. - */ - bnxt_re_add_device(adev, BNXT_RE_POST_RECOVERY_INIT); rdev = en_info->rdev; ibdev_info(&rdev->ibdev, "Device resume completed"); diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.h b/drivers/infiniband/hw/bnxt_re/qplib_fp.h index 820611a23943..f55958e5fddb 100644 --- a/drivers/infiniband/hw/bnxt_re/qplib_fp.h +++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.h @@ -391,7 +391,7 @@ struct bnxt_qplib_cqe { u16 cfa_meta; u64 wr_id; union { - __le32 immdata; + u32 immdata; u32 invrkey; }; u64 qp_handle; diff --git a/drivers/infiniband/hw/hns/hns_roce_cq.c b/drivers/infiniband/hw/hns/hns_roce_cq.c index 4ec66611a143..4106423a1b39 100644 --- a/drivers/infiniband/hw/hns/hns_roce_cq.c +++ b/drivers/infiniband/hw/hns/hns_roce_cq.c @@ -179,8 +179,8 @@ static void free_cqc(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq) ret = hns_roce_destroy_hw_ctx(hr_dev, HNS_ROCE_CMD_DESTROY_CQC, hr_cq->cqn); if (ret) - dev_err(dev, "DESTROY_CQ failed (%d) for CQN %06lx\n", ret, - hr_cq->cqn); + dev_err_ratelimited(dev, "DESTROY_CQ failed (%d) for CQN %06lx\n", + ret, hr_cq->cqn);
xa_erase_irq(&cq_table->array, hr_cq->cqn);
diff --git a/drivers/infiniband/hw/hns/hns_roce_debugfs.c b/drivers/infiniband/hw/hns/hns_roce_debugfs.c index e8febb40f645..b869cdc54118 100644 --- a/drivers/infiniband/hw/hns/hns_roce_debugfs.c +++ b/drivers/infiniband/hw/hns/hns_roce_debugfs.c @@ -5,6 +5,7 @@
#include <linux/debugfs.h> #include <linux/device.h> +#include <linux/pci.h>
#include "hns_roce_device.h"
@@ -86,7 +87,7 @@ void hns_roce_register_debugfs(struct hns_roce_dev *hr_dev) { struct hns_roce_dev_debugfs *dbgfs = &hr_dev->dbgfs;
- dbgfs->root = debugfs_create_dir(dev_name(&hr_dev->ib_dev.dev), + dbgfs->root = debugfs_create_dir(pci_name(hr_dev->pci_dev), hns_roce_dbgfs_root);
create_sw_stat_debugfs(hr_dev, dbgfs->root); diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h index 0b1e21cb6d2d..560a1d9de408 100644 --- a/drivers/infiniband/hw/hns/hns_roce_device.h +++ b/drivers/infiniband/hw/hns/hns_roce_device.h @@ -489,12 +489,6 @@ struct hns_roce_bank { u32 next; /* Next ID to allocate. */ };
-struct hns_roce_idx_table { - u32 *spare_idx; - u32 head; - u32 tail; -}; - struct hns_roce_qp_table { struct hns_roce_hem_table qp_table; struct hns_roce_hem_table irrl_table; @@ -503,7 +497,7 @@ struct hns_roce_qp_table { struct mutex scc_mutex; struct hns_roce_bank bank[HNS_ROCE_QP_BANK_NUM]; struct mutex bank_mutex; - struct hns_roce_idx_table idx_table; + struct xarray dip_xa; };
struct hns_roce_cq_table { @@ -593,6 +587,7 @@ struct hns_roce_dev;
enum { HNS_ROCE_FLUSH_FLAG = 0, + HNS_ROCE_STOP_FLUSH_FLAG = 1, };
struct hns_roce_work { @@ -656,6 +651,8 @@ struct hns_roce_qp { enum hns_roce_cong_type cong_type; u8 tc_mode; u8 priority; + spinlock_t flush_lock; + struct hns_roce_dip *dip; };
struct hns_roce_ib_iboe { @@ -982,8 +979,6 @@ struct hns_roce_dev { enum hns_roce_device_state state; struct list_head qp_list; /* list of all qps on this dev */ spinlock_t qp_list_lock; /* protect qp_list */ - struct list_head dip_list; /* list of all dest ips on this dev */ - spinlock_t dip_list_lock; /* protect dip_list */
struct list_head pgdir_list; struct mutex pgdir_mutex; @@ -1289,6 +1284,7 @@ void hns_roce_cq_completion(struct hns_roce_dev *hr_dev, u32 cqn); void hns_roce_cq_event(struct hns_roce_dev *hr_dev, u32 cqn, int event_type); void flush_cqe(struct hns_roce_dev *dev, struct hns_roce_qp *qp); void hns_roce_qp_event(struct hns_roce_dev *hr_dev, u32 qpn, int event_type); +void hns_roce_flush_cqe(struct hns_roce_dev *hr_dev, u32 qpn); void hns_roce_srq_event(struct hns_roce_dev *hr_dev, u32 srqn, int event_type); void hns_roce_handle_device_err(struct hns_roce_dev *hr_dev); int hns_roce_init(struct hns_roce_dev *hr_dev); diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.c b/drivers/infiniband/hw/hns/hns_roce_hem.c index c7c167e2a045..f84521be3bea 100644 --- a/drivers/infiniband/hw/hns/hns_roce_hem.c +++ b/drivers/infiniband/hw/hns/hns_roce_hem.c @@ -300,7 +300,7 @@ static int calc_hem_config(struct hns_roce_dev *hr_dev, struct hns_roce_hem_mhop *mhop, struct hns_roce_hem_index *index) { - struct ib_device *ibdev = &hr_dev->ib_dev; + struct device *dev = hr_dev->dev; unsigned long mhop_obj = obj; u32 l0_idx, l1_idx, l2_idx; u32 chunk_ba_num; @@ -331,14 +331,14 @@ static int calc_hem_config(struct hns_roce_dev *hr_dev, index->buf = l0_idx; break; default: - ibdev_err(ibdev, "table %u not support mhop.hop_num = %u!\n", - table->type, mhop->hop_num); + dev_err(dev, "table %u not support mhop.hop_num = %u!\n", + table->type, mhop->hop_num); return -EINVAL; }
if (unlikely(index->buf >= table->num_hem)) { - ibdev_err(ibdev, "table %u exceed hem limt idx %llu, max %lu!\n", - table->type, index->buf, table->num_hem); + dev_err(dev, "table %u exceed hem limt idx %llu, max %lu!\n", + table->type, index->buf, table->num_hem); return -EINVAL; }
@@ -448,14 +448,14 @@ static int set_mhop_hem(struct hns_roce_dev *hr_dev, struct hns_roce_hem_mhop *mhop, struct hns_roce_hem_index *index) { - struct ib_device *ibdev = &hr_dev->ib_dev; + struct device *dev = hr_dev->dev; u32 step_idx; int ret = 0;
if (index->inited & HEM_INDEX_L0) { ret = hr_dev->hw->set_hem(hr_dev, table, obj, 0); if (ret) { - ibdev_err(ibdev, "set HEM step 0 failed!\n"); + dev_err(dev, "set HEM step 0 failed!\n"); goto out; } } @@ -463,7 +463,7 @@ static int set_mhop_hem(struct hns_roce_dev *hr_dev, if (index->inited & HEM_INDEX_L1) { ret = hr_dev->hw->set_hem(hr_dev, table, obj, 1); if (ret) { - ibdev_err(ibdev, "set HEM step 1 failed!\n"); + dev_err(dev, "set HEM step 1 failed!\n"); goto out; } } @@ -475,7 +475,7 @@ static int set_mhop_hem(struct hns_roce_dev *hr_dev, step_idx = mhop->hop_num; ret = hr_dev->hw->set_hem(hr_dev, table, obj, step_idx); if (ret) - ibdev_err(ibdev, "set HEM step last failed!\n"); + dev_err(dev, "set HEM step last failed!\n"); } out: return ret; @@ -485,14 +485,14 @@ static int hns_roce_table_mhop_get(struct hns_roce_dev *hr_dev, struct hns_roce_hem_table *table, unsigned long obj) { - struct ib_device *ibdev = &hr_dev->ib_dev; struct hns_roce_hem_index index = {}; struct hns_roce_hem_mhop mhop = {}; + struct device *dev = hr_dev->dev; int ret;
ret = calc_hem_config(hr_dev, table, obj, &mhop, &index); if (ret) { - ibdev_err(ibdev, "calc hem config failed!\n"); + dev_err(dev, "calc hem config failed!\n"); return ret; }
@@ -504,7 +504,7 @@ static int hns_roce_table_mhop_get(struct hns_roce_dev *hr_dev,
ret = alloc_mhop_hem(hr_dev, table, &mhop, &index); if (ret) { - ibdev_err(ibdev, "alloc mhop hem failed!\n"); + dev_err(dev, "alloc mhop hem failed!\n"); goto out; }
@@ -512,7 +512,7 @@ static int hns_roce_table_mhop_get(struct hns_roce_dev *hr_dev, if (table->type < HEM_TYPE_MTT) { ret = set_mhop_hem(hr_dev, table, obj, &mhop, &index); if (ret) { - ibdev_err(ibdev, "set HEM address to HW failed!\n"); + dev_err(dev, "set HEM address to HW failed!\n"); goto err_alloc; } } @@ -575,7 +575,7 @@ static void clear_mhop_hem(struct hns_roce_dev *hr_dev, struct hns_roce_hem_mhop *mhop, struct hns_roce_hem_index *index) { - struct ib_device *ibdev = &hr_dev->ib_dev; + struct device *dev = hr_dev->dev; u32 hop_num = mhop->hop_num; u32 chunk_ba_num; u32 step_idx; @@ -605,21 +605,21 @@ static void clear_mhop_hem(struct hns_roce_dev *hr_dev,
ret = hr_dev->hw->clear_hem(hr_dev, table, obj, step_idx); if (ret) - ibdev_warn(ibdev, "failed to clear hop%u HEM, ret = %d.\n", - hop_num, ret); + dev_warn(dev, "failed to clear hop%u HEM, ret = %d.\n", + hop_num, ret);
if (index->inited & HEM_INDEX_L1) { ret = hr_dev->hw->clear_hem(hr_dev, table, obj, 1); if (ret) - ibdev_warn(ibdev, "failed to clear HEM step 1, ret = %d.\n", - ret); + dev_warn(dev, "failed to clear HEM step 1, ret = %d.\n", + ret); }
if (index->inited & HEM_INDEX_L0) { ret = hr_dev->hw->clear_hem(hr_dev, table, obj, 0); if (ret) - ibdev_warn(ibdev, "failed to clear HEM step 0, ret = %d.\n", - ret); + dev_warn(dev, "failed to clear HEM step 0, ret = %d.\n", + ret); } } } @@ -629,14 +629,14 @@ static void hns_roce_table_mhop_put(struct hns_roce_dev *hr_dev, unsigned long obj, int check_refcount) { - struct ib_device *ibdev = &hr_dev->ib_dev; struct hns_roce_hem_index index = {}; struct hns_roce_hem_mhop mhop = {}; + struct device *dev = hr_dev->dev; int ret;
ret = calc_hem_config(hr_dev, table, obj, &mhop, &index); if (ret) { - ibdev_err(ibdev, "calc hem config failed!\n"); + dev_err(dev, "calc hem config failed!\n"); return; }
@@ -672,8 +672,8 @@ void hns_roce_table_put(struct hns_roce_dev *hr_dev,
ret = hr_dev->hw->clear_hem(hr_dev, table, obj, HEM_HOP_STEP_DIRECT); if (ret) - dev_warn(dev, "failed to clear HEM base address, ret = %d.\n", - ret); + dev_warn_ratelimited(dev, "failed to clear HEM base address, ret = %d.\n", + ret);
hns_roce_free_hem(hr_dev, table->hem[i]); table->hem[i] = NULL; diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c index 24e906b9d3ae..697b17cca02e 100644 --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c @@ -373,19 +373,12 @@ static int set_rwqe_data_seg(struct ib_qp *ibqp, const struct ib_send_wr *wr, static int check_send_valid(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp) { - struct ib_device *ibdev = &hr_dev->ib_dev; - if (unlikely(hr_qp->state == IB_QPS_RESET || hr_qp->state == IB_QPS_INIT || - hr_qp->state == IB_QPS_RTR)) { - ibdev_err(ibdev, "failed to post WQE, QP state %u!\n", - hr_qp->state); + hr_qp->state == IB_QPS_RTR)) return -EINVAL; - } else if (unlikely(hr_dev->state >= HNS_ROCE_DEVICE_STATE_RST_DOWN)) { - ibdev_err(ibdev, "failed to post WQE, dev state %d!\n", - hr_dev->state); + else if (unlikely(hr_dev->state >= HNS_ROCE_DEVICE_STATE_RST_DOWN)) return -EIO; - }
return 0; } @@ -582,7 +575,7 @@ static inline int set_rc_wqe(struct hns_roce_qp *qp, if (WARN_ON(ret)) return ret;
- hr_reg_write(rc_sq_wqe, RC_SEND_WQE_FENCE, + hr_reg_write(rc_sq_wqe, RC_SEND_WQE_SO, (wr->send_flags & IB_SEND_FENCE) ? 1 : 0);
hr_reg_write(rc_sq_wqe, RC_SEND_WQE_SE, @@ -2560,20 +2553,19 @@ static void hns_roce_free_link_table(struct hns_roce_dev *hr_dev) free_link_table_buf(hr_dev, &priv->ext_llm); }
-static void free_dip_list(struct hns_roce_dev *hr_dev) +static void free_dip_entry(struct hns_roce_dev *hr_dev) { struct hns_roce_dip *hr_dip; - struct hns_roce_dip *tmp; - unsigned long flags; + unsigned long idx;
- spin_lock_irqsave(&hr_dev->dip_list_lock, flags); + xa_lock(&hr_dev->qp_table.dip_xa);
- list_for_each_entry_safe(hr_dip, tmp, &hr_dev->dip_list, node) { - list_del(&hr_dip->node); + xa_for_each(&hr_dev->qp_table.dip_xa, idx, hr_dip) { + __xa_erase(&hr_dev->qp_table.dip_xa, hr_dip->dip_idx); kfree(hr_dip); }
- spin_unlock_irqrestore(&hr_dev->dip_list_lock, flags); + xa_unlock(&hr_dev->qp_table.dip_xa); }
static struct ib_pd *free_mr_init_pd(struct hns_roce_dev *hr_dev) @@ -2775,8 +2767,8 @@ static int free_mr_modify_rsv_qp(struct hns_roce_dev *hr_dev, ret = hr_dev->hw->modify_qp(&hr_qp->ibqp, attr, mask, IB_QPS_INIT, IB_QPS_INIT, NULL); if (ret) { - ibdev_err(ibdev, "failed to modify qp to init, ret = %d.\n", - ret); + ibdev_err_ratelimited(ibdev, "failed to modify qp to init, ret = %d.\n", + ret); return ret; }
@@ -2981,7 +2973,7 @@ static void hns_roce_v2_exit(struct hns_roce_dev *hr_dev) hns_roce_free_link_table(hr_dev);
if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP09) - free_dip_list(hr_dev); + free_dip_entry(hr_dev); }
static int hns_roce_mbox_post(struct hns_roce_dev *hr_dev, @@ -3421,8 +3413,8 @@ static int free_mr_post_send_lp_wqe(struct hns_roce_qp *hr_qp)
ret = hns_roce_v2_post_send(&hr_qp->ibqp, send_wr, &bad_wr); if (ret) { - ibdev_err(ibdev, "failed to post wqe for free mr, ret = %d.\n", - ret); + ibdev_err_ratelimited(ibdev, "failed to post wqe for free mr, ret = %d.\n", + ret); return ret; }
@@ -3461,9 +3453,9 @@ static void free_mr_send_cmd_to_hw(struct hns_roce_dev *hr_dev)
ret = free_mr_post_send_lp_wqe(hr_qp); if (ret) { - ibdev_err(ibdev, - "failed to send wqe (qp:0x%lx) for free mr, ret = %d.\n", - hr_qp->qpn, ret); + ibdev_err_ratelimited(ibdev, + "failed to send wqe (qp:0x%lx) for free mr, ret = %d.\n", + hr_qp->qpn, ret); break; }
@@ -3474,16 +3466,16 @@ static void free_mr_send_cmd_to_hw(struct hns_roce_dev *hr_dev) while (cqe_cnt) { npolled = hns_roce_v2_poll_cq(&free_mr->rsv_cq->ib_cq, cqe_cnt, wc); if (npolled < 0) { - ibdev_err(ibdev, - "failed to poll cqe for free mr, remain %d cqe.\n", - cqe_cnt); + ibdev_err_ratelimited(ibdev, + "failed to poll cqe for free mr, remain %d cqe.\n", + cqe_cnt); goto out; }
if (time_after(jiffies, end)) { - ibdev_err(ibdev, - "failed to poll cqe for free mr and timeout, remain %d cqe.\n", - cqe_cnt); + ibdev_err_ratelimited(ibdev, + "failed to poll cqe for free mr and timeout, remain %d cqe.\n", + cqe_cnt); goto out; } cqe_cnt -= npolled; @@ -4701,26 +4693,49 @@ static int modify_qp_rtr_to_rts(struct ib_qp *ibqp, int attr_mask, return 0; }
+static int alloc_dip_entry(struct xarray *dip_xa, u32 qpn) +{ + struct hns_roce_dip *hr_dip; + int ret; + + hr_dip = xa_load(dip_xa, qpn); + if (hr_dip) + return 0; + + hr_dip = kzalloc(sizeof(*hr_dip), GFP_KERNEL); + if (!hr_dip) + return -ENOMEM; + + ret = xa_err(xa_store(dip_xa, qpn, hr_dip, GFP_KERNEL)); + if (ret) + kfree(hr_dip); + + return ret; +} + static int get_dip_ctx_idx(struct ib_qp *ibqp, const struct ib_qp_attr *attr, u32 *dip_idx) { const struct ib_global_route *grh = rdma_ah_read_grh(&attr->ah_attr); struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device); - u32 *spare_idx = hr_dev->qp_table.idx_table.spare_idx; - u32 *head = &hr_dev->qp_table.idx_table.head; - u32 *tail = &hr_dev->qp_table.idx_table.tail; + struct xarray *dip_xa = &hr_dev->qp_table.dip_xa; + struct hns_roce_qp *hr_qp = to_hr_qp(ibqp); struct hns_roce_dip *hr_dip; - unsigned long flags; + unsigned long idx; int ret = 0;
- spin_lock_irqsave(&hr_dev->dip_list_lock, flags); + ret = alloc_dip_entry(dip_xa, ibqp->qp_num); + if (ret) + return ret;
- spare_idx[*tail] = ibqp->qp_num; - *tail = (*tail == hr_dev->caps.num_qps - 1) ? 0 : (*tail + 1); + xa_lock(dip_xa);
- list_for_each_entry(hr_dip, &hr_dev->dip_list, node) { - if (!memcmp(grh->dgid.raw, hr_dip->dgid, GID_LEN_V2)) { + xa_for_each(dip_xa, idx, hr_dip) { + if (hr_dip->qp_cnt && + !memcmp(grh->dgid.raw, hr_dip->dgid, GID_LEN_V2)) { *dip_idx = hr_dip->dip_idx; + hr_dip->qp_cnt++; + hr_qp->dip = hr_dip; goto out; } } @@ -4728,19 +4743,24 @@ static int get_dip_ctx_idx(struct ib_qp *ibqp, const struct ib_qp_attr *attr, /* If no dgid is found, a new dip and a mapping between dgid and * dip_idx will be created. */ - hr_dip = kzalloc(sizeof(*hr_dip), GFP_ATOMIC); - if (!hr_dip) { - ret = -ENOMEM; - goto out; + xa_for_each(dip_xa, idx, hr_dip) { + if (hr_dip->qp_cnt) + continue; + + *dip_idx = idx; + memcpy(hr_dip->dgid, grh->dgid.raw, sizeof(grh->dgid.raw)); + hr_dip->dip_idx = idx; + hr_dip->qp_cnt++; + hr_qp->dip = hr_dip; + break; }
- memcpy(hr_dip->dgid, grh->dgid.raw, sizeof(grh->dgid.raw)); - hr_dip->dip_idx = *dip_idx = spare_idx[*head]; - *head = (*head == hr_dev->caps.num_qps - 1) ? 0 : (*head + 1); - list_add_tail(&hr_dip->node, &hr_dev->dip_list); + /* This should never happen. */ + if (WARN_ON_ONCE(!hr_qp->dip)) + ret = -ENOSPC;
out: - spin_unlock_irqrestore(&hr_dev->dip_list_lock, flags); + xa_unlock(dip_xa); return ret; }
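The DIP rework above replaces the global dip_list plus spinlock with a per-device xarray indexed by QPN: entries are allocated with GFP_KERNEL in sleepable context (alloc_dip_entry()) instead of GFP_ATOMIC under a lock, and qp_cnt acts as the reference count. The put side added further down pairs with it; a condensed sketch of that release path:

xa_lock(&hr_dev->qp_table.dip_xa);
hr_dip->qp_cnt--;
if (!hr_dip->qp_cnt)
	/* slot stays allocated and is recycled by the next QP; only the
	 * cached DGID is cleared */
	memset(hr_dip->dgid, 0, GID_LEN_V2);
xa_unlock(&hr_dev->qp_table.dip_xa);
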
@@ -5061,10 +5081,8 @@ static int hns_roce_v2_set_abs_fields(struct ib_qp *ibqp, struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device); int ret = 0;
- if (!check_qp_state(cur_state, new_state)) { - ibdev_err(&hr_dev->ib_dev, "Illegal state for QP!\n"); + if (!check_qp_state(cur_state, new_state)) return -EINVAL; - }
if (cur_state == IB_QPS_RESET && new_state == IB_QPS_INIT) { memset(qpc_mask, 0, hr_dev->caps.qpc_sz); @@ -5325,7 +5343,7 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp, /* SW pass context to HW */ ret = hns_roce_v2_qp_modify(hr_dev, context, qpc_mask, hr_qp); if (ret) { - ibdev_err(ibdev, "failed to modify QP, ret = %d.\n", ret); + ibdev_err_ratelimited(ibdev, "failed to modify QP, ret = %d.\n", ret); goto out; }
@@ -5463,7 +5481,9 @@ static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
ret = hns_roce_v2_query_qpc(hr_dev, hr_qp->qpn, &context); if (ret) { - ibdev_err(ibdev, "failed to query QPC, ret = %d.\n", ret); + ibdev_err_ratelimited(ibdev, + "failed to query QPC, ret = %d.\n", + ret); ret = -EINVAL; goto out; } @@ -5471,7 +5491,7 @@ static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr, state = hr_reg_read(&context, QPC_QP_ST); tmp_qp_state = to_ib_qp_st((enum hns_roce_v2_qp_state)state); if (tmp_qp_state == -1) { - ibdev_err(ibdev, "Illegal ib_qp_state\n"); + ibdev_err_ratelimited(ibdev, "Illegal ib_qp_state\n"); ret = -EINVAL; goto out; } @@ -5564,9 +5584,9 @@ static int hns_roce_v2_destroy_qp_common(struct hns_roce_dev *hr_dev, ret = hns_roce_v2_modify_qp(&hr_qp->ibqp, NULL, 0, hr_qp->state, IB_QPS_RESET, udata); if (ret) - ibdev_err(ibdev, - "failed to modify QP to RST, ret = %d.\n", - ret); + ibdev_err_ratelimited(ibdev, + "failed to modify QP to RST, ret = %d.\n", + ret); }
send_cq = hr_qp->ibqp.send_cq ? to_hr_cq(hr_qp->ibqp.send_cq) : NULL; @@ -5594,17 +5614,41 @@ static int hns_roce_v2_destroy_qp_common(struct hns_roce_dev *hr_dev, return ret; }
+static void put_dip_ctx_idx(struct hns_roce_dev *hr_dev, + struct hns_roce_qp *hr_qp) +{ + struct hns_roce_dip *hr_dip = hr_qp->dip; + + xa_lock(&hr_dev->qp_table.dip_xa); + + hr_dip->qp_cnt--; + if (!hr_dip->qp_cnt) + memset(hr_dip->dgid, 0, GID_LEN_V2); + + xa_unlock(&hr_dev->qp_table.dip_xa); +} + int hns_roce_v2_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata) { struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device); struct hns_roce_qp *hr_qp = to_hr_qp(ibqp); + unsigned long flags; int ret;
+ /* Make sure flush_cqe() is completed */ + spin_lock_irqsave(&hr_qp->flush_lock, flags); + set_bit(HNS_ROCE_STOP_FLUSH_FLAG, &hr_qp->flush_flag); + spin_unlock_irqrestore(&hr_qp->flush_lock, flags); + flush_work(&hr_qp->flush_work.work); + + if (hr_qp->cong_type == CONG_TYPE_DIP) + put_dip_ctx_idx(hr_dev, hr_qp); + ret = hns_roce_v2_destroy_qp_common(hr_dev, hr_qp, udata); if (ret) - ibdev_err(&hr_dev->ib_dev, - "failed to destroy QP, QPN = 0x%06lx, ret = %d.\n", - hr_qp->qpn, ret); + ibdev_err_ratelimited(&hr_dev->ib_dev, + "failed to destroy QP, QPN = 0x%06lx, ret = %d.\n", + hr_qp->qpn, ret);
hns_roce_qp_destroy(hr_dev, hr_qp, udata);
@@ -5898,9 +5942,9 @@ static int hns_roce_v2_modify_cq(struct ib_cq *cq, u16 cq_count, u16 cq_period) HNS_ROCE_CMD_MODIFY_CQC, hr_cq->cqn); hns_roce_free_cmd_mailbox(hr_dev, mailbox); if (ret) - ibdev_err(&hr_dev->ib_dev, - "failed to process cmd when modifying CQ, ret = %d.\n", - ret); + ibdev_err_ratelimited(&hr_dev->ib_dev, + "failed to process cmd when modifying CQ, ret = %d.\n", + ret);
err_out: if (ret) @@ -5924,9 +5968,9 @@ static int hns_roce_v2_query_cqc(struct hns_roce_dev *hr_dev, u32 cqn, ret = hns_roce_cmd_mbox(hr_dev, 0, mailbox->dma, HNS_ROCE_CMD_QUERY_CQC, cqn); if (ret) { - ibdev_err(&hr_dev->ib_dev, - "failed to process cmd when querying CQ, ret = %d.\n", - ret); + ibdev_err_ratelimited(&hr_dev->ib_dev, + "failed to process cmd when querying CQ, ret = %d.\n", + ret); goto err_mailbox; }
@@ -5967,11 +6011,10 @@ static int hns_roce_v2_query_mpt(struct hns_roce_dev *hr_dev, u32 key, return ret; }
-static void hns_roce_irq_work_handle(struct work_struct *work) +static void dump_aeqe_log(struct hns_roce_work *irq_work) { - struct hns_roce_work *irq_work = - container_of(work, struct hns_roce_work, work); - struct ib_device *ibdev = &irq_work->hr_dev->ib_dev; + struct hns_roce_dev *hr_dev = irq_work->hr_dev; + struct ib_device *ibdev = &hr_dev->ib_dev;
switch (irq_work->event_type) { case HNS_ROCE_EVENT_TYPE_PATH_MIG: @@ -6015,6 +6058,8 @@ static void hns_roce_irq_work_handle(struct work_struct *work) case HNS_ROCE_EVENT_TYPE_DB_OVERFLOW: ibdev_warn(ibdev, "DB overflow.\n"); break; + case HNS_ROCE_EVENT_TYPE_MB: + break; case HNS_ROCE_EVENT_TYPE_FLR: ibdev_warn(ibdev, "function level reset.\n"); break; @@ -6025,8 +6070,46 @@ static void hns_roce_irq_work_handle(struct work_struct *work) ibdev_err(ibdev, "invalid xrceth error.\n"); break; default: + ibdev_info(ibdev, "Undefined event %d.\n", + irq_work->event_type); break; } +} + +static void hns_roce_irq_work_handle(struct work_struct *work) +{ + struct hns_roce_work *irq_work = + container_of(work, struct hns_roce_work, work); + struct hns_roce_dev *hr_dev = irq_work->hr_dev; + int event_type = irq_work->event_type; + u32 queue_num = irq_work->queue_num; + + switch (event_type) { + case HNS_ROCE_EVENT_TYPE_PATH_MIG: + case HNS_ROCE_EVENT_TYPE_PATH_MIG_FAILED: + case HNS_ROCE_EVENT_TYPE_COMM_EST: + case HNS_ROCE_EVENT_TYPE_SQ_DRAINED: + case HNS_ROCE_EVENT_TYPE_WQ_CATAS_ERROR: + case HNS_ROCE_EVENT_TYPE_SRQ_LAST_WQE_REACH: + case HNS_ROCE_EVENT_TYPE_INV_REQ_LOCAL_WQ_ERROR: + case HNS_ROCE_EVENT_TYPE_LOCAL_WQ_ACCESS_ERROR: + case HNS_ROCE_EVENT_TYPE_XRCD_VIOLATION: + case HNS_ROCE_EVENT_TYPE_INVALID_XRCETH: + hns_roce_qp_event(hr_dev, queue_num, event_type); + break; + case HNS_ROCE_EVENT_TYPE_SRQ_LIMIT_REACH: + case HNS_ROCE_EVENT_TYPE_SRQ_CATAS_ERROR: + hns_roce_srq_event(hr_dev, queue_num, event_type); + break; + case HNS_ROCE_EVENT_TYPE_CQ_ACCESS_ERROR: + case HNS_ROCE_EVENT_TYPE_CQ_OVERFLOW: + hns_roce_cq_event(hr_dev, queue_num, event_type); + break; + default: + break; + } + + dump_aeqe_log(irq_work);
kfree(irq_work); } @@ -6087,14 +6170,14 @@ static struct hns_roce_aeqe *next_aeqe_sw_v2(struct hns_roce_eq *eq) static irqreturn_t hns_roce_v2_aeq_int(struct hns_roce_dev *hr_dev, struct hns_roce_eq *eq) { - struct device *dev = hr_dev->dev; struct hns_roce_aeqe *aeqe = next_aeqe_sw_v2(eq); irqreturn_t aeqe_found = IRQ_NONE; + int num_aeqes = 0; int event_type; u32 queue_num; int sub_type;
- while (aeqe) { + while (aeqe && num_aeqes < HNS_AEQ_POLLING_BUDGET) { /* Make sure we read AEQ entry after we have checked the * ownership bit */ @@ -6105,25 +6188,12 @@ static irqreturn_t hns_roce_v2_aeq_int(struct hns_roce_dev *hr_dev, queue_num = hr_reg_read(aeqe, AEQE_EVENT_QUEUE_NUM);
switch (event_type) { - case HNS_ROCE_EVENT_TYPE_PATH_MIG: - case HNS_ROCE_EVENT_TYPE_PATH_MIG_FAILED: - case HNS_ROCE_EVENT_TYPE_COMM_EST: - case HNS_ROCE_EVENT_TYPE_SQ_DRAINED: case HNS_ROCE_EVENT_TYPE_WQ_CATAS_ERROR: - case HNS_ROCE_EVENT_TYPE_SRQ_LAST_WQE_REACH: case HNS_ROCE_EVENT_TYPE_INV_REQ_LOCAL_WQ_ERROR: case HNS_ROCE_EVENT_TYPE_LOCAL_WQ_ACCESS_ERROR: case HNS_ROCE_EVENT_TYPE_XRCD_VIOLATION: case HNS_ROCE_EVENT_TYPE_INVALID_XRCETH: - hns_roce_qp_event(hr_dev, queue_num, event_type); - break; - case HNS_ROCE_EVENT_TYPE_SRQ_LIMIT_REACH: - case HNS_ROCE_EVENT_TYPE_SRQ_CATAS_ERROR: - hns_roce_srq_event(hr_dev, queue_num, event_type); - break; - case HNS_ROCE_EVENT_TYPE_CQ_ACCESS_ERROR: - case HNS_ROCE_EVENT_TYPE_CQ_OVERFLOW: - hns_roce_cq_event(hr_dev, queue_num, event_type); + hns_roce_flush_cqe(hr_dev, queue_num); break; case HNS_ROCE_EVENT_TYPE_MB: hns_roce_cmd_event(hr_dev, @@ -6131,12 +6201,7 @@ static irqreturn_t hns_roce_v2_aeq_int(struct hns_roce_dev *hr_dev, aeqe->event.cmd.status, le64_to_cpu(aeqe->event.cmd.out_param)); break; - case HNS_ROCE_EVENT_TYPE_DB_OVERFLOW: - case HNS_ROCE_EVENT_TYPE_FLR: - break; default: - dev_err(dev, "unhandled event %d on EQ %d at idx %u.\n", - event_type, eq->eqn, eq->cons_index); break; }
@@ -6150,6 +6215,7 @@ static irqreturn_t hns_roce_v2_aeq_int(struct hns_roce_dev *hr_dev, hns_roce_v2_init_irq_work(hr_dev, eq, queue_num);
aeqe = next_aeqe_sw_v2(eq); + ++num_aeqes; }
update_eq_db(eq); @@ -6699,6 +6765,9 @@ static int hns_roce_v2_init_eq_table(struct hns_roce_dev *hr_dev) int ret; int i;
+ if (hr_dev->caps.aeqe_depth < HNS_AEQ_POLLING_BUDGET) + return -EINVAL; + other_num = hr_dev->caps.num_other_vectors; comp_num = hr_dev->caps.num_comp_vectors; aeq_num = hr_dev->caps.num_aeq_vectors; @@ -7017,6 +7086,7 @@ static void hns_roce_hw_v2_uninit_instance(struct hnae3_handle *handle,
handle->rinfo.instance_state = HNS_ROCE_STATE_NON_INIT; } + static int hns_roce_hw_v2_reset_notify_down(struct hnae3_handle *handle) { struct hns_roce_dev *hr_dev; @@ -7035,6 +7105,9 @@ static int hns_roce_hw_v2_reset_notify_down(struct hnae3_handle *handle)
hr_dev->active = false; hr_dev->dis_db = true; + + rdma_user_mmap_disassociate(&hr_dev->ib_dev); + hr_dev->state = HNS_ROCE_DEVICE_STATE_RST_DOWN;
return 0; diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h index c65f68a14a26..cbdbc9edbce6 100644 --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h @@ -85,6 +85,11 @@
#define HNS_ROCE_V2_TABLE_CHUNK_SIZE (1 << 18)
+/* budget must be smaller than aeqe_depth to guarantee that we update + * the ci before we polled all the entries in the EQ. + */ +#define HNS_AEQ_POLLING_BUDGET 64 + enum { HNS_ROCE_CMD_FLAG_IN = BIT(0), HNS_ROCE_CMD_FLAG_OUT = BIT(1), @@ -919,6 +924,7 @@ struct hns_roce_v2_rc_send_wqe { #define RC_SEND_WQE_OWNER RC_SEND_WQE_FIELD_LOC(7, 7) #define RC_SEND_WQE_CQE RC_SEND_WQE_FIELD_LOC(8, 8) #define RC_SEND_WQE_FENCE RC_SEND_WQE_FIELD_LOC(9, 9) +#define RC_SEND_WQE_SO RC_SEND_WQE_FIELD_LOC(10, 10) #define RC_SEND_WQE_SE RC_SEND_WQE_FIELD_LOC(11, 11) #define RC_SEND_WQE_INLINE RC_SEND_WQE_FIELD_LOC(12, 12) #define RC_SEND_WQE_WQE_INDEX RC_SEND_WQE_FIELD_LOC(30, 15) @@ -1342,7 +1348,7 @@ struct hns_roce_v2_priv { struct hns_roce_dip { u8 dgid[GID_LEN_V2]; u32 dip_idx; - struct list_head node; /* all dips are on a list */ + u32 qp_cnt; };
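On the HNS_AEQ_POLLING_BUDGET constant introduced above: the AEQ interrupt handler now processes at most this many entries per invocation. A condensed view of the bounded loop from the hw_v2 hunk, assuming aeqe and num_aeqes as declared there:

int num_aeqes = 0;

while (aeqe && num_aeqes < HNS_AEQ_POLLING_BUDGET) {
	/* handle one event, then advance the software pointer */
	aeqe = next_aeqe_sw_v2(eq);
	++num_aeqes;
}
update_eq_db(eq);	/* publish the consumer index before the EQ can wrap */
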
struct fmea_ram_ecc { diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c index 4cb0af733587..ae24c81c9812 100644 --- a/drivers/infiniband/hw/hns/hns_roce_main.c +++ b/drivers/infiniband/hw/hns/hns_roce_main.c @@ -466,6 +466,11 @@ static int hns_roce_mmap(struct ib_ucontext *uctx, struct vm_area_struct *vma) pgprot_t prot; int ret;
+ if (hr_dev->dis_db) { + atomic64_inc(&hr_dev->dfx_cnt[HNS_ROCE_DFX_MMAP_ERR_CNT]); + return -EPERM; + } + rdma_entry = rdma_user_mmap_entry_get_pgoff(uctx, vma->vm_pgoff); if (!rdma_entry) { atomic64_inc(&hr_dev->dfx_cnt[HNS_ROCE_DFX_MMAP_ERR_CNT]); @@ -1130,8 +1135,6 @@ int hns_roce_init(struct hns_roce_dev *hr_dev)
INIT_LIST_HEAD(&hr_dev->qp_list); spin_lock_init(&hr_dev->qp_list_lock); - INIT_LIST_HEAD(&hr_dev->dip_list); - spin_lock_init(&hr_dev->dip_list_lock);
ret = hns_roce_register_device(hr_dev); if (ret) diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c index 846da8c78b8b..bf30b3a65a9b 100644 --- a/drivers/infiniband/hw/hns/hns_roce_mr.c +++ b/drivers/infiniband/hw/hns/hns_roce_mr.c @@ -138,8 +138,8 @@ static void hns_roce_mr_free(struct hns_roce_dev *hr_dev, struct hns_roce_mr *mr key_to_hw_index(mr->key) & (hr_dev->caps.num_mtpts - 1)); if (ret) - ibdev_warn(ibdev, "failed to destroy mpt, ret = %d.\n", - ret); + ibdev_warn_ratelimited(ibdev, "failed to destroy mpt, ret = %d.\n", + ret); }
free_mr_pbl(hr_dev, mr); @@ -435,15 +435,16 @@ static int hns_roce_set_page(struct ib_mr *ibmr, u64 addr) }
int hns_roce_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents, - unsigned int *sg_offset) + unsigned int *sg_offset_p) { + unsigned int sg_offset = sg_offset_p ? *sg_offset_p : 0; struct hns_roce_dev *hr_dev = to_hr_dev(ibmr->device); struct ib_device *ibdev = &hr_dev->ib_dev; struct hns_roce_mr *mr = to_hr_mr(ibmr); struct hns_roce_mtr *mtr = &mr->pbl_mtr; int ret, sg_num = 0;
- if (!IS_ALIGNED(*sg_offset, HNS_ROCE_FRMR_ALIGN_SIZE) || + if (!IS_ALIGNED(sg_offset, HNS_ROCE_FRMR_ALIGN_SIZE) || ibmr->page_size < HNS_HW_PAGE_SIZE || ibmr->page_size > HNS_HW_MAX_PAGE_SIZE) return sg_num; @@ -454,7 +455,7 @@ int hns_roce_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents, if (!mr->page_list) return sg_num;
- sg_num = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, hns_roce_set_page); + sg_num = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset_p, hns_roce_set_page); if (sg_num < 1) { ibdev_err(ibdev, "failed to store sg pages %u %u, cnt = %d.\n", mr->npages, mr->pbl_mtr.hem_cfg.buf_pg_count, sg_num); diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c index 6b03ba671ff8..9e2e76c59406 100644 --- a/drivers/infiniband/hw/hns/hns_roce_qp.c +++ b/drivers/infiniband/hw/hns/hns_roce_qp.c @@ -39,6 +39,25 @@ #include "hns_roce_device.h" #include "hns_roce_hem.h"
+static struct hns_roce_qp *hns_roce_qp_lookup(struct hns_roce_dev *hr_dev, + u32 qpn) +{ + struct device *dev = hr_dev->dev; + struct hns_roce_qp *qp; + unsigned long flags; + + xa_lock_irqsave(&hr_dev->qp_table_xa, flags); + qp = __hns_roce_qp_lookup(hr_dev, qpn); + if (qp) + refcount_inc(&qp->refcount); + xa_unlock_irqrestore(&hr_dev->qp_table_xa, flags); + + if (!qp) + dev_warn(dev, "async event for bogus QP %08x\n", qpn); + + return qp; +} + static void flush_work_handle(struct work_struct *work) { struct hns_roce_work *flush_work = container_of(work, @@ -71,11 +90,18 @@ static void flush_work_handle(struct work_struct *work) void init_flush_work(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp) { struct hns_roce_work *flush_work = &hr_qp->flush_work; + unsigned long flags; + + spin_lock_irqsave(&hr_qp->flush_lock, flags); + /* Exit directly after destroy_qp() */ + if (test_bit(HNS_ROCE_STOP_FLUSH_FLAG, &hr_qp->flush_flag)) { + spin_unlock_irqrestore(&hr_qp->flush_lock, flags); + return; + }
- flush_work->hr_dev = hr_dev; - INIT_WORK(&flush_work->work, flush_work_handle); refcount_inc(&hr_qp->refcount); queue_work(hr_dev->irq_workq, &flush_work->work); + spin_unlock_irqrestore(&hr_qp->flush_lock, flags); }
void flush_cqe(struct hns_roce_dev *dev, struct hns_roce_qp *qp) @@ -95,31 +121,28 @@ void flush_cqe(struct hns_roce_dev *dev, struct hns_roce_qp *qp)
void hns_roce_qp_event(struct hns_roce_dev *hr_dev, u32 qpn, int event_type) { - struct device *dev = hr_dev->dev; struct hns_roce_qp *qp;
- xa_lock(&hr_dev->qp_table_xa); - qp = __hns_roce_qp_lookup(hr_dev, qpn); - if (qp) - refcount_inc(&qp->refcount); - xa_unlock(&hr_dev->qp_table_xa); - - if (!qp) { - dev_warn(dev, "async event for bogus QP %08x\n", qpn); + qp = hns_roce_qp_lookup(hr_dev, qpn); + if (!qp) return; - }
- if (event_type == HNS_ROCE_EVENT_TYPE_WQ_CATAS_ERROR || - event_type == HNS_ROCE_EVENT_TYPE_INV_REQ_LOCAL_WQ_ERROR || - event_type == HNS_ROCE_EVENT_TYPE_LOCAL_WQ_ACCESS_ERROR || - event_type == HNS_ROCE_EVENT_TYPE_XRCD_VIOLATION || - event_type == HNS_ROCE_EVENT_TYPE_INVALID_XRCETH) { - qp->state = IB_QPS_ERR; + qp->event(qp, (enum hns_roce_event)event_type);
- flush_cqe(hr_dev, qp); - } + if (refcount_dec_and_test(&qp->refcount)) + complete(&qp->free); +}
- qp->event(qp, (enum hns_roce_event)event_type); +void hns_roce_flush_cqe(struct hns_roce_dev *hr_dev, u32 qpn) +{ + struct hns_roce_qp *qp; + + qp = hns_roce_qp_lookup(hr_dev, qpn); + if (!qp) + return; + + qp->state = IB_QPS_ERR; + flush_cqe(hr_dev, qp);
if (refcount_dec_and_test(&qp->refcount)) complete(&qp->free); @@ -1124,6 +1147,7 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev, struct ib_udata *udata, struct hns_roce_qp *hr_qp) { + struct hns_roce_work *flush_work = &hr_qp->flush_work; struct hns_roce_ib_create_qp_resp resp = {}; struct ib_device *ibdev = &hr_dev->ib_dev; struct hns_roce_ib_create_qp ucmd = {}; @@ -1132,9 +1156,12 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev, mutex_init(&hr_qp->mutex); spin_lock_init(&hr_qp->sq.lock); spin_lock_init(&hr_qp->rq.lock); + spin_lock_init(&hr_qp->flush_lock);
hr_qp->state = IB_QPS_RESET; hr_qp->flush_flag = 0; + flush_work->hr_dev = hr_dev; + INIT_WORK(&flush_work->work, flush_work_handle);
if (init_attr->create_flags) return -EOPNOTSUPP; @@ -1546,14 +1573,10 @@ int hns_roce_init_qp_table(struct hns_roce_dev *hr_dev) unsigned int reserved_from_bot; unsigned int i;
- qp_table->idx_table.spare_idx = kcalloc(hr_dev->caps.num_qps, - sizeof(u32), GFP_KERNEL); - if (!qp_table->idx_table.spare_idx) - return -ENOMEM; - mutex_init(&qp_table->scc_mutex); mutex_init(&qp_table->bank_mutex); xa_init(&hr_dev->qp_table_xa); + xa_init(&qp_table->dip_xa);
reserved_from_bot = hr_dev->caps.reserved_qps;
@@ -1578,7 +1601,7 @@ void hns_roce_cleanup_qp_table(struct hns_roce_dev *hr_dev)
for (i = 0; i < HNS_ROCE_QP_BANK_NUM; i++) ida_destroy(&hr_dev->qp_table.bank[i].ida); + xa_destroy(&hr_dev->qp_table.dip_xa); mutex_destroy(&hr_dev->qp_table.bank_mutex); mutex_destroy(&hr_dev->qp_table.scc_mutex); - kfree(hr_dev->qp_table.idx_table.spare_idx); } diff --git a/drivers/infiniband/hw/hns/hns_roce_srq.c b/drivers/infiniband/hw/hns/hns_roce_srq.c index c9b8233f4b05..70c06ef65603 100644 --- a/drivers/infiniband/hw/hns/hns_roce_srq.c +++ b/drivers/infiniband/hw/hns/hns_roce_srq.c @@ -151,8 +151,8 @@ static void free_srqc(struct hns_roce_dev *hr_dev, struct hns_roce_srq *srq) ret = hns_roce_destroy_hw_ctx(hr_dev, HNS_ROCE_CMD_DESTROY_SRQ, srq->srqn); if (ret) - dev_err(hr_dev->dev, "DESTROY_SRQ failed (%d) for SRQN %06lx\n", - ret, srq->srqn); + dev_err_ratelimited(hr_dev->dev, "DESTROY_SRQ failed (%d) for SRQN %06lx\n", + ret, srq->srqn);
xa_erase_irq(&srq_table->xa, srq->srqn);
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c index 4999239c8f41..ac20ab3bbabf 100644 --- a/drivers/infiniband/hw/mlx5/main.c +++ b/drivers/infiniband/hw/mlx5/main.c @@ -2997,7 +2997,6 @@ int mlx5_ib_dev_res_srq_init(struct mlx5_ib_dev *dev) static int mlx5_ib_dev_res_init(struct mlx5_ib_dev *dev) { struct mlx5_ib_resources *devr = &dev->devr; - int port; int ret;
if (!MLX5_CAP_GEN(dev->mdev, xrc)) @@ -3013,10 +3012,6 @@ static int mlx5_ib_dev_res_init(struct mlx5_ib_dev *dev) return ret; }
- for (port = 0; port < ARRAY_SIZE(devr->ports); ++port) - INIT_WORK(&devr->ports[port].pkey_change_work, - pkey_change_handler); - mutex_init(&devr->cq_lock); mutex_init(&devr->srq_lock);
@@ -3026,16 +3021,6 @@ static int mlx5_ib_dev_res_init(struct mlx5_ib_dev *dev) static void mlx5_ib_dev_res_cleanup(struct mlx5_ib_dev *dev) { struct mlx5_ib_resources *devr = &dev->devr; - int port; - - /* - * Make sure no change P_Key work items are still executing. - * - * At this stage, the mlx5_ib_event should be unregistered - * and it ensures that no new works are added. - */ - for (port = 0; port < ARRAY_SIZE(devr->ports); ++port) - cancel_work_sync(&devr->ports[port].pkey_change_work);
/* After s0/s1 init, they are not unset during the device lifetime. */ if (devr->s1) { @@ -3211,12 +3196,14 @@ static int lag_event(struct notifier_block *nb, unsigned long event, void *data) struct mlx5_ib_dev *dev = container_of(nb, struct mlx5_ib_dev, lag_events); struct mlx5_core_dev *mdev = dev->mdev; + struct ib_device *ibdev = &dev->ib_dev; + struct net_device *old_ndev = NULL; struct mlx5_ib_port *port; struct net_device *ndev; - int i, err; - int portnum; + u32 portnum = 0; + int ret = 0; + int i;
- portnum = 0; switch (event) { case MLX5_DRIVER_EVENT_ACTIVE_BACKUP_LAG_CHANGE_LOWERSTATE: ndev = data; @@ -3232,19 +3219,24 @@ static int lag_event(struct notifier_block *nb, unsigned long event, void *data) } } } - err = ib_device_set_netdev(&dev->ib_dev, ndev, - portnum + 1); - dev_put(ndev); - if (err) - return err; - /* Rescan gids after new netdev assignment */ - rdma_roce_rescan_device(&dev->ib_dev); + old_ndev = ib_device_get_netdev(ibdev, portnum + 1); + ret = ib_device_set_netdev(ibdev, ndev, portnum + 1); + if (ret) + goto out; + + if (old_ndev) + roce_del_all_netdev_gids(ibdev, portnum + 1, + old_ndev); + rdma_roce_rescan_port(ibdev, portnum + 1); } break; default: return NOTIFY_DONE; } - return NOTIFY_OK; + +out: + dev_put(old_ndev); + return notifier_from_errno(ret); }
static void mlx5e_lag_event_register(struct mlx5_ib_dev *dev) @@ -4464,6 +4456,13 @@ static void mlx5_ib_stage_delay_drop_cleanup(struct mlx5_ib_dev *dev)
static int mlx5_ib_stage_dev_notifier_init(struct mlx5_ib_dev *dev) { + struct mlx5_ib_resources *devr = &dev->devr; + int port; + + for (port = 0; port < ARRAY_SIZE(devr->ports); ++port) + INIT_WORK(&devr->ports[port].pkey_change_work, + pkey_change_handler); + dev->mdev_events.notifier_call = mlx5_ib_event; mlx5_notifier_register(dev->mdev, &dev->mdev_events);
@@ -4474,8 +4473,14 @@ static int mlx5_ib_stage_dev_notifier_init(struct mlx5_ib_dev *dev)
static void mlx5_ib_stage_dev_notifier_cleanup(struct mlx5_ib_dev *dev) { + struct mlx5_ib_resources *devr = &dev->devr; + int port; + mlx5r_macsec_event_unregister(dev); mlx5_notifier_unregister(dev->mdev, &dev->mdev_events); + + for (port = 0; port < ARRAY_SIZE(devr->ports); ++port) + cancel_work_sync(&devr->ports[port].pkey_change_work); }
void mlx5_ib_data_direct_bind(struct mlx5_ib_dev *ibdev, @@ -4565,9 +4570,6 @@ static const struct mlx5_ib_profile pf_profile = { STAGE_CREATE(MLX5_IB_STAGE_DEVICE_RESOURCES, mlx5_ib_dev_res_init, mlx5_ib_dev_res_cleanup), - STAGE_CREATE(MLX5_IB_STAGE_DEVICE_NOTIFIER, - mlx5_ib_stage_dev_notifier_init, - mlx5_ib_stage_dev_notifier_cleanup), STAGE_CREATE(MLX5_IB_STAGE_ODP, mlx5_ib_odp_init_one, mlx5_ib_odp_cleanup_one), @@ -4592,6 +4594,9 @@ static const struct mlx5_ib_profile pf_profile = { STAGE_CREATE(MLX5_IB_STAGE_IB_REG, mlx5_ib_stage_ib_reg_init, mlx5_ib_stage_ib_reg_cleanup), + STAGE_CREATE(MLX5_IB_STAGE_DEVICE_NOTIFIER, + mlx5_ib_stage_dev_notifier_init, + mlx5_ib_stage_dev_notifier_cleanup), STAGE_CREATE(MLX5_IB_STAGE_POST_IB_REG_UMR, mlx5_ib_stage_post_ib_reg_umr_init, NULL), @@ -4628,9 +4633,6 @@ const struct mlx5_ib_profile raw_eth_profile = { STAGE_CREATE(MLX5_IB_STAGE_DEVICE_RESOURCES, mlx5_ib_dev_res_init, mlx5_ib_dev_res_cleanup), - STAGE_CREATE(MLX5_IB_STAGE_DEVICE_NOTIFIER, - mlx5_ib_stage_dev_notifier_init, - mlx5_ib_stage_dev_notifier_cleanup), STAGE_CREATE(MLX5_IB_STAGE_COUNTERS, mlx5_ib_counters_init, mlx5_ib_counters_cleanup), @@ -4652,6 +4654,9 @@ const struct mlx5_ib_profile raw_eth_profile = { STAGE_CREATE(MLX5_IB_STAGE_IB_REG, mlx5_ib_stage_ib_reg_init, mlx5_ib_stage_ib_reg_cleanup), + STAGE_CREATE(MLX5_IB_STAGE_DEVICE_NOTIFIER, + mlx5_ib_stage_dev_notifier_init, + mlx5_ib_stage_dev_notifier_cleanup), STAGE_CREATE(MLX5_IB_STAGE_POST_IB_REG_UMR, mlx5_ib_stage_post_ib_reg_umr_init, NULL), diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h index 23fd72f7f63d..29bde64ea1ea 100644 --- a/drivers/infiniband/hw/mlx5/mlx5_ib.h +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h @@ -972,7 +972,6 @@ enum mlx5_ib_stages { MLX5_IB_STAGE_QP, MLX5_IB_STAGE_SRQ, MLX5_IB_STAGE_DEVICE_RESOURCES, - MLX5_IB_STAGE_DEVICE_NOTIFIER, MLX5_IB_STAGE_ODP, MLX5_IB_STAGE_COUNTERS, MLX5_IB_STAGE_CONG_DEBUGFS, @@ -981,6 +980,7 @@ enum mlx5_ib_stages { MLX5_IB_STAGE_PRE_IB_REG_UMR, MLX5_IB_STAGE_WHITELIST_UID, MLX5_IB_STAGE_IB_REG, + MLX5_IB_STAGE_DEVICE_NOTIFIER, MLX5_IB_STAGE_POST_IB_REG_UMR, MLX5_IB_STAGE_DELAY_DROP, MLX5_IB_STAGE_RESTRACK, diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c index d2f7b5195c19..91d329e90308 100644 --- a/drivers/infiniband/sw/rxe/rxe_qp.c +++ b/drivers/infiniband/sw/rxe/rxe_qp.c @@ -775,6 +775,7 @@ int rxe_qp_to_attr(struct rxe_qp *qp, struct ib_qp_attr *attr, int mask) * Yield the processor */ spin_lock_irqsave(&qp->state_lock, flags); + attr->cur_qp_state = qp_state(qp); if (qp->attr.sq_draining) { spin_unlock_irqrestore(&qp->state_lock, flags); cond_resched(); diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c index 479c07e6e4ed..87a02f0deb00 100644 --- a/drivers/infiniband/sw/rxe/rxe_req.c +++ b/drivers/infiniband/sw/rxe/rxe_req.c @@ -663,10 +663,12 @@ int rxe_requester(struct rxe_qp *qp) if (unlikely(qp_state(qp) == IB_QPS_ERR)) { wqe = __req_next_wqe(qp); spin_unlock_irqrestore(&qp->state_lock, flags); - if (wqe) + if (wqe) { + wqe->status = IB_WC_WR_FLUSH_ERR; goto err; - else + } else { goto exit; + } }
if (unlikely(qp_state(qp) == IB_QPS_RESET)) { diff --git a/drivers/input/misc/cs40l50-vibra.c b/drivers/input/misc/cs40l50-vibra.c index 03bdb7c26ec0..dce3b0ec8cf3 100644 --- a/drivers/input/misc/cs40l50-vibra.c +++ b/drivers/input/misc/cs40l50-vibra.c @@ -334,11 +334,12 @@ static int cs40l50_add(struct input_dev *dev, struct ff_effect *effect, work_data.custom_len = effect->u.periodic.custom_len; work_data.vib = vib; work_data.effect = effect; - INIT_WORK(&work_data.work, cs40l50_add_worker); + INIT_WORK_ONSTACK(&work_data.work, cs40l50_add_worker);
/* Push to the workqueue to serialize with playbacks */ queue_work(vib->vib_wq, &work_data.work); flush_work(&work_data.work); + destroy_work_on_stack(&work_data.work);
kfree(work_data.custom_data);
@@ -467,11 +468,12 @@ static int cs40l50_erase(struct input_dev *dev, int effect_id) work_data.vib = vib; work_data.effect = &dev->ff->effects[effect_id];
- INIT_WORK(&work_data.work, cs40l50_erase_worker); + INIT_WORK_ONSTACK(&work_data.work, cs40l50_erase_worker);
/* Push to workqueue to serialize with playbacks */ queue_work(vib->vib_wq, &work_data.work); flush_work(&work_data.work); + destroy_work_on_stack(&work_data.work);
return work_data.error; } diff --git a/drivers/interconnect/qcom/icc-rpmh.c b/drivers/interconnect/qcom/icc-rpmh.c index f49a8e0cb03c..adacd6f7d6a8 100644 --- a/drivers/interconnect/qcom/icc-rpmh.c +++ b/drivers/interconnect/qcom/icc-rpmh.c @@ -311,6 +311,9 @@ int qcom_icc_rpmh_probe(struct platform_device *pdev) }
qp->num_clks = devm_clk_bulk_get_all(qp->dev, &qp->clks); + if (qp->num_clks == -EPROBE_DEFER) + return dev_err_probe(dev, qp->num_clks, "Failed to get QoS clocks\n"); + if (qp->num_clks < 0 || (!qp->num_clks && desc->qos_clks_required)) { dev_info(dev, "Skipping QoS, failed to get clk: %d\n", qp->num_clks); goto skip_qos_config; diff --git a/drivers/iommu/amd/io_pgtable_v2.c b/drivers/iommu/amd/io_pgtable_v2.c index 25b9042fa453..c616de2c5926 100644 --- a/drivers/iommu/amd/io_pgtable_v2.c +++ b/drivers/iommu/amd/io_pgtable_v2.c @@ -268,8 +268,11 @@ static int iommu_v2_map_pages(struct io_pgtable_ops *ops, unsigned long iova, out: if (updated) { struct protection_domain *pdom = io_pgtable_ops_to_domain(ops); + unsigned long flags;
+ spin_lock_irqsave(&pdom->lock, flags); amd_iommu_domain_flush_pages(pdom, o_iova, size); + spin_unlock_irqrestore(&pdom->lock, flags); }
if (mapped) diff --git a/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c b/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c index fcd13d301fff..6b479592140c 100644 --- a/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c +++ b/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c @@ -509,7 +509,8 @@ static int tegra241_vcmdq_alloc_smmu_cmdq(struct tegra241_vcmdq *vcmdq)
snprintf(name, 16, "vcmdq%u", vcmdq->idx);
- q->llq.max_n_shift = VCMDQ_LOG2SIZE_MAX; + /* Queue size, capped to ensure natural alignment */ + q->llq.max_n_shift = min_t(u32, CMDQ_MAX_SZ_SHIFT, VCMDQ_LOG2SIZE_MAX);
/* Use the common helper to init the VCMDQ, and then... */ ret = arm_smmu_init_one_queue(smmu, q, vcmdq->page0, @@ -800,7 +801,7 @@ static int tegra241_cmdqv_init_structures(struct arm_smmu_device *smmu) return 0; }
-struct dentry *cmdqv_debugfs_dir; +static struct dentry *cmdqv_debugfs_dir;
static struct arm_smmu_device * __tegra241_cmdqv_probe(struct arm_smmu_device *smmu, struct resource *res, diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c index e860bc9439a2..a167d59101ae 100644 --- a/drivers/iommu/intel/iommu.c +++ b/drivers/iommu/intel/iommu.c @@ -707,14 +707,15 @@ static void pgtable_walk(struct intel_iommu *iommu, unsigned long pfn, while (1) { offset = pfn_level_offset(pfn, level); pte = &parent[offset]; - if (!pte || (dma_pte_superpage(pte) || !dma_pte_present(pte))) { - pr_info("PTE not present at level %d\n", level); - break; - }
pr_info("pte level: %d, pte value: 0x%016llx\n", level, pte->val);
- if (level == 1) + if (!dma_pte_present(pte)) { + pr_info("page table not present at level %d\n", level - 1); + break; + } + + if (level == 1 || dma_pte_superpage(pte)) break;
parent = phys_to_virt(dma_pte_addr(pte)); @@ -737,11 +738,11 @@ void dmar_fault_dump_ptes(struct intel_iommu *iommu, u16 source_id, pr_info("Dump %s table entries for IOVA 0x%llx\n", iommu->name, addr);
/* root entry dump */ - rt_entry = &iommu->root_entry[bus]; - if (!rt_entry) { - pr_info("root table entry is not present\n"); + if (!iommu->root_entry) { + pr_info("root table is not present\n"); return; } + rt_entry = &iommu->root_entry[bus];
if (sm_supported(iommu)) pr_info("scalable mode root entry: hi 0x%016llx, low 0x%016llx\n", @@ -752,7 +753,7 @@ void dmar_fault_dump_ptes(struct intel_iommu *iommu, u16 source_id, /* context entry dump */ ctx_entry = iommu_context_addr(iommu, bus, devfn, 0); if (!ctx_entry) { - pr_info("context table entry is not present\n"); + pr_info("context table is not present\n"); return; }
@@ -761,17 +762,23 @@ void dmar_fault_dump_ptes(struct intel_iommu *iommu, u16 source_id,
/* legacy mode does not require PASID entries */ if (!sm_supported(iommu)) { + if (!context_present(ctx_entry)) { + pr_info("legacy mode page table is not present\n"); + return; + } level = agaw_to_level(ctx_entry->hi & 7); pgtable = phys_to_virt(ctx_entry->lo & VTD_PAGE_MASK); goto pgtable_walk; }
- /* get the pointer to pasid directory entry */ - dir = phys_to_virt(ctx_entry->lo & VTD_PAGE_MASK); - if (!dir) { - pr_info("pasid directory entry is not present\n"); + if (!context_present(ctx_entry)) { + pr_info("pasid directory table is not present\n"); return; } + + /* get the pointer to pasid directory entry */ + dir = phys_to_virt(ctx_entry->lo & VTD_PAGE_MASK); + /* For request-without-pasid, get the pasid from context entry */ if (intel_iommu_sm && pasid == IOMMU_PASID_INVALID) pasid = IOMMU_NO_PASID; @@ -783,7 +790,7 @@ void dmar_fault_dump_ptes(struct intel_iommu *iommu, u16 source_id, /* get the pointer to the pasid table entry */ entries = get_pasid_table_from_pde(pde); if (!entries) { - pr_info("pasid table entry is not present\n"); + pr_info("pasid table is not present\n"); return; } index = pasid & PASID_PTE_MASK; @@ -791,6 +798,11 @@ void dmar_fault_dump_ptes(struct intel_iommu *iommu, u16 source_id, for (i = 0; i < ARRAY_SIZE(pte->val); i++) pr_info("pasid table entry[%d]: 0x%016llx\n", i, pte->val[i]);
+ if (!pasid_pte_is_present(pte)) { + pr_info("scalable mode page table is not present\n"); + return; + } + if (pasid_pte_get_pgtt(pte) == PASID_ENTRY_PGTT_FL_ONLY) { level = pte->val[2] & BIT_ULL(2) ? 5 : 4; pgtable = phys_to_virt(pte->val[2] & VTD_PAGE_MASK); diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c index d8eaa7ea380b..fbdeded3d48b 100644 --- a/drivers/iommu/s390-iommu.c +++ b/drivers/iommu/s390-iommu.c @@ -33,6 +33,8 @@ struct s390_domain { struct rcu_head rcu; };
+static struct iommu_domain blocking_domain; + static inline unsigned int calc_rtx(dma_addr_t ptr) { return ((unsigned long)ptr >> ZPCI_RT_SHIFT) & ZPCI_INDEX_MASK; @@ -369,20 +371,36 @@ static void s390_domain_free(struct iommu_domain *domain) call_rcu(&s390_domain->rcu, s390_iommu_rcu_free_domain); }
-static void s390_iommu_detach_device(struct iommu_domain *domain, - struct device *dev) +static void zdev_s390_domain_update(struct zpci_dev *zdev, + struct iommu_domain *domain) +{ + unsigned long flags; + + spin_lock_irqsave(&zdev->dom_lock, flags); + zdev->s390_domain = domain; + spin_unlock_irqrestore(&zdev->dom_lock, flags); +} + +static int blocking_domain_attach_device(struct iommu_domain *domain, + struct device *dev) { - struct s390_domain *s390_domain = to_s390_domain(domain); struct zpci_dev *zdev = to_zpci_dev(dev); + struct s390_domain *s390_domain; unsigned long flags;
+ if (zdev->s390_domain->type == IOMMU_DOMAIN_BLOCKED) + return 0; + + s390_domain = to_s390_domain(zdev->s390_domain); spin_lock_irqsave(&s390_domain->list_lock, flags); list_del_rcu(&zdev->iommu_list); spin_unlock_irqrestore(&s390_domain->list_lock, flags);
zpci_unregister_ioat(zdev, 0); - zdev->s390_domain = NULL; zdev->dma_table = NULL; + zdev_s390_domain_update(zdev, domain); + + return 0; }
static int s390_iommu_attach_device(struct iommu_domain *domain, @@ -401,20 +419,15 @@ static int s390_iommu_attach_device(struct iommu_domain *domain, domain->geometry.aperture_end < zdev->start_dma)) return -EINVAL;
- if (zdev->s390_domain) - s390_iommu_detach_device(&zdev->s390_domain->domain, dev); + blocking_domain_attach_device(&blocking_domain, dev);
+ /* If we fail now DMA remains blocked via blocking domain */ cc = zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma, virt_to_phys(s390_domain->dma_table), &status); - /* - * If the device is undergoing error recovery the reset code - * will re-establish the new domain. - */ if (cc && status != ZPCI_PCI_ST_FUNC_NOT_AVAIL) return -EIO; - zdev->dma_table = s390_domain->dma_table; - zdev->s390_domain = s390_domain; + zdev_s390_domain_update(zdev, domain);
spin_lock_irqsave(&s390_domain->list_lock, flags); list_add_rcu(&zdev->iommu_list, &s390_domain->devices); @@ -466,19 +479,11 @@ static struct iommu_device *s390_iommu_probe_device(struct device *dev) if (zdev->tlb_refresh) dev->iommu->shadow_on_flush = 1;
- return &zdev->iommu_dev; -} + /* Start with DMA blocked */ + spin_lock_init(&zdev->dom_lock); + zdev_s390_domain_update(zdev, &blocking_domain);
-static void s390_iommu_release_device(struct device *dev) -{ - struct zpci_dev *zdev = to_zpci_dev(dev); - - /* - * release_device is expected to detach any domain currently attached - * to the device, but keep it attached to other devices in the group. - */ - if (zdev) - s390_iommu_detach_device(&zdev->s390_domain->domain, dev); + return &zdev->iommu_dev; }
static int zpci_refresh_all(struct zpci_dev *zdev) @@ -697,9 +702,15 @@ static size_t s390_iommu_unmap_pages(struct iommu_domain *domain,
struct zpci_iommu_ctrs *zpci_get_iommu_ctrs(struct zpci_dev *zdev) { - if (!zdev || !zdev->s390_domain) + struct s390_domain *s390_domain; + + lockdep_assert_held(&zdev->dom_lock); + + if (zdev->s390_domain->type == IOMMU_DOMAIN_BLOCKED) return NULL; - return &zdev->s390_domain->ctrs; + + s390_domain = to_s390_domain(zdev->s390_domain); + return &s390_domain->ctrs; }
int zpci_init_iommu(struct zpci_dev *zdev) @@ -776,11 +787,19 @@ static int __init s390_iommu_init(void) } subsys_initcall(s390_iommu_init);
+static struct iommu_domain blocking_domain = { + .type = IOMMU_DOMAIN_BLOCKED, + .ops = &(const struct iommu_domain_ops) { + .attach_dev = blocking_domain_attach_device, + } +}; + static const struct iommu_ops s390_iommu_ops = { + .blocked_domain = &blocking_domain, + .release_domain = &blocking_domain, .capable = s390_iommu_capable, .domain_alloc_paging = s390_domain_alloc_paging, .probe_device = s390_iommu_probe_device, - .release_device = s390_iommu_release_device, .device_group = generic_device_group, .pgsize_bitmap = SZ_4K, .get_resv_regions = s390_iommu_get_resv_regions, diff --git a/drivers/irqchip/irq-mvebu-sei.c b/drivers/irqchip/irq-mvebu-sei.c index f8c70f2d100a..065166ab5dbc 100644 --- a/drivers/irqchip/irq-mvebu-sei.c +++ b/drivers/irqchip/irq-mvebu-sei.c @@ -192,7 +192,6 @@ static void mvebu_sei_domain_free(struct irq_domain *domain, unsigned int virq, }
static const struct irq_domain_ops mvebu_sei_domain_ops = { - .select = msi_lib_irq_domain_select, .alloc = mvebu_sei_domain_alloc, .free = mvebu_sei_domain_free, }; @@ -306,6 +305,7 @@ static void mvebu_sei_cp_domain_free(struct irq_domain *domain, }
static const struct irq_domain_ops mvebu_sei_cp_domain_ops = { + .select = msi_lib_irq_domain_select, .alloc = mvebu_sei_cp_domain_alloc, .free = mvebu_sei_cp_domain_free, }; diff --git a/drivers/irqchip/irq-riscv-aplic-main.c b/drivers/irqchip/irq-riscv-aplic-main.c index 900e72541db9..93e7c51f944a 100644 --- a/drivers/irqchip/irq-riscv-aplic-main.c +++ b/drivers/irqchip/irq-riscv-aplic-main.c @@ -207,7 +207,8 @@ static int aplic_probe(struct platform_device *pdev) else rc = aplic_direct_setup(dev, regs); if (rc) - dev_err(dev, "failed to setup APLIC in %s mode\n", msi_mode ? "MSI" : "direct"); + dev_err_probe(dev, rc, "failed to setup APLIC in %s mode\n", + msi_mode ? "MSI" : "direct");
#ifdef CONFIG_ACPI if (!acpi_disabled) diff --git a/drivers/irqchip/irq-riscv-aplic-msi.c b/drivers/irqchip/irq-riscv-aplic-msi.c index 945bff28265c..fb8d1838609f 100644 --- a/drivers/irqchip/irq-riscv-aplic-msi.c +++ b/drivers/irqchip/irq-riscv-aplic-msi.c @@ -266,6 +266,9 @@ int aplic_msi_setup(struct device *dev, void __iomem *regs) if (msi_domain) dev_set_msi_domain(dev, msi_domain); } + + if (!dev_get_msi_domain(dev)) + return -EPROBE_DEFER; }
if (!msi_create_device_irq_domain(dev, MSI_DEFAULT_DOMAIN, &aplic_msi_template, diff --git a/drivers/leds/flash/leds-ktd2692.c b/drivers/leds/flash/leds-ktd2692.c index 16a01a200c0b..b92adf908793 100644 --- a/drivers/leds/flash/leds-ktd2692.c +++ b/drivers/leds/flash/leds-ktd2692.c @@ -292,6 +292,7 @@ static int ktd2692_probe(struct platform_device *pdev)
fled_cdev = &led->fled_cdev; led_cdev = &fled_cdev->led_cdev; + led->props.timing = ktd2692_timing;
ret = ktd2692_parse_dt(led, &pdev->dev, &led_cfg); if (ret) diff --git a/drivers/leds/leds-max5970.c b/drivers/leds/leds-max5970.c index 56a584311581..285074c53b23 100644 --- a/drivers/leds/leds-max5970.c +++ b/drivers/leds/leds-max5970.c @@ -45,7 +45,7 @@ static int max5970_led_set_brightness(struct led_classdev *cdev,
static int max5970_led_probe(struct platform_device *pdev) { - struct fwnode_handle *led_node, *child; + struct fwnode_handle *child; struct device *dev = &pdev->dev; struct regmap *regmap; struct max5970_led *ddata; @@ -55,7 +55,8 @@ static int max5970_led_probe(struct platform_device *pdev) if (!regmap) return -ENODEV;
- led_node = device_get_named_child_node(dev->parent, "leds"); + struct fwnode_handle *led_node __free(fwnode_handle) = + device_get_named_child_node(dev->parent, "leds"); if (!led_node) return -ENODEV;
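The max5970 hunk above converts the "leds" fwnode lookup to a scope-based __free(fwnode_handle) variable, so the reference is put automatically on every exit path. The underlying mechanism is the compiler's cleanup attribute; a minimal user-space sketch of that mechanism follows (invented names, not kernel code, shown only to illustrate the pattern the hunk adopts):

	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	/*
	 * Run free() on the pointed-to pointer when the variable leaves scope,
	 * similar in spirit to what the kernel's DEFINE_FREE()/__free() helpers
	 * generate for fwnode_handle_put().
	 */
	static void free_charp(char **p)
	{
		free(*p);
	}
	#define __free_charp __attribute__((cleanup(free_charp)))

	static int use_name(const char *name)
	{
		printf("using %s\n", name);
		return 0;
	}

	int main(void)
	{
		/* Freed automatically on every return path, even the early one. */
		char *name __free_charp = strdup("leds");

		if (!name)
			return 1;		/* nothing leaked */

		return use_name(name) ? 1 : 0;	/* nothing leaked here either */
	}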
diff --git a/drivers/mailbox/arm_mhuv2.c b/drivers/mailbox/arm_mhuv2.c index 0ec21dcdbde7..cff7c343ee08 100644 --- a/drivers/mailbox/arm_mhuv2.c +++ b/drivers/mailbox/arm_mhuv2.c @@ -500,7 +500,7 @@ static const struct mhuv2_protocol_ops mhuv2_data_transfer_ops = { static struct mbox_chan *get_irq_chan_comb(struct mhuv2 *mhu, u32 __iomem *reg) { struct mbox_chan *chans = mhu->mbox.chans; - int channel = 0, i, offset = 0, windows, protocol, ch_wn; + int channel = 0, i, j, offset = 0, windows, protocol, ch_wn; u32 stat;
for (i = 0; i < MHUV2_CMB_INT_ST_REG_CNT; i++) { @@ -510,9 +510,9 @@ static struct mbox_chan *get_irq_chan_comb(struct mhuv2 *mhu, u32 __iomem *reg)
ch_wn = i * MHUV2_STAT_BITS + __builtin_ctz(stat);
- for (i = 0; i < mhu->length; i += 2) { - protocol = mhu->protocols[i]; - windows = mhu->protocols[i + 1]; + for (j = 0; j < mhu->length; j += 2) { + protocol = mhu->protocols[j]; + windows = mhu->protocols[j + 1];
if (ch_wn >= offset + windows) { if (protocol == DOORBELL) diff --git a/drivers/mailbox/mtk-cmdq-mailbox.c b/drivers/mailbox/mtk-cmdq-mailbox.c index 4bff73532085..9c43ed9bdd37 100644 --- a/drivers/mailbox/mtk-cmdq-mailbox.c +++ b/drivers/mailbox/mtk-cmdq-mailbox.c @@ -584,7 +584,7 @@ static int cmdq_get_clocks(struct device *dev, struct cmdq *cmdq) struct clk_bulk_data *clks;
cmdq->clocks = devm_kcalloc(dev, cmdq->pdata->gce_num, - sizeof(cmdq->clocks), GFP_KERNEL); + sizeof(*cmdq->clocks), GFP_KERNEL); if (!cmdq->clocks) return -ENOMEM;
diff --git a/drivers/mailbox/omap-mailbox.c b/drivers/mailbox/omap-mailbox.c index 6797770474a5..680243751d62 100644 --- a/drivers/mailbox/omap-mailbox.c +++ b/drivers/mailbox/omap-mailbox.c @@ -15,6 +15,7 @@ #include <linux/slab.h> #include <linux/kfifo.h> #include <linux/err.h> +#include <linux/io.h> #include <linux/module.h> #include <linux/of.h> #include <linux/platform_device.h> diff --git a/drivers/media/i2c/adv7604.c b/drivers/media/i2c/adv7604.c index 272945a878b3..a3f4b4ad35aa 100644 --- a/drivers/media/i2c/adv7604.c +++ b/drivers/media/i2c/adv7604.c @@ -1405,12 +1405,13 @@ static int stdi2dv_timings(struct v4l2_subdev *sd, if (v4l2_detect_cvt(stdi->lcf + 1, hfreq, stdi->lcvs, 0, (stdi->hs_pol == '+' ? V4L2_DV_HSYNC_POS_POL : 0) | (stdi->vs_pol == '+' ? V4L2_DV_VSYNC_POS_POL : 0), - false, timings)) + false, adv76xx_get_dv_timings_cap(sd, -1), timings)) return 0; if (v4l2_detect_gtf(stdi->lcf + 1, hfreq, stdi->lcvs, (stdi->hs_pol == '+' ? V4L2_DV_HSYNC_POS_POL : 0) | (stdi->vs_pol == '+' ? V4L2_DV_VSYNC_POS_POL : 0), - false, state->aspect_ratio, timings)) + false, state->aspect_ratio, + adv76xx_get_dv_timings_cap(sd, -1), timings)) return 0;
v4l2_dbg(2, debug, sd, diff --git a/drivers/media/i2c/adv7842.c b/drivers/media/i2c/adv7842.c index 014fc913225c..61ea7393066d 100644 --- a/drivers/media/i2c/adv7842.c +++ b/drivers/media/i2c/adv7842.c @@ -1431,14 +1431,15 @@ static int stdi2dv_timings(struct v4l2_subdev *sd, }
if (v4l2_detect_cvt(stdi->lcf + 1, hfreq, stdi->lcvs, 0, - (stdi->hs_pol == '+' ? V4L2_DV_HSYNC_POS_POL : 0) | - (stdi->vs_pol == '+' ? V4L2_DV_VSYNC_POS_POL : 0), - false, timings)) + (stdi->hs_pol == '+' ? V4L2_DV_HSYNC_POS_POL : 0) | + (stdi->vs_pol == '+' ? V4L2_DV_VSYNC_POS_POL : 0), + false, adv7842_get_dv_timings_cap(sd), timings)) return 0; if (v4l2_detect_gtf(stdi->lcf + 1, hfreq, stdi->lcvs, - (stdi->hs_pol == '+' ? V4L2_DV_HSYNC_POS_POL : 0) | - (stdi->vs_pol == '+' ? V4L2_DV_VSYNC_POS_POL : 0), - false, state->aspect_ratio, timings)) + (stdi->hs_pol == '+' ? V4L2_DV_HSYNC_POS_POL : 0) | + (stdi->vs_pol == '+' ? V4L2_DV_VSYNC_POS_POL : 0), + false, state->aspect_ratio, + adv7842_get_dv_timings_cap(sd), timings)) return 0;
v4l2_dbg(2, debug, sd, diff --git a/drivers/media/i2c/ds90ub960.c b/drivers/media/i2c/ds90ub960.c index ffe5f25f8647..58424d8f72af 100644 --- a/drivers/media/i2c/ds90ub960.c +++ b/drivers/media/i2c/ds90ub960.c @@ -1286,7 +1286,7 @@ static int ub960_rxport_get_strobe_pos(struct ub960_data *priv,
clk_delay += v & UB960_IR_RX_ANA_STROBE_SET_CLK_DELAY_MASK;
- ub960_rxport_read(priv, nport, UB960_RR_SFILTER_STS_1, &v); + ret = ub960_rxport_read(priv, nport, UB960_RR_SFILTER_STS_1, &v); if (ret) return ret;
diff --git a/drivers/media/i2c/max96717.c b/drivers/media/i2c/max96717.c index 4e85b8eb1e77..9259d58ba734 100644 --- a/drivers/media/i2c/max96717.c +++ b/drivers/media/i2c/max96717.c @@ -697,8 +697,10 @@ static int max96717_subdev_init(struct max96717_priv *priv) priv->pads[MAX96717_PAD_SOURCE].flags = MEDIA_PAD_FL_SOURCE;
ret = media_entity_pads_init(&priv->sd.entity, 2, priv->pads); - if (ret) - return dev_err_probe(dev, ret, "Failed to init pads\n"); + if (ret) { + dev_err_probe(dev, ret, "Failed to init pads\n"); + goto err_free_ctrl; + }
ret = v4l2_subdev_init_finalize(&priv->sd); if (ret) { diff --git a/drivers/media/i2c/vgxy61.c b/drivers/media/i2c/vgxy61.c index 409d2d4ffb4b..d77468c8587b 100644 --- a/drivers/media/i2c/vgxy61.c +++ b/drivers/media/i2c/vgxy61.c @@ -1617,7 +1617,7 @@ static int vgxy61_detect(struct vgxy61_dev *sensor)
ret = cci_read(sensor->regmap, VGXY61_REG_NVM, &st, NULL); if (ret < 0) - return st; + return ret; if (st != VGXY61_NVM_OK) dev_warn(&client->dev, "Bad nvm state got %u\n", (u8)st);
diff --git a/drivers/media/pci/intel/ipu6/Kconfig b/drivers/media/pci/intel/ipu6/Kconfig index 49e4fb696573..a4537818a58c 100644 --- a/drivers/media/pci/intel/ipu6/Kconfig +++ b/drivers/media/pci/intel/ipu6/Kconfig @@ -4,12 +4,6 @@ config VIDEO_INTEL_IPU6 depends on VIDEO_DEV depends on X86 && X86_64 && HAS_DMA depends on IPU_BRIDGE || !IPU_BRIDGE - # - # This driver incorrectly tries to override the dma_ops. It should - # never have done that, but for now keep it working on architectures - # that use dma ops - # - depends on ARCH_HAS_DMA_OPS select AUXILIARY_BUS select IOMMU_IOVA select VIDEO_V4L2_SUBDEV_API diff --git a/drivers/media/pci/intel/ipu6/ipu6-bus.c b/drivers/media/pci/intel/ipu6/ipu6-bus.c index 149ec098cdbf..37d88ddb6ee7 100644 --- a/drivers/media/pci/intel/ipu6/ipu6-bus.c +++ b/drivers/media/pci/intel/ipu6/ipu6-bus.c @@ -94,8 +94,6 @@ ipu6_bus_initialize_device(struct pci_dev *pdev, struct device *parent, if (!adev) return ERR_PTR(-ENOMEM);
- adev->dma_mask = DMA_BIT_MASK(isp->secure_mode ? IPU6_MMU_ADDR_BITS : - IPU6_MMU_ADDR_BITS_NON_SECURE); adev->isp = isp; adev->ctrl = ctrl; adev->pdata = pdata; @@ -106,10 +104,6 @@ ipu6_bus_initialize_device(struct pci_dev *pdev, struct device *parent,
auxdev->dev.parent = parent; auxdev->dev.release = ipu6_bus_release; - auxdev->dev.dma_ops = &ipu6_dma_ops; - auxdev->dev.dma_mask = &adev->dma_mask; - auxdev->dev.dma_parms = pdev->dev.dma_parms; - auxdev->dev.coherent_dma_mask = adev->dma_mask;
ret = auxiliary_device_init(auxdev); if (ret < 0) { diff --git a/drivers/media/pci/intel/ipu6/ipu6-buttress.c b/drivers/media/pci/intel/ipu6/ipu6-buttress.c index e47f84c30e10..1ee63ef4a40b 100644 --- a/drivers/media/pci/intel/ipu6/ipu6-buttress.c +++ b/drivers/media/pci/intel/ipu6/ipu6-buttress.c @@ -24,6 +24,7 @@
#include "ipu6.h" #include "ipu6-bus.h" +#include "ipu6-dma.h" #include "ipu6-buttress.h" #include "ipu6-platform-buttress-regs.h"
@@ -345,12 +346,16 @@ irqreturn_t ipu6_buttress_isr(int irq, void *isp_ptr) u32 disable_irqs = 0; u32 irq_status; u32 i, count = 0; + int active;
- pm_runtime_get_noresume(&isp->pdev->dev); + active = pm_runtime_get_if_active(&isp->pdev->dev); + if (!active) + return IRQ_NONE;
irq_status = readl(isp->base + reg_irq_sts); - if (!irq_status) { - pm_runtime_put_noidle(&isp->pdev->dev); + if (irq_status == 0 || WARN_ON_ONCE(irq_status == 0xffffffffu)) { + if (active > 0) + pm_runtime_put_noidle(&isp->pdev->dev); return IRQ_NONE; }
@@ -426,7 +431,8 @@ irqreturn_t ipu6_buttress_isr(int irq, void *isp_ptr) writel(BUTTRESS_IRQS & ~disable_irqs, isp->base + BUTTRESS_REG_ISR_ENABLE);
- pm_runtime_put(&isp->pdev->dev); + if (active > 0) + pm_runtime_put(&isp->pdev->dev);
return ret; } @@ -553,6 +559,7 @@ int ipu6_buttress_map_fw_image(struct ipu6_bus_device *sys, const struct firmware *fw, struct sg_table *sgt) { bool is_vmalloc = is_vmalloc_addr(fw->data); + struct pci_dev *pdev = sys->isp->pdev; struct page **pages; const void *addr; unsigned long n_pages; @@ -588,14 +595,20 @@ int ipu6_buttress_map_fw_image(struct ipu6_bus_device *sys, goto out; }
- ret = dma_map_sgtable(&sys->auxdev.dev, sgt, DMA_TO_DEVICE, 0); - if (ret < 0) { - ret = -ENOMEM; + ret = dma_map_sgtable(&pdev->dev, sgt, DMA_TO_DEVICE, 0); + if (ret) { sg_free_table(sgt); goto out; }
- dma_sync_sgtable_for_device(&sys->auxdev.dev, sgt, DMA_TO_DEVICE); + ret = ipu6_dma_map_sgtable(sys, sgt, DMA_TO_DEVICE, 0); + if (ret) { + dma_unmap_sgtable(&pdev->dev, sgt, DMA_TO_DEVICE, 0); + sg_free_table(sgt); + goto out; + } + + ipu6_dma_sync_sgtable(sys, sgt);
out: kfree(pages); @@ -607,7 +620,10 @@ EXPORT_SYMBOL_NS_GPL(ipu6_buttress_map_fw_image, INTEL_IPU6); void ipu6_buttress_unmap_fw_image(struct ipu6_bus_device *sys, struct sg_table *sgt) { - dma_unmap_sgtable(&sys->auxdev.dev, sgt, DMA_TO_DEVICE, 0); + struct pci_dev *pdev = sys->isp->pdev; + + ipu6_dma_unmap_sgtable(sys, sgt, DMA_TO_DEVICE, 0); + dma_unmap_sgtable(&pdev->dev, sgt, DMA_TO_DEVICE, 0); sg_free_table(sgt); } EXPORT_SYMBOL_NS_GPL(ipu6_buttress_unmap_fw_image, INTEL_IPU6); diff --git a/drivers/media/pci/intel/ipu6/ipu6-cpd.c b/drivers/media/pci/intel/ipu6/ipu6-cpd.c index 715b21ab4b8e..21c1c128a7ea 100644 --- a/drivers/media/pci/intel/ipu6/ipu6-cpd.c +++ b/drivers/media/pci/intel/ipu6/ipu6-cpd.c @@ -15,6 +15,7 @@ #include "ipu6.h" #include "ipu6-bus.h" #include "ipu6-cpd.h" +#include "ipu6-dma.h"
/* 15 entries + header*/ #define MAX_PKG_DIR_ENT_CNT 16 @@ -162,7 +163,6 @@ int ipu6_cpd_create_pkg_dir(struct ipu6_bus_device *adev, const void *src) { dma_addr_t dma_addr_src = sg_dma_address(adev->fw_sgt.sgl); const struct ipu6_cpd_ent *ent, *man_ent, *met_ent; - struct device *dev = &adev->auxdev.dev; struct ipu6_device *isp = adev->isp; unsigned int man_sz, met_sz; void *pkg_dir_pos; @@ -175,8 +175,8 @@ int ipu6_cpd_create_pkg_dir(struct ipu6_bus_device *adev, const void *src) met_sz = met_ent->len;
adev->pkg_dir_size = PKG_DIR_SIZE + man_sz + met_sz; - adev->pkg_dir = dma_alloc_attrs(dev, adev->pkg_dir_size, - &adev->pkg_dir_dma_addr, GFP_KERNEL, 0); + adev->pkg_dir = ipu6_dma_alloc(adev, adev->pkg_dir_size, + &adev->pkg_dir_dma_addr, GFP_KERNEL, 0); if (!adev->pkg_dir) return -ENOMEM;
@@ -198,8 +198,8 @@ int ipu6_cpd_create_pkg_dir(struct ipu6_bus_device *adev, const void *src) met_ent->len); if (ret) { dev_err(&isp->pdev->dev, "Failed to parse module data\n"); - dma_free_attrs(dev, adev->pkg_dir_size, - adev->pkg_dir, adev->pkg_dir_dma_addr, 0); + ipu6_dma_free(adev, adev->pkg_dir_size, + adev->pkg_dir, adev->pkg_dir_dma_addr, 0); return ret; }
@@ -211,8 +211,8 @@ int ipu6_cpd_create_pkg_dir(struct ipu6_bus_device *adev, const void *src) pkg_dir_pos += man_sz; memcpy(pkg_dir_pos, src + met_ent->offset, met_sz);
- dma_sync_single_range_for_device(dev, adev->pkg_dir_dma_addr, - 0, adev->pkg_dir_size, DMA_TO_DEVICE); + ipu6_dma_sync_single(adev, adev->pkg_dir_dma_addr, + adev->pkg_dir_size);
return 0; } @@ -220,8 +220,8 @@ EXPORT_SYMBOL_NS_GPL(ipu6_cpd_create_pkg_dir, INTEL_IPU6);
void ipu6_cpd_free_pkg_dir(struct ipu6_bus_device *adev) { - dma_free_attrs(&adev->auxdev.dev, adev->pkg_dir_size, adev->pkg_dir, - adev->pkg_dir_dma_addr, 0); + ipu6_dma_free(adev, adev->pkg_dir_size, adev->pkg_dir, + adev->pkg_dir_dma_addr, 0); } EXPORT_SYMBOL_NS_GPL(ipu6_cpd_free_pkg_dir, INTEL_IPU6);
diff --git a/drivers/media/pci/intel/ipu6/ipu6-dma.c b/drivers/media/pci/intel/ipu6/ipu6-dma.c index 92530a1cc90f..b71f66bd8c1f 100644 --- a/drivers/media/pci/intel/ipu6/ipu6-dma.c +++ b/drivers/media/pci/intel/ipu6/ipu6-dma.c @@ -39,8 +39,7 @@ static struct vm_info *get_vm_info(struct ipu6_mmu *mmu, dma_addr_t iova) return NULL; }
-static void __dma_clear_buffer(struct page *page, size_t size, - unsigned long attrs) +static void __clear_buffer(struct page *page, size_t size, unsigned long attrs) { void *ptr;
@@ -56,8 +55,7 @@ static void __dma_clear_buffer(struct page *page, size_t size, clflush_cache_range(ptr, size); }
-static struct page **__dma_alloc_buffer(struct device *dev, size_t size, - gfp_t gfp, unsigned long attrs) +static struct page **__alloc_buffer(size_t size, gfp_t gfp, unsigned long attrs) { int count = PHYS_PFN(size); int array_size = count * sizeof(struct page *); @@ -86,7 +84,7 @@ static struct page **__dma_alloc_buffer(struct device *dev, size_t size, pages[i + j] = pages[i] + j; }
- __dma_clear_buffer(pages[i], PAGE_SIZE << order, attrs); + __clear_buffer(pages[i], PAGE_SIZE << order, attrs); i += 1 << order; count -= 1 << order; } @@ -100,29 +98,26 @@ static struct page **__dma_alloc_buffer(struct device *dev, size_t size, return NULL; }
-static void __dma_free_buffer(struct device *dev, struct page **pages, - size_t size, unsigned long attrs) +static void __free_buffer(struct page **pages, size_t size, unsigned long attrs) { int count = PHYS_PFN(size); unsigned int i;
for (i = 0; i < count && pages[i]; i++) { - __dma_clear_buffer(pages[i], PAGE_SIZE, attrs); + __clear_buffer(pages[i], PAGE_SIZE, attrs); __free_pages(pages[i], 0); }
kvfree(pages); }
-static void ipu6_dma_sync_single_for_cpu(struct device *dev, - dma_addr_t dma_handle, - size_t size, - enum dma_data_direction dir) +void ipu6_dma_sync_single(struct ipu6_bus_device *sys, dma_addr_t dma_handle, + size_t size) { void *vaddr; u32 offset; struct vm_info *info; - struct ipu6_mmu *mmu = to_ipu6_bus_device(dev)->mmu; + struct ipu6_mmu *mmu = sys->mmu;
info = get_vm_info(mmu, dma_handle); if (WARN_ON(!info)) @@ -135,10 +130,10 @@ static void ipu6_dma_sync_single_for_cpu(struct device *dev, vaddr = info->vaddr + offset; clflush_cache_range(vaddr, size); } +EXPORT_SYMBOL_NS_GPL(ipu6_dma_sync_single, INTEL_IPU6);
-static void ipu6_dma_sync_sg_for_cpu(struct device *dev, - struct scatterlist *sglist, - int nents, enum dma_data_direction dir) +void ipu6_dma_sync_sg(struct ipu6_bus_device *sys, struct scatterlist *sglist, + int nents) { struct scatterlist *sg; int i; @@ -146,14 +141,22 @@ static void ipu6_dma_sync_sg_for_cpu(struct device *dev, for_each_sg(sglist, sg, nents, i) clflush_cache_range(page_to_virt(sg_page(sg)), sg->length); } +EXPORT_SYMBOL_NS_GPL(ipu6_dma_sync_sg, INTEL_IPU6);
-static void *ipu6_dma_alloc(struct device *dev, size_t size, - dma_addr_t *dma_handle, gfp_t gfp, - unsigned long attrs) +void ipu6_dma_sync_sgtable(struct ipu6_bus_device *sys, struct sg_table *sgt) { - struct ipu6_mmu *mmu = to_ipu6_bus_device(dev)->mmu; - struct pci_dev *pdev = to_ipu6_bus_device(dev)->isp->pdev; + ipu6_dma_sync_sg(sys, sgt->sgl, sgt->orig_nents); +} +EXPORT_SYMBOL_NS_GPL(ipu6_dma_sync_sgtable, INTEL_IPU6); + +void *ipu6_dma_alloc(struct ipu6_bus_device *sys, size_t size, + dma_addr_t *dma_handle, gfp_t gfp, + unsigned long attrs) +{ + struct device *dev = &sys->auxdev.dev; + struct pci_dev *pdev = sys->isp->pdev; dma_addr_t pci_dma_addr, ipu6_iova; + struct ipu6_mmu *mmu = sys->mmu; struct vm_info *info; unsigned long count; struct page **pages; @@ -173,7 +176,7 @@ static void *ipu6_dma_alloc(struct device *dev, size_t size, if (!iova) goto out_kfree;
- pages = __dma_alloc_buffer(dev, size, gfp, attrs); + pages = __alloc_buffer(size, gfp, attrs); if (!pages) goto out_free_iova;
@@ -227,7 +230,7 @@ static void *ipu6_dma_alloc(struct device *dev, size_t size, ipu6_mmu_unmap(mmu->dmap->mmu_info, ipu6_iova, PAGE_SIZE); }
- __dma_free_buffer(dev, pages, size, attrs); + __free_buffer(pages, size, attrs);
out_free_iova: __free_iova(&mmu->dmap->iovad, iova); @@ -236,13 +239,13 @@ static void *ipu6_dma_alloc(struct device *dev, size_t size,
return NULL; } +EXPORT_SYMBOL_NS_GPL(ipu6_dma_alloc, INTEL_IPU6);
-static void ipu6_dma_free(struct device *dev, size_t size, void *vaddr, - dma_addr_t dma_handle, - unsigned long attrs) +void ipu6_dma_free(struct ipu6_bus_device *sys, size_t size, void *vaddr, + dma_addr_t dma_handle, unsigned long attrs) { - struct ipu6_mmu *mmu = to_ipu6_bus_device(dev)->mmu; - struct pci_dev *pdev = to_ipu6_bus_device(dev)->isp->pdev; + struct ipu6_mmu *mmu = sys->mmu; + struct pci_dev *pdev = sys->isp->pdev; struct iova *iova = find_iova(&mmu->dmap->iovad, PHYS_PFN(dma_handle)); dma_addr_t pci_dma_addr, ipu6_iova; struct vm_info *info; @@ -281,7 +284,7 @@ static void ipu6_dma_free(struct device *dev, size_t size, void *vaddr, ipu6_mmu_unmap(mmu->dmap->mmu_info, PFN_PHYS(iova->pfn_lo), PFN_PHYS(iova_size(iova)));
- __dma_free_buffer(dev, pages, size, attrs); + __free_buffer(pages, size, attrs);
mmu->tlb_invalidate(mmu);
@@ -289,13 +292,14 @@ static void ipu6_dma_free(struct device *dev, size_t size, void *vaddr,
kfree(info); } +EXPORT_SYMBOL_NS_GPL(ipu6_dma_free, INTEL_IPU6);
-static int ipu6_dma_mmap(struct device *dev, struct vm_area_struct *vma, - void *addr, dma_addr_t iova, size_t size, - unsigned long attrs) +int ipu6_dma_mmap(struct ipu6_bus_device *sys, struct vm_area_struct *vma, + void *addr, dma_addr_t iova, size_t size, + unsigned long attrs) { - struct ipu6_mmu *mmu = to_ipu6_bus_device(dev)->mmu; - size_t count = PHYS_PFN(PAGE_ALIGN(size)); + struct ipu6_mmu *mmu = sys->mmu; + size_t count = PFN_UP(size); struct vm_info *info; size_t i; int ret; @@ -323,18 +327,17 @@ static int ipu6_dma_mmap(struct device *dev, struct vm_area_struct *vma, return 0; }
-static void ipu6_dma_unmap_sg(struct device *dev, - struct scatterlist *sglist, - int nents, enum dma_data_direction dir, - unsigned long attrs) +void ipu6_dma_unmap_sg(struct ipu6_bus_device *sys, struct scatterlist *sglist, + int nents, enum dma_data_direction dir, + unsigned long attrs) { - struct pci_dev *pdev = to_ipu6_bus_device(dev)->isp->pdev; - struct ipu6_mmu *mmu = to_ipu6_bus_device(dev)->mmu; + struct device *dev = &sys->auxdev.dev; + struct ipu6_mmu *mmu = sys->mmu; struct iova *iova = find_iova(&mmu->dmap->iovad, PHYS_PFN(sg_dma_address(sglist))); - int i, npages, count; struct scatterlist *sg; dma_addr_t pci_dma_addr; + unsigned int i;
if (!nents) return; @@ -342,31 +345,15 @@ static void ipu6_dma_unmap_sg(struct device *dev, if (WARN_ON(!iova)) return;
- if ((attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0) - ipu6_dma_sync_sg_for_cpu(dev, sglist, nents, DMA_BIDIRECTIONAL); - - /* get the nents as orig_nents given by caller */ - count = 0; - npages = iova_size(iova); - for_each_sg(sglist, sg, nents, i) { - if (sg_dma_len(sg) == 0 || - sg_dma_address(sg) == DMA_MAPPING_ERROR) - break; - - npages -= PHYS_PFN(PAGE_ALIGN(sg_dma_len(sg))); - count++; - if (npages <= 0) - break; - } - /* * Before IPU6 mmu unmap, return the pci dma address back to sg * assume the nents is less than orig_nents as the least granule * is 1 SZ_4K page */ - dev_dbg(dev, "trying to unmap concatenated %u ents\n", count); - for_each_sg(sglist, sg, count, i) { - dev_dbg(dev, "ipu unmap sg[%d] %pad\n", i, &sg_dma_address(sg)); + dev_dbg(dev, "trying to unmap concatenated %u ents\n", nents); + for_each_sg(sglist, sg, nents, i) { + dev_dbg(dev, "unmap sg[%d] %pad size %u\n", i, + &sg_dma_address(sg), sg_dma_len(sg)); pci_dma_addr = ipu6_mmu_iova_to_phys(mmu->dmap->mmu_info, sg_dma_address(sg)); dev_dbg(dev, "return pci_dma_addr %pad back to sg[%d]\n", @@ -380,23 +367,21 @@ static void ipu6_dma_unmap_sg(struct device *dev, PFN_PHYS(iova_size(iova)));
mmu->tlb_invalidate(mmu); - - dma_unmap_sg_attrs(&pdev->dev, sglist, nents, dir, attrs); - __free_iova(&mmu->dmap->iovad, iova); } +EXPORT_SYMBOL_NS_GPL(ipu6_dma_unmap_sg, INTEL_IPU6);
-static int ipu6_dma_map_sg(struct device *dev, struct scatterlist *sglist, - int nents, enum dma_data_direction dir, - unsigned long attrs) +int ipu6_dma_map_sg(struct ipu6_bus_device *sys, struct scatterlist *sglist, + int nents, enum dma_data_direction dir, + unsigned long attrs) { - struct ipu6_mmu *mmu = to_ipu6_bus_device(dev)->mmu; - struct pci_dev *pdev = to_ipu6_bus_device(dev)->isp->pdev; + struct device *dev = &sys->auxdev.dev; + struct ipu6_mmu *mmu = sys->mmu; struct scatterlist *sg; struct iova *iova; size_t npages = 0; unsigned long iova_addr; - int i, count; + int i;
for_each_sg(sglist, sg, nents, i) { if (sg->offset) { @@ -406,18 +391,12 @@ static int ipu6_dma_map_sg(struct device *dev, struct scatterlist *sglist, } }
- dev_dbg(dev, "pci_dma_map_sg trying to map %d ents\n", nents); - count = dma_map_sg_attrs(&pdev->dev, sglist, nents, dir, attrs); - if (count <= 0) { - dev_err(dev, "pci_dma_map_sg %d ents failed\n", nents); - return 0; - } - - dev_dbg(dev, "pci_dma_map_sg %d ents mapped\n", count); - - for_each_sg(sglist, sg, count, i) + for_each_sg(sglist, sg, nents, i) npages += PHYS_PFN(PAGE_ALIGN(sg_dma_len(sg)));
+ dev_dbg(dev, "dmamap trying to map %d ents %zu pages\n", + nents, npages); + iova = alloc_iova(&mmu->dmap->iovad, npages, PHYS_PFN(dma_get_mask(dev)), 0); if (!iova) @@ -427,12 +406,13 @@ static int ipu6_dma_map_sg(struct device *dev, struct scatterlist *sglist, iova->pfn_hi);
iova_addr = iova->pfn_lo; - for_each_sg(sglist, sg, count, i) { + for_each_sg(sglist, sg, nents, i) { + phys_addr_t iova_pa; int ret;
- dev_dbg(dev, "mapping entry %d: iova 0x%llx phy %pad size %d\n", - i, PFN_PHYS(iova_addr), &sg_dma_address(sg), - sg_dma_len(sg)); + iova_pa = PFN_PHYS(iova_addr); + dev_dbg(dev, "mapping entry %d: iova %pap phy %pap size %d\n", + i, &iova_pa, &sg_dma_address(sg), sg_dma_len(sg));
ret = ipu6_mmu_map(mmu->dmap->mmu_info, PFN_PHYS(iova_addr), sg_dma_address(sg), @@ -445,25 +425,48 @@ static int ipu6_dma_map_sg(struct device *dev, struct scatterlist *sglist, iova_addr += PHYS_PFN(PAGE_ALIGN(sg_dma_len(sg))); }
- if ((attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0) - ipu6_dma_sync_sg_for_cpu(dev, sglist, nents, DMA_BIDIRECTIONAL); + dev_dbg(dev, "dmamap %d ents %zu pages mapped\n", nents, npages);
- return count; + return nents;
out_fail: - ipu6_dma_unmap_sg(dev, sglist, i, dir, attrs); + ipu6_dma_unmap_sg(sys, sglist, i, dir, attrs);
return 0; } +EXPORT_SYMBOL_NS_GPL(ipu6_dma_map_sg, INTEL_IPU6); + +int ipu6_dma_map_sgtable(struct ipu6_bus_device *sys, struct sg_table *sgt, + enum dma_data_direction dir, unsigned long attrs) +{ + int nents; + + nents = ipu6_dma_map_sg(sys, sgt->sgl, sgt->nents, dir, attrs); + if (nents < 0) + return nents; + + sgt->nents = nents; + + return 0; +} +EXPORT_SYMBOL_NS_GPL(ipu6_dma_map_sgtable, INTEL_IPU6); + +void ipu6_dma_unmap_sgtable(struct ipu6_bus_device *sys, struct sg_table *sgt, + enum dma_data_direction dir, unsigned long attrs) +{ + ipu6_dma_unmap_sg(sys, sgt->sgl, sgt->nents, dir, attrs); +} +EXPORT_SYMBOL_NS_GPL(ipu6_dma_unmap_sgtable, INTEL_IPU6);
/* * Create scatter-list for the already allocated DMA buffer */ -static int ipu6_dma_get_sgtable(struct device *dev, struct sg_table *sgt, - void *cpu_addr, dma_addr_t handle, size_t size, - unsigned long attrs) +int ipu6_dma_get_sgtable(struct ipu6_bus_device *sys, struct sg_table *sgt, + void *cpu_addr, dma_addr_t handle, size_t size, + unsigned long attrs) { - struct ipu6_mmu *mmu = to_ipu6_bus_device(dev)->mmu; + struct device *dev = &sys->auxdev.dev; + struct ipu6_mmu *mmu = sys->mmu; struct vm_info *info; int n_pages; int ret = 0; @@ -483,20 +486,7 @@ static int ipu6_dma_get_sgtable(struct device *dev, struct sg_table *sgt, ret = sg_alloc_table_from_pages(sgt, info->pages, n_pages, 0, size, GFP_KERNEL); if (ret) - dev_warn(dev, "IPU6 get sgt table failed\n"); + dev_warn(dev, "get sgt table failed\n");
return ret; } - -const struct dma_map_ops ipu6_dma_ops = { - .alloc = ipu6_dma_alloc, - .free = ipu6_dma_free, - .mmap = ipu6_dma_mmap, - .map_sg = ipu6_dma_map_sg, - .unmap_sg = ipu6_dma_unmap_sg, - .sync_single_for_cpu = ipu6_dma_sync_single_for_cpu, - .sync_single_for_device = ipu6_dma_sync_single_for_cpu, - .sync_sg_for_cpu = ipu6_dma_sync_sg_for_cpu, - .sync_sg_for_device = ipu6_dma_sync_sg_for_cpu, - .get_sgtable = ipu6_dma_get_sgtable, -}; diff --git a/drivers/media/pci/intel/ipu6/ipu6-dma.h b/drivers/media/pci/intel/ipu6/ipu6-dma.h index 847ea5b7c925..b51244add9e6 100644 --- a/drivers/media/pci/intel/ipu6/ipu6-dma.h +++ b/drivers/media/pci/intel/ipu6/ipu6-dma.h @@ -5,7 +5,13 @@ #define IPU6_DMA_H
#include <linux/dma-map-ops.h> +#include <linux/dma-mapping.h> #include <linux/iova.h> +#include <linux/iova.h> +#include <linux/scatterlist.h> +#include <linux/types.h> + +#include "ipu6-bus.h"
struct ipu6_mmu_info;
@@ -14,6 +20,30 @@ struct ipu6_dma_mapping { struct iova_domain iovad; };
-extern const struct dma_map_ops ipu6_dma_ops; - +void ipu6_dma_sync_single(struct ipu6_bus_device *sys, dma_addr_t dma_handle, + size_t size); +void ipu6_dma_sync_sg(struct ipu6_bus_device *sys, struct scatterlist *sglist, + int nents); +void ipu6_dma_sync_sgtable(struct ipu6_bus_device *sys, struct sg_table *sgt); +void *ipu6_dma_alloc(struct ipu6_bus_device *sys, size_t size, + dma_addr_t *dma_handle, gfp_t gfp, + unsigned long attrs); +void ipu6_dma_free(struct ipu6_bus_device *sys, size_t size, void *vaddr, + dma_addr_t dma_handle, unsigned long attrs); +int ipu6_dma_mmap(struct ipu6_bus_device *sys, struct vm_area_struct *vma, + void *addr, dma_addr_t iova, size_t size, + unsigned long attrs); +int ipu6_dma_map_sg(struct ipu6_bus_device *sys, struct scatterlist *sglist, + int nents, enum dma_data_direction dir, + unsigned long attrs); +void ipu6_dma_unmap_sg(struct ipu6_bus_device *sys, struct scatterlist *sglist, + int nents, enum dma_data_direction dir, + unsigned long attrs); +int ipu6_dma_map_sgtable(struct ipu6_bus_device *sys, struct sg_table *sgt, + enum dma_data_direction dir, unsigned long attrs); +void ipu6_dma_unmap_sgtable(struct ipu6_bus_device *sys, struct sg_table *sgt, + enum dma_data_direction dir, unsigned long attrs); +int ipu6_dma_get_sgtable(struct ipu6_bus_device *sys, struct sg_table *sgt, + void *cpu_addr, dma_addr_t handle, size_t size, + unsigned long attrs); #endif /* IPU6_DMA_H */ diff --git a/drivers/media/pci/intel/ipu6/ipu6-fw-com.c b/drivers/media/pci/intel/ipu6/ipu6-fw-com.c index 0b33fe9e703d..7d3d9314cb30 100644 --- a/drivers/media/pci/intel/ipu6/ipu6-fw-com.c +++ b/drivers/media/pci/intel/ipu6/ipu6-fw-com.c @@ -12,6 +12,7 @@ #include <linux/types.h>
#include "ipu6-bus.h" +#include "ipu6-dma.h" #include "ipu6-fw-com.h"
/* @@ -88,7 +89,6 @@ struct ipu6_fw_com_context { void *dma_buffer; dma_addr_t dma_addr; unsigned int dma_size; - unsigned long attrs;
struct ipu6_fw_sys_queue *input_queue; /* array of host to SP queues */ struct ipu6_fw_sys_queue *output_queue; /* array of SP to host */ @@ -164,7 +164,6 @@ void *ipu6_fw_com_prepare(struct ipu6_fw_com_cfg *cfg, struct ipu6_fw_com_context *ctx; struct device *dev = &adev->auxdev.dev; size_t sizeall, offset; - unsigned long attrs = 0; void *specific_host_addr; unsigned int i;
@@ -206,9 +205,8 @@ void *ipu6_fw_com_prepare(struct ipu6_fw_com_cfg *cfg,
sizeall += sizeinput + sizeoutput;
- ctx->dma_buffer = dma_alloc_attrs(dev, sizeall, &ctx->dma_addr, - GFP_KERNEL, attrs); - ctx->attrs = attrs; + ctx->dma_buffer = ipu6_dma_alloc(adev, sizeall, &ctx->dma_addr, + GFP_KERNEL, 0); if (!ctx->dma_buffer) { dev_err(dev, "failed to allocate dma memory\n"); kfree(ctx); @@ -239,6 +237,8 @@ void *ipu6_fw_com_prepare(struct ipu6_fw_com_cfg *cfg, memcpy(specific_host_addr, cfg->specific_addr, cfg->specific_size);
+ ipu6_dma_sync_single(adev, ctx->config_vied_addr, sizeall); + /* initialize input queues */ offset += specific_size; res.reg = SYSCOM_QPR_BASE_REG; @@ -315,8 +315,8 @@ int ipu6_fw_com_release(struct ipu6_fw_com_context *ctx, unsigned int force) if (!force && !ctx->cell_ready(ctx->adev)) return -EBUSY;
- dma_free_attrs(&ctx->adev->auxdev.dev, ctx->dma_size, - ctx->dma_buffer, ctx->dma_addr, ctx->attrs); + ipu6_dma_free(ctx->adev, ctx->dma_size, + ctx->dma_buffer, ctx->dma_addr, 0); kfree(ctx); return 0; } diff --git a/drivers/media/pci/intel/ipu6/ipu6-mmu.c b/drivers/media/pci/intel/ipu6/ipu6-mmu.c index c3a20507d6db..57298ac73d07 100644 --- a/drivers/media/pci/intel/ipu6/ipu6-mmu.c +++ b/drivers/media/pci/intel/ipu6/ipu6-mmu.c @@ -97,13 +97,15 @@ static void page_table_dump(struct ipu6_mmu_info *mmu_info) for (l1_idx = 0; l1_idx < ISP_L1PT_PTES; l1_idx++) { u32 l2_idx; u32 iova = (phys_addr_t)l1_idx << ISP_L1PT_SHIFT; + phys_addr_t l2_phys;
if (mmu_info->l1_pt[l1_idx] == mmu_info->dummy_l2_pteval) continue; + + l2_phys = TBL_PHYS_ADDR(mmu_info->l1_pt[l1_idx]); dev_dbg(mmu_info->dev, - "l1 entry %u; iovas 0x%8.8x-0x%8.8x, at %pa\n", - l1_idx, iova, iova + ISP_PAGE_SIZE, - TBL_PHYS_ADDR(mmu_info->l1_pt[l1_idx])); + "l1 entry %u; iovas 0x%8.8x-0x%8.8x, at %pap\n", + l1_idx, iova, iova + ISP_PAGE_SIZE, &l2_phys);

for (l2_idx = 0; l2_idx < ISP_L2PT_PTES; l2_idx++) { u32 *l2_pt = mmu_info->l2_pts[l1_idx]; @@ -227,7 +229,7 @@ static u32 *alloc_l1_pt(struct ipu6_mmu_info *mmu_info) }
mmu_info->l1_pt_dma = dma >> ISP_PADDR_SHIFT; - dev_dbg(mmu_info->dev, "l1 pt %p mapped at %llx\n", pt, dma); + dev_dbg(mmu_info->dev, "l1 pt %p mapped at %pad\n", pt, &dma);
return pt;
@@ -330,8 +332,8 @@ static int __ipu6_mmu_map(struct ipu6_mmu_info *mmu_info, unsigned long iova, u32 iova_end = ALIGN(iova + size, ISP_PAGE_SIZE);
dev_dbg(mmu_info->dev, - "mapping iova 0x%8.8x--0x%8.8x, size %zu at paddr 0x%10.10llx\n", - iova_start, iova_end, size, paddr); + "mapping iova 0x%8.8x--0x%8.8x, size %zu at paddr %pap\n", + iova_start, iova_end, size, &paddr);
return l2_map(mmu_info, iova_start, paddr, size); } @@ -361,10 +363,13 @@ static size_t l2_unmap(struct ipu6_mmu_info *mmu_info, unsigned long iova, for (l2_idx = (iova_start & ISP_L2PT_MASK) >> ISP_L2PT_SHIFT; (iova_start & ISP_L1PT_MASK) + (l2_idx << ISP_PAGE_SHIFT) < iova_start + size && l2_idx < ISP_L2PT_PTES; l2_idx++) { + phys_addr_t pteval; + l2_pt = mmu_info->l2_pts[l1_idx]; + pteval = TBL_PHYS_ADDR(l2_pt[l2_idx]); dev_dbg(mmu_info->dev, - "unmap l2 index %u with pteval 0x%10.10llx\n", - l2_idx, TBL_PHYS_ADDR(l2_pt[l2_idx])); + "unmap l2 index %u with pteval 0x%p\n", + l2_idx, &pteval); l2_pt[l2_idx] = mmu_info->dummy_page_pteval;
clflush_cache_range((void *)&l2_pt[l2_idx], @@ -525,9 +530,10 @@ static struct ipu6_mmu_info *ipu6_mmu_alloc(struct ipu6_device *isp) return NULL;
mmu_info->aperture_start = 0; - mmu_info->aperture_end = DMA_BIT_MASK(isp->secure_mode ? - IPU6_MMU_ADDR_BITS : - IPU6_MMU_ADDR_BITS_NON_SECURE); + mmu_info->aperture_end = + (dma_addr_t)DMA_BIT_MASK(isp->secure_mode ? + IPU6_MMU_ADDR_BITS : + IPU6_MMU_ADDR_BITS_NON_SECURE); mmu_info->pgsize_bitmap = SZ_4K; mmu_info->dev = &isp->pdev->dev;
diff --git a/drivers/media/pci/intel/ipu6/ipu6.c b/drivers/media/pci/intel/ipu6/ipu6.c index 7fb707d35309..91718eabd74e 100644 --- a/drivers/media/pci/intel/ipu6/ipu6.c +++ b/drivers/media/pci/intel/ipu6/ipu6.c @@ -752,6 +752,9 @@ static void ipu6_pci_reset_done(struct pci_dev *pdev) */ static int ipu6_suspend(struct device *dev) { + struct pci_dev *pdev = to_pci_dev(dev); + + synchronize_irq(pdev->irq); return 0; }
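For context, a minimal sketch (not part of the patch) of how a caller would use the ipu6_dma_* helpers declared in the ipu6-dma.h hunk above, in place of the generic dma_*_attrs() calls they replace. The function name ipu6_example_buffer is hypothetical; the ipu6_bus_device pointer and buffer size are assumed to come from the caller:

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/string.h>

#include "ipu6-bus.h"
#include "ipu6-dma.h"

/* Illustrative only: allocate an IPU6 DMA buffer, fill it from the CPU,
 * flush it towards the device, then release it again. */
static int ipu6_example_buffer(struct ipu6_bus_device *adev, size_t size)
{
	dma_addr_t dma_addr;
	void *buf;

	buf = ipu6_dma_alloc(adev, size, &dma_addr, GFP_KERNEL, 0);
	if (!buf)
		return -ENOMEM;

	memset(buf, 0, size);
	/* CPU writes must be made visible to the IPU explicitly. */
	ipu6_dma_sync_single(adev, dma_addr, size);

	/* dma_addr_t is printed by reference, as in the ipu6-mmu.c hunks. */
	dev_dbg(&adev->auxdev.dev, "buffer mapped at %pad\n", &dma_addr);

	ipu6_dma_free(adev, size, buf, dma_addr, 0);
	return 0;
}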
diff --git a/drivers/media/radio/wl128x/fmdrv_common.c b/drivers/media/radio/wl128x/fmdrv_common.c index 3d36f323a8f8..4d032436691c 100644 --- a/drivers/media/radio/wl128x/fmdrv_common.c +++ b/drivers/media/radio/wl128x/fmdrv_common.c @@ -466,11 +466,12 @@ int fmc_send_cmd(struct fmdev *fmdev, u8 fm_op, u16 type, void *payload, jiffies_to_msecs(FM_DRV_TX_TIMEOUT) / 1000); return -ETIMEDOUT; } + spin_lock_irqsave(&fmdev->resp_skb_lock, flags); if (!fmdev->resp_skb) { + spin_unlock_irqrestore(&fmdev->resp_skb_lock, flags); fmerr("Response SKB is missing\n"); return -EFAULT; } - spin_lock_irqsave(&fmdev->resp_skb_lock, flags); skb = fmdev->resp_skb; fmdev->resp_skb = NULL; spin_unlock_irqrestore(&fmdev->resp_skb_lock, flags); diff --git a/drivers/media/test-drivers/vivid/vivid-vid-cap.c b/drivers/media/test-drivers/vivid/vivid-vid-cap.c index 6a790ac8cbe6..f25e01115364 100644 --- a/drivers/media/test-drivers/vivid/vivid-vid-cap.c +++ b/drivers/media/test-drivers/vivid/vivid-vid-cap.c @@ -1459,12 +1459,19 @@ static bool valid_cvt_gtf_timings(struct v4l2_dv_timings *timings) h_freq = (u32)bt->pixelclock / total_h_pixel;
if (bt->standards == 0 || (bt->standards & V4L2_DV_BT_STD_CVT)) { + struct v4l2_dv_timings cvt = {}; + if (v4l2_detect_cvt(total_v_lines, h_freq, bt->vsync, bt->width, - bt->polarities, bt->interlaced, timings)) + bt->polarities, bt->interlaced, + &vivid_dv_timings_cap, &cvt) && + cvt.bt.width == bt->width && cvt.bt.height == bt->height) { + *timings = cvt; return true; + } }
if (bt->standards == 0 || (bt->standards & V4L2_DV_BT_STD_GTF)) { + struct v4l2_dv_timings gtf = {}; struct v4l2_fract aspect_ratio;
find_aspect_ratio(bt->width, bt->height, @@ -1472,8 +1479,12 @@ static bool valid_cvt_gtf_timings(struct v4l2_dv_timings *timings) &aspect_ratio.denominator); if (v4l2_detect_gtf(total_v_lines, h_freq, bt->vsync, bt->polarities, bt->interlaced, - aspect_ratio, timings)) + aspect_ratio, &vivid_dv_timings_cap, + &gtf) && + gtf.bt.width == bt->width && gtf.bt.height == bt->height) { + *timings = gtf; return true; + } } return false; } diff --git a/drivers/media/v4l2-core/v4l2-dv-timings.c b/drivers/media/v4l2-core/v4l2-dv-timings.c index 942d0005c55e..2cf5dcee0ce8 100644 --- a/drivers/media/v4l2-core/v4l2-dv-timings.c +++ b/drivers/media/v4l2-core/v4l2-dv-timings.c @@ -481,25 +481,28 @@ EXPORT_SYMBOL_GPL(v4l2_calc_timeperframe); * @polarities - the horizontal and vertical polarities (same as struct * v4l2_bt_timings polarities). * @interlaced - if this flag is true, it indicates interlaced format - * @fmt - the resulting timings. + * @cap - the v4l2_dv_timings_cap capabilities. + * @timings - the resulting timings. * * This function will attempt to detect if the given values correspond to a * valid CVT format. If so, then it will return true, and fmt will be filled * in with the found CVT timings. */ -bool v4l2_detect_cvt(unsigned frame_height, - unsigned hfreq, - unsigned vsync, - unsigned active_width, u32 polarities, bool interlaced, - struct v4l2_dv_timings *fmt) +bool v4l2_detect_cvt(unsigned int frame_height, + unsigned int hfreq, + unsigned int vsync, + unsigned int active_width, u32 polarities, bool interlaced, + const struct v4l2_dv_timings_cap *cap, + struct v4l2_dv_timings *timings) { - int v_fp, v_bp, h_fp, h_bp, hsync; - int frame_width, image_height, image_width; + struct v4l2_dv_timings t = {}; + int v_fp, v_bp, h_fp, h_bp, hsync; + int frame_width, image_height, image_width; bool reduced_blanking; bool rb_v2 = false; - unsigned pix_clk; + unsigned int pix_clk;
if (vsync < 4 || vsync > 8) return false; @@ -625,36 +628,39 @@ bool v4l2_detect_cvt(unsigned frame_height, h_fp = h_blank - hsync - h_bp; }
- fmt->type = V4L2_DV_BT_656_1120; - fmt->bt.polarities = polarities; - fmt->bt.width = image_width; - fmt->bt.height = image_height; - fmt->bt.hfrontporch = h_fp; - fmt->bt.vfrontporch = v_fp; - fmt->bt.hsync = hsync; - fmt->bt.vsync = vsync; - fmt->bt.hbackporch = frame_width - image_width - h_fp - hsync; + t.type = V4L2_DV_BT_656_1120; + t.bt.polarities = polarities; + t.bt.width = image_width; + t.bt.height = image_height; + t.bt.hfrontporch = h_fp; + t.bt.vfrontporch = v_fp; + t.bt.hsync = hsync; + t.bt.vsync = vsync; + t.bt.hbackporch = frame_width - image_width - h_fp - hsync;
if (!interlaced) { - fmt->bt.vbackporch = frame_height - image_height - v_fp - vsync; - fmt->bt.interlaced = V4L2_DV_PROGRESSIVE; + t.bt.vbackporch = frame_height - image_height - v_fp - vsync; + t.bt.interlaced = V4L2_DV_PROGRESSIVE; } else { - fmt->bt.vbackporch = (frame_height - image_height - 2 * v_fp - + t.bt.vbackporch = (frame_height - image_height - 2 * v_fp - 2 * vsync) / 2; - fmt->bt.il_vbackporch = frame_height - image_height - 2 * v_fp - - 2 * vsync - fmt->bt.vbackporch; - fmt->bt.il_vfrontporch = v_fp; - fmt->bt.il_vsync = vsync; - fmt->bt.flags |= V4L2_DV_FL_HALF_LINE; - fmt->bt.interlaced = V4L2_DV_INTERLACED; + t.bt.il_vbackporch = frame_height - image_height - 2 * v_fp - + 2 * vsync - t.bt.vbackporch; + t.bt.il_vfrontporch = v_fp; + t.bt.il_vsync = vsync; + t.bt.flags |= V4L2_DV_FL_HALF_LINE; + t.bt.interlaced = V4L2_DV_INTERLACED; }
- fmt->bt.pixelclock = pix_clk; - fmt->bt.standards = V4L2_DV_BT_STD_CVT; + t.bt.pixelclock = pix_clk; + t.bt.standards = V4L2_DV_BT_STD_CVT;
if (reduced_blanking) - fmt->bt.flags |= V4L2_DV_FL_REDUCED_BLANKING; + t.bt.flags |= V4L2_DV_FL_REDUCED_BLANKING;
+ if (!v4l2_valid_dv_timings(&t, cap, NULL, NULL)) + return false; + *timings = t; return true; } EXPORT_SYMBOL_GPL(v4l2_detect_cvt); @@ -699,22 +705,25 @@ EXPORT_SYMBOL_GPL(v4l2_detect_cvt); * image height, so it has to be passed explicitly. Usually * the native screen aspect ratio is used for this. If it * is not filled in correctly, then 16:9 will be assumed. - * @fmt - the resulting timings. + * @cap - the v4l2_dv_timings_cap capabilities. + * @timings - the resulting timings. * * This function will attempt to detect if the given values correspond to a * valid GTF format. If so, then it will return true, and fmt will be filled * in with the found GTF timings. */ -bool v4l2_detect_gtf(unsigned frame_height, - unsigned hfreq, - unsigned vsync, - u32 polarities, - bool interlaced, - struct v4l2_fract aspect, - struct v4l2_dv_timings *fmt) +bool v4l2_detect_gtf(unsigned int frame_height, + unsigned int hfreq, + unsigned int vsync, + u32 polarities, + bool interlaced, + struct v4l2_fract aspect, + const struct v4l2_dv_timings_cap *cap, + struct v4l2_dv_timings *timings) { + struct v4l2_dv_timings t = {}; int pix_clk; - int v_fp, v_bp, h_fp, hsync; + int v_fp, v_bp, h_fp, hsync; int frame_width, image_height, image_width; bool default_gtf; int h_blank; @@ -783,36 +792,39 @@ bool v4l2_detect_gtf(unsigned frame_height,
h_fp = h_blank / 2 - hsync;
- fmt->type = V4L2_DV_BT_656_1120; - fmt->bt.polarities = polarities; - fmt->bt.width = image_width; - fmt->bt.height = image_height; - fmt->bt.hfrontporch = h_fp; - fmt->bt.vfrontporch = v_fp; - fmt->bt.hsync = hsync; - fmt->bt.vsync = vsync; - fmt->bt.hbackporch = frame_width - image_width - h_fp - hsync; + t.type = V4L2_DV_BT_656_1120; + t.bt.polarities = polarities; + t.bt.width = image_width; + t.bt.height = image_height; + t.bt.hfrontporch = h_fp; + t.bt.vfrontporch = v_fp; + t.bt.hsync = hsync; + t.bt.vsync = vsync; + t.bt.hbackporch = frame_width - image_width - h_fp - hsync;
if (!interlaced) { - fmt->bt.vbackporch = frame_height - image_height - v_fp - vsync; - fmt->bt.interlaced = V4L2_DV_PROGRESSIVE; + t.bt.vbackporch = frame_height - image_height - v_fp - vsync; + t.bt.interlaced = V4L2_DV_PROGRESSIVE; } else { - fmt->bt.vbackporch = (frame_height - image_height - 2 * v_fp - + t.bt.vbackporch = (frame_height - image_height - 2 * v_fp - 2 * vsync) / 2; - fmt->bt.il_vbackporch = frame_height - image_height - 2 * v_fp - - 2 * vsync - fmt->bt.vbackporch; - fmt->bt.il_vfrontporch = v_fp; - fmt->bt.il_vsync = vsync; - fmt->bt.flags |= V4L2_DV_FL_HALF_LINE; - fmt->bt.interlaced = V4L2_DV_INTERLACED; + t.bt.il_vbackporch = frame_height - image_height - 2 * v_fp - + 2 * vsync - t.bt.vbackporch; + t.bt.il_vfrontporch = v_fp; + t.bt.il_vsync = vsync; + t.bt.flags |= V4L2_DV_FL_HALF_LINE; + t.bt.interlaced = V4L2_DV_INTERLACED; }
- fmt->bt.pixelclock = pix_clk; - fmt->bt.standards = V4L2_DV_BT_STD_GTF; + t.bt.pixelclock = pix_clk; + t.bt.standards = V4L2_DV_BT_STD_GTF;
if (!default_gtf) - fmt->bt.flags |= V4L2_DV_FL_REDUCED_BLANKING; + t.bt.flags |= V4L2_DV_FL_REDUCED_BLANKING;
+ if (!v4l2_valid_dv_timings(&t, cap, NULL, NULL)) + return false; + *timings = t; return true; } EXPORT_SYMBOL_GPL(v4l2_detect_gtf); diff --git a/drivers/message/fusion/mptsas.c b/drivers/message/fusion/mptsas.c index a0bcb0864ecd..a798e26c6402 100644 --- a/drivers/message/fusion/mptsas.c +++ b/drivers/message/fusion/mptsas.c @@ -4231,10 +4231,8 @@ mptsas_find_phyinfo_by_phys_disk_num(MPT_ADAPTER *ioc, u8 phys_disk_num, static void mptsas_reprobe_lun(struct scsi_device *sdev, void *data) { - int rc; - sdev->no_uld_attach = data ? 1 : 0; - rc = scsi_device_reprobe(sdev); + WARN_ON(scsi_device_reprobe(sdev)); }
static void diff --git a/drivers/mfd/da9052-spi.c b/drivers/mfd/da9052-spi.c index be5f2b34e18a..80fc5c0cac2f 100644 --- a/drivers/mfd/da9052-spi.c +++ b/drivers/mfd/da9052-spi.c @@ -37,7 +37,7 @@ static int da9052_spi_probe(struct spi_device *spi) spi_set_drvdata(spi, da9052);
config = da9052_regmap_config; - config.read_flag_mask = 1; + config.write_flag_mask = 1; config.reg_bits = 7; config.pad_bits = 1; config.val_bits = 8; diff --git a/drivers/mfd/intel_soc_pmic_bxtwc.c b/drivers/mfd/intel_soc_pmic_bxtwc.c index ccd76800d8e4..b7204072e93e 100644 --- a/drivers/mfd/intel_soc_pmic_bxtwc.c +++ b/drivers/mfd/intel_soc_pmic_bxtwc.c @@ -148,6 +148,7 @@ static const struct regmap_irq_chip bxtwc_regmap_irq_chip = {
static const struct regmap_irq_chip bxtwc_regmap_irq_chip_pwrbtn = { .name = "bxtwc_irq_chip_pwrbtn", + .domain_suffix = "PWRBTN", .status_base = BXTWC_PWRBTNIRQ, .mask_base = BXTWC_MPWRBTNIRQ, .irqs = bxtwc_regmap_irqs_pwrbtn, @@ -157,6 +158,7 @@ static const struct regmap_irq_chip bxtwc_regmap_irq_chip_pwrbtn = {
static const struct regmap_irq_chip bxtwc_regmap_irq_chip_tmu = { .name = "bxtwc_irq_chip_tmu", + .domain_suffix = "TMU", .status_base = BXTWC_TMUIRQ, .mask_base = BXTWC_MTMUIRQ, .irqs = bxtwc_regmap_irqs_tmu, @@ -166,6 +168,7 @@ static const struct regmap_irq_chip bxtwc_regmap_irq_chip_tmu = {
static const struct regmap_irq_chip bxtwc_regmap_irq_chip_bcu = { .name = "bxtwc_irq_chip_bcu", + .domain_suffix = "BCU", .status_base = BXTWC_BCUIRQ, .mask_base = BXTWC_MBCUIRQ, .irqs = bxtwc_regmap_irqs_bcu, @@ -175,6 +178,7 @@ static const struct regmap_irq_chip bxtwc_regmap_irq_chip_bcu = {
static const struct regmap_irq_chip bxtwc_regmap_irq_chip_adc = { .name = "bxtwc_irq_chip_adc", + .domain_suffix = "ADC", .status_base = BXTWC_ADCIRQ, .mask_base = BXTWC_MADCIRQ, .irqs = bxtwc_regmap_irqs_adc, @@ -184,6 +188,7 @@ static const struct regmap_irq_chip bxtwc_regmap_irq_chip_adc = {
static const struct regmap_irq_chip bxtwc_regmap_irq_chip_chgr = { .name = "bxtwc_irq_chip_chgr", + .domain_suffix = "CHGR", .status_base = BXTWC_CHGR0IRQ, .mask_base = BXTWC_MCHGR0IRQ, .irqs = bxtwc_regmap_irqs_chgr, @@ -193,6 +198,7 @@ static const struct regmap_irq_chip bxtwc_regmap_irq_chip_chgr = {
static const struct regmap_irq_chip bxtwc_regmap_irq_chip_crit = { .name = "bxtwc_irq_chip_crit", + .domain_suffix = "CRIT", .status_base = BXTWC_CRITIRQ, .mask_base = BXTWC_MCRITIRQ, .irqs = bxtwc_regmap_irqs_crit, @@ -230,44 +236,55 @@ static const struct resource tmu_resources[] = { };
static struct mfd_cell bxt_wc_dev[] = { - { - .name = "bxt_wcove_gpadc", - .num_resources = ARRAY_SIZE(adc_resources), - .resources = adc_resources, - }, { .name = "bxt_wcove_thermal", .num_resources = ARRAY_SIZE(thermal_resources), .resources = thermal_resources, }, { - .name = "bxt_wcove_usbc", - .num_resources = ARRAY_SIZE(usbc_resources), - .resources = usbc_resources, + .name = "bxt_wcove_gpio", + .num_resources = ARRAY_SIZE(gpio_resources), + .resources = gpio_resources, }, { - .name = "bxt_wcove_ext_charger", - .num_resources = ARRAY_SIZE(charger_resources), - .resources = charger_resources, + .name = "bxt_wcove_region", + }, +}; + +static const struct mfd_cell bxt_wc_tmu_dev[] = { + { + .name = "bxt_wcove_tmu", + .num_resources = ARRAY_SIZE(tmu_resources), + .resources = tmu_resources, }, +}; + +static const struct mfd_cell bxt_wc_bcu_dev[] = { { .name = "bxt_wcove_bcu", .num_resources = ARRAY_SIZE(bcu_resources), .resources = bcu_resources, }, +}; + +static const struct mfd_cell bxt_wc_adc_dev[] = { { - .name = "bxt_wcove_tmu", - .num_resources = ARRAY_SIZE(tmu_resources), - .resources = tmu_resources, + .name = "bxt_wcove_gpadc", + .num_resources = ARRAY_SIZE(adc_resources), + .resources = adc_resources, }, +};
+static struct mfd_cell bxt_wc_chgr_dev[] = { { - .name = "bxt_wcove_gpio", - .num_resources = ARRAY_SIZE(gpio_resources), - .resources = gpio_resources, + .name = "bxt_wcove_usbc", + .num_resources = ARRAY_SIZE(usbc_resources), + .resources = usbc_resources, }, { - .name = "bxt_wcove_region", + .name = "bxt_wcove_ext_charger", + .num_resources = ARRAY_SIZE(charger_resources), + .resources = charger_resources, }, };
@@ -425,6 +442,26 @@ static int bxtwc_add_chained_irq_chip(struct intel_soc_pmic *pmic, 0, chip, data); }
+static int bxtwc_add_chained_devices(struct intel_soc_pmic *pmic, + const struct mfd_cell *cells, int n_devs, + struct regmap_irq_chip_data *pdata, + int pirq, int irq_flags, + const struct regmap_irq_chip *chip, + struct regmap_irq_chip_data **data) +{ + struct device *dev = pmic->dev; + struct irq_domain *domain; + int ret; + + ret = bxtwc_add_chained_irq_chip(pmic, pdata, pirq, irq_flags, chip, data); + if (ret) + return dev_err_probe(dev, ret, "Failed to add %s IRQ chip\n", chip->name); + + domain = regmap_irq_get_domain(*data); + + return devm_mfd_add_devices(dev, PLATFORM_DEVID_NONE, cells, n_devs, NULL, 0, domain); +} + static int bxtwc_probe(struct platform_device *pdev) { struct device *dev = &pdev->dev; @@ -466,6 +503,15 @@ static int bxtwc_probe(struct platform_device *pdev) if (ret) return dev_err_probe(dev, ret, "Failed to add IRQ chip\n");
+ ret = bxtwc_add_chained_devices(pmic, bxt_wc_tmu_dev, ARRAY_SIZE(bxt_wc_tmu_dev), + pmic->irq_chip_data, + BXTWC_TMU_LVL1_IRQ, + IRQF_ONESHOT, + &bxtwc_regmap_irq_chip_tmu, + &pmic->irq_chip_data_tmu); + if (ret) + return ret; + ret = bxtwc_add_chained_irq_chip(pmic, pmic->irq_chip_data, BXTWC_PWRBTN_LVL1_IRQ, IRQF_ONESHOT, @@ -474,40 +520,32 @@ static int bxtwc_probe(struct platform_device *pdev) if (ret) return dev_err_probe(dev, ret, "Failed to add PWRBTN IRQ chip\n");
- ret = bxtwc_add_chained_irq_chip(pmic, pmic->irq_chip_data, - BXTWC_TMU_LVL1_IRQ, - IRQF_ONESHOT, - &bxtwc_regmap_irq_chip_tmu, - &pmic->irq_chip_data_tmu); - if (ret) - return dev_err_probe(dev, ret, "Failed to add TMU IRQ chip\n"); - - /* Add chained IRQ handler for BCU IRQs */ - ret = bxtwc_add_chained_irq_chip(pmic, pmic->irq_chip_data, - BXTWC_BCU_LVL1_IRQ, - IRQF_ONESHOT, - &bxtwc_regmap_irq_chip_bcu, - &pmic->irq_chip_data_bcu); + ret = bxtwc_add_chained_devices(pmic, bxt_wc_bcu_dev, ARRAY_SIZE(bxt_wc_bcu_dev), + pmic->irq_chip_data, + BXTWC_BCU_LVL1_IRQ, + IRQF_ONESHOT, + &bxtwc_regmap_irq_chip_bcu, + &pmic->irq_chip_data_bcu); if (ret) - return dev_err_probe(dev, ret, "Failed to add BUC IRQ chip\n"); + return ret;
- /* Add chained IRQ handler for ADC IRQs */ - ret = bxtwc_add_chained_irq_chip(pmic, pmic->irq_chip_data, - BXTWC_ADC_LVL1_IRQ, - IRQF_ONESHOT, - &bxtwc_regmap_irq_chip_adc, - &pmic->irq_chip_data_adc); + ret = bxtwc_add_chained_devices(pmic, bxt_wc_adc_dev, ARRAY_SIZE(bxt_wc_adc_dev), + pmic->irq_chip_data, + BXTWC_ADC_LVL1_IRQ, + IRQF_ONESHOT, + &bxtwc_regmap_irq_chip_adc, + &pmic->irq_chip_data_adc); if (ret) - return dev_err_probe(dev, ret, "Failed to add ADC IRQ chip\n"); + return ret;
- /* Add chained IRQ handler for CHGR IRQs */ - ret = bxtwc_add_chained_irq_chip(pmic, pmic->irq_chip_data, - BXTWC_CHGR_LVL1_IRQ, - IRQF_ONESHOT, - &bxtwc_regmap_irq_chip_chgr, - &pmic->irq_chip_data_chgr); + ret = bxtwc_add_chained_devices(pmic, bxt_wc_chgr_dev, ARRAY_SIZE(bxt_wc_chgr_dev), + pmic->irq_chip_data, + BXTWC_CHGR_LVL1_IRQ, + IRQF_ONESHOT, + &bxtwc_regmap_irq_chip_chgr, + &pmic->irq_chip_data_chgr); if (ret) - return dev_err_probe(dev, ret, "Failed to add CHGR IRQ chip\n"); + return ret;
/* Add chained IRQ handler for CRIT IRQs */ ret = bxtwc_add_chained_irq_chip(pmic, pmic->irq_chip_data, diff --git a/drivers/mfd/rt5033.c b/drivers/mfd/rt5033.c index 7e23ab3d5842..84ebc96f58e4 100644 --- a/drivers/mfd/rt5033.c +++ b/drivers/mfd/rt5033.c @@ -81,8 +81,8 @@ static int rt5033_i2c_probe(struct i2c_client *i2c) chip_rev = dev_id & RT5033_CHIP_REV_MASK; dev_info(&i2c->dev, "Device found (rev. %d)\n", chip_rev);
- ret = regmap_add_irq_chip(rt5033->regmap, rt5033->irq, - IRQF_TRIGGER_FALLING | IRQF_ONESHOT, + ret = devm_regmap_add_irq_chip(rt5033->dev, rt5033->regmap, + rt5033->irq, IRQF_TRIGGER_FALLING | IRQF_ONESHOT, 0, &rt5033_irq_chip, &rt5033->irq_data); if (ret) { dev_err(&i2c->dev, "Failed to request IRQ %d: %d\n", diff --git a/drivers/mfd/tps65010.c b/drivers/mfd/tps65010.c index 2b9105295f30..710364435b6b 100644 --- a/drivers/mfd/tps65010.c +++ b/drivers/mfd/tps65010.c @@ -544,17 +544,13 @@ static int tps65010_probe(struct i2c_client *client) */ if (client->irq > 0) { status = request_irq(client->irq, tps65010_irq, - IRQF_TRIGGER_FALLING, DRIVER_NAME, tps); + IRQF_TRIGGER_FALLING | IRQF_NO_AUTOEN, + DRIVER_NAME, tps); if (status < 0) { dev_dbg(&client->dev, "can't get IRQ %d, err %d\n", client->irq, status); return status; } - /* annoying race here, ideally we'd have an option - * to claim the irq now and enable it later. - * FIXME genirq IRQF_NOAUTOEN now solves that ... - */ - disable_irq(client->irq); set_bit(FLAG_IRQ_ENABLE, &tps->flags); } else dev_warn(&client->dev, "IRQ not configured!\n"); diff --git a/drivers/misc/apds990x.c b/drivers/misc/apds990x.c index 6d4edd69db12..e7d73c972f65 100644 --- a/drivers/misc/apds990x.c +++ b/drivers/misc/apds990x.c @@ -1147,7 +1147,7 @@ static int apds990x_probe(struct i2c_client *client) err = chip->pdata->setup_resources(); if (err) { err = -EINVAL; - goto fail3; + goto fail4; } }
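The tps65010 hunk above relies on IRQF_NO_AUTOEN so the interrupt is requested in a disabled state, which removes the request_irq()/disable_irq() race noted in the deleted FIXME. A minimal sketch of that pattern (handler, cookie and name below are hypothetical, not from this patch):

#include <linux/i2c.h>
#include <linux/interrupt.h>

static irqreturn_t example_irq_handler(int irq, void *dev_id)
{
	return IRQ_HANDLED;
}

static int example_setup_irq(struct i2c_client *client, void *priv)
{
	int ret;

	/* Request the IRQ but leave it masked until setup is complete. */
	ret = request_irq(client->irq, example_irq_handler,
			  IRQF_TRIGGER_FALLING | IRQF_NO_AUTOEN,
			  "example-driver", priv);
	if (ret)
		return ret;

	/* ... finish initialising hardware and driver state ... */

	enable_irq(client->irq);	/* handler may run from here on */
	return 0;
}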
@@ -1155,7 +1155,7 @@ static int apds990x_probe(struct i2c_client *client) apds990x_attribute_group); if (err < 0) { dev_err(&chip->client->dev, "Sysfs registration failed\n"); - goto fail4; + goto fail5; }
err = request_threaded_irq(client->irq, NULL, @@ -1166,15 +1166,17 @@ static int apds990x_probe(struct i2c_client *client) if (err) { dev_err(&client->dev, "could not get IRQ %d\n", client->irq); - goto fail5; + goto fail6; } return err; -fail5: +fail6: sysfs_remove_group(&chip->client->dev.kobj, &apds990x_attribute_group[0]); -fail4: +fail5: if (chip->pdata && chip->pdata->release_resources) chip->pdata->release_resources(); +fail4: + pm_runtime_disable(&client->dev); fail3: regulator_bulk_disable(ARRAY_SIZE(chip->regs), chip->regs); fail2: diff --git a/drivers/misc/lkdtm/bugs.c b/drivers/misc/lkdtm/bugs.c index 62ba01525479..376047beea3d 100644 --- a/drivers/misc/lkdtm/bugs.c +++ b/drivers/misc/lkdtm/bugs.c @@ -445,7 +445,7 @@ static void lkdtm_FAM_BOUNDS(void)
pr_err("FAIL: survived access of invalid flexible array member index!\n");
- if (!__has_attribute(__counted_by__)) + if (!IS_ENABLED(CONFIG_CC_HAS_COUNTED_BY)) pr_warn("This is expected since this %s was built with a compiler that does not support __counted_by\n", lkdtm_kernel_info); else if (IS_ENABLED(CONFIG_UBSAN_BOUNDS)) diff --git a/drivers/mmc/host/mmc_spi.c b/drivers/mmc/host/mmc_spi.c index 8fee7052f2ef..47443fb5eb33 100644 --- a/drivers/mmc/host/mmc_spi.c +++ b/drivers/mmc/host/mmc_spi.c @@ -222,10 +222,6 @@ static int mmc_spi_response_get(struct mmc_spi_host *host, u8 leftover = 0; unsigned short rotator; int i; - char tag[32]; - - snprintf(tag, sizeof(tag), " ... CMD%d response SPI_%s", - cmd->opcode, maptype(cmd));
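The lkdtm change above keys the expected-failure message off CONFIG_CC_HAS_COUNTED_BY rather than probing the attribute directly. For reference, a minimal sketch (not from this patch, struct name hypothetical) of the annotation the FAM_BOUNDS test exercises:

#include <linux/types.h>

/* A flexible array member annotated with __counted_by() lets
 * CONFIG_UBSAN_BOUNDS / FORTIFY check accesses against "count". */
struct example_records {
	int count;
	u32 values[] __counted_by(count);
};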
/* Except for data block reads, the whole response will already * be stored in the scratch buffer. It's somewhere after the @@ -378,8 +374,9 @@ static int mmc_spi_response_get(struct mmc_spi_host *host, }
if (value < 0) - dev_dbg(&host->spi->dev, "%s: resp %04x %08x\n", - tag, cmd->resp[0], cmd->resp[1]); + dev_dbg(&host->spi->dev, + " ... CMD%d response SPI_%s: resp %04x %08x\n", + cmd->opcode, maptype(cmd), cmd->resp[0], cmd->resp[1]);
/* disable chipselect on errors and some success cases */ if (value >= 0 && cs_on) diff --git a/drivers/mtd/hyperbus/rpc-if.c b/drivers/mtd/hyperbus/rpc-if.c index b22aa57119f2..e7a28f3316c3 100644 --- a/drivers/mtd/hyperbus/rpc-if.c +++ b/drivers/mtd/hyperbus/rpc-if.c @@ -163,9 +163,16 @@ static void rpcif_hb_remove(struct platform_device *pdev) pm_runtime_disable(hyperbus->rpc.dev); }
+static const struct platform_device_id rpc_if_hyperflash_id_table[] = { + { .name = "rpc-if-hyperflash" }, + { /* sentinel */ } +}; +MODULE_DEVICE_TABLE(platform, rpc_if_hyperflash_id_table); + static struct platform_driver rpcif_platform_driver = { .probe = rpcif_hb_probe, .remove_new = rpcif_hb_remove, + .id_table = rpc_if_hyperflash_id_table, .driver = { .name = "rpc-if-hyperflash", }, diff --git a/drivers/mtd/nand/raw/atmel/pmecc.c b/drivers/mtd/nand/raw/atmel/pmecc.c index 4d7dc8a9c373..a22aab4ed4e8 100644 --- a/drivers/mtd/nand/raw/atmel/pmecc.c +++ b/drivers/mtd/nand/raw/atmel/pmecc.c @@ -362,7 +362,7 @@ atmel_pmecc_create_user(struct atmel_pmecc *pmecc, size = ALIGN(size, sizeof(s32)); size += (req->ecc.strength + 1) * sizeof(s32) * 3;
- user = kzalloc(size, GFP_KERNEL); + user = devm_kzalloc(pmecc->dev, size, GFP_KERNEL); if (!user) return ERR_PTR(-ENOMEM);
@@ -408,12 +408,6 @@ atmel_pmecc_create_user(struct atmel_pmecc *pmecc, } EXPORT_SYMBOL_GPL(atmel_pmecc_create_user);
-void atmel_pmecc_destroy_user(struct atmel_pmecc_user *user) -{ - kfree(user); -} -EXPORT_SYMBOL_GPL(atmel_pmecc_destroy_user); - static int get_strength(struct atmel_pmecc_user *user) { const int *strengths = user->pmecc->caps->strengths; diff --git a/drivers/mtd/nand/raw/atmel/pmecc.h b/drivers/mtd/nand/raw/atmel/pmecc.h index 7851c05126cf..cc0c5af1f4f1 100644 --- a/drivers/mtd/nand/raw/atmel/pmecc.h +++ b/drivers/mtd/nand/raw/atmel/pmecc.h @@ -55,8 +55,6 @@ struct atmel_pmecc *devm_atmel_pmecc_get(struct device *dev); struct atmel_pmecc_user * atmel_pmecc_create_user(struct atmel_pmecc *pmecc, struct atmel_pmecc_user_req *req); -void atmel_pmecc_destroy_user(struct atmel_pmecc_user *user); - void atmel_pmecc_reset(struct atmel_pmecc *pmecc); int atmel_pmecc_enable(struct atmel_pmecc_user *user, int op); void atmel_pmecc_disable(struct atmel_pmecc_user *user); diff --git a/drivers/mtd/spi-nor/core.c b/drivers/mtd/spi-nor/core.c index 9d6e85bf227b..8c57df44c40f 100644 --- a/drivers/mtd/spi-nor/core.c +++ b/drivers/mtd/spi-nor/core.c @@ -89,7 +89,7 @@ void spi_nor_spimem_setup_op(const struct spi_nor *nor, op->addr.buswidth = spi_nor_get_protocol_addr_nbits(proto);
if (op->dummy.nbytes) - op->dummy.buswidth = spi_nor_get_protocol_addr_nbits(proto); + op->dummy.buswidth = spi_nor_get_protocol_data_nbits(proto);
if (op->data.nbytes) op->data.buswidth = spi_nor_get_protocol_data_nbits(proto); diff --git a/drivers/mtd/spi-nor/spansion.c b/drivers/mtd/spi-nor/spansion.c index d6c92595f6bc..5a88a6096ca8 100644 --- a/drivers/mtd/spi-nor/spansion.c +++ b/drivers/mtd/spi-nor/spansion.c @@ -106,6 +106,7 @@ static int cypress_nor_sr_ready_and_clear_reg(struct spi_nor *nor, u64 addr) int ret;
if (nor->reg_proto == SNOR_PROTO_8_8_8_DTR) { + op.addr.nbytes = nor->addr_nbytes; op.dummy.nbytes = params->rdsr_dummy; op.data.nbytes = 2; } diff --git a/drivers/mtd/ubi/attach.c b/drivers/mtd/ubi/attach.c index ae5abe492b52..adc47b87b38a 100644 --- a/drivers/mtd/ubi/attach.c +++ b/drivers/mtd/ubi/attach.c @@ -1447,7 +1447,7 @@ static int scan_all(struct ubi_device *ubi, struct ubi_attach_info *ai, return err; }
-static struct ubi_attach_info *alloc_ai(void) +static struct ubi_attach_info *alloc_ai(const char *slab_name) { struct ubi_attach_info *ai;
@@ -1461,7 +1461,7 @@ static struct ubi_attach_info *alloc_ai(void) INIT_LIST_HEAD(&ai->alien); INIT_LIST_HEAD(&ai->fastmap); ai->volumes = RB_ROOT; - ai->aeb_slab_cache = kmem_cache_create("ubi_aeb_slab_cache", + ai->aeb_slab_cache = kmem_cache_create(slab_name, sizeof(struct ubi_ainf_peb), 0, 0, NULL); if (!ai->aeb_slab_cache) { @@ -1491,7 +1491,7 @@ static int scan_fast(struct ubi_device *ubi, struct ubi_attach_info **ai)
err = -ENOMEM;
- scan_ai = alloc_ai(); + scan_ai = alloc_ai("ubi_aeb_slab_cache_fastmap"); if (!scan_ai) goto out;
@@ -1557,7 +1557,7 @@ int ubi_attach(struct ubi_device *ubi, int force_scan) int err; struct ubi_attach_info *ai;
- ai = alloc_ai(); + ai = alloc_ai("ubi_aeb_slab_cache"); if (!ai) return -ENOMEM;
@@ -1575,7 +1575,7 @@ int ubi_attach(struct ubi_device *ubi, int force_scan) if (err > 0 || mtd_is_eccerr(err)) { if (err != UBI_NO_FASTMAP) { destroy_ai(ai); - ai = alloc_ai(); + ai = alloc_ai("ubi_aeb_slab_cache"); if (!ai) return -ENOMEM;
@@ -1614,7 +1614,7 @@ int ubi_attach(struct ubi_device *ubi, int force_scan) if (ubi->fm && ubi_dbg_chk_fastmap(ubi)) { struct ubi_attach_info *scan_ai;
- scan_ai = alloc_ai(); + scan_ai = alloc_ai("ubi_aeb_slab_cache_dbg_chk_fastmap"); if (!scan_ai) { err = -ENOMEM; goto out_wl; diff --git a/drivers/mtd/ubi/fastmap-wl.c b/drivers/mtd/ubi/fastmap-wl.c index 2a9cc9413c42..9bdb6525f128 100644 --- a/drivers/mtd/ubi/fastmap-wl.c +++ b/drivers/mtd/ubi/fastmap-wl.c @@ -346,14 +346,27 @@ int ubi_wl_get_peb(struct ubi_device *ubi) * WL sub-system. * * @ubi: UBI device description object + * @need_fill: whether to fill wear-leveling pool when no PEBs are found */ -static struct ubi_wl_entry *next_peb_for_wl(struct ubi_device *ubi) +static struct ubi_wl_entry *next_peb_for_wl(struct ubi_device *ubi, + bool need_fill) { struct ubi_fm_pool *pool = &ubi->fm_wl_pool; int pnum;
- if (pool->used == pool->size) + if (pool->used == pool->size) { + if (need_fill && !ubi->fm_work_scheduled) { + /* + * We cannot update the fastmap here because this + * function is called in atomic context. + * Let's fail here and refill/update it as soon as + * possible. + */ + ubi->fm_work_scheduled = 1; + schedule_work(&ubi->fm_work); + } return NULL; + }
pnum = pool->pebs[pool->used]; return ubi->lookuptbl[pnum]; @@ -375,7 +388,7 @@ static bool need_wear_leveling(struct ubi_device *ubi) if (!ubi->used.rb_node) return false;
- e = next_peb_for_wl(ubi); + e = next_peb_for_wl(ubi, false); if (!e) { if (!ubi->free.rb_node) return false; diff --git a/drivers/mtd/ubi/vmt.c b/drivers/mtd/ubi/vmt.c index 5a3558bbb903..e5cf3bdca3b0 100644 --- a/drivers/mtd/ubi/vmt.c +++ b/drivers/mtd/ubi/vmt.c @@ -143,8 +143,10 @@ static struct fwnode_handle *find_volume_fwnode(struct ubi_volume *vol) vol->vol_id != volid) continue;
+ fwnode_handle_put(fw_vols); return fw_vol; } + fwnode_handle_put(fw_vols);
return NULL; } diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c index a357f3d27f2f..fbd399cf6503 100644 --- a/drivers/mtd/ubi/wl.c +++ b/drivers/mtd/ubi/wl.c @@ -683,7 +683,7 @@ static int wear_leveling_worker(struct ubi_device *ubi, struct ubi_work *wrk, ubi_assert(!ubi->move_to_put);
#ifdef CONFIG_MTD_UBI_FASTMAP - if (!next_peb_for_wl(ubi) || + if (!next_peb_for_wl(ubi, true) || #else if (!ubi->free.rb_node || #endif @@ -846,7 +846,14 @@ static int wear_leveling_worker(struct ubi_device *ubi, struct ubi_work *wrk, goto out_not_moved; } if (err == MOVE_RETRY) { - scrubbing = 1; + /* + * For source PEB: + * 1. The scrubbing is set for scrub type PEB, it will + * be put back into ubi->scrub list. + * 2. Non-scrub type PEB will be put back into ubi->used + * list. + */ + keep = 1; dst_leb_clean = 1; goto out_not_moved; } diff --git a/drivers/mtd/ubi/wl.h b/drivers/mtd/ubi/wl.h index 7b6715ef6d4a..a69169c35e31 100644 --- a/drivers/mtd/ubi/wl.h +++ b/drivers/mtd/ubi/wl.h @@ -5,7 +5,8 @@ static void update_fastmap_work_fn(struct work_struct *wrk); static struct ubi_wl_entry *find_anchor_wl_entry(struct rb_root *root); static struct ubi_wl_entry *get_peb_for_wl(struct ubi_device *ubi); -static struct ubi_wl_entry *next_peb_for_wl(struct ubi_device *ubi); +static struct ubi_wl_entry *next_peb_for_wl(struct ubi_device *ubi, + bool need_fill); static bool need_wear_leveling(struct ubi_device *ubi); static void ubi_fastmap_close(struct ubi_device *ubi); static inline void ubi_fastmap_init(struct ubi_device *ubi, int *count) diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c index 99d025b69079..3d9ee91e1f8b 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -4558,7 +4558,7 @@ int bnxt_set_rx_skb_mode(struct bnxt *bp, bool page_mode) struct net_device *dev = bp->dev;
if (page_mode) { - bp->flags &= ~BNXT_FLAG_AGG_RINGS; + bp->flags &= ~(BNXT_FLAG_AGG_RINGS | BNXT_FLAG_NO_AGG_RINGS); bp->flags |= BNXT_FLAG_RX_PAGE_MODE;
if (bp->xdp_prog->aux->xdp_has_frags) @@ -9053,7 +9053,6 @@ static int __bnxt_hwrm_ptp_qcfg(struct bnxt *bp) struct hwrm_port_mac_ptp_qcfg_output *resp; struct hwrm_port_mac_ptp_qcfg_input *req; struct bnxt_ptp_cfg *ptp = bp->ptp_cfg; - bool phc_cfg; u8 flags; int rc;
@@ -9100,8 +9099,9 @@ static int __bnxt_hwrm_ptp_qcfg(struct bnxt *bp) rc = -ENODEV; goto exit; } - phc_cfg = (flags & PORT_MAC_PTP_QCFG_RESP_FLAGS_RTC_CONFIGURED) != 0; - rc = bnxt_ptp_init(bp, phc_cfg); + ptp->rtc_configured = + (flags & PORT_MAC_PTP_QCFG_RESP_FLAGS_RTC_CONFIGURED) != 0; + rc = bnxt_ptp_init(bp); if (rc) netdev_warn(bp->dev, "PTP initialization failed.\n"); exit: @@ -14494,6 +14494,14 @@ static int bnxt_change_mtu(struct net_device *dev, int new_mtu) bnxt_close_nic(bp, true, false);
WRITE_ONCE(dev->mtu, new_mtu); + + /* MTU change may change the AGG ring settings if an XDP multi-buffer + * program is attached. We need to set the AGG rings settings and + * rx_skb_func accordingly. + */ + if (READ_ONCE(bp->xdp_prog)) + bnxt_set_rx_skb_mode(bp, true); + bnxt_set_ring_params(bp);
if (netif_running(dev)) @@ -15231,6 +15239,13 @@ static int bnxt_queue_start(struct net_device *dev, void *qmem, int idx)
for (i = 0; i <= BNXT_VNIC_NTUPLE; i++) { vnic = &bp->vnic_info[i]; + + rc = bnxt_hwrm_vnic_set_rss_p5(bp, vnic, true); + if (rc) { + netdev_err(bp->dev, "hwrm vnic %d set rss failure rc: %d\n", + vnic->vnic_id, rc); + return rc; + } vnic->mru = bp->dev->mtu + ETH_HLEN + VLAN_HLEN; bnxt_hwrm_vnic_update(bp, vnic, VNIC_UPDATE_REQ_ENABLES_MRU_VALID); @@ -15984,6 +15999,7 @@ static void bnxt_shutdown(struct pci_dev *pdev) if (netif_running(dev)) dev_close(dev);
+ bnxt_ptp_clear(bp); bnxt_clear_int_mode(bp); pci_disable_device(pdev);
@@ -16011,6 +16027,7 @@ static int bnxt_suspend(struct device *device) rc = bnxt_close(dev); } bnxt_hwrm_func_drv_unrgtr(bp); + bnxt_ptp_clear(bp); pci_disable_device(bp->pdev); bnxt_free_ctx_mem(bp); rtnl_unlock(); @@ -16054,6 +16071,10 @@ static int bnxt_resume(struct device *device) if (bp->fw_crash_mem) bnxt_hwrm_crash_dump_mem_cfg(bp);
+ if (bnxt_ptp_init(bp)) { + kfree(bp->ptp_cfg); + bp->ptp_cfg = NULL; + } bnxt_get_wol_settings(bp); if (netif_running(dev)) { rc = bnxt_open(dev); @@ -16232,8 +16253,12 @@ static void bnxt_io_resume(struct pci_dev *pdev) rtnl_lock();
err = bnxt_hwrm_func_qcaps(bp); - if (!err && netif_running(netdev)) - err = bnxt_open(netdev); + if (!err) { + if (netif_running(netdev)) + err = bnxt_open(netdev); + else + err = bnxt_reserve_rings(bp, true); + }
if (!err) netif_device_attach(netdev); diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c index f71cc8188b4e..20ba14eb87e0 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c @@ -2838,19 +2838,24 @@ static int bnxt_get_link_ksettings(struct net_device *dev, }
base->port = PORT_NONE; - if (link_info->media_type == PORT_PHY_QCFG_RESP_MEDIA_TYPE_TP) { + if (media == BNXT_MEDIA_TP) { base->port = PORT_TP; linkmode_set_bit(ETHTOOL_LINK_MODE_TP_BIT, lk_ksettings->link_modes.supported); linkmode_set_bit(ETHTOOL_LINK_MODE_TP_BIT, lk_ksettings->link_modes.advertising); + } else if (media == BNXT_MEDIA_KR) { + linkmode_set_bit(ETHTOOL_LINK_MODE_Backplane_BIT, + lk_ksettings->link_modes.supported); + linkmode_set_bit(ETHTOOL_LINK_MODE_Backplane_BIT, + lk_ksettings->link_modes.advertising); } else { linkmode_set_bit(ETHTOOL_LINK_MODE_FIBRE_BIT, lk_ksettings->link_modes.supported); linkmode_set_bit(ETHTOOL_LINK_MODE_FIBRE_BIT, lk_ksettings->link_modes.advertising);
- if (link_info->media_type == PORT_PHY_QCFG_RESP_MEDIA_TYPE_DAC) + if (media == BNXT_MEDIA_CR) base->port = PORT_DA; else base->port = PORT_FIBRE; diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c index fa514be87650..781225d3ba8f 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c @@ -1024,7 +1024,7 @@ static void bnxt_ptp_free(struct bnxt *bp) } }
-int bnxt_ptp_init(struct bnxt *bp, bool phc_cfg) +int bnxt_ptp_init(struct bnxt *bp) { struct bnxt_ptp_cfg *ptp = bp->ptp_cfg; int rc; @@ -1047,7 +1047,7 @@ int bnxt_ptp_init(struct bnxt *bp, bool phc_cfg)
if (BNXT_PTP_USE_RTC(bp)) { bnxt_ptp_timecounter_init(bp, false); - rc = bnxt_ptp_init_rtc(bp, phc_cfg); + rc = bnxt_ptp_init_rtc(bp, ptp->rtc_configured); if (rc) goto out; } else { diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h index f322466ecad3..61e89bb2d269 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h @@ -133,6 +133,7 @@ struct bnxt_ptp_cfg { BNXT_PTP_MSG_PDELAY_REQ | \ BNXT_PTP_MSG_PDELAY_RESP) u8 tx_tstamp_en:1; + u8 rtc_configured:1; int rx_filter; u32 tstamp_filters;
@@ -180,6 +181,6 @@ void bnxt_tx_ts_cmp(struct bnxt *bp, struct bnxt_napi *bnapi, struct tx_ts_cmp *tscmp); void bnxt_ptp_rtc_timecounter_init(struct bnxt_ptp_cfg *ptp, u64 ns); int bnxt_ptp_init_rtc(struct bnxt *bp, bool phc_cfg); -int bnxt_ptp_init(struct bnxt *bp, bool phc_cfg); +int bnxt_ptp_init(struct bnxt *bp); void bnxt_ptp_clear(struct bnxt *bp); #endif diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c index 378815917741..d178138981a9 100644 --- a/drivers/net/ethernet/broadcom/tg3.c +++ b/drivers/net/ethernet/broadcom/tg3.c @@ -17801,6 +17801,9 @@ static int tg3_init_one(struct pci_dev *pdev, } else persist_dma_mask = dma_mask = DMA_BIT_MASK(64);
+ if (tg3_asic_rev(tp) == ASIC_REV_57766) + persist_dma_mask = DMA_BIT_MASK(31); + /* Configure DMA attributes. */ if (dma_mask > DMA_BIT_MASK(32)) { err = dma_set_mask(&pdev->dev, dma_mask); diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c index e44e8b139633..060e0e674938 100644 --- a/drivers/net/ethernet/google/gve/gve_adminq.c +++ b/drivers/net/ethernet/google/gve/gve_adminq.c @@ -1248,10 +1248,10 @@ gve_adminq_configure_flow_rule(struct gve_priv *priv, sizeof(struct gve_adminq_configure_flow_rule), flow_rule_cmd);
- if (err) { + if (err == -ETIME) { dev_err(&priv->pdev->dev, "Timeout to configure the flow rule, trigger reset"); gve_reset(priv, true); - } else { + } else if (!err) { priv->flow_rules_cache.rules_cache_synced = false; }
diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c index f2506511bbff..bce5b76f1e7a 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c +++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c @@ -5299,7 +5299,7 @@ static int i40e_set_priv_flags(struct net_device *dev, u32 flags) }
flags_complete: - bitmap_xor(changed_flags, pf->flags, orig_flags, I40E_PF_FLAGS_NBITS); + bitmap_xor(changed_flags, new_flags, orig_flags, I40E_PF_FLAGS_NBITS);
if (test_bit(I40E_FLAG_FW_LLDP_DIS, changed_flags)) reset_needed = I40E_PF_RESET_AND_REBUILD_FLAG; diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.c b/drivers/net/ethernet/intel/ice/ice_virtchnl.c index 59f62306b9cb..b6ec01f6fa73 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl.c @@ -1715,8 +1715,8 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)
/* copy Tx queue info from VF into VSI */ if (qpi->txq.ring_len > 0) { - vsi->tx_rings[i]->dma = qpi->txq.dma_ring_addr; - vsi->tx_rings[i]->count = qpi->txq.ring_len; + vsi->tx_rings[q_idx]->dma = qpi->txq.dma_ring_addr; + vsi->tx_rings[q_idx]->count = qpi->txq.ring_len;
/* Disable any existing queue first */ if (ice_vf_vsi_dis_single_txq(vf, vsi, q_idx)) @@ -1725,7 +1725,7 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg) /* Configure a queue with the requested settings */ if (ice_vsi_cfg_single_txq(vsi, vsi->tx_rings, q_idx)) { dev_warn(ice_pf_to_dev(pf), "VF-%d failed to configure TX queue %d\n", - vf->vf_id, i); + vf->vf_id, q_idx); goto error_param; } } @@ -1733,24 +1733,23 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg) /* copy Rx queue info from VF into VSI */ if (qpi->rxq.ring_len > 0) { u16 max_frame_size = ice_vc_get_max_frame_size(vf); + struct ice_rx_ring *ring = vsi->rx_rings[q_idx]; u32 rxdid;
- vsi->rx_rings[i]->dma = qpi->rxq.dma_ring_addr; - vsi->rx_rings[i]->count = qpi->rxq.ring_len; + ring->dma = qpi->rxq.dma_ring_addr; + ring->count = qpi->rxq.ring_len;
if (qpi->rxq.crc_disable) - vsi->rx_rings[q_idx]->flags |= - ICE_RX_FLAGS_CRC_STRIP_DIS; + ring->flags |= ICE_RX_FLAGS_CRC_STRIP_DIS; else - vsi->rx_rings[q_idx]->flags &= - ~ICE_RX_FLAGS_CRC_STRIP_DIS; + ring->flags &= ~ICE_RX_FLAGS_CRC_STRIP_DIS;
if (qpi->rxq.databuffer_size != 0 && (qpi->rxq.databuffer_size > ((16 * 1024) - 128) || qpi->rxq.databuffer_size < 1024)) goto error_param; vsi->rx_buf_len = qpi->rxq.databuffer_size; - vsi->rx_rings[i]->rx_buf_len = vsi->rx_buf_len; + ring->rx_buf_len = vsi->rx_buf_len; if (qpi->rxq.max_pkt_size > max_frame_size || qpi->rxq.max_pkt_size < 64) goto error_param; @@ -1765,7 +1764,7 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)
if (ice_vsi_cfg_single_rxq(vsi, q_idx)) { dev_warn(ice_pf_to_dev(pf), "VF-%d failed to configure RX queue %d\n", - vf->vf_id, i); + vf->vf_id, q_idx); goto error_param; }
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c index 27935c54b91b..8216f843a7cd 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c @@ -112,6 +112,11 @@ struct mac_ops *get_mac_ops(void *cgxd) return ((struct cgx *)cgxd)->mac_ops; }
+u32 cgx_get_fifo_len(void *cgxd) +{ + return ((struct cgx *)cgxd)->fifo_len; +} + void cgx_write(struct cgx *cgx, u64 lmac, u64 offset, u64 val) { writeq(val, cgx->reg_base + (lmac << cgx->mac_ops->lmac_offset) + @@ -209,6 +214,24 @@ u8 cgx_lmac_get_p2x(int cgx_id, int lmac_id) return (cfg & CMR_P2X_SEL_MASK) >> CMR_P2X_SEL_SHIFT; }
+static u8 cgx_get_nix_resetbit(struct cgx *cgx) +{ + int first_lmac; + u8 p2x; + + /* non 98XX silicons supports only NIX0 block */ + if (cgx->pdev->subsystem_device != PCI_SUBSYS_DEVID_98XX) + return CGX_NIX0_RESET; + + first_lmac = find_first_bit(&cgx->lmac_bmap, cgx->max_lmac_per_mac); + p2x = cgx_lmac_get_p2x(cgx->cgx_id, first_lmac); + + if (p2x == CMR_P2X_SEL_NIX1) + return CGX_NIX1_RESET; + else + return CGX_NIX0_RESET; +} + /* Ensure the required lock for event queue(where asynchronous events are * posted) is acquired before calling this API. Else an asynchronous event(with * latest link status) can reach the destination before this function returns @@ -501,7 +524,7 @@ static u32 cgx_get_lmac_fifo_len(void *cgxd, int lmac_id) u8 num_lmacs; u32 fifo_len;
- fifo_len = cgx->mac_ops->fifo_len; + fifo_len = cgx->fifo_len; num_lmacs = cgx->mac_ops->get_nr_lmacs(cgx);
switch (num_lmacs) { @@ -1719,6 +1742,8 @@ static int cgx_lmac_init(struct cgx *cgx) lmac->lmac_type = cgx->mac_ops->get_lmac_type(cgx, lmac->lmac_id); }
+ /* Start X2P reset on given MAC block */ + cgx->mac_ops->mac_x2p_reset(cgx, true); return cgx_lmac_verify_fwi_version(cgx);
err_bitmap_free: @@ -1764,7 +1789,7 @@ static void cgx_populate_features(struct cgx *cgx) u64 cfg;
cfg = cgx_read(cgx, 0, CGX_CONST); - cgx->mac_ops->fifo_len = FIELD_GET(CGX_CONST_RXFIFO_SIZE, cfg); + cgx->fifo_len = FIELD_GET(CGX_CONST_RXFIFO_SIZE, cfg); cgx->max_lmac_per_mac = FIELD_GET(CGX_CONST_MAX_LMACS, cfg);
if (is_dev_rpm(cgx)) @@ -1784,6 +1809,45 @@ static u8 cgx_get_rxid_mapoffset(struct cgx *cgx) return 0x60; }
+static void cgx_x2p_reset(void *cgxd, bool enable) +{ + struct cgx *cgx = cgxd; + int lmac_id; + u64 cfg; + + if (enable) { + for_each_set_bit(lmac_id, &cgx->lmac_bmap, cgx->max_lmac_per_mac) + cgx->mac_ops->mac_enadis_rx(cgx, lmac_id, false); + + usleep_range(1000, 2000); + + cfg = cgx_read(cgx, 0, CGXX_CMR_GLOBAL_CONFIG); + cfg |= cgx_get_nix_resetbit(cgx) | CGX_NSCI_DROP; + cgx_write(cgx, 0, CGXX_CMR_GLOBAL_CONFIG, cfg); + } else { + cfg = cgx_read(cgx, 0, CGXX_CMR_GLOBAL_CONFIG); + cfg &= ~(cgx_get_nix_resetbit(cgx) | CGX_NSCI_DROP); + cgx_write(cgx, 0, CGXX_CMR_GLOBAL_CONFIG, cfg); + } +} + +static int cgx_enadis_rx(void *cgxd, int lmac_id, bool enable) +{ + struct cgx *cgx = cgxd; + u64 cfg; + + if (!is_lmac_valid(cgx, lmac_id)) + return -ENODEV; + + cfg = cgx_read(cgx, lmac_id, CGXX_CMRX_CFG); + if (enable) + cfg |= DATA_PKT_RX_EN; + else + cfg &= ~DATA_PKT_RX_EN; + cgx_write(cgx, lmac_id, CGXX_CMRX_CFG, cfg); + return 0; +} + static struct mac_ops cgx_mac_ops = { .name = "cgx", .csr_offset = 0, @@ -1815,6 +1879,8 @@ static struct mac_ops cgx_mac_ops = { .mac_get_pfc_frm_cfg = cgx_lmac_get_pfc_frm_cfg, .mac_reset = cgx_lmac_reset, .mac_stats_reset = cgx_stats_reset, + .mac_x2p_reset = cgx_x2p_reset, + .mac_enadis_rx = cgx_enadis_rx, };
static int cgx_probe(struct pci_dev *pdev, const struct pci_device_id *id) diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.h b/drivers/net/ethernet/marvell/octeontx2/af/cgx.h index dc9ace30554a..1cf12e5c7da8 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.h @@ -32,6 +32,10 @@ #define CGX_LMAC_TYPE_MASK 0xF #define CGXX_CMRX_INT 0x040 #define FW_CGX_INT BIT_ULL(1) +#define CGXX_CMR_GLOBAL_CONFIG 0x08 +#define CGX_NIX0_RESET BIT_ULL(2) +#define CGX_NIX1_RESET BIT_ULL(3) +#define CGX_NSCI_DROP BIT_ULL(9) #define CGXX_CMRX_INT_ENA_W1S 0x058 #define CGXX_CMRX_RX_ID_MAP 0x060 #define CGXX_CMRX_RX_STAT0 0x070 @@ -185,4 +189,5 @@ int cgx_lmac_get_pfc_frm_cfg(void *cgxd, int lmac_id, u8 *tx_pause, int verify_lmac_fc_cfg(void *cgxd, int lmac_id, u8 tx_pause, u8 rx_pause, int pfvf_idx); int cgx_lmac_reset(void *cgxd, int lmac_id, u8 pf_req_flr); +u32 cgx_get_fifo_len(void *cgxd); #endif /* CGX_H */ diff --git a/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h b/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h index 9ffc6790c513..6180e68e1765 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h @@ -72,7 +72,6 @@ struct mac_ops { u8 irq_offset; u8 int_ena_bit; u8 lmac_fwi; - u32 fifo_len; bool non_contiguous_serdes_lane; /* RPM & CGX differs in number of Receive/transmit stats */ u8 rx_stats_cnt; @@ -133,6 +132,8 @@ struct mac_ops { int (*get_fec_stats)(void *cgxd, int lmac_id, struct cgx_fec_stats_rsp *rsp); int (*mac_stats_reset)(void *cgxd, int lmac_id); + void (*mac_x2p_reset)(void *cgxd, bool enable); + int (*mac_enadis_rx)(void *cgxd, int lmac_id, bool enable); };
struct cgx { @@ -142,6 +143,10 @@ struct cgx { u8 lmac_count; /* number of LMACs per MAC could be 4 or 8 */ u8 max_lmac_per_mac; + /* length of fifo varies depending on the number + * of LMACS + */ + u32 fifo_len; #define MAX_LMAC_COUNT 8 struct lmac *lmac_idmap[MAX_LMAC_COUNT]; struct work_struct cgx_cmd_work; diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rpm.c b/drivers/net/ethernet/marvell/octeontx2/af/rpm.c index 1b34cf9c9703..2e9945446199 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rpm.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rpm.c @@ -39,6 +39,8 @@ static struct mac_ops rpm_mac_ops = { .mac_get_pfc_frm_cfg = rpm_lmac_get_pfc_frm_cfg, .mac_reset = rpm_lmac_reset, .mac_stats_reset = rpm_stats_reset, + .mac_x2p_reset = rpm_x2p_reset, + .mac_enadis_rx = rpm_enadis_rx, };
static struct mac_ops rpm2_mac_ops = { @@ -72,6 +74,8 @@ static struct mac_ops rpm2_mac_ops = { .mac_get_pfc_frm_cfg = rpm_lmac_get_pfc_frm_cfg, .mac_reset = rpm_lmac_reset, .mac_stats_reset = rpm_stats_reset, + .mac_x2p_reset = rpm_x2p_reset, + .mac_enadis_rx = rpm_enadis_rx, };
bool is_dev_rpm2(void *rpmd) @@ -467,7 +471,7 @@ u8 rpm_get_lmac_type(void *rpmd, int lmac_id) int err;
req = FIELD_SET(CMDREG_ID, CGX_CMD_GET_LINK_STS, req); - err = cgx_fwi_cmd_generic(req, &resp, rpm, 0); + err = cgx_fwi_cmd_generic(req, &resp, rpm, lmac_id); if (!err) return FIELD_GET(RESP_LINKSTAT_LMAC_TYPE, resp); return err; @@ -480,7 +484,7 @@ u32 rpm_get_lmac_fifo_len(void *rpmd, int lmac_id) u8 num_lmacs; u32 fifo_len;
- fifo_len = rpm->mac_ops->fifo_len; + fifo_len = rpm->fifo_len; num_lmacs = rpm->mac_ops->get_nr_lmacs(rpm);
switch (num_lmacs) { @@ -533,9 +537,9 @@ u32 rpm2_get_lmac_fifo_len(void *rpmd, int lmac_id) */ max_lmac = (rpm_read(rpm, 0, CGX_CONST) >> 24) & 0xFF; if (max_lmac > 4) - fifo_len = rpm->mac_ops->fifo_len / 2; + fifo_len = rpm->fifo_len / 2; else - fifo_len = rpm->mac_ops->fifo_len; + fifo_len = rpm->fifo_len;
if (lmac_id < 4) { num_lmacs = hweight8(lmac_info & 0xF); @@ -699,46 +703,51 @@ int rpm_get_fec_stats(void *rpmd, int lmac_id, struct cgx_fec_stats_rsp *rsp) if (rpm->lmac_idmap[lmac_id]->link_info.fec == OTX2_FEC_NONE) return 0;
+ /* latched registers FCFECX_CW_HI/RSFEC_STAT_FAST_DATA_HI_CDC are common + * for all counters. Acquire lock to ensure serialized reads + */ + mutex_lock(&rpm->lock); if (rpm->lmac_idmap[lmac_id]->link_info.fec == OTX2_FEC_BASER) { - val_lo = rpm_read(rpm, lmac_id, RPMX_MTI_FCFECX_VL0_CCW_LO); - val_hi = rpm_read(rpm, lmac_id, RPMX_MTI_FCFECX_CW_HI); + val_lo = rpm_read(rpm, 0, RPMX_MTI_FCFECX_VL0_CCW_LO(lmac_id)); + val_hi = rpm_read(rpm, 0, RPMX_MTI_FCFECX_CW_HI(lmac_id)); rsp->fec_corr_blks = (val_hi << 16 | val_lo);
- val_lo = rpm_read(rpm, lmac_id, RPMX_MTI_FCFECX_VL0_NCCW_LO); - val_hi = rpm_read(rpm, lmac_id, RPMX_MTI_FCFECX_CW_HI); + val_lo = rpm_read(rpm, 0, RPMX_MTI_FCFECX_VL0_NCCW_LO(lmac_id)); + val_hi = rpm_read(rpm, 0, RPMX_MTI_FCFECX_CW_HI(lmac_id)); rsp->fec_uncorr_blks = (val_hi << 16 | val_lo);
/* 50G uses 2 Physical serdes lines */ if (rpm->lmac_idmap[lmac_id]->link_info.lmac_type_id == LMAC_MODE_50G_R) { - val_lo = rpm_read(rpm, lmac_id, - RPMX_MTI_FCFECX_VL1_CCW_LO); - val_hi = rpm_read(rpm, lmac_id, - RPMX_MTI_FCFECX_CW_HI); + val_lo = rpm_read(rpm, 0, + RPMX_MTI_FCFECX_VL1_CCW_LO(lmac_id)); + val_hi = rpm_read(rpm, 0, + RPMX_MTI_FCFECX_CW_HI(lmac_id)); rsp->fec_corr_blks += (val_hi << 16 | val_lo);
- val_lo = rpm_read(rpm, lmac_id, - RPMX_MTI_FCFECX_VL1_NCCW_LO); - val_hi = rpm_read(rpm, lmac_id, - RPMX_MTI_FCFECX_CW_HI); + val_lo = rpm_read(rpm, 0, + RPMX_MTI_FCFECX_VL1_NCCW_LO(lmac_id)); + val_hi = rpm_read(rpm, 0, + RPMX_MTI_FCFECX_CW_HI(lmac_id)); rsp->fec_uncorr_blks += (val_hi << 16 | val_lo); } } else { /* enable RS-FEC capture */ - cfg = rpm_read(rpm, 0, RPMX_MTI_STAT_STATN_CONTROL); + cfg = rpm_read(rpm, 0, RPMX_MTI_RSFEC_STAT_STATN_CONTROL); cfg |= RPMX_RSFEC_RX_CAPTURE | BIT(lmac_id); - rpm_write(rpm, 0, RPMX_MTI_STAT_STATN_CONTROL, cfg); + rpm_write(rpm, 0, RPMX_MTI_RSFEC_STAT_STATN_CONTROL, cfg);
val_lo = rpm_read(rpm, 0, RPMX_MTI_RSFEC_STAT_COUNTER_CAPTURE_2); - val_hi = rpm_read(rpm, 0, RPMX_MTI_STAT_DATA_HI_CDC); + val_hi = rpm_read(rpm, 0, RPMX_MTI_RSFEC_STAT_FAST_DATA_HI_CDC); rsp->fec_corr_blks = (val_hi << 32 | val_lo);
val_lo = rpm_read(rpm, 0, RPMX_MTI_RSFEC_STAT_COUNTER_CAPTURE_3); - val_hi = rpm_read(rpm, 0, RPMX_MTI_STAT_DATA_HI_CDC); + val_hi = rpm_read(rpm, 0, RPMX_MTI_RSFEC_STAT_FAST_DATA_HI_CDC); rsp->fec_uncorr_blks = (val_hi << 32 | val_lo); } + mutex_unlock(&rpm->lock);
return 0; } @@ -763,3 +772,41 @@ int rpm_lmac_reset(void *rpmd, int lmac_id, u8 pf_req_flr)
return 0; } + +void rpm_x2p_reset(void *rpmd, bool enable) +{ + rpm_t *rpm = rpmd; + int lmac_id; + u64 cfg; + + if (enable) { + for_each_set_bit(lmac_id, &rpm->lmac_bmap, rpm->max_lmac_per_mac) + rpm->mac_ops->mac_enadis_rx(rpm, lmac_id, false); + + usleep_range(1000, 2000); + + cfg = rpm_read(rpm, 0, RPMX_CMR_GLOBAL_CFG); + rpm_write(rpm, 0, RPMX_CMR_GLOBAL_CFG, cfg | RPM_NIX0_RESET); + } else { + cfg = rpm_read(rpm, 0, RPMX_CMR_GLOBAL_CFG); + cfg &= ~RPM_NIX0_RESET; + rpm_write(rpm, 0, RPMX_CMR_GLOBAL_CFG, cfg); + } +} + +int rpm_enadis_rx(void *rpmd, int lmac_id, bool enable) +{ + rpm_t *rpm = rpmd; + u64 cfg; + + if (!is_lmac_valid(rpm, lmac_id)) + return -ENODEV; + + cfg = rpm_read(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG); + if (enable) + cfg |= RPM_RX_EN; + else + cfg &= ~RPM_RX_EN; + rpm_write(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG, cfg); + return 0; +} diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rpm.h b/drivers/net/ethernet/marvell/octeontx2/af/rpm.h index 34b11deb0f3c..b8d3972e096a 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rpm.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rpm.h @@ -17,6 +17,8 @@
/* Registers */ #define RPMX_CMRX_CFG 0x00 +#define RPMX_CMR_GLOBAL_CFG 0x08 +#define RPM_NIX0_RESET BIT_ULL(3) #define RPMX_RX_TS_PREPEND BIT_ULL(22) #define RPMX_TX_PTP_1S_SUPPORT BIT_ULL(17) #define RPMX_CMRX_RX_ID_MAP 0x80 @@ -84,16 +86,18 @@ /* FEC stats */ #define RPMX_MTI_STAT_STATN_CONTROL 0x10018 #define RPMX_MTI_STAT_DATA_HI_CDC 0x10038 -#define RPMX_RSFEC_RX_CAPTURE BIT_ULL(27) +#define RPMX_RSFEC_RX_CAPTURE BIT_ULL(28) #define RPMX_CMD_CLEAR_RX BIT_ULL(30) #define RPMX_CMD_CLEAR_TX BIT_ULL(31) +#define RPMX_MTI_RSFEC_STAT_STATN_CONTROL 0x40018 +#define RPMX_MTI_RSFEC_STAT_FAST_DATA_HI_CDC 0x40000 #define RPMX_MTI_RSFEC_STAT_COUNTER_CAPTURE_2 0x40050 #define RPMX_MTI_RSFEC_STAT_COUNTER_CAPTURE_3 0x40058 -#define RPMX_MTI_FCFECX_VL0_CCW_LO 0x38618 -#define RPMX_MTI_FCFECX_VL0_NCCW_LO 0x38620 -#define RPMX_MTI_FCFECX_VL1_CCW_LO 0x38628 -#define RPMX_MTI_FCFECX_VL1_NCCW_LO 0x38630 -#define RPMX_MTI_FCFECX_CW_HI 0x38638 +#define RPMX_MTI_FCFECX_VL0_CCW_LO(a) (0x38618 + ((a) * 0x40)) +#define RPMX_MTI_FCFECX_VL0_NCCW_LO(a) (0x38620 + ((a) * 0x40)) +#define RPMX_MTI_FCFECX_VL1_CCW_LO(a) (0x38628 + ((a) * 0x40)) +#define RPMX_MTI_FCFECX_VL1_NCCW_LO(a) (0x38630 + ((a) * 0x40)) +#define RPMX_MTI_FCFECX_CW_HI(a) (0x38638 + ((a) * 0x40))
/* CN10KB CSR Declaration */ #define RPM2_CMRX_SW_INT 0x1b0 @@ -137,4 +141,6 @@ bool is_dev_rpm2(void *rpmd); int rpm_get_fec_stats(void *cgxd, int lmac_id, struct cgx_fec_stats_rsp *rsp); int rpm_lmac_reset(void *rpmd, int lmac_id, u8 pf_req_flr); int rpm_stats_reset(void *rpmd, int lmac_id); +void rpm_x2p_reset(void *rpmd, bool enable); +int rpm_enadis_rx(void *rpmd, int lmac_id, bool enable); #endif /* RPM_H */ diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c index 1a97fb9032fa..cd0d7b7774f1 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c @@ -1162,6 +1162,7 @@ static int rvu_setup_hw_resources(struct rvu *rvu) }
rvu_program_channels(rvu); + cgx_start_linkup(rvu);
err = rvu_mcs_init(rvu); if (err) { diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h index 5016ba82e142..8555edbb1c8f 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h @@ -997,6 +997,7 @@ int rvu_cgx_prio_flow_ctrl_cfg(struct rvu *rvu, u16 pcifunc, u8 tx_pause, u8 rx_ int rvu_cgx_cfg_pause_frm(struct rvu *rvu, u16 pcifunc, u8 tx_pause, u8 rx_pause); void rvu_mac_reset(struct rvu *rvu, u16 pcifunc); u32 rvu_cgx_get_lmac_fifolen(struct rvu *rvu, int cgx, int lmac); +void cgx_start_linkup(struct rvu *rvu); int npc_get_nixlf_mcam_index(struct npc_mcam *mcam, u16 pcifunc, int nixlf, int type); bool is_mcam_entry_enabled(struct rvu *rvu, struct npc_mcam *mcam, int blkaddr, diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c index 266ecbc1b97a..992fa0b82e8d 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c @@ -349,6 +349,7 @@ static void rvu_cgx_wq_destroy(struct rvu *rvu)
int rvu_cgx_init(struct rvu *rvu) { + struct mac_ops *mac_ops; int cgx, err; void *cgxd;
@@ -375,6 +376,15 @@ int rvu_cgx_init(struct rvu *rvu) if (err) return err;
+ /* Clear X2P reset on all MAC blocks */ + for (cgx = 0; cgx < rvu->cgx_cnt_max; cgx++) { + cgxd = rvu_cgx_pdata(cgx, rvu); + if (!cgxd) + continue; + mac_ops = get_mac_ops(cgxd); + mac_ops->mac_x2p_reset(cgxd, false); + } + /* Register for CGX events */ err = cgx_lmac_event_handler_init(rvu); if (err) @@ -382,10 +392,26 @@ int rvu_cgx_init(struct rvu *rvu)
mutex_init(&rvu->cgx_cfg_lock);
- /* Ensure event handler registration is completed, before - * we turn on the links - */ - mb(); + return 0; +} + +void cgx_start_linkup(struct rvu *rvu) +{ + unsigned long lmac_bmap; + struct mac_ops *mac_ops; + int cgx, lmac, err; + void *cgxd; + + /* Enable receive on all LMACS */ + for (cgx = 0; cgx <= rvu->cgx_cnt_max; cgx++) { + cgxd = rvu_cgx_pdata(cgx, rvu); + if (!cgxd) + continue; + mac_ops = get_mac_ops(cgxd); + lmac_bmap = cgx_get_lmac_bmap(cgxd); + for_each_set_bit(lmac, &lmac_bmap, rvu->hw->lmac_per_cgx) + mac_ops->mac_enadis_rx(cgxd, lmac, true); + }
/* Do link up for all CGX ports */ for (cgx = 0; cgx <= rvu->cgx_cnt_max; cgx++) { @@ -398,8 +424,6 @@ int rvu_cgx_init(struct rvu *rvu) "Link up process failed to start on cgx %d\n", cgx); } - - return 0; }
int rvu_cgx_exit(struct rvu *rvu) @@ -923,13 +947,12 @@ int rvu_mbox_handler_cgx_features_get(struct rvu *rvu,
u32 rvu_cgx_get_fifolen(struct rvu *rvu) { - struct mac_ops *mac_ops; - u32 fifo_len; + void *cgxd = rvu_first_cgx_pdata(rvu);
- mac_ops = get_mac_ops(rvu_first_cgx_pdata(rvu)); - fifo_len = mac_ops ? mac_ops->fifo_len : 0; + if (!cgxd) + return 0;
- return fifo_len; + return cgx_get_fifo_len(cgxd); }
u32 rvu_cgx_get_lmac_fifolen(struct rvu *rvu, int cgx, int lmac) diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c index c1c99d7054f8..7417087b6db5 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c @@ -203,6 +203,11 @@ int cn10k_alloc_leaf_profile(struct otx2_nic *pfvf, u16 *leaf)
rsp = (struct nix_bandprof_alloc_rsp *) otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr); + if (IS_ERR(rsp)) { + rc = PTR_ERR(rsp); + goto out; + } + if (!rsp->prof_count[BAND_PROF_LEAF_LAYER]) { rc = -EIO; goto out; diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c index 87d5776e3b88..7510a918d942 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c @@ -1837,6 +1837,10 @@ u16 otx2_get_max_mtu(struct otx2_nic *pfvf) if (!rc) { rsp = (struct nix_hw_info *) otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr); + if (IS_ERR(rsp)) { + rc = PTR_ERR(rsp); + goto out; + }
/* HW counts VLAN insertion bytes (8 for double tag) * irrespective of whether SQE is requesting to insert VLAN diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dcbnl.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dcbnl.c index aa01110f04a3..294fba58b670 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dcbnl.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dcbnl.c @@ -315,6 +315,11 @@ int otx2_config_priority_flow_ctrl(struct otx2_nic *pfvf) if (!otx2_sync_mbox_msg(&pfvf->mbox)) { rsp = (struct cgx_pfc_rsp *) otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr); + if (IS_ERR(rsp)) { + err = PTR_ERR(rsp); + goto unlock; + } + if (req->rx_pause != rsp->rx_pause || req->tx_pause != rsp->tx_pause) { dev_warn(pfvf->dev, "Failed to config PFC\n"); diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dmac_flt.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dmac_flt.c index 80d853b343f9..2046dd0da00d 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dmac_flt.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dmac_flt.c @@ -28,6 +28,11 @@ static int otx2_dmacflt_do_add(struct otx2_nic *pf, const u8 *mac, if (!err) { rsp = (struct cgx_mac_addr_add_rsp *) otx2_mbox_get_rsp(&pf->mbox.mbox, 0, &req->hdr); + if (IS_ERR(rsp)) { + mutex_unlock(&pf->mbox.lock); + return PTR_ERR(rsp); + } + *dmac_index = rsp->index; }
@@ -200,6 +205,10 @@ int otx2_dmacflt_update(struct otx2_nic *pf, u8 *mac, u32 bit_pos)
rsp = (struct cgx_mac_addr_update_rsp *) otx2_mbox_get_rsp(&pf->mbox.mbox, 0, &req->hdr); + if (IS_ERR(rsp)) { + rc = PTR_ERR(rsp); + goto out; + }
pf->flow_cfg->bmap_to_dmacindex[bit_pos] = rsp->index;
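The otx2 hunks above all close the same gap: otx2_mbox_get_rsp() can hand back an error encoded in the pointer itself, and the old code dereferenced it unchecked. The added IS_ERR()/PTR_ERR() checks follow the kernel's error-pointer convention; a simplified, self-contained userspace sketch of that convention (the real definitions live in include/linux/err.h, and get_rsp() here is just a stand-in for the mailbox query):

#include <stdio.h>
#include <errno.h>

#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

/* stand-in for a mailbox query that may fail */
static void *get_rsp(int fail)
{
	static int rsp_value = 42;

	if (fail)
		return ERR_PTR(-EIO);	/* the error travels inside the pointer */
	return &rsp_value;
}

int main(void)
{
	void *rsp = get_rsp(1);

	if (IS_ERR(rsp)) {		/* must be checked before dereferencing */
		printf("request failed: %ld\n", PTR_ERR(rsp));
		return 1;
	}
	printf("value: %d\n", *(int *)rsp);
	return 0;
}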
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c index 32468c663605..5197ce816581 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c @@ -343,6 +343,11 @@ static void otx2_get_pauseparam(struct net_device *netdev, if (!otx2_sync_mbox_msg(&pfvf->mbox)) { rsp = (struct cgx_pause_frm_cfg *) otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr); + if (IS_ERR(rsp)) { + mutex_unlock(&pfvf->mbox.lock); + return; + } + pause->rx_pause = rsp->rx_pause; pause->tx_pause = rsp->tx_pause; } @@ -1072,6 +1077,11 @@ static int otx2_set_fecparam(struct net_device *netdev,
rsp = (struct fec_mode *)otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr); + if (IS_ERR(rsp)) { + err = PTR_ERR(rsp); + goto end; + } + if (rsp->fec >= 0) pfvf->linfo.fec = rsp->fec; else diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c index 98c31a16c70b..58720a161ee2 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c @@ -119,6 +119,8 @@ int otx2_alloc_mcam_entries(struct otx2_nic *pfvf, u16 count)
rsp = (struct npc_mcam_alloc_entry_rsp *)otx2_mbox_get_rsp (&pfvf->mbox.mbox, 0, &req->hdr); + if (IS_ERR(rsp)) + goto exit;
for (ent = 0; ent < rsp->count; ent++) flow_cfg->flow_ent[ent + allocated] = rsp->entry_list[ent]; @@ -197,6 +199,10 @@ int otx2_mcam_entry_init(struct otx2_nic *pfvf)
rsp = (struct npc_mcam_alloc_entry_rsp *)otx2_mbox_get_rsp (&pfvf->mbox.mbox, 0, &req->hdr); + if (IS_ERR(rsp)) { + mutex_unlock(&pfvf->mbox.lock); + return PTR_ERR(rsp); + }
if (rsp->count != req->count) { netdev_info(pfvf->netdev, @@ -232,6 +238,10 @@ int otx2_mcam_entry_init(struct otx2_nic *pfvf)
frsp = (struct npc_get_field_status_rsp *)otx2_mbox_get_rsp (&pfvf->mbox.mbox, 0, &freq->hdr); + if (IS_ERR(frsp)) { + mutex_unlock(&pfvf->mbox.lock); + return PTR_ERR(frsp); + }
if (frsp->enable) { pfvf->flags |= OTX2_FLAG_RX_VLAN_SUPPORT; diff --git a/drivers/net/ethernet/marvell/pxa168_eth.c b/drivers/net/ethernet/marvell/pxa168_eth.c index 1a59c952aa01..45f115e41857 100644 --- a/drivers/net/ethernet/marvell/pxa168_eth.c +++ b/drivers/net/ethernet/marvell/pxa168_eth.c @@ -1394,18 +1394,15 @@ static int pxa168_eth_probe(struct platform_device *pdev)
printk(KERN_NOTICE "PXA168 10/100 Ethernet Driver\n");
- clk = devm_clk_get(&pdev->dev, NULL); + clk = devm_clk_get_enabled(&pdev->dev, NULL); if (IS_ERR(clk)) { - dev_err(&pdev->dev, "Fast Ethernet failed to get clock\n"); + dev_err(&pdev->dev, "Fast Ethernet failed to get and enable clock\n"); return -ENODEV; } - clk_prepare_enable(clk);
dev = alloc_etherdev(sizeof(struct pxa168_eth_private)); - if (!dev) { - err = -ENOMEM; - goto err_clk; - } + if (!dev) + return -ENOMEM;
platform_set_drvdata(pdev, dev); pep = netdev_priv(dev); @@ -1523,8 +1520,6 @@ static int pxa168_eth_probe(struct platform_device *pdev) mdiobus_free(pep->smi_bus); err_netdev: free_netdev(dev); -err_clk: - clk_disable_unprepare(clk); return err; }
@@ -1542,7 +1537,6 @@ static void pxa168_eth_remove(struct platform_device *pdev) if (dev->phydev) phy_disconnect(dev->phydev);
- clk_disable_unprepare(pep->clk); mdiobus_unregister(pep->smi_bus); mdiobus_free(pep->smi_bus); unregister_netdev(dev); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c index 8577db3308cc..7f68468c2e75 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c @@ -516,6 +516,7 @@ void mlx5_modify_lag(struct mlx5_lag *ldev, blocking_notifier_call_chain(&dev0->priv.lag_nh, MLX5_DRIVER_EVENT_ACTIVE_BACKUP_LAG_CHANGE_LOWERSTATE, ndev); + dev_put(ndev); } }
@@ -918,6 +919,7 @@ static void mlx5_do_bond(struct mlx5_lag *ldev) { struct mlx5_core_dev *dev0 = ldev->pf[MLX5_LAG_P1].dev; struct lag_tracker tracker = { }; + struct net_device *ndev; bool do_bond, roce_lag; int err; int i; @@ -981,6 +983,16 @@ static void mlx5_do_bond(struct mlx5_lag *ldev) return; } } + if (tracker.tx_type == NETDEV_LAG_TX_TYPE_ACTIVEBACKUP) { + ndev = mlx5_lag_active_backup_get_netdev(dev0); + /** Only sriov and roce lag should have tracker->TX_type + * set so no need to check the mode + */ + blocking_notifier_call_chain(&dev0->priv.lag_nh, + MLX5_DRIVER_EVENT_ACTIVE_BACKUP_LAG_CHANGE_LOWERSTATE, + ndev); + dev_put(ndev); + } } else if (mlx5_lag_should_modify_lag(ldev, do_bond)) { mlx5_modify_lag(ldev, &tracker); } else if (mlx5_lag_should_disable_lag(ldev, do_bond)) { diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_pci.c b/drivers/net/ethernet/meta/fbnic/fbnic_pci.c index a4809fe0fc24..268489b15616 100644 --- a/drivers/net/ethernet/meta/fbnic/fbnic_pci.c +++ b/drivers/net/ethernet/meta/fbnic/fbnic_pci.c @@ -319,7 +319,6 @@ static int fbnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) free_irqs: fbnic_free_irqs(fbd); free_fbd: - pci_disable_device(pdev); fbnic_devlink_free(fbd);
return err; @@ -349,7 +348,6 @@ static void fbnic_remove(struct pci_dev *pdev) fbnic_fw_disable_mbx(fbd); fbnic_free_irqs(fbd);
- pci_disable_device(pdev); fbnic_devlink_free(fbd); }
diff --git a/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c b/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c index 7251121ab196..16eb3de60eb6 100644 --- a/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c +++ b/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c @@ -366,12 +366,13 @@ static void vcap_api_iterator_init_test(struct kunit *test) struct vcap_typegroup typegroups[] = { { .offset = 0, .width = 2, .value = 2, }, { .offset = 156, .width = 1, .value = 0, }, - { .offset = 0, .width = 0, .value = 0, }, + { } }; struct vcap_typegroup typegroups2[] = { { .offset = 0, .width = 3, .value = 4, }, { .offset = 49, .width = 2, .value = 0, }, { .offset = 98, .width = 2, .value = 0, }, + { } };
vcap_iter_init(&iter, 52, typegroups, 86); @@ -399,6 +400,7 @@ static void vcap_api_iterator_next_test(struct kunit *test) { .offset = 147, .width = 3, .value = 0, }, { .offset = 196, .width = 2, .value = 0, }, { .offset = 245, .width = 1, .value = 0, }, + { } }; int idx;
@@ -433,7 +435,7 @@ static void vcap_api_encode_typegroups_test(struct kunit *test) { .offset = 147, .width = 3, .value = 5, }, { .offset = 196, .width = 2, .value = 2, }, { .offset = 245, .width = 5, .value = 27, }, - { .offset = 0, .width = 0, .value = 0, }, + { } };
vcap_encode_typegroups(stream, 49, typegroups, false); @@ -463,6 +465,7 @@ static void vcap_api_encode_bit_test(struct kunit *test) { .offset = 147, .width = 3, .value = 5, }, { .offset = 196, .width = 2, .value = 2, }, { .offset = 245, .width = 1, .value = 0, }, + { } };
vcap_iter_init(&iter, 49, typegroups, 44); @@ -489,7 +492,7 @@ static void vcap_api_encode_field_test(struct kunit *test) { .offset = 147, .width = 3, .value = 5, }, { .offset = 196, .width = 2, .value = 2, }, { .offset = 245, .width = 5, .value = 27, }, - { .offset = 0, .width = 0, .value = 0, }, + { } }; struct vcap_field rf = { .type = VCAP_FIELD_U32, @@ -538,7 +541,7 @@ static void vcap_api_encode_short_field_test(struct kunit *test) { .offset = 0, .width = 3, .value = 7, }, { .offset = 21, .width = 2, .value = 3, }, { .offset = 42, .width = 1, .value = 1, }, - { .offset = 0, .width = 0, .value = 0, }, + { } }; struct vcap_field rf = { .type = VCAP_FIELD_U32, @@ -608,7 +611,7 @@ static void vcap_api_encode_keyfield_test(struct kunit *test) struct vcap_typegroup tgt[] = { { .offset = 0, .width = 2, .value = 2, }, { .offset = 156, .width = 1, .value = 1, }, - { .offset = 0, .width = 0, .value = 0, }, + { } };
vcap_test_api_init(&admin); @@ -671,7 +674,7 @@ static void vcap_api_encode_max_keyfield_test(struct kunit *test) struct vcap_typegroup tgt[] = { { .offset = 0, .width = 2, .value = 2, }, { .offset = 156, .width = 1, .value = 1, }, - { .offset = 0, .width = 0, .value = 0, }, + { } }; u32 keyres[] = { 0x928e8a84, @@ -732,7 +735,7 @@ static void vcap_api_encode_actionfield_test(struct kunit *test) { .offset = 0, .width = 2, .value = 2, }, { .offset = 21, .width = 1, .value = 1, }, { .offset = 42, .width = 1, .value = 0, }, - { .offset = 0, .width = 0, .value = 0, }, + { } };
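The vcap kunit changes above replace spelled-out all-zero entries with empty { } initializers; both yield a zero-filled record, the shorter form simply makes the sentinel intent explicit. A small standalone sketch of walking such a table until the zero-width terminator (the struct and field names mirror the test but are illustrative here):

#include <stdio.h>

struct typegroup {
	int offset;
	int width;
	int value;
};

int main(void)
{
	/* zero-width entry terminates the table, like the { } sentinels above */
	static const struct typegroup tgt[] = {
		{ .offset = 0,   .width = 2, .value = 2 },
		{ .offset = 156, .width = 1, .value = 1 },
		{ }	/* sentinel */
	};

	for (const struct typegroup *tg = tgt; tg->width; tg++)
		printf("offset %d width %d value %d\n",
		       tg->offset, tg->width, tg->value);
	return 0;
}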
vcap_encode_actionfield(&rule, &caf, &rf, tgt); diff --git a/drivers/net/ethernet/realtek/rtase/rtase.h b/drivers/net/ethernet/realtek/rtase/rtase.h index 583c33930f88..4a4434869b10 100644 --- a/drivers/net/ethernet/realtek/rtase/rtase.h +++ b/drivers/net/ethernet/realtek/rtase/rtase.h @@ -9,7 +9,10 @@ #ifndef RTASE_H #define RTASE_H
-#define RTASE_HW_VER_MASK 0x7C800000 +#define RTASE_HW_VER_MASK 0x7C800000 +#define RTASE_HW_VER_906X_7XA 0x00800000 +#define RTASE_HW_VER_906X_7XC 0x04000000 +#define RTASE_HW_VER_907XD_V1 0x04800000
#define RTASE_RX_DMA_BURST_256 4 #define RTASE_TX_DMA_BURST_UNLIMITED 7 @@ -327,6 +330,8 @@ struct rtase_private { u16 int_nums; u16 tx_int_mit; u16 rx_int_mit; + + u32 hw_ver; };
#define RTASE_LSO_64K 64000 diff --git a/drivers/net/ethernet/realtek/rtase/rtase_main.c b/drivers/net/ethernet/realtek/rtase/rtase_main.c index f8777b7663d3..1bfe5ef40c52 100644 --- a/drivers/net/ethernet/realtek/rtase/rtase_main.c +++ b/drivers/net/ethernet/realtek/rtase/rtase_main.c @@ -1714,10 +1714,21 @@ static int rtase_get_settings(struct net_device *dev, struct ethtool_link_ksettings *cmd) { u32 supported = SUPPORTED_MII | SUPPORTED_Pause | SUPPORTED_Asym_Pause; + const struct rtase_private *tp = netdev_priv(dev);
ethtool_convert_legacy_u32_to_link_mode(cmd->link_modes.supported, supported); - cmd->base.speed = SPEED_5000; + + switch (tp->hw_ver) { + case RTASE_HW_VER_906X_7XA: + case RTASE_HW_VER_906X_7XC: + cmd->base.speed = SPEED_5000; + break; + case RTASE_HW_VER_907XD_V1: + cmd->base.speed = SPEED_10000; + break; + } + cmd->base.duplex = DUPLEX_FULL; cmd->base.port = PORT_MII; cmd->base.autoneg = AUTONEG_DISABLE; @@ -1972,20 +1983,21 @@ static void rtase_init_software_variable(struct pci_dev *pdev, tp->dev->max_mtu = RTASE_MAX_JUMBO_SIZE; }
-static bool rtase_check_mac_version_valid(struct rtase_private *tp) +static int rtase_check_mac_version_valid(struct rtase_private *tp) { - u32 hw_ver = rtase_r32(tp, RTASE_TX_CONFIG_0) & RTASE_HW_VER_MASK; - bool known_ver = false; + int ret = -ENODEV; + + tp->hw_ver = rtase_r32(tp, RTASE_TX_CONFIG_0) & RTASE_HW_VER_MASK;
- switch (hw_ver) { - case 0x00800000: - case 0x04000000: - case 0x04800000: - known_ver = true; + switch (tp->hw_ver) { + case RTASE_HW_VER_906X_7XA: + case RTASE_HW_VER_906X_7XC: + case RTASE_HW_VER_907XD_V1: + ret = 0; break; }
- return known_ver; + return ret; }
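With the version bits cached in tp->hw_ver and the magic numbers replaced by RTASE_HW_VER_* macros, rtase_get_settings() can report 5G or 10G depending on the detected chip. A standalone sketch of that mapping, using the constants from the hunk above (the numeric speeds stand in for the ethtool SPEED_5000/SPEED_10000 values and are expressed in Mb/s):

#include <stdio.h>
#include <stdint.h>

#define RTASE_HW_VER_906X_7XA 0x00800000
#define RTASE_HW_VER_906X_7XC 0x04000000
#define RTASE_HW_VER_907XD_V1 0x04800000

static int hw_ver_to_speed(uint32_t hw_ver)
{
	switch (hw_ver) {
	case RTASE_HW_VER_906X_7XA:
	case RTASE_HW_VER_906X_7XC:
		return 5000;	/* SPEED_5000 */
	case RTASE_HW_VER_907XD_V1:
		return 10000;	/* SPEED_10000 */
	default:
		return -1;	/* unknown chip; probe should have failed */
	}
}

int main(void)
{
	printf("907XD v1 -> %d Mb/s\n", hw_ver_to_speed(RTASE_HW_VER_907XD_V1));
	return 0;
}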
static int rtase_init_board(struct pci_dev *pdev, struct net_device **dev_out, @@ -2105,9 +2117,13 @@ static int rtase_init_one(struct pci_dev *pdev, tp->pdev = pdev;
/* identify chip attached to board */ - if (!rtase_check_mac_version_valid(tp)) - return dev_err_probe(&pdev->dev, -ENODEV, - "unknown chip version, contact rtase maintainers (see MAINTAINERS file)\n"); + ret = rtase_check_mac_version_valid(tp); + if (ret != 0) { + dev_err(&pdev->dev, + "unknown chip version: 0x%08x, contact rtase maintainers (see MAINTAINERS file)\n", + tp->hw_ver); + goto err_out_release_board; + }
rtase_init_software_variable(pdev, tp); rtase_init_hardware(tp); @@ -2181,6 +2197,7 @@ static int rtase_init_one(struct pci_dev *pdev, netif_napi_del(&ivec->napi); }
+err_out_release_board: rtase_release_board(pdev, dev, ioaddr);
return ret; diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c index fdb4c773ec98..e897b49aa9e0 100644 --- a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c @@ -486,6 +486,8 @@ static int socfpga_dwmac_probe(struct platform_device *pdev) plat_dat->pcs_exit = socfpga_dwmac_pcs_exit; plat_dat->select_pcs = socfpga_dwmac_select_pcs;
+ plat_dat->riwt_off = 1; + ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res); if (ret) return ret; diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_irq.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_irq.c index a4cf682dca65..0ee73a265545 100644 --- a/drivers/net/ethernet/wangxun/txgbe/txgbe_irq.c +++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_irq.c @@ -72,14 +72,6 @@ int txgbe_request_queue_irqs(struct wx *wx) return err; }
-static int txgbe_request_gpio_irq(struct txgbe *txgbe) -{ - txgbe->gpio_irq = irq_find_mapping(txgbe->misc.domain, TXGBE_IRQ_GPIO); - return request_threaded_irq(txgbe->gpio_irq, NULL, - txgbe_gpio_irq_handler, - IRQF_ONESHOT, "txgbe-gpio-irq", txgbe); -} - static int txgbe_request_link_irq(struct txgbe *txgbe) { txgbe->link_irq = irq_find_mapping(txgbe->misc.domain, TXGBE_IRQ_LINK); @@ -149,11 +141,6 @@ static irqreturn_t txgbe_misc_irq_thread_fn(int irq, void *data) u32 eicr;
eicr = wx_misc_isb(wx, WX_ISB_MISC); - if (eicr & TXGBE_PX_MISC_GPIO) { - sub_irq = irq_find_mapping(txgbe->misc.domain, TXGBE_IRQ_GPIO); - handle_nested_irq(sub_irq); - nhandled++; - } if (eicr & (TXGBE_PX_MISC_ETH_LK | TXGBE_PX_MISC_ETH_LKDN | TXGBE_PX_MISC_ETH_AN)) { sub_irq = irq_find_mapping(txgbe->misc.domain, TXGBE_IRQ_LINK); @@ -179,7 +166,6 @@ static void txgbe_del_irq_domain(struct txgbe *txgbe)
void txgbe_free_misc_irq(struct txgbe *txgbe) { - free_irq(txgbe->gpio_irq, txgbe); free_irq(txgbe->link_irq, txgbe); free_irq(txgbe->misc.irq, txgbe); txgbe_del_irq_domain(txgbe); @@ -191,7 +177,7 @@ int txgbe_setup_misc_irq(struct txgbe *txgbe) struct wx *wx = txgbe->wx; int hwirq, err;
- txgbe->misc.nirqs = 2; + txgbe->misc.nirqs = 1; txgbe->misc.domain = irq_domain_add_simple(NULL, txgbe->misc.nirqs, 0, &txgbe_misc_irq_domain_ops, txgbe); if (!txgbe->misc.domain) @@ -216,20 +202,14 @@ int txgbe_setup_misc_irq(struct txgbe *txgbe) if (err) goto del_misc_irq;
- err = txgbe_request_gpio_irq(txgbe); - if (err) - goto free_msic_irq; - err = txgbe_request_link_irq(txgbe); if (err) - goto free_gpio_irq; + goto free_msic_irq;
wx->misc_irq_domain = true;
return 0;
-free_gpio_irq: - free_irq(txgbe->gpio_irq, txgbe); free_msic_irq: free_irq(txgbe->misc.irq, txgbe); del_misc_irq: diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c index 93180225a6f1..f77450268036 100644 --- a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c +++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c @@ -82,7 +82,6 @@ static void txgbe_up_complete(struct wx *wx) { struct net_device *netdev = wx->netdev;
- txgbe_reinit_gpio_intr(wx); wx_control_hw(wx, true); wx_configure_vectors(wx);
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c index 67b61afdde96..f26946198a2f 100644 --- a/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c +++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c @@ -162,7 +162,7 @@ static struct phylink_pcs *txgbe_phylink_mac_select(struct phylink_config *confi struct wx *wx = phylink_to_wx(config); struct txgbe *txgbe = wx->priv;
- if (interface == PHY_INTERFACE_MODE_10GBASER) + if (wx->media_type != sp_media_copper) return &txgbe->xpcs->pcs;
return NULL; @@ -358,169 +358,8 @@ static int txgbe_gpio_direction_out(struct gpio_chip *chip, unsigned int offset, return 0; }
-static void txgbe_gpio_irq_ack(struct irq_data *d) -{ - struct gpio_chip *gc = irq_data_get_irq_chip_data(d); - irq_hw_number_t hwirq = irqd_to_hwirq(d); - struct wx *wx = gpiochip_get_data(gc); - unsigned long flags; - - raw_spin_lock_irqsave(&wx->gpio_lock, flags); - wr32(wx, WX_GPIO_EOI, BIT(hwirq)); - raw_spin_unlock_irqrestore(&wx->gpio_lock, flags); -} - -static void txgbe_gpio_irq_mask(struct irq_data *d) -{ - struct gpio_chip *gc = irq_data_get_irq_chip_data(d); - irq_hw_number_t hwirq = irqd_to_hwirq(d); - struct wx *wx = gpiochip_get_data(gc); - unsigned long flags; - - gpiochip_disable_irq(gc, hwirq); - - raw_spin_lock_irqsave(&wx->gpio_lock, flags); - wr32m(wx, WX_GPIO_INTMASK, BIT(hwirq), BIT(hwirq)); - raw_spin_unlock_irqrestore(&wx->gpio_lock, flags); -} - -static void txgbe_gpio_irq_unmask(struct irq_data *d) -{ - struct gpio_chip *gc = irq_data_get_irq_chip_data(d); - irq_hw_number_t hwirq = irqd_to_hwirq(d); - struct wx *wx = gpiochip_get_data(gc); - unsigned long flags; - - gpiochip_enable_irq(gc, hwirq); - - raw_spin_lock_irqsave(&wx->gpio_lock, flags); - wr32m(wx, WX_GPIO_INTMASK, BIT(hwirq), 0); - raw_spin_unlock_irqrestore(&wx->gpio_lock, flags); -} - -static void txgbe_toggle_trigger(struct gpio_chip *gc, unsigned int offset) -{ - struct wx *wx = gpiochip_get_data(gc); - u32 pol, val; - - pol = rd32(wx, WX_GPIO_POLARITY); - val = rd32(wx, WX_GPIO_EXT); - - if (val & BIT(offset)) - pol &= ~BIT(offset); - else - pol |= BIT(offset); - - wr32(wx, WX_GPIO_POLARITY, pol); -} - -static int txgbe_gpio_set_type(struct irq_data *d, unsigned int type) -{ - struct gpio_chip *gc = irq_data_get_irq_chip_data(d); - irq_hw_number_t hwirq = irqd_to_hwirq(d); - struct wx *wx = gpiochip_get_data(gc); - u32 level, polarity, mask; - unsigned long flags; - - mask = BIT(hwirq); - - if (type & IRQ_TYPE_LEVEL_MASK) { - level = 0; - irq_set_handler_locked(d, handle_level_irq); - } else { - level = mask; - irq_set_handler_locked(d, handle_edge_irq); - } - - if (type == IRQ_TYPE_EDGE_RISING || type == IRQ_TYPE_LEVEL_HIGH) - polarity = mask; - else - polarity = 0; - - raw_spin_lock_irqsave(&wx->gpio_lock, flags); - - wr32m(wx, WX_GPIO_INTEN, mask, mask); - wr32m(wx, WX_GPIO_INTTYPE_LEVEL, mask, level); - if (type == IRQ_TYPE_EDGE_BOTH) - txgbe_toggle_trigger(gc, hwirq); - else - wr32m(wx, WX_GPIO_POLARITY, mask, polarity); - - raw_spin_unlock_irqrestore(&wx->gpio_lock, flags); - - return 0; -} - -static const struct irq_chip txgbe_gpio_irq_chip = { - .name = "txgbe-gpio-irq", - .irq_ack = txgbe_gpio_irq_ack, - .irq_mask = txgbe_gpio_irq_mask, - .irq_unmask = txgbe_gpio_irq_unmask, - .irq_set_type = txgbe_gpio_set_type, - .flags = IRQCHIP_IMMUTABLE, - GPIOCHIP_IRQ_RESOURCE_HELPERS, -}; - -irqreturn_t txgbe_gpio_irq_handler(int irq, void *data) -{ - struct txgbe *txgbe = data; - struct wx *wx = txgbe->wx; - irq_hw_number_t hwirq; - unsigned long gpioirq; - struct gpio_chip *gc; - unsigned long flags; - - gpioirq = rd32(wx, WX_GPIO_INTSTATUS); - - gc = txgbe->gpio; - for_each_set_bit(hwirq, &gpioirq, gc->ngpio) { - int gpio = irq_find_mapping(gc->irq.domain, hwirq); - struct irq_data *d = irq_get_irq_data(gpio); - u32 irq_type = irq_get_trigger_type(gpio); - - txgbe_gpio_irq_ack(d); - handle_nested_irq(gpio); - - if ((irq_type & IRQ_TYPE_SENSE_MASK) == IRQ_TYPE_EDGE_BOTH) { - raw_spin_lock_irqsave(&wx->gpio_lock, flags); - txgbe_toggle_trigger(gc, hwirq); - raw_spin_unlock_irqrestore(&wx->gpio_lock, flags); - } - } - - return IRQ_HANDLED; -} - -void txgbe_reinit_gpio_intr(struct wx *wx) -{ - 
struct txgbe *txgbe = wx->priv; - irq_hw_number_t hwirq; - unsigned long gpioirq; - struct gpio_chip *gc; - unsigned long flags; - - /* for gpio interrupt pending before irq enable */ - gpioirq = rd32(wx, WX_GPIO_INTSTATUS); - - gc = txgbe->gpio; - for_each_set_bit(hwirq, &gpioirq, gc->ngpio) { - int gpio = irq_find_mapping(gc->irq.domain, hwirq); - struct irq_data *d = irq_get_irq_data(gpio); - u32 irq_type = irq_get_trigger_type(gpio); - - txgbe_gpio_irq_ack(d); - - if ((irq_type & IRQ_TYPE_SENSE_MASK) == IRQ_TYPE_EDGE_BOTH) { - raw_spin_lock_irqsave(&wx->gpio_lock, flags); - txgbe_toggle_trigger(gc, hwirq); - raw_spin_unlock_irqrestore(&wx->gpio_lock, flags); - } - } -} - static int txgbe_gpio_init(struct txgbe *txgbe) { - struct gpio_irq_chip *girq; struct gpio_chip *gc; struct device *dev; struct wx *wx; @@ -550,11 +389,6 @@ static int txgbe_gpio_init(struct txgbe *txgbe) gc->direction_input = txgbe_gpio_direction_in; gc->direction_output = txgbe_gpio_direction_out;
- girq = &gc->irq; - gpio_irq_chip_set_chip(girq, &txgbe_gpio_irq_chip); - girq->default_type = IRQ_TYPE_NONE; - girq->handler = handle_bad_irq; - ret = devm_gpiochip_add_data(dev, gc, wx); if (ret) return ret; diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.h index 8a026d804fe2..3938985355ed 100644 --- a/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.h +++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.h @@ -4,8 +4,6 @@ #ifndef _TXGBE_PHY_H_ #define _TXGBE_PHY_H_
-irqreturn_t txgbe_gpio_irq_handler(int irq, void *data); -void txgbe_reinit_gpio_intr(struct wx *wx); irqreturn_t txgbe_link_irq_handler(int irq, void *data); int txgbe_init_phy(struct txgbe *txgbe); void txgbe_remove_phy(struct txgbe *txgbe); diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h index 959102c4c379..8ea413a7abe9 100644 --- a/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h +++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h @@ -75,8 +75,7 @@ #define TXGBE_PX_MISC_IEN_MASK \ (TXGBE_PX_MISC_ETH_LKDN | TXGBE_PX_MISC_DEV_RST | \ TXGBE_PX_MISC_ETH_EVENT | TXGBE_PX_MISC_ETH_LK | \ - TXGBE_PX_MISC_ETH_AN | TXGBE_PX_MISC_INT_ERR | \ - TXGBE_PX_MISC_GPIO) + TXGBE_PX_MISC_ETH_AN | TXGBE_PX_MISC_INT_ERR)
/* Port cfg registers */ #define TXGBE_CFG_PORT_ST 0x14404 @@ -313,8 +312,7 @@ struct txgbe_nodes { };
enum txgbe_misc_irqs { - TXGBE_IRQ_GPIO = 0, - TXGBE_IRQ_LINK, + TXGBE_IRQ_LINK = 0, TXGBE_IRQ_MAX };
@@ -335,7 +333,6 @@ struct txgbe { struct clk_lookup *clock; struct clk *clk; struct gpio_chip *gpio; - unsigned int gpio_irq; unsigned int link_irq;
/* flow director */ diff --git a/drivers/net/mdio/mdio-ipq4019.c b/drivers/net/mdio/mdio-ipq4019.c index 9d8f43b28aac..ea1f64596a85 100644 --- a/drivers/net/mdio/mdio-ipq4019.c +++ b/drivers/net/mdio/mdio-ipq4019.c @@ -352,8 +352,11 @@ static int ipq4019_mdio_probe(struct platform_device *pdev) /* The platform resource is provided on the chipset IPQ5018 */ /* This resource is optional */ res = platform_get_resource(pdev, IORESOURCE_MEM, 1); - if (res) + if (res) { priv->eth_ldo_rdy = devm_ioremap_resource(&pdev->dev, res); + if (IS_ERR(priv->eth_ldo_rdy)) + return PTR_ERR(priv->eth_ldo_rdy); + }
bus->name = "ipq4019_mdio"; bus->read = ipq4019_mdio_read_c22; diff --git a/drivers/net/netdevsim/ipsec.c b/drivers/net/netdevsim/ipsec.c index f0d58092e7e9..3612b0633bd1 100644 --- a/drivers/net/netdevsim/ipsec.c +++ b/drivers/net/netdevsim/ipsec.c @@ -176,14 +176,13 @@ static int nsim_ipsec_add_sa(struct xfrm_state *xs, return ret; }
- if (xs->xso.dir == XFRM_DEV_OFFLOAD_IN) { + if (xs->xso.dir == XFRM_DEV_OFFLOAD_IN) sa.rx = true;
- if (xs->props.family == AF_INET6) - memcpy(sa.ipaddr, &xs->id.daddr.a6, 16); - else - memcpy(&sa.ipaddr[3], &xs->id.daddr.a4, 4); - } + if (xs->props.family == AF_INET6) + memcpy(sa.ipaddr, &xs->id.daddr.a6, 16); + else + memcpy(&sa.ipaddr[3], &xs->id.daddr.a4, 4);
/* the preparations worked, so save the info */ memcpy(&ipsec->sa[sa_idx], &sa, sizeof(sa)); diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c index 8adf77e3557e..531b1b6a37d1 100644 --- a/drivers/net/usb/lan78xx.c +++ b/drivers/net/usb/lan78xx.c @@ -1652,13 +1652,13 @@ static int lan78xx_set_wol(struct net_device *netdev, struct lan78xx_priv *pdata = (struct lan78xx_priv *)(dev->data[0]); int ret;
+ if (wol->wolopts & ~WAKE_ALL) + return -EINVAL; + ret = usb_autopm_get_interface(dev->intf); if (ret < 0) return ret;
- if (wol->wolopts & ~WAKE_ALL) - return -EINVAL; - pdata->wol = wol->wolopts;
device_set_wakeup_enable(&dev->udev->dev, (bool)wol->wolopts); @@ -2380,6 +2380,7 @@ static int lan78xx_phy_init(struct lan78xx_net *dev) if (dev->chipid == ID_REV_CHIP_ID_7801_) { if (phy_is_pseudo_fixed_link(phydev)) { fixed_phy_unregister(phydev); + phy_device_free(phydev); } else { phy_unregister_fixup_for_uid(PHY_KSZ9031RNX, 0xfffffff0); @@ -4246,8 +4247,10 @@ static void lan78xx_disconnect(struct usb_interface *intf)
phy_disconnect(net->phydev);
- if (phy_is_pseudo_fixed_link(phydev)) + if (phy_is_pseudo_fixed_link(phydev)) { fixed_phy_unregister(phydev); + phy_device_free(phydev); + }
usb_scuttle_anchored_urbs(&dev->deferred);
@@ -4414,29 +4417,30 @@ static int lan78xx_probe(struct usb_interface *intf,
period = ep_intr->desc.bInterval; maxp = usb_maxpacket(dev->udev, dev->pipe_intr); - buf = kmalloc(maxp, GFP_KERNEL); - if (!buf) { + + dev->urb_intr = usb_alloc_urb(0, GFP_KERNEL); + if (!dev->urb_intr) { ret = -ENOMEM; goto out5; }
- dev->urb_intr = usb_alloc_urb(0, GFP_KERNEL); - if (!dev->urb_intr) { + buf = kmalloc(maxp, GFP_KERNEL); + if (!buf) { ret = -ENOMEM; - goto out6; - } else { - usb_fill_int_urb(dev->urb_intr, dev->udev, - dev->pipe_intr, buf, maxp, - intr_complete, dev, period); - dev->urb_intr->transfer_flags |= URB_FREE_BUFFER; + goto free_urbs; }
+ usb_fill_int_urb(dev->urb_intr, dev->udev, + dev->pipe_intr, buf, maxp, + intr_complete, dev, period); + dev->urb_intr->transfer_flags |= URB_FREE_BUFFER; + dev->maxpacket = usb_maxpacket(dev->udev, dev->pipe_out);
/* Reject broken descriptors. */ if (dev->maxpacket == 0) { ret = -ENODEV; - goto out6; + goto free_urbs; }
/* driver requires remote-wakeup capability during autosuspend. */ @@ -4444,7 +4448,7 @@ static int lan78xx_probe(struct usb_interface *intf,
ret = lan78xx_phy_init(dev); if (ret < 0) - goto out7; + goto free_urbs;
ret = register_netdev(netdev); if (ret != 0) { @@ -4466,10 +4470,8 @@ static int lan78xx_probe(struct usb_interface *intf,
out8: phy_disconnect(netdev->phydev); -out7: +free_urbs: usb_free_urb(dev->urb_intr); -out6: - kfree(buf); out5: lan78xx_unbind(dev, intf); out4: diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c index 646e1737d4c4..6b467696bc98 100644 --- a/drivers/net/wireless/ath/ath10k/mac.c +++ b/drivers/net/wireless/ath/ath10k/mac.c @@ -9121,7 +9121,7 @@ static const struct ath10k_index_vht_data_rate_type supported_vht_mcs_rate_nss1[ {6, {2633, 2925}, {1215, 1350}, {585, 650} }, {7, {2925, 3250}, {1350, 1500}, {650, 722} }, {8, {3510, 3900}, {1620, 1800}, {780, 867} }, - {9, {3900, 4333}, {1800, 2000}, {780, 867} } + {9, {3900, 4333}, {1800, 2000}, {865, 960} } };
/*MCS parameters with Nss = 2 */ @@ -9136,7 +9136,7 @@ static const struct ath10k_index_vht_data_rate_type supported_vht_mcs_rate_nss2[ {6, {5265, 5850}, {2430, 2700}, {1170, 1300} }, {7, {5850, 6500}, {2700, 3000}, {1300, 1444} }, {8, {7020, 7800}, {3240, 3600}, {1560, 1733} }, - {9, {7800, 8667}, {3600, 4000}, {1560, 1733} } + {9, {7800, 8667}, {3600, 4000}, {1730, 1920} } };
static void ath10k_mac_get_rate_flags_ht(struct ath10k *ar, u32 rate, u8 nss, u8 mcs, diff --git a/drivers/net/wireless/ath/ath11k/qmi.c b/drivers/net/wireless/ath/ath11k/qmi.c index f477afd325de..7a22483b35cd 100644 --- a/drivers/net/wireless/ath/ath11k/qmi.c +++ b/drivers/net/wireless/ath/ath11k/qmi.c @@ -2180,6 +2180,9 @@ static int ath11k_qmi_request_device_info(struct ath11k_base *ab) ab->mem = bar_addr_va; ab->mem_len = resp.bar_size;
+ if (!ab->hw_params.ce_remap) + ab->mem_ce = ab->mem; + return 0; out: return ret; diff --git a/drivers/net/wireless/ath/ath12k/dp.c b/drivers/net/wireless/ath/ath12k/dp.c index 61aa78d8bd8c..217eb57663f0 100644 --- a/drivers/net/wireless/ath/ath12k/dp.c +++ b/drivers/net/wireless/ath/ath12k/dp.c @@ -1202,10 +1202,16 @@ static void ath12k_dp_cc_cleanup(struct ath12k_base *ab) if (!skb) continue;
- skb_cb = ATH12K_SKB_CB(skb); - ar = skb_cb->ar; - if (atomic_dec_and_test(&ar->dp.num_tx_pending)) - wake_up(&ar->dp.tx_empty_waitq); + /* if we are unregistering, hw would've been destroyed and + * ar is no longer valid. + */ + if (!(test_bit(ATH12K_FLAG_UNREGISTERING, &ab->dev_flags))) { + skb_cb = ATH12K_SKB_CB(skb); + ar = skb_cb->ar; + + if (atomic_dec_and_test(&ar->dp.num_tx_pending)) + wake_up(&ar->dp.tx_empty_waitq); + }
dma_unmap_single(ab->dev, ATH12K_SKB_CB(skb)->paddr, skb->len, DMA_TO_DEVICE); @@ -1241,6 +1247,7 @@ static void ath12k_dp_cc_cleanup(struct ath12k_base *ab) }
kfree(dp->spt_info); + dp->spt_info = NULL; }
static void ath12k_dp_reoq_lut_cleanup(struct ath12k_base *ab) @@ -1276,8 +1283,10 @@ void ath12k_dp_free(struct ath12k_base *ab)
ath12k_dp_rx_reo_cmd_list_cleanup(ab);
- for (i = 0; i < ab->hw_params->max_tx_ring; i++) + for (i = 0; i < ab->hw_params->max_tx_ring; i++) { kfree(dp->tx_ring[i].tx_status); + dp->tx_ring[i].tx_status = NULL; + }
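Besides skipping stale ar pointers while unregistering, the ath12k cleanup above now NULLs spt_info and each tx_status pointer after kfree(), so a repeated or overlapping teardown path sees an already-released pointer rather than a dangling one. A minimal userspace sketch of why clearing the pointer makes cleanup idempotent (structure and field names are illustrative):

#include <stdio.h>
#include <stdlib.h>

struct dp_ring {
	int *tx_status;
};

static void dp_free(struct dp_ring *ring)
{
	free(ring->tx_status);
	ring->tx_status = NULL;	/* a second call becomes a harmless no-op */
}

int main(void)
{
	struct dp_ring ring = { .tx_status = malloc(16 * sizeof(int)) };

	dp_free(&ring);
	dp_free(&ring);		/* without the NULL assignment: double free */
	printf("cleanup ran twice safely\n");
	return 0;
}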
ath12k_dp_rx_free(ab); /* Deinit any SOC level resource */ diff --git a/drivers/net/wireless/ath/ath12k/mac.c b/drivers/net/wireless/ath/ath12k/mac.c index 137394c36460..6d0784a21558 100644 --- a/drivers/net/wireless/ath/ath12k/mac.c +++ b/drivers/net/wireless/ath/ath12k/mac.c @@ -917,7 +917,10 @@ void ath12k_mac_peer_cleanup_all(struct ath12k *ar)
spin_lock_bh(&ab->base_lock); list_for_each_entry_safe(peer, tmp, &ab->peers, list) { - ath12k_dp_rx_peer_tid_cleanup(ar, peer); + /* Skip Rx TID cleanup for self peer */ + if (peer->sta) + ath12k_dp_rx_peer_tid_cleanup(ar, peer); + list_del(&peer->list); kfree(peer); } diff --git a/drivers/net/wireless/ath/ath12k/wow.c b/drivers/net/wireless/ath/ath12k/wow.c index 9b8684abbe40..3624180b25b9 100644 --- a/drivers/net/wireless/ath/ath12k/wow.c +++ b/drivers/net/wireless/ath/ath12k/wow.c @@ -191,7 +191,7 @@ ath12k_wow_convert_8023_to_80211(struct ath12k *ar, memcpy(bytemask, eth_bytemask, eth_pat_len);
pat_len = eth_pat_len; - } else if (eth_pkt_ofs + eth_pat_len < prot_ofs) { + } else if (size_add(eth_pkt_ofs, eth_pat_len) < prot_ofs) { memcpy(pat, eth_pat, ETH_ALEN - eth_pkt_ofs); memcpy(bytemask, eth_bytemask, ETH_ALEN - eth_pkt_ofs);
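The ath12k WoW change swaps the plain eth_pkt_ofs + eth_pat_len sum for size_add(), the kernel helper that saturates to SIZE_MAX on overflow, so an oversized pattern length can no longer wrap around and slip under the prot_ofs bound. A standalone sketch of that helper and the comparison it protects (simplified from the form in include/linux/overflow.h; the sample values are illustrative):

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* saturating addition: overflow yields SIZE_MAX, which fails any '<' bound */
static size_t size_add(size_t a, size_t b)
{
	size_t sum;

	if (__builtin_add_overflow(a, b, &sum))
		return SIZE_MAX;
	return sum;
}

int main(void)
{
	size_t pkt_ofs = 4, prot_ofs = 14;
	size_t huge_len = SIZE_MAX - 1;	/* would wrap with plain addition */

	printf("plain sum passes bound: %d\n",
	       (pkt_ofs + huge_len) < prot_ofs);		/* wraps, prints 1 */
	printf("size_add passes bound:  %d\n",
	       size_add(pkt_ofs, huge_len) < prot_ofs);	/* saturates, prints 0 */
	return 0;
}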
diff --git a/drivers/net/wireless/ath/ath9k/htc_hst.c b/drivers/net/wireless/ath/ath9k/htc_hst.c index eb631fd3336d..b5257b2b4aa5 100644 --- a/drivers/net/wireless/ath/ath9k/htc_hst.c +++ b/drivers/net/wireless/ath/ath9k/htc_hst.c @@ -294,6 +294,9 @@ int htc_connect_service(struct htc_target *target, return -ETIMEDOUT; }
+ if (target->conn_rsp_epid < 0 || target->conn_rsp_epid >= ENDPOINT_MAX) + return -EINVAL; + *conn_rsp_epid = target->conn_rsp_epid; return 0; err: diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c index fe4f65756105..af930e34c21f 100644 --- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c +++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c @@ -110,9 +110,8 @@ void brcmf_of_probe(struct device *dev, enum brcmf_bus_type bus_type, } strreplace(board_type, '/', '-'); settings->board_type = board_type; - - of_node_put(root); } + of_node_put(root);
if (!np || !of_device_is_compatible(np, "brcm,bcm4329-fmac")) return; diff --git a/drivers/net/wireless/intel/iwlegacy/3945.c b/drivers/net/wireless/intel/iwlegacy/3945.c index 14d2331ee6cb..b0656b143f77 100644 --- a/drivers/net/wireless/intel/iwlegacy/3945.c +++ b/drivers/net/wireless/intel/iwlegacy/3945.c @@ -566,7 +566,7 @@ il3945_hdl_rx(struct il_priv *il, struct il_rx_buf *rxb) if (!(rx_end->status & RX_RES_STATUS_NO_CRC32_ERROR) || !(rx_end->status & RX_RES_STATUS_NO_RXE_OVERFLOW)) { D_RX("Bad CRC or FIFO: 0x%08X.\n", rx_end->status); - rx_status.flag |= RX_FLAG_FAILED_FCS_CRC; + return; }
/* Convert 3945's rssi indicator to dBm */ diff --git a/drivers/net/wireless/intel/iwlegacy/4965-mac.c b/drivers/net/wireless/intel/iwlegacy/4965-mac.c index fcccde7bb659..05c4af41bdb9 100644 --- a/drivers/net/wireless/intel/iwlegacy/4965-mac.c +++ b/drivers/net/wireless/intel/iwlegacy/4965-mac.c @@ -664,7 +664,7 @@ il4965_hdl_rx(struct il_priv *il, struct il_rx_buf *rxb) if (!(rx_pkt_status & RX_RES_STATUS_NO_CRC32_ERROR) || !(rx_pkt_status & RX_RES_STATUS_NO_RXE_OVERFLOW)) { D_RX("Bad CRC or FIFO: 0x%08X.\n", le32_to_cpu(rx_pkt_status)); - rx_status.flag |= RX_FLAG_FAILED_FCS_CRC; + return; }
/* This will be used in several places later */ diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c index 80b9a115245f..d37d83d24635 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c @@ -1237,6 +1237,7 @@ int __iwl_mvm_mac_start(struct iwl_mvm *mvm) fast_resume = mvm->fast_resume;
if (fast_resume) { + iwl_mvm_mei_device_state(mvm, true); ret = iwl_mvm_fast_resume(mvm); if (ret) { iwl_mvm_stop_device(mvm); @@ -1377,10 +1378,13 @@ void __iwl_mvm_mac_stop(struct iwl_mvm *mvm, bool suspend) iwl_mvm_rm_aux_sta(mvm);
if (suspend && - mvm->trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210) + mvm->trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_22000) { iwl_mvm_fast_suspend(mvm); - else + /* From this point on, we won't touch the device */ + iwl_mvm_mei_device_state(mvm, false); + } else { iwl_mvm_stop_device(mvm); + }
iwl_mvm_async_handlers_purge(mvm); /* async_handlers_list is empty and will stay empty: HW is stopped */ diff --git a/drivers/net/wireless/intersil/p54/p54spi.c b/drivers/net/wireless/intersil/p54/p54spi.c index d33a994906a7..27f44a9f0bc1 100644 --- a/drivers/net/wireless/intersil/p54/p54spi.c +++ b/drivers/net/wireless/intersil/p54/p54spi.c @@ -624,7 +624,7 @@ static int p54spi_probe(struct spi_device *spi) gpio_direction_input(p54spi_gpio_irq);
ret = request_irq(gpio_to_irq(p54spi_gpio_irq), - p54spi_interrupt, 0, "p54spi", + p54spi_interrupt, IRQF_NO_AUTOEN, "p54spi", priv->spi); if (ret < 0) { dev_err(&priv->spi->dev, "request_irq() failed"); @@ -633,8 +633,6 @@ static int p54spi_probe(struct spi_device *spi)
irq_set_irq_type(gpio_to_irq(p54spi_gpio_irq), IRQ_TYPE_EDGE_RISING);
- disable_irq(gpio_to_irq(p54spi_gpio_irq)); - INIT_WORK(&priv->work, p54spi_work); init_completion(&priv->fw_comp); INIT_LIST_HEAD(&priv->tx_pending); diff --git a/drivers/net/wireless/marvell/mwifiex/cmdevt.c b/drivers/net/wireless/marvell/mwifiex/cmdevt.c index 1cff001bdc51..b30ed321c625 100644 --- a/drivers/net/wireless/marvell/mwifiex/cmdevt.c +++ b/drivers/net/wireless/marvell/mwifiex/cmdevt.c @@ -938,8 +938,10 @@ void mwifiex_process_assoc_resp(struct mwifiex_adapter *adapter) assoc_resp.links[0].bss = priv->req_bss; assoc_resp.buf = priv->assoc_rsp_buf; assoc_resp.len = priv->assoc_rsp_size; + wiphy_lock(priv->wdev.wiphy); cfg80211_rx_assoc_resp(priv->netdev, &assoc_resp); + wiphy_unlock(priv->wdev.wiphy); priv->assoc_rsp_size = 0; } } diff --git a/drivers/net/wireless/marvell/mwifiex/fw.h b/drivers/net/wireless/marvell/mwifiex/fw.h index d03129d5d24e..4a96281792cc 100644 --- a/drivers/net/wireless/marvell/mwifiex/fw.h +++ b/drivers/net/wireless/marvell/mwifiex/fw.h @@ -875,7 +875,7 @@ struct mwifiex_ietypes_chanstats { struct mwifiex_ie_types_wildcard_ssid_params { struct mwifiex_ie_types_header header; u8 max_ssid_length; - u8 ssid[1]; + u8 ssid[]; } __packed;
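The mwifiex change just above turns the one-element ssid[1] trailer into a C99 flexible array member, the idiomatic form for a variable-length tail; it contributes nothing to sizeof the struct, so length arithmetic no longer has to account for a phantom byte. A standalone sketch of allocating and filling such a structure (the struct and field names echo the header but are illustrative here):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct wildcard_ssid_params {
	unsigned char max_ssid_length;
	unsigned char ssid[];	/* flexible array member, adds 0 to sizeof */
};

int main(void)
{
	const char *name = "guest-net";
	size_t len = strlen(name);
	struct wildcard_ssid_params *p = malloc(sizeof(*p) + len);

	if (!p)
		return 1;
	p->max_ssid_length = (unsigned char)len;
	memcpy(p->ssid, name, len);
	printf("header size: %zu, ssid length: %u\n",
	       sizeof(*p), (unsigned)p->max_ssid_length);
	free(p);
	return 0;
}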
#define TSF_DATA_SIZE 8 diff --git a/drivers/net/wireless/marvell/mwifiex/main.c b/drivers/net/wireless/marvell/mwifiex/main.c index 96d1f6039fbc..855019fe5485 100644 --- a/drivers/net/wireless/marvell/mwifiex/main.c +++ b/drivers/net/wireless/marvell/mwifiex/main.c @@ -1679,7 +1679,8 @@ static void mwifiex_probe_of(struct mwifiex_adapter *adapter) }
ret = devm_request_irq(dev, adapter->irq_wakeup, - mwifiex_irq_wakeup_handler, IRQF_TRIGGER_LOW, + mwifiex_irq_wakeup_handler, + IRQF_TRIGGER_LOW | IRQF_NO_AUTOEN, "wifi_wake", adapter); if (ret) { dev_err(dev, "Failed to request irq_wakeup %d (%d)\n", @@ -1687,7 +1688,6 @@ static void mwifiex_probe_of(struct mwifiex_adapter *adapter) goto err_exit; }
- disable_irq(adapter->irq_wakeup); if (device_init_wakeup(dev, true)) { dev_err(dev, "fail to init wakeup for mwifiex\n"); goto err_exit; diff --git a/drivers/net/wireless/marvell/mwifiex/util.c b/drivers/net/wireless/marvell/mwifiex/util.c index 42c04bf858da..1f1f6280a0f2 100644 --- a/drivers/net/wireless/marvell/mwifiex/util.c +++ b/drivers/net/wireless/marvell/mwifiex/util.c @@ -494,7 +494,9 @@ mwifiex_process_mgmt_packet(struct mwifiex_private *priv, } }
+ wiphy_lock(priv->wdev.wiphy); cfg80211_rx_mlme_mgmt(priv->netdev, skb->data, pkt_len); + wiphy_unlock(priv->wdev.wiphy); }
if (priv->adapter->host_mlme_enabled && diff --git a/drivers/net/wireless/microchip/wilc1000/netdev.c b/drivers/net/wireless/microchip/wilc1000/netdev.c index 9ecf3fb29b55..8bc127c5a538 100644 --- a/drivers/net/wireless/microchip/wilc1000/netdev.c +++ b/drivers/net/wireless/microchip/wilc1000/netdev.c @@ -608,6 +608,9 @@ static int wilc_mac_open(struct net_device *ndev) return ret; }
+ wilc_set_operation_mode(vif, wilc_get_vif_idx(vif), vif->iftype, + vif->idx); + netdev_dbg(ndev, "Mac address: %pM\n", ndev->dev_addr); ret = wilc_set_mac_address(vif, ndev->dev_addr); if (ret) { @@ -618,9 +621,6 @@ static int wilc_mac_open(struct net_device *ndev) return ret; }
- wilc_set_operation_mode(vif, wilc_get_vif_idx(vif), vif->iftype, - vif->idx); - mgmt_regs.interface_stypes = vif->mgmt_reg_stypes; /* so we detect a change */ vif->mgmt_reg_stypes = 0; diff --git a/drivers/net/wireless/realtek/rtl8xxxu/core.c b/drivers/net/wireless/realtek/rtl8xxxu/core.c index 7891c988dd5f..f95898f68d68 100644 --- a/drivers/net/wireless/realtek/rtl8xxxu/core.c +++ b/drivers/net/wireless/realtek/rtl8xxxu/core.c @@ -5058,10 +5058,12 @@ rtl8xxxu_bss_info_changed(struct ieee80211_hw *hw, struct ieee80211_vif *vif, }
if (changed & BSS_CHANGED_BEACON_ENABLED) { - if (bss_conf->enable_beacon) + if (bss_conf->enable_beacon) { rtl8xxxu_start_tx_beacon(priv); - else + schedule_delayed_work(&priv->update_beacon_work, 0); + } else { rtl8xxxu_stop_tx_beacon(priv); + } }
if (changed & BSS_CHANGED_BEACON) diff --git a/drivers/net/wireless/realtek/rtlwifi/efuse.c b/drivers/net/wireless/realtek/rtlwifi/efuse.c index 82cf5fb5175f..6518e77b89f5 100644 --- a/drivers/net/wireless/realtek/rtlwifi/efuse.c +++ b/drivers/net/wireless/realtek/rtlwifi/efuse.c @@ -162,10 +162,19 @@ void efuse_write_1byte(struct ieee80211_hw *hw, u16 address, u8 value) void read_efuse_byte(struct ieee80211_hw *hw, u16 _offset, u8 *pbuf) { struct rtl_priv *rtlpriv = rtl_priv(hw); + u16 max_attempts = 10000; u32 value32; u8 readbyte; u16 retry;
+ /* + * In case of USB devices, transfer speeds are limited, hence + * efuse I/O reads could be (way) slower. So, decrease (a lot) + * the read attempts in case of failures. + */ + if (rtlpriv->rtlhal.interface == INTF_USB) + max_attempts = 10; + rtl_write_byte(rtlpriv, rtlpriv->cfg->maps[EFUSE_CTRL] + 1, (_offset & 0xff)); readbyte = rtl_read_byte(rtlpriv, rtlpriv->cfg->maps[EFUSE_CTRL] + 2); @@ -178,7 +187,7 @@ void read_efuse_byte(struct ieee80211_hw *hw, u16 _offset, u8 *pbuf)
retry = 0; value32 = rtl_read_dword(rtlpriv, rtlpriv->cfg->maps[EFUSE_CTRL]); - while (!(((value32 >> 24) & 0xff) & 0x80) && (retry < 10000)) { + while (!(((value32 >> 24) & 0xff) & 0x80) && (retry < max_attempts)) { value32 = rtl_read_dword(rtlpriv, rtlpriv->cfg->maps[EFUSE_CTRL]); retry++; diff --git a/drivers/net/wireless/realtek/rtw89/cam.c b/drivers/net/wireless/realtek/rtw89/cam.c index 4476fc7e53db..8d140b94cb44 100644 --- a/drivers/net/wireless/realtek/rtw89/cam.c +++ b/drivers/net/wireless/realtek/rtw89/cam.c @@ -211,25 +211,17 @@ static int rtw89_cam_get_addr_cam_key_idx(struct rtw89_addr_cam_entry *addr_cam, return 0; }
-static int rtw89_cam_detach_sec_cam(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif, - struct ieee80211_sta *sta, - const struct rtw89_sec_cam_entry *sec_cam, - bool inform_fw) +static int __rtw89_cam_detach_sec_cam(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link, + const struct rtw89_sec_cam_entry *sec_cam, + bool inform_fw) { - struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(sta); - struct rtw89_vif *rtwvif; struct rtw89_addr_cam_entry *addr_cam; unsigned int i; int ret = 0;
- if (!vif) { - rtw89_err(rtwdev, "No iface for deleting sec cam\n"); - return -EINVAL; - } - - rtwvif = (struct rtw89_vif *)vif->drv_priv; - addr_cam = rtw89_get_addr_cam_of(rtwvif, rtwsta); + addr_cam = rtw89_get_addr_cam_of(rtwvif_link, rtwsta_link);
for_each_set_bit(i, addr_cam->sec_cam_map, RTW89_SEC_CAM_IN_ADDR_CAM) { if (addr_cam->sec_ent[i] != sec_cam->sec_cam_idx) @@ -239,11 +231,11 @@ static int rtw89_cam_detach_sec_cam(struct rtw89_dev *rtwdev, }
if (inform_fw) { - ret = rtw89_chip_h2c_dctl_sec_cam(rtwdev, rtwvif, rtwsta); + ret = rtw89_chip_h2c_dctl_sec_cam(rtwdev, rtwvif_link, rtwsta_link); if (ret) rtw89_err(rtwdev, "failed to update dctl cam del key: %d\n", ret); - ret = rtw89_fw_h2c_cam(rtwdev, rtwvif, rtwsta, NULL); + ret = rtw89_fw_h2c_cam(rtwdev, rtwvif_link, rtwsta_link, NULL); if (ret) rtw89_err(rtwdev, "failed to update cam del key: %d\n", ret); } @@ -251,25 +243,17 @@ static int rtw89_cam_detach_sec_cam(struct rtw89_dev *rtwdev, return ret; }
-static int rtw89_cam_attach_sec_cam(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif, - struct ieee80211_sta *sta, - struct ieee80211_key_conf *key, - struct rtw89_sec_cam_entry *sec_cam) +static int __rtw89_cam_attach_sec_cam(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link, + struct ieee80211_key_conf *key, + struct rtw89_sec_cam_entry *sec_cam) { - struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(sta); - struct rtw89_vif *rtwvif; struct rtw89_addr_cam_entry *addr_cam; u8 key_idx = 0; int ret;
- if (!vif) { - rtw89_err(rtwdev, "No iface for adding sec cam\n"); - return -EINVAL; - } - - rtwvif = (struct rtw89_vif *)vif->drv_priv; - addr_cam = rtw89_get_addr_cam_of(rtwvif, rtwsta); + addr_cam = rtw89_get_addr_cam_of(rtwvif_link, rtwsta_link);
if (key->cipher == WLAN_CIPHER_SUITE_WEP40 || key->cipher == WLAN_CIPHER_SUITE_WEP104) @@ -285,13 +269,13 @@ static int rtw89_cam_attach_sec_cam(struct rtw89_dev *rtwdev, addr_cam->sec_ent_keyid[key_idx] = key->keyidx; addr_cam->sec_ent[key_idx] = sec_cam->sec_cam_idx; set_bit(key_idx, addr_cam->sec_cam_map); - ret = rtw89_chip_h2c_dctl_sec_cam(rtwdev, rtwvif, rtwsta); + ret = rtw89_chip_h2c_dctl_sec_cam(rtwdev, rtwvif_link, rtwsta_link); if (ret) { rtw89_err(rtwdev, "failed to update dctl cam sec entry: %d\n", ret); return ret; } - ret = rtw89_fw_h2c_cam(rtwdev, rtwvif, rtwsta, NULL); + ret = rtw89_fw_h2c_cam(rtwdev, rtwvif_link, rtwsta_link, NULL); if (ret) { rtw89_err(rtwdev, "failed to update addr cam sec entry: %d\n", ret); @@ -302,6 +286,92 @@ static int rtw89_cam_attach_sec_cam(struct rtw89_dev *rtwdev, return 0; }
+static int rtw89_cam_detach_sec_cam(struct rtw89_dev *rtwdev, + struct ieee80211_vif *vif, + struct ieee80211_sta *sta, + const struct rtw89_sec_cam_entry *sec_cam, + bool inform_fw) +{ + struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(sta); + struct rtw89_sta_link *rtwsta_link; + struct rtw89_vif_link *rtwvif_link; + struct rtw89_vif *rtwvif; + unsigned int link_id; + int ret; + + if (!vif) { + rtw89_err(rtwdev, "No iface for deleting sec cam\n"); + return -EINVAL; + } + + rtwvif = vif_to_rtwvif(vif); + + rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) { + rtwsta_link = rtwsta ? rtwsta->links[link_id] : NULL; + if (rtwsta && !rtwsta_link) + continue; + + ret = __rtw89_cam_detach_sec_cam(rtwdev, rtwvif_link, rtwsta_link, + sec_cam, inform_fw); + if (ret) + return ret; + } + + return 0; +} + +static int rtw89_cam_attach_sec_cam(struct rtw89_dev *rtwdev, + struct ieee80211_vif *vif, + struct ieee80211_sta *sta, + struct ieee80211_key_conf *key, + struct rtw89_sec_cam_entry *sec_cam) +{ + struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(sta); + struct rtw89_sta_link *rtwsta_link; + struct rtw89_vif_link *rtwvif_link; + struct rtw89_vif *rtwvif; + unsigned int link_id; + int key_link_id; + int ret; + + if (!vif) { + rtw89_err(rtwdev, "No iface for adding sec cam\n"); + return -EINVAL; + } + + rtwvif = vif_to_rtwvif(vif); + + key_link_id = ieee80211_vif_is_mld(vif) ? key->link_id : 0; + if (key_link_id >= 0) { + rtwvif_link = rtwvif->links[key_link_id]; + rtwsta_link = rtwsta ? rtwsta->links[key_link_id] : NULL; + + if (!rtwvif_link || (rtwsta && !rtwsta_link)) { + rtw89_err(rtwdev, "No drv link for adding sec cam\n"); + return -ENOLINK; + } + + return __rtw89_cam_attach_sec_cam(rtwdev, rtwvif_link, + rtwsta_link, key, sec_cam); + } + + /* key_link_id < 0: MLD pairwise key */ + if (!rtwsta) { + rtw89_err(rtwdev, "No sta for adding MLD pairwise sec cam\n"); + return -EINVAL; + } + + rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) { + rtwvif_link = rtwsta_link->rtwvif_link; + ret = __rtw89_cam_attach_sec_cam(rtwdev, rtwvif_link, + rtwsta_link, key, sec_cam); + if (ret) + return ret; + } + + return 0; +} + static int rtw89_cam_sec_key_install(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif, struct ieee80211_sta *sta, @@ -485,10 +555,10 @@ void rtw89_cam_deinit_bssid_cam(struct rtw89_dev *rtwdev, clear_bit(bssid_cam->bssid_cam_idx, cam_info->bssid_cam_map); }
-void rtw89_cam_deinit(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif) +void rtw89_cam_deinit(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link) { - struct rtw89_addr_cam_entry *addr_cam = &rtwvif->addr_cam; - struct rtw89_bssid_cam_entry *bssid_cam = &rtwvif->bssid_cam; + struct rtw89_addr_cam_entry *addr_cam = &rtwvif_link->addr_cam; + struct rtw89_bssid_cam_entry *bssid_cam = &rtwvif_link->bssid_cam;
rtw89_cam_deinit_addr_cam(rtwdev, addr_cam); rtw89_cam_deinit_bssid_cam(rtwdev, bssid_cam); @@ -593,7 +663,7 @@ static int rtw89_cam_get_avail_bssid_cam(struct rtw89_dev *rtwdev, }
int rtw89_cam_init_bssid_cam(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, + struct rtw89_vif_link *rtwvif_link, struct rtw89_bssid_cam_entry *bssid_cam, const u8 *bssid) { @@ -613,7 +683,7 @@ int rtw89_cam_init_bssid_cam(struct rtw89_dev *rtwdev, }
bssid_cam->bssid_cam_idx = bssid_cam_idx; - bssid_cam->phy_idx = rtwvif->phy_idx; + bssid_cam->phy_idx = rtwvif_link->phy_idx; bssid_cam->len = BSSID_CAM_ENT_SIZE; bssid_cam->offset = 0; bssid_cam->valid = true; @@ -622,20 +692,21 @@ int rtw89_cam_init_bssid_cam(struct rtw89_dev *rtwdev, return 0; }
-void rtw89_cam_bssid_changed(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif) +void rtw89_cam_bssid_changed(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link) { - struct rtw89_bssid_cam_entry *bssid_cam = &rtwvif->bssid_cam; + struct rtw89_bssid_cam_entry *bssid_cam = &rtwvif_link->bssid_cam;
- ether_addr_copy(bssid_cam->bssid, rtwvif->bssid); + ether_addr_copy(bssid_cam->bssid, rtwvif_link->bssid); }
-int rtw89_cam_init(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif) +int rtw89_cam_init(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link) { - struct rtw89_addr_cam_entry *addr_cam = &rtwvif->addr_cam; - struct rtw89_bssid_cam_entry *bssid_cam = &rtwvif->bssid_cam; + struct rtw89_addr_cam_entry *addr_cam = &rtwvif_link->addr_cam; + struct rtw89_bssid_cam_entry *bssid_cam = &rtwvif_link->bssid_cam; int ret;
- ret = rtw89_cam_init_bssid_cam(rtwdev, rtwvif, bssid_cam, rtwvif->bssid); + ret = rtw89_cam_init_bssid_cam(rtwdev, rtwvif_link, bssid_cam, + rtwvif_link->bssid); if (ret) { rtw89_err(rtwdev, "failed to init bssid cam\n"); return ret; @@ -651,19 +722,27 @@ int rtw89_cam_init(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif) }
int rtw89_cam_fill_bssid_cam_info(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, - struct rtw89_sta *rtwsta, u8 *cmd) + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link, u8 *cmd) { - struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif); - struct rtw89_bssid_cam_entry *bssid_cam = rtw89_get_bssid_cam_of(rtwvif, rtwsta); - u8 bss_color = vif->bss_conf.he_bss_color.color; + struct rtw89_bssid_cam_entry *bssid_cam = rtw89_get_bssid_cam_of(rtwvif_link, + rtwsta_link); + struct ieee80211_bss_conf *bss_conf; + u8 bss_color; u8 bss_mask;
- if (vif->bss_conf.nontransmitted) + rcu_read_lock(); + + bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, false); + bss_color = bss_conf->he_bss_color.color; + + if (bss_conf->nontransmitted) bss_mask = RTW89_BSSID_MATCH_5_BYTES; else bss_mask = RTW89_BSSID_MATCH_ALL;
+ rcu_read_unlock(); + FWCMD_SET_ADDR_BSSID_IDX(cmd, bssid_cam->bssid_cam_idx); FWCMD_SET_ADDR_BSSID_OFFSET(cmd, bssid_cam->offset); FWCMD_SET_ADDR_BSSID_LEN(cmd, bssid_cam->len); @@ -694,19 +773,30 @@ static u8 rtw89_cam_addr_hash(u8 start, const u8 *addr) }
void rtw89_cam_fill_addr_cam_info(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, - struct rtw89_sta *rtwsta, + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link, const u8 *scan_mac_addr, u8 *cmd) { - struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif); - struct rtw89_addr_cam_entry *addr_cam = rtw89_get_addr_cam_of(rtwvif, rtwsta); - struct ieee80211_sta *sta = rtwsta_to_sta_safe(rtwsta); - const u8 *sma = scan_mac_addr ? scan_mac_addr : rtwvif->mac_addr; + struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link); + struct rtw89_addr_cam_entry *addr_cam = + rtw89_get_addr_cam_of(rtwvif_link, rtwsta_link); + struct ieee80211_sta *sta = rtwsta_link_to_sta_safe(rtwsta_link); + struct ieee80211_link_sta *link_sta; + const u8 *sma = scan_mac_addr ? scan_mac_addr : rtwvif_link->mac_addr; u8 sma_hash, tma_hash, addr_msk_start; u8 sma_start = 0; u8 tma_start = 0; - u8 *tma = sta ? sta->addr : rtwvif->bssid; + const u8 *tma; + + rcu_read_lock(); + + if (sta) { + link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true); + tma = link_sta->addr; + } else { + tma = rtwvif_link->bssid; + }
if (addr_cam->addr_mask != 0) { addr_msk_start = __ffs(addr_cam->addr_mask); @@ -723,10 +813,10 @@ void rtw89_cam_fill_addr_cam_info(struct rtw89_dev *rtwdev, FWCMD_SET_ADDR_LEN(cmd, addr_cam->len);
FWCMD_SET_ADDR_VALID(cmd, addr_cam->valid); - FWCMD_SET_ADDR_NET_TYPE(cmd, rtwvif->net_type); - FWCMD_SET_ADDR_BCN_HIT_COND(cmd, rtwvif->bcn_hit_cond); - FWCMD_SET_ADDR_HIT_RULE(cmd, rtwvif->hit_rule); - FWCMD_SET_ADDR_BB_SEL(cmd, rtwvif->phy_idx); + FWCMD_SET_ADDR_NET_TYPE(cmd, rtwvif_link->net_type); + FWCMD_SET_ADDR_BCN_HIT_COND(cmd, rtwvif_link->bcn_hit_cond); + FWCMD_SET_ADDR_HIT_RULE(cmd, rtwvif_link->hit_rule); + FWCMD_SET_ADDR_BB_SEL(cmd, rtwvif_link->phy_idx); FWCMD_SET_ADDR_ADDR_MASK(cmd, addr_cam->addr_mask); FWCMD_SET_ADDR_MASK_SEL(cmd, addr_cam->mask_sel); FWCMD_SET_ADDR_SMA_HASH(cmd, sma_hash); @@ -748,20 +838,21 @@ void rtw89_cam_fill_addr_cam_info(struct rtw89_dev *rtwdev, FWCMD_SET_ADDR_TMA4(cmd, tma[4]); FWCMD_SET_ADDR_TMA5(cmd, tma[5]);
- FWCMD_SET_ADDR_PORT_INT(cmd, rtwvif->port); - FWCMD_SET_ADDR_TSF_SYNC(cmd, rtwvif->port); - FWCMD_SET_ADDR_TF_TRS(cmd, rtwvif->trigger); - FWCMD_SET_ADDR_LSIG_TXOP(cmd, rtwvif->lsig_txop); - FWCMD_SET_ADDR_TGT_IND(cmd, rtwvif->tgt_ind); - FWCMD_SET_ADDR_FRM_TGT_IND(cmd, rtwvif->frm_tgt_ind); - FWCMD_SET_ADDR_MACID(cmd, rtwsta ? rtwsta->mac_id : rtwvif->mac_id); - if (rtwvif->net_type == RTW89_NET_TYPE_INFRA) + FWCMD_SET_ADDR_PORT_INT(cmd, rtwvif_link->port); + FWCMD_SET_ADDR_TSF_SYNC(cmd, rtwvif_link->port); + FWCMD_SET_ADDR_TF_TRS(cmd, rtwvif_link->trigger); + FWCMD_SET_ADDR_LSIG_TXOP(cmd, rtwvif_link->lsig_txop); + FWCMD_SET_ADDR_TGT_IND(cmd, rtwvif_link->tgt_ind); + FWCMD_SET_ADDR_FRM_TGT_IND(cmd, rtwvif_link->frm_tgt_ind); + FWCMD_SET_ADDR_MACID(cmd, rtwsta_link ? rtwsta_link->mac_id : + rtwvif_link->mac_id); + if (rtwvif_link->net_type == RTW89_NET_TYPE_INFRA) FWCMD_SET_ADDR_AID12(cmd, vif->cfg.aid & 0xfff); - else if (rtwvif->net_type == RTW89_NET_TYPE_AP_MODE) + else if (rtwvif_link->net_type == RTW89_NET_TYPE_AP_MODE) FWCMD_SET_ADDR_AID12(cmd, sta ? sta->aid & 0xfff : 0); - FWCMD_SET_ADDR_WOL_PATTERN(cmd, rtwvif->wowlan_pattern); - FWCMD_SET_ADDR_WOL_UC(cmd, rtwvif->wowlan_uc); - FWCMD_SET_ADDR_WOL_MAGIC(cmd, rtwvif->wowlan_magic); + FWCMD_SET_ADDR_WOL_PATTERN(cmd, rtwvif_link->wowlan_pattern); + FWCMD_SET_ADDR_WOL_UC(cmd, rtwvif_link->wowlan_uc); + FWCMD_SET_ADDR_WOL_MAGIC(cmd, rtwvif_link->wowlan_magic); FWCMD_SET_ADDR_WAPI(cmd, addr_cam->wapi); FWCMD_SET_ADDR_SEC_ENT_MODE(cmd, addr_cam->sec_ent_mode); FWCMD_SET_ADDR_SEC_ENT0_KEYID(cmd, addr_cam->sec_ent_keyid[0]); @@ -780,18 +871,22 @@ void rtw89_cam_fill_addr_cam_info(struct rtw89_dev *rtwdev, FWCMD_SET_ADDR_SEC_ENT4(cmd, addr_cam->sec_ent[4]); FWCMD_SET_ADDR_SEC_ENT5(cmd, addr_cam->sec_ent[5]); FWCMD_SET_ADDR_SEC_ENT6(cmd, addr_cam->sec_ent[6]); + + rcu_read_unlock(); }
void rtw89_cam_fill_dctl_sec_cam_info_v1(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, - struct rtw89_sta *rtwsta, + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link, struct rtw89_h2c_dctlinfo_ud_v1 *h2c) { - struct rtw89_addr_cam_entry *addr_cam = rtw89_get_addr_cam_of(rtwvif, rtwsta); + struct rtw89_addr_cam_entry *addr_cam = + rtw89_get_addr_cam_of(rtwvif_link, rtwsta_link); struct rtw89_wow_param *rtw_wow = &rtwdev->wow; u8 *ptk_tx_iv = rtw_wow->key_info.ptk_tx_iv;
- h2c->c0 = le32_encode_bits(rtwsta ? rtwsta->mac_id : rtwvif->mac_id, + h2c->c0 = le32_encode_bits(rtwsta_link ? rtwsta_link->mac_id : + rtwvif_link->mac_id, DCTLINFO_V1_C0_MACID) | le32_encode_bits(1, DCTLINFO_V1_C0_OP);
@@ -862,15 +957,17 @@ void rtw89_cam_fill_dctl_sec_cam_info_v1(struct rtw89_dev *rtwdev, }
void rtw89_cam_fill_dctl_sec_cam_info_v2(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, - struct rtw89_sta *rtwsta, + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link, struct rtw89_h2c_dctlinfo_ud_v2 *h2c) { - struct rtw89_addr_cam_entry *addr_cam = rtw89_get_addr_cam_of(rtwvif, rtwsta); + struct rtw89_addr_cam_entry *addr_cam = + rtw89_get_addr_cam_of(rtwvif_link, rtwsta_link); struct rtw89_wow_param *rtw_wow = &rtwdev->wow; u8 *ptk_tx_iv = rtw_wow->key_info.ptk_tx_iv;
- h2c->c0 = le32_encode_bits(rtwsta ? rtwsta->mac_id : rtwvif->mac_id, + h2c->c0 = le32_encode_bits(rtwsta_link ? rtwsta_link->mac_id : + rtwvif_link->mac_id, DCTLINFO_V2_C0_MACID) | le32_encode_bits(1, DCTLINFO_V2_C0_OP);
diff --git a/drivers/net/wireless/realtek/rtw89/cam.h b/drivers/net/wireless/realtek/rtw89/cam.h index 5d7b624c2dd4..a6f72edd30fe 100644 --- a/drivers/net/wireless/realtek/rtw89/cam.h +++ b/drivers/net/wireless/realtek/rtw89/cam.h @@ -526,34 +526,34 @@ struct rtw89_h2c_dctlinfo_ud_v2 { #define DCTLINFO_V2_W12_MLD_TA_BSSID_H_V1 GENMASK(15, 0) #define DCTLINFO_V2_W12_ALL GENMASK(15, 0)
-int rtw89_cam_init(struct rtw89_dev *rtwdev, struct rtw89_vif *vif); -void rtw89_cam_deinit(struct rtw89_dev *rtwdev, struct rtw89_vif *vif); +int rtw89_cam_init(struct rtw89_dev *rtwdev, struct rtw89_vif_link *vif); +void rtw89_cam_deinit(struct rtw89_dev *rtwdev, struct rtw89_vif_link *vif); int rtw89_cam_init_addr_cam(struct rtw89_dev *rtwdev, struct rtw89_addr_cam_entry *addr_cam, const struct rtw89_bssid_cam_entry *bssid_cam); void rtw89_cam_deinit_addr_cam(struct rtw89_dev *rtwdev, struct rtw89_addr_cam_entry *addr_cam); int rtw89_cam_init_bssid_cam(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, + struct rtw89_vif_link *rtwvif_link, struct rtw89_bssid_cam_entry *bssid_cam, const u8 *bssid); void rtw89_cam_deinit_bssid_cam(struct rtw89_dev *rtwdev, struct rtw89_bssid_cam_entry *bssid_cam); void rtw89_cam_fill_addr_cam_info(struct rtw89_dev *rtwdev, - struct rtw89_vif *vif, - struct rtw89_sta *rtwsta, + struct rtw89_vif_link *vif, + struct rtw89_sta_link *rtwsta_link, const u8 *scan_mac_addr, u8 *cmd); void rtw89_cam_fill_dctl_sec_cam_info_v1(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, - struct rtw89_sta *rtwsta, + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link, struct rtw89_h2c_dctlinfo_ud_v1 *h2c); void rtw89_cam_fill_dctl_sec_cam_info_v2(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, - struct rtw89_sta *rtwsta, + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link, struct rtw89_h2c_dctlinfo_ud_v2 *h2c); int rtw89_cam_fill_bssid_cam_info(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, - struct rtw89_sta *rtwsta, u8 *cmd); + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link, u8 *cmd); int rtw89_cam_sec_key_add(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif, struct ieee80211_sta *sta, @@ -564,6 +564,6 @@ int rtw89_cam_sec_key_del(struct rtw89_dev *rtwdev, struct ieee80211_key_conf *key, bool inform_fw); void rtw89_cam_bssid_changed(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif); + struct rtw89_vif_link *rtwvif_link); void rtw89_cam_reset_keys(struct rtw89_dev *rtwdev); #endif diff --git a/drivers/net/wireless/realtek/rtw89/chan.c b/drivers/net/wireless/realtek/rtw89/chan.c index 7070c85e2c28..ba6332da8019 100644 --- a/drivers/net/wireless/realtek/rtw89/chan.c +++ b/drivers/net/wireless/realtek/rtw89/chan.c @@ -234,6 +234,18 @@ void rtw89_entity_init(struct rtw89_dev *rtwdev) rtw89_config_default_chandef(rtwdev); }
+static bool rtw89_vif_is_active_role(struct rtw89_vif *rtwvif) +{ + struct rtw89_vif_link *rtwvif_link; + unsigned int link_id; + + rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) + if (rtwvif_link->chanctx_assigned) + return true; + + return false; +} + static void rtw89_entity_calculate_weight(struct rtw89_dev *rtwdev, struct rtw89_entity_weight *w) { @@ -255,7 +267,7 @@ static void rtw89_entity_calculate_weight(struct rtw89_dev *rtwdev, }
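The hunk above replaces the per-vif chanctx_assigned check with rtw89_vif_is_active_role(), which walks every instantiated link of the vif and reports whether any of them holds a channel context. A minimal standalone sketch of that shape, using invented toy_vif/toy_link types rather than the driver's rtw89_vif/rtw89_vif_link definitions:

/* Illustrative only: toy_vif/toy_link are invented stand-ins, not the
 * driver's rtw89_vif/rtw89_vif_link structures. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define TOY_MAX_LINKS 2

struct toy_link {
	bool chanctx_assigned;
};

struct toy_vif {
	struct toy_link *links[TOY_MAX_LINKS]; /* NULL when not instantiated */
};

/* Same shape as rtw89_vif_is_active_role(): the vif counts as an active
 * role if any instantiated link holds a channel context. */
static bool toy_vif_is_active_role(const struct toy_vif *vif)
{
	size_t i;

	for (i = 0; i < TOY_MAX_LINKS; i++)
		if (vif->links[i] && vif->links[i]->chanctx_assigned)
			return true;

	return false;
}

int main(void)
{
	struct toy_link l0 = { .chanctx_assigned = false };
	struct toy_link l1 = { .chanctx_assigned = true };
	struct toy_vif vif = { .links = { &l0, &l1 } };

	printf("active role: %s\n", toy_vif_is_active_role(&vif) ? "yes" : "no");
	return 0;
}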
rtw89_for_each_rtwvif(rtwdev, rtwvif) { - if (rtwvif->chanctx_assigned) + if (rtw89_vif_is_active_role(rtwvif)) w->active_roles++; } } @@ -387,9 +399,9 @@ int rtw89_iterate_mcc_roles(struct rtw89_dev *rtwdev, static u32 rtw89_mcc_get_tbtt_ofst(struct rtw89_dev *rtwdev, struct rtw89_mcc_role *role, u64 tsf) { - struct rtw89_vif *rtwvif = role->rtwvif; + struct rtw89_vif_link *rtwvif_link = role->rtwvif_link; u32 bcn_intvl_us = ieee80211_tu_to_usec(role->beacon_interval); - u64 sync_tsf = READ_ONCE(rtwvif->sync_bcn_tsf); + u64 sync_tsf = READ_ONCE(rtwvif_link->sync_bcn_tsf); u32 remainder;
if (tsf < sync_tsf) { @@ -413,8 +425,8 @@ static int __mcc_fw_req_tsf(struct rtw89_dev *rtwdev, u64 *tsf_ref, u64 *tsf_aux int ret;
req.group = mcc->group; - req.macid_x = ref->rtwvif->mac_id; - req.macid_y = aux->rtwvif->mac_id; + req.macid_x = ref->rtwvif_link->mac_id; + req.macid_y = aux->rtwvif_link->mac_id; ret = rtw89_fw_h2c_mcc_req_tsf(rtwdev, &req, &rpt); if (ret) { rtw89_debug(rtwdev, RTW89_DBG_CHAN, @@ -440,10 +452,10 @@ static int __mrc_fw_req_tsf(struct rtw89_dev *rtwdev, u64 *tsf_ref, u64 *tsf_aux BUILD_BUG_ON(RTW89_MAC_MRC_MAX_REQ_TSF_NUM < NUM_OF_RTW89_MCC_ROLES);
arg.num = 2; - arg.infos[0].band = ref->rtwvif->mac_idx; - arg.infos[0].port = ref->rtwvif->port; - arg.infos[1].band = aux->rtwvif->mac_idx; - arg.infos[1].port = aux->rtwvif->port; + arg.infos[0].band = ref->rtwvif_link->mac_idx; + arg.infos[0].port = ref->rtwvif_link->port; + arg.infos[1].band = aux->rtwvif_link->mac_idx; + arg.infos[1].port = aux->rtwvif_link->port;
ret = rtw89_fw_h2c_mrc_req_tsf(rtwdev, &arg, &rpt); if (ret) { @@ -522,23 +534,31 @@ u32 rtw89_mcc_role_fw_macid_bitmap_to_u32(struct rtw89_mcc_role *mcc_role)
static void rtw89_mcc_role_macid_sta_iter(void *data, struct ieee80211_sta *sta) { - struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv; - struct rtw89_vif *rtwvif = rtwsta->rtwvif; struct rtw89_mcc_role *mcc_role = data; - struct rtw89_vif *target = mcc_role->rtwvif; + struct rtw89_vif *target = mcc_role->rtwvif_link->rtwvif; + struct rtw89_sta *rtwsta = sta_to_rtwsta(sta); + struct rtw89_vif *rtwvif = rtwsta->rtwvif; + struct rtw89_dev *rtwdev = rtwsta->rtwdev; + struct rtw89_sta_link *rtwsta_link;
if (rtwvif != target) return;
- rtw89_mcc_role_fw_macid_bitmap_set_bit(mcc_role, rtwsta->mac_id); + rtwsta_link = rtw89_sta_get_link_inst(rtwsta, 0); + if (unlikely(!rtwsta_link)) { + rtw89_err(rtwdev, "mcc sta macid: find no link on HW-0\n"); + return; + } + + rtw89_mcc_role_fw_macid_bitmap_set_bit(mcc_role, rtwsta_link->mac_id); }
static void rtw89_mcc_fill_role_macid_bitmap(struct rtw89_dev *rtwdev, struct rtw89_mcc_role *mcc_role) { - struct rtw89_vif *rtwvif = mcc_role->rtwvif; + struct rtw89_vif_link *rtwvif_link = mcc_role->rtwvif_link;
- rtw89_mcc_role_fw_macid_bitmap_set_bit(mcc_role, rtwvif->mac_id); + rtw89_mcc_role_fw_macid_bitmap_set_bit(mcc_role, rtwvif_link->mac_id); ieee80211_iterate_stations_atomic(rtwdev->hw, rtw89_mcc_role_macid_sta_iter, mcc_role); @@ -564,8 +584,9 @@ static void rtw89_mcc_fill_role_policy(struct rtw89_dev *rtwdev, static void rtw89_mcc_fill_role_limit(struct rtw89_dev *rtwdev, struct rtw89_mcc_role *mcc_role) { - struct ieee80211_vif *vif = rtwvif_to_vif(mcc_role->rtwvif); + struct rtw89_vif_link *rtwvif_link = mcc_role->rtwvif_link; struct ieee80211_p2p_noa_desc *noa_desc; + struct ieee80211_bss_conf *bss_conf; u32 bcn_intvl_us = ieee80211_tu_to_usec(mcc_role->beacon_interval); u32 max_toa_us, max_tob_us, max_dur_us; u32 start_time, interval, duration; @@ -576,13 +597,18 @@ static void rtw89_mcc_fill_role_limit(struct rtw89_dev *rtwdev, if (!mcc_role->is_go && !mcc_role->is_gc) return;
+ rcu_read_lock(); + + bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true); + /* find the first periodic NoA */ for (i = 0; i < RTW89_P2P_MAX_NOA_NUM; i++) { - noa_desc = &vif->bss_conf.p2p_noa_attr.desc[i]; + noa_desc = &bss_conf->p2p_noa_attr.desc[i]; if (noa_desc->count == 255) goto fill; }
+ rcu_read_unlock(); return;
fill: @@ -590,6 +616,8 @@ static void rtw89_mcc_fill_role_limit(struct rtw89_dev *rtwdev, interval = le32_to_cpu(noa_desc->interval); duration = le32_to_cpu(noa_desc->duration);
+ rcu_read_unlock(); + if (interval != bcn_intvl_us) { rtw89_debug(rtwdev, RTW89_DBG_CHAN, "MCC role limit: mismatch interval: %d vs. %d\n", @@ -597,7 +625,7 @@ static void rtw89_mcc_fill_role_limit(struct rtw89_dev *rtwdev, return; }
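In the role-limit hunk above, the P2P NoA descriptors are now read from the link's bss_conf under rcu_read_lock(), with start_time/interval/duration copied out before the lock is dropped. A rough standalone sketch of the descriptor scan and the interval check (1 TU = 1024 us); toy_noa_desc and its field layout are simplified stand-ins, not ieee80211_p2p_noa_desc:

/* Illustrative only: toy_noa_desc is a simplified stand-in for
 * ieee80211_p2p_noa_desc, with values already in host byte order. */
#include <stdint.h>
#include <stdio.h>

#define TOY_MAX_NOA 4

struct toy_noa_desc {
	uint8_t count;      /* 255 marks a periodic (continuous) NoA */
	uint32_t interval;  /* microseconds */
	uint32_t duration;  /* microseconds */
};

/* Find the first periodic NoA and accept it only if its interval matches
 * the beacon interval; in the driver this walk happens under
 * rcu_read_lock(), and the fields are copied out before unlocking. */
static const struct toy_noa_desc *
toy_find_periodic_noa(const struct toy_noa_desc *desc, int n, uint32_t bcn_intvl_tu)
{
	uint32_t bcn_intvl_us = bcn_intvl_tu * 1024; /* 1 TU = 1024 us */
	int i;

	for (i = 0; i < n; i++) {
		if (desc[i].count != 255)
			continue;
		if (desc[i].interval != bcn_intvl_us)
			return NULL; /* mismatched interval: give up, as the hunk does */
		return &desc[i];
	}

	return NULL;
}

int main(void)
{
	struct toy_noa_desc descs[TOY_MAX_NOA] = {
		{ .count = 10,  .interval = 51200,  .duration = 10240 },
		{ .count = 255, .interval = 102400, .duration = 20480 },
	};
	const struct toy_noa_desc *noa = toy_find_periodic_noa(descs, TOY_MAX_NOA, 100);

	if (noa)
		printf("periodic NoA: %u us every %u us\n",
		       (unsigned int)noa->duration, (unsigned int)noa->interval);
	return 0;
}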
- ret = rtw89_mac_port_get_tsf(rtwdev, mcc_role->rtwvif, &tsf); + ret = rtw89_mac_port_get_tsf(rtwdev, rtwvif_link, &tsf); if (ret) { rtw89_warn(rtwdev, "MCC failed to get port tsf: %d\n", ret); return; @@ -632,15 +660,21 @@ static void rtw89_mcc_fill_role_limit(struct rtw89_dev *rtwdev, }
static int rtw89_mcc_fill_role(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, + struct rtw89_vif_link *rtwvif_link, struct rtw89_mcc_role *role) { - struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif); + struct ieee80211_bss_conf *bss_conf; const struct rtw89_chan *chan;
memset(role, 0, sizeof(*role)); - role->rtwvif = rtwvif; - role->beacon_interval = vif->bss_conf.beacon_int; + role->rtwvif_link = rtwvif_link; + + rcu_read_lock(); + + bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true); + role->beacon_interval = bss_conf->beacon_int; + + rcu_read_unlock();
if (!role->beacon_interval) { rtw89_warn(rtwdev, @@ -650,10 +684,10 @@ static int rtw89_mcc_fill_role(struct rtw89_dev *rtwdev,
role->duration = role->beacon_interval / 2;
- chan = rtw89_chan_get(rtwdev, rtwvif->chanctx_idx); + chan = rtw89_chan_get(rtwdev, rtwvif_link->chanctx_idx); role->is_2ghz = chan->band_type == RTW89_BAND_2G; - role->is_go = rtwvif->wifi_role == RTW89_WIFI_ROLE_P2P_GO; - role->is_gc = rtwvif->wifi_role == RTW89_WIFI_ROLE_P2P_CLIENT; + role->is_go = rtwvif_link->wifi_role == RTW89_WIFI_ROLE_P2P_GO; + role->is_gc = rtwvif_link->wifi_role == RTW89_WIFI_ROLE_P2P_CLIENT;
rtw89_mcc_fill_role_macid_bitmap(rtwdev, role); rtw89_mcc_fill_role_policy(rtwdev, role); @@ -678,7 +712,7 @@ static void rtw89_mcc_fill_bt_role(struct rtw89_dev *rtwdev) }
struct rtw89_mcc_fill_role_selector { - struct rtw89_vif *bind_vif[NUM_OF_RTW89_CHANCTX]; + struct rtw89_vif_link *bind_vif[NUM_OF_RTW89_CHANCTX]; };
static_assert((u8)NUM_OF_RTW89_CHANCTX >= NUM_OF_RTW89_MCC_ROLES); @@ -689,7 +723,7 @@ static int rtw89_mcc_fill_role_iterator(struct rtw89_dev *rtwdev, void *data) { struct rtw89_mcc_fill_role_selector *sel = data; - struct rtw89_vif *role_vif = sel->bind_vif[ordered_idx]; + struct rtw89_vif_link *role_vif = sel->bind_vif[ordered_idx]; int ret;
if (!role_vif) { @@ -712,21 +746,28 @@ static int rtw89_mcc_fill_role_iterator(struct rtw89_dev *rtwdev, static int rtw89_mcc_fill_all_roles(struct rtw89_dev *rtwdev) { struct rtw89_mcc_fill_role_selector sel = {}; + struct rtw89_vif_link *rtwvif_link; struct rtw89_vif *rtwvif; int ret;
rtw89_for_each_rtwvif(rtwdev, rtwvif) { - if (!rtwvif->chanctx_assigned) + if (!rtw89_vif_is_active_role(rtwvif)) + continue; + + rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0); + if (unlikely(!rtwvif_link)) { + rtw89_err(rtwdev, "mcc fill roles: find no link on HW-0\n"); continue; + }
- if (sel.bind_vif[rtwvif->chanctx_idx]) { + if (sel.bind_vif[rtwvif_link->chanctx_idx]) { rtw89_warn(rtwdev, "MCC skip extra vif <macid %d> on chanctx[%d]\n", - rtwvif->mac_id, rtwvif->chanctx_idx); + rtwvif_link->mac_id, rtwvif_link->chanctx_idx); continue; }
- sel.bind_vif[rtwvif->chanctx_idx] = rtwvif; + sel.bind_vif[rtwvif_link->chanctx_idx] = rtwvif_link; }
ret = rtw89_iterate_mcc_roles(rtwdev, rtw89_mcc_fill_role_iterator, &sel); @@ -754,13 +795,13 @@ static void rtw89_mcc_assign_pattern(struct rtw89_dev *rtwdev, memset(&pattern->courtesy, 0, sizeof(pattern->courtesy));
if (pattern->tob_aux <= 0 || pattern->toa_aux <= 0) { - pattern->courtesy.macid_tgt = aux->rtwvif->mac_id; - pattern->courtesy.macid_src = ref->rtwvif->mac_id; + pattern->courtesy.macid_tgt = aux->rtwvif_link->mac_id; + pattern->courtesy.macid_src = ref->rtwvif_link->mac_id; pattern->courtesy.slot_num = RTW89_MCC_DFLT_COURTESY_SLOT; pattern->courtesy.enable = true; } else if (pattern->tob_ref <= 0 || pattern->toa_ref <= 0) { - pattern->courtesy.macid_tgt = ref->rtwvif->mac_id; - pattern->courtesy.macid_src = aux->rtwvif->mac_id; + pattern->courtesy.macid_tgt = ref->rtwvif_link->mac_id; + pattern->courtesy.macid_src = aux->rtwvif_link->mac_id; pattern->courtesy.slot_num = RTW89_MCC_DFLT_COURTESY_SLOT; pattern->courtesy.enable = true; } @@ -1263,7 +1304,7 @@ static void rtw89_mcc_sync_tbtt(struct rtw89_dev *rtwdev, u64 tsf_src; int ret;
- ret = rtw89_mac_port_get_tsf(rtwdev, src->rtwvif, &tsf_src); + ret = rtw89_mac_port_get_tsf(rtwdev, src->rtwvif_link, &tsf_src); if (ret) { rtw89_warn(rtwdev, "MCC failed to get port tsf: %d\n", ret); return; @@ -1280,12 +1321,12 @@ static void rtw89_mcc_sync_tbtt(struct rtw89_dev *rtwdev, div_u64_rem(tbtt_tgt, bcn_intvl_src_us, &remainder); tsf_ofst_tgt = bcn_intvl_src_us - remainder;
- config->sync.macid_tgt = tgt->rtwvif->mac_id; - config->sync.band_tgt = tgt->rtwvif->mac_idx; - config->sync.port_tgt = tgt->rtwvif->port; - config->sync.macid_src = src->rtwvif->mac_id; - config->sync.band_src = src->rtwvif->mac_idx; - config->sync.port_src = src->rtwvif->port; + config->sync.macid_tgt = tgt->rtwvif_link->mac_id; + config->sync.band_tgt = tgt->rtwvif_link->mac_idx; + config->sync.port_tgt = tgt->rtwvif_link->port; + config->sync.macid_src = src->rtwvif_link->mac_id; + config->sync.band_src = src->rtwvif_link->mac_idx; + config->sync.port_src = src->rtwvif_link->port; config->sync.offset = tsf_ofst_tgt / 1024; config->sync.enable = true;
@@ -1294,7 +1335,7 @@ static void rtw89_mcc_sync_tbtt(struct rtw89_dev *rtwdev, config->sync.macid_tgt, config->sync.macid_src, config->sync.offset);
- rtw89_mac_port_tsf_sync(rtwdev, tgt->rtwvif, src->rtwvif, + rtw89_mac_port_tsf_sync(rtwdev, tgt->rtwvif_link, src->rtwvif_link, config->sync.offset); }
@@ -1305,13 +1346,13 @@ static int rtw89_mcc_fill_start_tsf(struct rtw89_dev *rtwdev) struct rtw89_mcc_config *config = &mcc->config; u32 bcn_intvl_ref_us = ieee80211_tu_to_usec(ref->beacon_interval); u32 tob_ref_us = ieee80211_tu_to_usec(config->pattern.tob_ref); - struct rtw89_vif *rtwvif = ref->rtwvif; + struct rtw89_vif_link *rtwvif_link = ref->rtwvif_link; u64 tsf, start_tsf; u32 cur_tbtt_ofst; u64 min_time; int ret;
- ret = rtw89_mac_port_get_tsf(rtwdev, rtwvif, &tsf); + ret = rtw89_mac_port_get_tsf(rtwdev, rtwvif_link, &tsf); if (ret) { rtw89_warn(rtwdev, "MCC failed to get port tsf: %d\n", ret); return ret; @@ -1390,13 +1431,13 @@ static int __mcc_fw_add_role(struct rtw89_dev *rtwdev, struct rtw89_mcc_role *ro const struct rtw89_chan *chan; int ret;
- chan = rtw89_chan_get(rtwdev, role->rtwvif->chanctx_idx); + chan = rtw89_chan_get(rtwdev, role->rtwvif_link->chanctx_idx); req.central_ch_seg0 = chan->channel; req.primary_ch = chan->primary_channel; req.bandwidth = chan->band_width; req.ch_band_type = chan->band_type;
- req.macid = role->rtwvif->mac_id; + req.macid = role->rtwvif_link->mac_id; req.group = mcc->group; req.c2h_rpt = policy->c2h_rpt; req.tx_null_early = policy->tx_null_early; @@ -1421,7 +1462,7 @@ static int __mcc_fw_add_role(struct rtw89_dev *rtwdev, struct rtw89_mcc_role *ro }
ret = rtw89_fw_h2c_mcc_macid_bitmap(rtwdev, mcc->group, - role->rtwvif->mac_id, + role->rtwvif_link->mac_id, role->macid_bitmap); if (ret) { rtw89_debug(rtwdev, RTW89_DBG_CHAN, @@ -1448,7 +1489,7 @@ void __mrc_fw_add_role(struct rtw89_dev *rtwdev, struct rtw89_mcc_role *role, slot_arg->duration = role->duration; slot_arg->role_num = 1;
- chan = rtw89_chan_get(rtwdev, role->rtwvif->chanctx_idx); + chan = rtw89_chan_get(rtwdev, role->rtwvif_link->chanctx_idx);
slot_arg->roles[0].role_type = RTW89_H2C_MRC_ROLE_WIFI; slot_arg->roles[0].is_master = role == ref; @@ -1458,7 +1499,7 @@ void __mrc_fw_add_role(struct rtw89_dev *rtwdev, struct rtw89_mcc_role *role, slot_arg->roles[0].primary_ch = chan->primary_channel; slot_arg->roles[0].en_tx_null = !policy->dis_tx_null; slot_arg->roles[0].null_early = policy->tx_null_early; - slot_arg->roles[0].macid = role->rtwvif->mac_id; + slot_arg->roles[0].macid = role->rtwvif_link->mac_id; slot_arg->roles[0].macid_main_bitmap = rtw89_mcc_role_fw_macid_bitmap_to_u32(role); } @@ -1569,7 +1610,7 @@ static int __mcc_fw_start(struct rtw89_dev *rtwdev, bool replace) } }
- req.macid = ref->rtwvif->mac_id; + req.macid = ref->rtwvif_link->mac_id; req.tsf_high = config->start_tsf >> 32; req.tsf_low = config->start_tsf;
@@ -1598,7 +1639,7 @@ static void __mrc_fw_add_courtesy(struct rtw89_dev *rtwdev, if (!courtesy->enable) return;
- if (courtesy->macid_src == ref->rtwvif->mac_id) { + if (courtesy->macid_src == ref->rtwvif_link->mac_id) { slot_arg_src = &arg->slots[ref->slot_idx]; slot_idx_tgt = aux->slot_idx; } else { @@ -1717,9 +1758,9 @@ static int __mcc_fw_set_duration_no_bt(struct rtw89_dev *rtwdev, bool sync_chang struct rtw89_fw_mcc_duration req = { .group = mcc->group, .btc_in_group = false, - .start_macid = ref->rtwvif->mac_id, - .macid_x = ref->rtwvif->mac_id, - .macid_y = aux->rtwvif->mac_id, + .start_macid = ref->rtwvif_link->mac_id, + .macid_x = ref->rtwvif_link->mac_id, + .macid_y = aux->rtwvif_link->mac_id, .duration_x = ref->duration, .duration_y = aux->duration, .start_tsf_high = config->start_tsf >> 32, @@ -1813,18 +1854,18 @@ static void rtw89_mcc_handle_beacon_noa(struct rtw89_dev *rtwdev, bool enable) struct ieee80211_p2p_noa_desc noa_desc = {}; u64 start_time = config->start_tsf; u32 interval = config->mcc_interval; - struct rtw89_vif *rtwvif_go; + struct rtw89_vif_link *rtwvif_go; u32 duration;
if (mcc->mode != RTW89_MCC_MODE_GO_STA) return;
if (ref->is_go) { - rtwvif_go = ref->rtwvif; + rtwvif_go = ref->rtwvif_link; start_time += ieee80211_tu_to_usec(ref->duration); duration = config->mcc_interval - ref->duration; } else if (aux->is_go) { - rtwvif_go = aux->rtwvif; + rtwvif_go = aux->rtwvif_link; start_time += ieee80211_tu_to_usec(pattern->tob_ref) + ieee80211_tu_to_usec(config->beacon_offset) + ieee80211_tu_to_usec(pattern->toa_aux); @@ -1865,9 +1906,9 @@ static void rtw89_mcc_start_beacon_noa(struct rtw89_dev *rtwdev) return;
if (ref->is_go) - rtw89_fw_h2c_tsf32_toggle(rtwdev, ref->rtwvif, true); + rtw89_fw_h2c_tsf32_toggle(rtwdev, ref->rtwvif_link, true); else if (aux->is_go) - rtw89_fw_h2c_tsf32_toggle(rtwdev, aux->rtwvif, true); + rtw89_fw_h2c_tsf32_toggle(rtwdev, aux->rtwvif_link, true);
rtw89_mcc_handle_beacon_noa(rtwdev, true); } @@ -1882,9 +1923,9 @@ static void rtw89_mcc_stop_beacon_noa(struct rtw89_dev *rtwdev) return;
if (ref->is_go) - rtw89_fw_h2c_tsf32_toggle(rtwdev, ref->rtwvif, false); + rtw89_fw_h2c_tsf32_toggle(rtwdev, ref->rtwvif_link, false); else if (aux->is_go) - rtw89_fw_h2c_tsf32_toggle(rtwdev, aux->rtwvif, false); + rtw89_fw_h2c_tsf32_toggle(rtwdev, aux->rtwvif_link, false);
rtw89_mcc_handle_beacon_noa(rtwdev, false); } @@ -1942,7 +1983,7 @@ struct rtw89_mcc_stop_sel { static void rtw89_mcc_stop_sel_fill(struct rtw89_mcc_stop_sel *sel, const struct rtw89_mcc_role *mcc_role) { - sel->mac_id = mcc_role->rtwvif->mac_id; + sel->mac_id = mcc_role->rtwvif_link->mac_id; sel->slot_idx = mcc_role->slot_idx; }
@@ -1953,7 +1994,7 @@ static int rtw89_mcc_stop_sel_iterator(struct rtw89_dev *rtwdev, { struct rtw89_mcc_stop_sel *sel = data;
- if (!mcc_role->rtwvif->chanctx_assigned) + if (!mcc_role->rtwvif_link->chanctx_assigned) return 0;
rtw89_mcc_stop_sel_fill(sel, mcc_role); @@ -2081,7 +2122,7 @@ static int __mcc_fw_upd_macid_bitmap(struct rtw89_dev *rtwdev, int ret;
ret = rtw89_fw_h2c_mcc_macid_bitmap(rtwdev, mcc->group, - upd->rtwvif->mac_id, + upd->rtwvif_link->mac_id, upd->macid_bitmap); if (ret) { rtw89_debug(rtwdev, RTW89_DBG_CHAN, @@ -2106,7 +2147,7 @@ static int __mrc_fw_upd_macid_bitmap(struct rtw89_dev *rtwdev, int i;
arg.sch_idx = mcc->group; - arg.macid = upd->rtwvif->mac_id; + arg.macid = upd->rtwvif_link->mac_id;
for (i = 0; i < 32; i++) { if (add & BIT(i)) { @@ -2144,7 +2185,7 @@ static int rtw89_mcc_upd_map_iterator(struct rtw89_dev *rtwdev, void *data) { struct rtw89_mcc_role upd = { - .rtwvif = mcc_role->rtwvif, + .rtwvif_link = mcc_role->rtwvif_link, }; int ret;
@@ -2370,6 +2411,24 @@ void rtw89_chanctx_proceed(struct rtw89_dev *rtwdev) rtw89_queue_chanctx_work(rtwdev); }
+static void __rtw89_swap_chanctx(struct rtw89_vif *rtwvif, + enum rtw89_chanctx_idx idx1, + enum rtw89_chanctx_idx idx2) +{ + struct rtw89_vif_link *rtwvif_link; + unsigned int link_id; + + rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) { + if (!rtwvif_link->chanctx_assigned) + continue; + + if (rtwvif_link->chanctx_idx == idx1) + rtwvif_link->chanctx_idx = idx2; + else if (rtwvif_link->chanctx_idx == idx2) + rtwvif_link->chanctx_idx = idx1; + } +} + static void rtw89_swap_chanctx(struct rtw89_dev *rtwdev, enum rtw89_chanctx_idx idx1, enum rtw89_chanctx_idx idx2) @@ -2386,14 +2445,8 @@ static void rtw89_swap_chanctx(struct rtw89_dev *rtwdev,
swap(hal->chanctx[idx1], hal->chanctx[idx2]);
- rtw89_for_each_rtwvif(rtwdev, rtwvif) { - if (!rtwvif->chanctx_assigned) - continue; - if (rtwvif->chanctx_idx == idx1) - rtwvif->chanctx_idx = idx2; - else if (rtwvif->chanctx_idx == idx2) - rtwvif->chanctx_idx = idx1; - } + rtw89_for_each_rtwvif(rtwdev, rtwvif) + __rtw89_swap_chanctx(rtwvif, idx1, idx2);
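The swap hunk above moves the per-vif remap into __rtw89_swap_chanctx(), which visits each assigned link and flips any link that referenced one of the two exchanged context slots. A small standalone model of that remap; the toy_* types and plain integer context indices are stand-ins for the driver's hal->chanctx[] bookkeeping:

/* Illustrative only: per-link integer indices stand in for the driver's
 * chanctx slots; toy_vif/toy_link are not the real structures. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define TOY_MAX_LINKS 2

struct toy_link {
	bool chanctx_assigned;
	int chanctx_idx;
};

struct toy_vif {
	struct toy_link *links[TOY_MAX_LINKS];
};

/* Same idea as __rtw89_swap_chanctx(): after the context slots idx1 and
 * idx2 trade places, every assigned link that referenced one of them is
 * retargeted to the other. */
static void toy_swap_chanctx(struct toy_vif *vif, int idx1, int idx2)
{
	size_t i;

	for (i = 0; i < TOY_MAX_LINKS; i++) {
		struct toy_link *link = vif->links[i];

		if (!link || !link->chanctx_assigned)
			continue;

		if (link->chanctx_idx == idx1)
			link->chanctx_idx = idx2;
		else if (link->chanctx_idx == idx2)
			link->chanctx_idx = idx1;
	}
}

int main(void)
{
	struct toy_link l0 = { .chanctx_assigned = true, .chanctx_idx = 0 };
	struct toy_link l1 = { .chanctx_assigned = true, .chanctx_idx = 1 };
	struct toy_vif vif = { .links = { &l0, &l1 } };

	toy_swap_chanctx(&vif, 0, 1);
	printf("link0 -> ctx %d, link1 -> ctx %d\n", l0.chanctx_idx, l1.chanctx_idx);
	return 0;
}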
cur = atomic_read(&hal->roc_chanctx_idx); if (cur == idx1) @@ -2444,14 +2497,14 @@ void rtw89_chanctx_ops_change(struct rtw89_dev *rtwdev, }
int rtw89_chanctx_ops_assign_vif(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, + struct rtw89_vif_link *rtwvif_link, struct ieee80211_chanctx_conf *ctx) { struct rtw89_chanctx_cfg *cfg = (struct rtw89_chanctx_cfg *)ctx->drv_priv; struct rtw89_entity_weight w = {};
- rtwvif->chanctx_idx = cfg->idx; - rtwvif->chanctx_assigned = true; + rtwvif_link->chanctx_idx = cfg->idx; + rtwvif_link->chanctx_assigned = true; cfg->ref_count++;
if (cfg->idx == RTW89_CHANCTX_0) @@ -2469,7 +2522,7 @@ int rtw89_chanctx_ops_assign_vif(struct rtw89_dev *rtwdev, }
void rtw89_chanctx_ops_unassign_vif(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, + struct rtw89_vif_link *rtwvif_link, struct ieee80211_chanctx_conf *ctx) { struct rtw89_chanctx_cfg *cfg = (struct rtw89_chanctx_cfg *)ctx->drv_priv; @@ -2479,8 +2532,8 @@ void rtw89_chanctx_ops_unassign_vif(struct rtw89_dev *rtwdev, enum rtw89_entity_mode new; int ret;
- rtwvif->chanctx_idx = RTW89_CHANCTX_0; - rtwvif->chanctx_assigned = false; + rtwvif_link->chanctx_idx = RTW89_CHANCTX_0; + rtwvif_link->chanctx_assigned = false; cfg->ref_count--;
if (cfg->ref_count != 0) diff --git a/drivers/net/wireless/realtek/rtw89/chan.h b/drivers/net/wireless/realtek/rtw89/chan.h index c6d31984e575..4ed777ea5064 100644 --- a/drivers/net/wireless/realtek/rtw89/chan.h +++ b/drivers/net/wireless/realtek/rtw89/chan.h @@ -106,10 +106,10 @@ void rtw89_chanctx_ops_change(struct rtw89_dev *rtwdev, struct ieee80211_chanctx_conf *ctx, u32 changed); int rtw89_chanctx_ops_assign_vif(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, + struct rtw89_vif_link *rtwvif_link, struct ieee80211_chanctx_conf *ctx); void rtw89_chanctx_ops_unassign_vif(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, + struct rtw89_vif_link *rtwvif_link, struct ieee80211_chanctx_conf *ctx);
#endif diff --git a/drivers/net/wireless/realtek/rtw89/coex.c b/drivers/net/wireless/realtek/rtw89/coex.c index 8d27374db83c..8d54d71fcf53 100644 --- a/drivers/net/wireless/realtek/rtw89/coex.c +++ b/drivers/net/wireless/realtek/rtw89/coex.c @@ -2492,6 +2492,8 @@ static void btc_fw_set_monreg(struct rtw89_dev *rtwdev) if (ver->fcxmreg == 7) { sz = struct_size(v7, regs, n); v7 = kmalloc(sz, GFP_KERNEL); + if (!v7) + return; v7->type = RPT_EN_MREG; v7->fver = ver->fcxmreg; v7->len = n; @@ -2506,6 +2508,8 @@ static void btc_fw_set_monreg(struct rtw89_dev *rtwdev) } else { sz = struct_size(v1, regs, n); v1 = kmalloc(sz, GFP_KERNEL); + if (!v1) + return; v1->fver = ver->fcxmreg; v1->reg_num = n; memcpy(v1->regs, chip->mon_reg, flex_array_size(v1, regs, n)); @@ -4989,18 +4993,16 @@ struct rtw89_txtime_data { bool reenable; };
-static void rtw89_tx_time_iter(void *data, struct ieee80211_sta *sta) +static void __rtw89_tx_time_iter(struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link, + struct rtw89_txtime_data *iter_data) { - struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv; - struct rtw89_txtime_data *iter_data = - (struct rtw89_txtime_data *)data; struct rtw89_dev *rtwdev = iter_data->rtwdev; - struct rtw89_vif *rtwvif = rtwsta->rtwvif; struct rtw89_btc *btc = &rtwdev->btc; struct rtw89_btc_cx *cx = &btc->cx; struct rtw89_btc_wl_info *wl = &cx->wl; struct rtw89_btc_wl_link_info *plink = NULL; - u8 port = rtwvif->port; + u8 port = rtwvif_link->port; u32 tx_time = iter_data->tx_time; u8 tx_retry = iter_data->tx_retry; u16 enable = iter_data->enable; @@ -5023,8 +5025,8 @@ static void rtw89_tx_time_iter(void *data, struct ieee80211_sta *sta)
/* backup the original tx time before tx-limit on */ if (reenable) { - rtw89_mac_get_tx_time(rtwdev, rtwsta, &plink->tx_time); - rtw89_mac_get_tx_retry_limit(rtwdev, rtwsta, &plink->tx_retry); + rtw89_mac_get_tx_time(rtwdev, rtwsta_link, &plink->tx_time); + rtw89_mac_get_tx_retry_limit(rtwdev, rtwsta_link, &plink->tx_retry); rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], %s(): reenable, tx_time=%d tx_retry= %d\n", __func__, plink->tx_time, plink->tx_retry); @@ -5032,22 +5034,37 @@ static void rtw89_tx_time_iter(void *data, struct ieee80211_sta *sta)
/* restore the original tx time if no tx-limit */ if (!enable) { - rtw89_mac_set_tx_time(rtwdev, rtwsta, true, plink->tx_time); - rtw89_mac_set_tx_retry_limit(rtwdev, rtwsta, true, + rtw89_mac_set_tx_time(rtwdev, rtwsta_link, true, plink->tx_time); + rtw89_mac_set_tx_retry_limit(rtwdev, rtwsta_link, true, plink->tx_retry); rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], %s(): restore, tx_time=%d tx_retry= %d\n", __func__, plink->tx_time, plink->tx_retry);
} else { - rtw89_mac_set_tx_time(rtwdev, rtwsta, false, tx_time); - rtw89_mac_set_tx_retry_limit(rtwdev, rtwsta, false, tx_retry); + rtw89_mac_set_tx_time(rtwdev, rtwsta_link, false, tx_time); + rtw89_mac_set_tx_retry_limit(rtwdev, rtwsta_link, false, tx_retry); rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], %s(): set, tx_time=%d tx_retry= %d\n", __func__, tx_time, tx_retry); } }
+static void rtw89_tx_time_iter(void *data, struct ieee80211_sta *sta) +{ + struct rtw89_sta *rtwsta = sta_to_rtwsta(sta); + struct rtw89_txtime_data *iter_data = + (struct rtw89_txtime_data *)data; + struct rtw89_vif_link *rtwvif_link; + struct rtw89_sta_link *rtwsta_link; + unsigned int link_id; + + rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) { + rtwvif_link = rtwsta_link->rtwvif_link; + __rtw89_tx_time_iter(rtwvif_link, rtwsta_link, iter_data); + } +} + static void _set_wl_tx_limit(struct rtw89_dev *rtwdev) { struct rtw89_btc *btc = &rtwdev->btc; @@ -7481,13 +7498,16 @@ static void _update_bt_info(struct rtw89_dev *rtwdev, u8 *buf, u32 len) _run_coex(rtwdev, BTC_RSN_UPDATE_BT_INFO); }
-void rtw89_btc_ntfy_role_info(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, - struct rtw89_sta *rtwsta, enum btc_role_state state) +void rtw89_btc_ntfy_role_info(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link, + enum btc_role_state state) { const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, - rtwvif->chanctx_idx); - struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif); - struct ieee80211_sta *sta = rtwsta_to_sta(rtwsta); + rtwvif_link->chanctx_idx); + struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link); + struct ieee80211_bss_conf *bss_conf; + struct ieee80211_link_sta *link_sta; struct rtw89_btc *btc = &rtwdev->btc; const struct rtw89_btc_ver *ver = btc->ver; struct rtw89_btc_wl_info *wl = &btc->cx.wl; @@ -7495,51 +7515,59 @@ void rtw89_btc_ntfy_role_info(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif struct rtw89_btc_wl_link_info *wlinfo = NULL; u8 mode = 0, rlink_id, link_mode_ori, pta_req_mac_ori, wa_type;
+ rcu_read_lock(); + + bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, false); + rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], state=%d\n", state); rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], role is STA=%d\n", vif->type == NL80211_IFTYPE_STATION); - rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], port=%d\n", rtwvif->port); + rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], port=%d\n", rtwvif_link->port); rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], band=%d ch=%d bw=%d\n", chan->band_type, chan->channel, chan->band_width); rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], associated=%d\n", state == BTC_ROLE_MSTS_STA_CONN_END); rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], bcn_period=%d dtim_period=%d\n", - vif->bss_conf.beacon_int, vif->bss_conf.dtim_period); + bss_conf->beacon_int, bss_conf->dtim_period); + + if (rtwsta_link) { + link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, false);
- if (rtwsta) { rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], STA mac_id=%d\n", - rtwsta->mac_id); + rtwsta_link->mac_id);
rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], STA support HE=%d VHT=%d HT=%d\n", - sta->deflink.he_cap.has_he, - sta->deflink.vht_cap.vht_supported, - sta->deflink.ht_cap.ht_supported); - if (sta->deflink.he_cap.has_he) + link_sta->he_cap.has_he, + link_sta->vht_cap.vht_supported, + link_sta->ht_cap.ht_supported); + if (link_sta->he_cap.has_he) mode |= BIT(BTC_WL_MODE_HE); - if (sta->deflink.vht_cap.vht_supported) + if (link_sta->vht_cap.vht_supported) mode |= BIT(BTC_WL_MODE_VHT); - if (sta->deflink.ht_cap.ht_supported) + if (link_sta->ht_cap.ht_supported) mode |= BIT(BTC_WL_MODE_HT);
r.mode = mode; }
- if (rtwvif->wifi_role >= RTW89_WIFI_ROLE_MLME_MAX) + if (rtwvif_link->wifi_role >= RTW89_WIFI_ROLE_MLME_MAX) { + rcu_read_unlock(); return; + }
rtw89_debug(rtwdev, RTW89_DBG_BTC, - "[BTC], wifi_role=%d\n", rtwvif->wifi_role); + "[BTC], wifi_role=%d\n", rtwvif_link->wifi_role);
- r.role = rtwvif->wifi_role; - r.phy = rtwvif->phy_idx; - r.pid = rtwvif->port; + r.role = rtwvif_link->wifi_role; + r.phy = rtwvif_link->phy_idx; + r.pid = rtwvif_link->port; r.active = true; r.connected = MLME_LINKED; - r.bcn_period = vif->bss_conf.beacon_int; - r.dtim_period = vif->bss_conf.dtim_period; + r.bcn_period = bss_conf->beacon_int; + r.dtim_period = bss_conf->dtim_period; r.band = chan->band_type; r.ch = chan->channel; r.bw = chan->band_width; @@ -7547,10 +7575,12 @@ void rtw89_btc_ntfy_role_info(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif r.chdef.center_ch = chan->channel; r.chdef.bw = chan->band_width; r.chdef.chan = chan->primary_channel; - ether_addr_copy(r.mac_addr, rtwvif->mac_addr); + ether_addr_copy(r.mac_addr, rtwvif_link->mac_addr);
- if (rtwsta && vif->type == NL80211_IFTYPE_STATION) - r.mac_id = rtwsta->mac_id; + rcu_read_unlock(); + + if (rtwsta_link && vif->type == NL80211_IFTYPE_STATION) + r.mac_id = rtwsta_link->mac_id;
btc->dm.cnt_notify[BTC_NCNT_ROLE_INFO]++;
@@ -7781,26 +7811,26 @@ struct rtw89_btc_wl_sta_iter_data { bool is_traffic_change; };
-static void rtw89_btc_ntfy_wl_sta_iter(void *data, struct ieee80211_sta *sta) +static +void __rtw89_btc_ntfy_wl_sta_iter(struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link, + struct rtw89_btc_wl_sta_iter_data *iter_data) { - struct rtw89_btc_wl_sta_iter_data *iter_data = - (struct rtw89_btc_wl_sta_iter_data *)data; + struct rtw89_vif *rtwvif = rtwvif_link->rtwvif; struct rtw89_dev *rtwdev = iter_data->rtwdev; struct rtw89_btc *btc = &rtwdev->btc; struct rtw89_btc_dm *dm = &btc->dm; const struct rtw89_btc_ver *ver = btc->ver; struct rtw89_btc_wl_info *wl = &btc->cx.wl; struct rtw89_btc_wl_link_info *link_info = NULL; - struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv; struct rtw89_traffic_stats *link_info_t = NULL; - struct rtw89_vif *rtwvif = rtwsta->rtwvif; struct rtw89_traffic_stats *stats = &rtwvif->stats; const struct rtw89_chip_info *chip = rtwdev->chip; struct rtw89_btc_wl_role_info *r; struct rtw89_btc_wl_role_info_v1 *r1; u32 last_tx_rate, last_rx_rate; u16 last_tx_lvl, last_rx_lvl; - u8 port = rtwvif->port; + u8 port = rtwvif_link->port; u8 rssi; u8 busy = 0; u8 dir = 0; @@ -7808,11 +7838,11 @@ static void rtw89_btc_ntfy_wl_sta_iter(void *data, struct ieee80211_sta *sta) u8 i = 0; bool is_sta_change = false, is_traffic_change = false;
- rssi = ewma_rssi_read(&rtwsta->avg_rssi) >> RSSI_FACTOR; + rssi = ewma_rssi_read(&rtwsta_link->avg_rssi) >> RSSI_FACTOR; rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], rssi=%d\n", rssi);
link_info = &wl->link_info[port]; - link_info->stat.traffic = rtwvif->stats; + link_info->stat.traffic = *stats; link_info_t = &link_info->stat.traffic;
if (link_info->connected == MLME_NO_LINK) { @@ -7860,19 +7890,19 @@ static void rtw89_btc_ntfy_wl_sta_iter(void *data, struct ieee80211_sta *sta) iter_data->busy_all |= busy; iter_data->dir_all |= BIT(dir);
- if (rtwsta->rx_hw_rate <= RTW89_HW_RATE_CCK2 && + if (rtwsta_link->rx_hw_rate <= RTW89_HW_RATE_CCK2 && last_rx_rate > RTW89_HW_RATE_CCK2 && link_info_t->rx_tfc_lv > RTW89_TFC_IDLE) link_info->rx_rate_drop_cnt++;
- if (last_tx_rate != rtwsta->ra_report.hw_rate || - last_rx_rate != rtwsta->rx_hw_rate || + if (last_tx_rate != rtwsta_link->ra_report.hw_rate || + last_rx_rate != rtwsta_link->rx_hw_rate || last_tx_lvl != link_info_t->tx_tfc_lv || last_rx_lvl != link_info_t->rx_tfc_lv) is_traffic_change = true;
- link_info_t->tx_rate = rtwsta->ra_report.hw_rate; - link_info_t->rx_rate = rtwsta->rx_hw_rate; + link_info_t->tx_rate = rtwsta_link->ra_report.hw_rate; + link_info_t->rx_rate = rtwsta_link->rx_hw_rate;
if (link_info->role == RTW89_WIFI_ROLE_STATION || link_info->role == RTW89_WIFI_ROLE_P2P_CLIENT) { @@ -7884,19 +7914,19 @@ static void rtw89_btc_ntfy_wl_sta_iter(void *data, struct ieee80211_sta *sta) r = &wl->role_info; r->active_role[port].tx_lvl = stats->tx_tfc_lv; r->active_role[port].rx_lvl = stats->rx_tfc_lv; - r->active_role[port].tx_rate = rtwsta->ra_report.hw_rate; - r->active_role[port].rx_rate = rtwsta->rx_hw_rate; + r->active_role[port].tx_rate = rtwsta_link->ra_report.hw_rate; + r->active_role[port].rx_rate = rtwsta_link->rx_hw_rate; } else if (ver->fwlrole == 1) { r1 = &wl->role_info_v1; r1->active_role_v1[port].tx_lvl = stats->tx_tfc_lv; r1->active_role_v1[port].rx_lvl = stats->rx_tfc_lv; - r1->active_role_v1[port].tx_rate = rtwsta->ra_report.hw_rate; - r1->active_role_v1[port].rx_rate = rtwsta->rx_hw_rate; + r1->active_role_v1[port].tx_rate = rtwsta_link->ra_report.hw_rate; + r1->active_role_v1[port].rx_rate = rtwsta_link->rx_hw_rate; } else if (ver->fwlrole == 2) { dm->trx_info.tx_lvl = stats->tx_tfc_lv; dm->trx_info.rx_lvl = stats->rx_tfc_lv; - dm->trx_info.tx_rate = rtwsta->ra_report.hw_rate; - dm->trx_info.rx_rate = rtwsta->rx_hw_rate; + dm->trx_info.tx_rate = rtwsta_link->ra_report.hw_rate; + dm->trx_info.rx_rate = rtwsta_link->rx_hw_rate; }
dm->trx_info.tx_tp = link_info_t->tx_throughput; @@ -7916,6 +7946,21 @@ static void rtw89_btc_ntfy_wl_sta_iter(void *data, struct ieee80211_sta *sta) iter_data->is_traffic_change = true; }
+static void rtw89_btc_ntfy_wl_sta_iter(void *data, struct ieee80211_sta *sta) +{ + struct rtw89_sta *rtwsta = sta_to_rtwsta(sta); + struct rtw89_btc_wl_sta_iter_data *iter_data = + (struct rtw89_btc_wl_sta_iter_data *)data; + struct rtw89_vif_link *rtwvif_link; + struct rtw89_sta_link *rtwsta_link; + unsigned int link_id; + + rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) { + rtwvif_link = rtwsta_link->rtwvif_link; + __rtw89_btc_ntfy_wl_sta_iter(rtwvif_link, rtwsta_link, iter_data); + } +} + #define BTC_NHM_CHK_INTVL 20
void rtw89_btc_ntfy_wl_sta(struct rtw89_dev *rtwdev) diff --git a/drivers/net/wireless/realtek/rtw89/coex.h b/drivers/net/wireless/realtek/rtw89/coex.h index de53b56632f7..dbdb56e063ef 100644 --- a/drivers/net/wireless/realtek/rtw89/coex.h +++ b/drivers/net/wireless/realtek/rtw89/coex.h @@ -271,8 +271,10 @@ void rtw89_btc_ntfy_eapol_packet_work(struct work_struct *work); void rtw89_btc_ntfy_arp_packet_work(struct work_struct *work); void rtw89_btc_ntfy_dhcp_packet_work(struct work_struct *work); void rtw89_btc_ntfy_icmp_packet_work(struct work_struct *work); -void rtw89_btc_ntfy_role_info(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, - struct rtw89_sta *rtwsta, enum btc_role_state state); +void rtw89_btc_ntfy_role_info(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link, + enum btc_role_state state); void rtw89_btc_ntfy_radio_state(struct rtw89_dev *rtwdev, enum btc_rfctrl rf_state); void rtw89_btc_ntfy_wl_rfk(struct rtw89_dev *rtwdev, u8 phy_map, enum btc_wl_rfk_type type, diff --git a/drivers/net/wireless/realtek/rtw89/core.c b/drivers/net/wireless/realtek/rtw89/core.c index 4553810634c6..5b8e65f6de6a 100644 --- a/drivers/net/wireless/realtek/rtw89/core.c +++ b/drivers/net/wireless/realtek/rtw89/core.c @@ -436,15 +436,6 @@ int rtw89_set_channel(struct rtw89_dev *rtwdev) return 0; }
-void rtw89_get_channel(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, - struct rtw89_chan *chan) -{ - const struct cfg80211_chan_def *chandef; - - chandef = rtw89_chandef_get(rtwdev, rtwvif->chanctx_idx); - rtw89_get_channel_params(chandef, chan); -} - static enum rtw89_core_tx_type rtw89_core_get_tx_type(struct rtw89_dev *rtwdev, struct sk_buff *skb) @@ -463,8 +454,9 @@ rtw89_core_tx_update_ampdu_info(struct rtw89_dev *rtwdev, struct rtw89_core_tx_request *tx_req, enum btc_pkt_type pkt_type) { - struct ieee80211_sta *sta = tx_req->sta; + struct rtw89_sta_link *rtwsta_link = tx_req->rtwsta_link; struct rtw89_tx_desc_info *desc_info = &tx_req->desc_info; + struct ieee80211_link_sta *link_sta; struct sk_buff *skb = tx_req->skb; struct rtw89_sta *rtwsta; u8 ampdu_num; @@ -478,21 +470,26 @@ rtw89_core_tx_update_ampdu_info(struct rtw89_dev *rtwdev, if (!(IEEE80211_SKB_CB(skb)->flags & IEEE80211_TX_CTL_AMPDU)) return;
- if (!sta) { + if (!rtwsta_link) { rtw89_warn(rtwdev, "cannot set ampdu info without sta\n"); return; }
tid = skb->priority & IEEE80211_QOS_CTL_TAG1D_MASK; - rtwsta = (struct rtw89_sta *)sta->drv_priv; + rtwsta = rtwsta_link->rtwsta; + + rcu_read_lock();
+ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, false); ampdu_num = (u8)((rtwsta->ampdu_params[tid].agg_num ? rtwsta->ampdu_params[tid].agg_num : - 4 << sta->deflink.ht_cap.ampdu_factor) - 1); + 4 << link_sta->ht_cap.ampdu_factor) - 1);
desc_info->agg_en = true; - desc_info->ampdu_density = sta->deflink.ht_cap.ampdu_density; + desc_info->ampdu_density = link_sta->ht_cap.ampdu_density; desc_info->ampdu_num = ampdu_num; + + rcu_read_unlock(); }
static void @@ -569,9 +566,13 @@ static u16 rtw89_core_get_mgmt_rate(struct rtw89_dev *rtwdev, const struct rtw89_chan *chan) { struct sk_buff *skb = tx_req->skb; + struct rtw89_vif_link *rtwvif_link = tx_req->rtwvif_link; + struct rtw89_sta_link *rtwsta_link = tx_req->rtwsta_link; struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(skb); struct ieee80211_vif *vif = tx_info->control.vif; + struct ieee80211_bss_conf *bss_conf; u16 lowest_rate; + u16 rate;
if (tx_info->flags & IEEE80211_TX_CTL_NO_CCK_RATE || (vif && vif->p2p)) @@ -581,25 +582,35 @@ static u16 rtw89_core_get_mgmt_rate(struct rtw89_dev *rtwdev, else lowest_rate = RTW89_HW_RATE_OFDM6;
- if (!vif || !vif->bss_conf.basic_rates || !tx_req->sta) + if (!rtwvif_link) return lowest_rate;
- return __ffs(vif->bss_conf.basic_rates) + lowest_rate; + rcu_read_lock(); + + bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, false); + if (!bss_conf->basic_rates || !rtwsta_link) { + rate = lowest_rate; + goto out; + } + + rate = __ffs(bss_conf->basic_rates) + lowest_rate; + +out: + rcu_read_unlock(); + + return rate; }
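The management-rate hunk above keeps the same selection rule while moving the bitmap read under RCU: take the lowest_rate chosen just before for the band and frame type, add the index of the lowest set bit in basic_rates, and fall back to lowest_rate when the bitmap is empty or there is no station link. A standalone sketch of that arithmetic; the numeric rate indices are abstract placeholders, not the driver's RTW89_HW_RATE_* values:

/* Illustrative only: rate indices here are abstract; the driver uses its
 * RTW89_HW_RATE_* enum for the "lowest" base value. */
#include <stdint.h>
#include <stdio.h>

/* Selection rule from the hunk: the management rate is the base rate plus
 * the position of the lowest set bit in the basic-rates bitmap (__ffs in
 * the kernel, __builtin_ctz here); an empty bitmap, or in the driver a
 * missing station link, falls back to the base rate. */
static uint16_t toy_mgmt_rate(uint32_t basic_rates, uint16_t lowest_rate)
{
	if (!basic_rates)
		return lowest_rate;

	return (uint16_t)__builtin_ctz(basic_rates) + lowest_rate;
}

int main(void)
{
	/* bit 2 set: the third rate above the base is the lowest basic rate */
	printf("%u\n", toy_mgmt_rate(0x4, 0));   /* 0 + 2 = 2 */
	printf("%u\n", toy_mgmt_rate(0x0, 4));   /* empty bitmap -> base 4 */
	return 0;
}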
static u8 rtw89_core_tx_get_mac_id(struct rtw89_dev *rtwdev, struct rtw89_core_tx_request *tx_req) { - struct ieee80211_vif *vif = tx_req->vif; - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; - struct ieee80211_sta *sta = tx_req->sta; - struct rtw89_sta *rtwsta; + struct rtw89_vif_link *rtwvif_link = tx_req->rtwvif_link; + struct rtw89_sta_link *rtwsta_link = tx_req->rtwsta_link;
- if (!sta) - return rtwvif->mac_id; + if (!rtwsta_link) + return rtwvif_link->mac_id;
- rtwsta = (struct rtw89_sta *)sta->drv_priv; - return rtwsta->mac_id; + return rtwsta_link->mac_id; }
static void rtw89_core_tx_update_llc_hdr(struct rtw89_dev *rtwdev, @@ -618,11 +629,10 @@ rtw89_core_tx_update_mgmt_info(struct rtw89_dev *rtwdev, struct rtw89_core_tx_request *tx_req) { const struct rtw89_chip_info *chip = rtwdev->chip; - struct ieee80211_vif *vif = tx_req->vif; - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; + struct rtw89_vif_link *rtwvif_link = tx_req->rtwvif_link; struct rtw89_tx_desc_info *desc_info = &tx_req->desc_info; const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, - rtwvif->chanctx_idx); + rtwvif_link->chanctx_idx); struct sk_buff *skb = tx_req->skb; u8 qsel, ch_dma;
@@ -631,7 +641,7 @@ rtw89_core_tx_update_mgmt_info(struct rtw89_dev *rtwdev,
desc_info->qsel = qsel; desc_info->ch_dma = ch_dma; - desc_info->port = desc_info->hiq ? rtwvif->port : 0; + desc_info->port = desc_info->hiq ? rtwvif_link->port : 0; desc_info->mac_id = rtw89_core_tx_get_mac_id(rtwdev, tx_req); desc_info->hw_ssn_sel = RTW89_MGMT_HW_SSN_SEL; desc_info->hw_seq_mode = RTW89_MGMT_HW_SEQ_MODE; @@ -701,26 +711,36 @@ __rtw89_core_tx_check_he_qos_htc(struct rtw89_dev *rtwdev, struct rtw89_core_tx_request *tx_req, enum btc_pkt_type pkt_type) { - struct ieee80211_sta *sta = tx_req->sta; - struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(sta); + struct rtw89_sta_link *rtwsta_link = tx_req->rtwsta_link; struct sk_buff *skb = tx_req->skb; struct ieee80211_hdr *hdr = (void *)skb->data; + struct ieee80211_link_sta *link_sta; __le16 fc = hdr->frame_control;
/* AP IOT issue with EAPoL, ARP and DHCP */ if (pkt_type < PACKET_MAX) return false;
- if (!sta || !sta->deflink.he_cap.has_he) + if (!rtwsta_link) return false;
+ rcu_read_lock(); + + link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, false); + if (!link_sta->he_cap.has_he) { + rcu_read_unlock(); + return false; + } + + rcu_read_unlock(); + if (!ieee80211_is_data_qos(fc)) return false;
if (skb_headroom(skb) < IEEE80211_HT_CTL_LEN) return false;
- if (rtwsta && rtwsta->ra_report.might_fallback_legacy) + if (rtwsta_link && rtwsta_link->ra_report.might_fallback_legacy) return false;
return true; @@ -730,8 +750,7 @@ static void __rtw89_core_tx_adjust_he_qos_htc(struct rtw89_dev *rtwdev, struct rtw89_core_tx_request *tx_req) { - struct ieee80211_sta *sta = tx_req->sta; - struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv; + struct rtw89_sta_link *rtwsta_link = tx_req->rtwsta_link; struct sk_buff *skb = tx_req->skb; struct ieee80211_hdr *hdr = (void *)skb->data; __le16 fc = hdr->frame_control; @@ -747,7 +766,7 @@ __rtw89_core_tx_adjust_he_qos_htc(struct rtw89_dev *rtwdev, hdr = data; htc = data + hdr_len; hdr->frame_control |= cpu_to_le16(IEEE80211_FCTL_ORDER); - *htc = rtwsta->htc_template ? rtwsta->htc_template : + *htc = rtwsta_link->htc_template ? rtwsta_link->htc_template : le32_encode_bits(RTW89_HTC_VARIANT_HE, RTW89_HTC_MASK_VARIANT) | le32_encode_bits(RTW89_HTC_VARIANT_HE_CID_CAS, RTW89_HTC_MASK_CTL_ID);
@@ -761,8 +780,7 @@ rtw89_core_tx_update_he_qos_htc(struct rtw89_dev *rtwdev, enum btc_pkt_type pkt_type) { struct rtw89_tx_desc_info *desc_info = &tx_req->desc_info; - struct ieee80211_vif *vif = tx_req->vif; - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; + struct rtw89_vif_link *rtwvif_link = tx_req->rtwvif_link;
if (!__rtw89_core_tx_check_he_qos_htc(rtwdev, tx_req, pkt_type)) goto desc_bk; @@ -773,23 +791,25 @@ rtw89_core_tx_update_he_qos_htc(struct rtw89_dev *rtwdev, desc_info->a_ctrl_bsr = true;
desc_bk: - if (!rtwvif || rtwvif->last_a_ctrl == desc_info->a_ctrl_bsr) + if (!rtwvif_link || rtwvif_link->last_a_ctrl == desc_info->a_ctrl_bsr) return;
- rtwvif->last_a_ctrl = desc_info->a_ctrl_bsr; + rtwvif_link->last_a_ctrl = desc_info->a_ctrl_bsr; desc_info->bk = true; }
static u16 rtw89_core_get_data_rate(struct rtw89_dev *rtwdev, struct rtw89_core_tx_request *tx_req) { - struct ieee80211_vif *vif = tx_req->vif; - struct ieee80211_sta *sta = tx_req->sta; - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; - struct rtw89_phy_rate_pattern *rate_pattern = &rtwvif->rate_pattern; - enum rtw89_chanctx_idx idx = rtwvif->chanctx_idx; + struct rtw89_vif_link *rtwvif_link = tx_req->rtwvif_link; + struct rtw89_sta_link *rtwsta_link = tx_req->rtwsta_link; + struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link); + struct rtw89_phy_rate_pattern *rate_pattern = &rtwvif_link->rate_pattern; + enum rtw89_chanctx_idx idx = rtwvif_link->chanctx_idx; const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, idx); + struct ieee80211_link_sta *link_sta; u16 lowest_rate; + u16 rate;
if (rate_pattern->enable) return rate_pattern->rate; @@ -801,20 +821,31 @@ static u16 rtw89_core_get_data_rate(struct rtw89_dev *rtwdev, else lowest_rate = RTW89_HW_RATE_OFDM6;
- if (!sta || !sta->deflink.supp_rates[chan->band_type]) + if (!rtwsta_link) return lowest_rate;
- return __ffs(sta->deflink.supp_rates[chan->band_type]) + lowest_rate; + rcu_read_lock(); + + link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, false); + if (!link_sta->supp_rates[chan->band_type]) { + rate = lowest_rate; + goto out; + } + + rate = __ffs(link_sta->supp_rates[chan->band_type]) + lowest_rate; + +out: + rcu_read_unlock(); + + return rate; }
static void rtw89_core_tx_update_data_info(struct rtw89_dev *rtwdev, struct rtw89_core_tx_request *tx_req) { - struct ieee80211_vif *vif = tx_req->vif; - struct ieee80211_sta *sta = tx_req->sta; - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; - struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(sta); + struct rtw89_vif_link *rtwvif_link = tx_req->rtwvif_link; + struct rtw89_sta_link *rtwsta_link = tx_req->rtwsta_link; struct rtw89_tx_desc_info *desc_info = &tx_req->desc_info; struct sk_buff *skb = tx_req->skb; u8 tid, tid_indicate; @@ -829,10 +860,10 @@ rtw89_core_tx_update_data_info(struct rtw89_dev *rtwdev, desc_info->tid_indicate = tid_indicate; desc_info->qsel = qsel; desc_info->mac_id = rtw89_core_tx_get_mac_id(rtwdev, tx_req); - desc_info->port = desc_info->hiq ? rtwvif->port : 0; - desc_info->er_cap = rtwsta ? rtwsta->er_cap : false; - desc_info->stbc = rtwsta ? rtwsta->ra.stbc_cap : false; - desc_info->ldpc = rtwsta ? rtwsta->ra.ldpc_cap : false; + desc_info->port = desc_info->hiq ? rtwvif_link->port : 0; + desc_info->er_cap = rtwsta_link ? rtwsta_link->er_cap : false; + desc_info->stbc = rtwsta_link ? rtwsta_link->ra.stbc_cap : false; + desc_info->ldpc = rtwsta_link ? rtwsta_link->ra.ldpc_cap : false;
/* enable wd_info for AMPDU */ desc_info->en_wd_info = true; @@ -1027,13 +1058,34 @@ int rtw89_h2c_tx(struct rtw89_dev *rtwdev, int rtw89_core_tx_write(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif, struct ieee80211_sta *sta, struct sk_buff *skb, int *qsel) { + struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(sta); + struct rtw89_vif *rtwvif = vif_to_rtwvif(vif); struct rtw89_core_tx_request tx_req = {0}; - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; + struct rtw89_sta_link *rtwsta_link = NULL; + struct rtw89_vif_link *rtwvif_link; int ret;
+ /* By default, driver writes tx via the link on HW-0. And then, + * according to links' status, HW can change tx to another link. + */ + + if (rtwsta) { + rtwsta_link = rtw89_sta_get_link_inst(rtwsta, 0); + if (unlikely(!rtwsta_link)) { + rtw89_err(rtwdev, "tx: find no sta link on HW-0\n"); + return -ENOLINK; + } + } + + rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0); + if (unlikely(!rtwvif_link)) { + rtw89_err(rtwdev, "tx: find no vif link on HW-0\n"); + return -ENOLINK; + } + tx_req.skb = skb; - tx_req.sta = sta; - tx_req.vif = vif; + tx_req.rtwvif_link = rtwvif_link; + tx_req.rtwsta_link = rtwsta_link;
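The tx-write hunk above spells out the interim single-link policy in its comment: resolve the link instance on HW-0 first and bail out with -ENOLINK when the vif or station has no such link, before filling the tx request with per-link pointers. A standalone sketch of that lookup-or-fail shape; the toy_* names and toy_get_link_inst() are stand-ins for rtw89_vif_get_link_inst()/rtw89_sta_get_link_inst():

/* Illustrative only: toy_* names stand in for the driver's
 * rtw89_{vif,sta}_get_link_inst() helpers and link structures. */
#include <errno.h>
#include <stddef.h>
#include <stdio.h>

#define TOY_MAX_LINKS 2

struct toy_link { int mac_id; };

struct toy_iface {
	struct toy_link *links[TOY_MAX_LINKS];
};

/* "Link instance on HW-0": slot 0 of the link array, or NULL. */
static struct toy_link *toy_get_link_inst(struct toy_iface *iface, unsigned int hw_idx)
{
	return hw_idx < TOY_MAX_LINKS ? iface->links[hw_idx] : NULL;
}

/* Mirrors the tx-write shape: resolve the default (HW-0) link first and
 * fail with -ENOLINK when it is missing, instead of dereferencing
 * per-link fields through a NULL pointer. */
static int toy_tx_write(struct toy_iface *vif, struct toy_iface *sta)
{
	struct toy_link *vif_link, *sta_link = NULL;

	if (sta) {
		sta_link = toy_get_link_inst(sta, 0);
		if (!sta_link)
			return -ENOLINK;
	}

	vif_link = toy_get_link_inst(vif, 0);
	if (!vif_link)
		return -ENOLINK;

	printf("tx on vif mac_id %d, sta mac_id %d\n",
	       vif_link->mac_id, sta_link ? sta_link->mac_id : -1);
	return 0;
}

int main(void)
{
	struct toy_link vl = { .mac_id = 1 }, sl = { .mac_id = 7 };
	struct toy_iface vif = { .links = { &vl } };
	struct toy_iface sta = { .links = { &sl } };

	return toy_tx_write(&vif, &sta) ? 1 : 0;
}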
rtw89_traffic_stats_accu(rtwdev, &rtwdev->stats, skb, true); rtw89_traffic_stats_accu(rtwdev, &rtwvif->stats, skb, true); @@ -1514,16 +1566,24 @@ static u8 rtw89_get_data_rate_nss(struct rtw89_dev *rtwdev, u16 data_rate) static void rtw89_core_rx_process_phy_ppdu_iter(void *data, struct ieee80211_sta *sta) { - struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv; struct rtw89_rx_phy_ppdu *phy_ppdu = (struct rtw89_rx_phy_ppdu *)data; + struct rtw89_sta *rtwsta = sta_to_rtwsta(sta); struct rtw89_dev *rtwdev = rtwsta->rtwdev; struct rtw89_hal *hal = &rtwdev->hal; + struct rtw89_sta_link *rtwsta_link; u8 ant_num = hal->ant_diversity ? 2 : rtwdev->chip->rf_path_num; u8 ant_pos = U8_MAX; u8 evm_pos = 0; int i;
- if (rtwsta->mac_id != phy_ppdu->mac_id || !phy_ppdu->to_self) + /* FIXME: For single link, taking link on HW-0 here is okay. But, when + * enabling multiple active links, we should determine the right link. + */ + rtwsta_link = rtw89_sta_get_link_inst(rtwsta, 0); + if (unlikely(!rtwsta_link)) + return; + + if (rtwsta_link->mac_id != phy_ppdu->mac_id || !phy_ppdu->to_self) return;
if (hal->ant_diversity && hal->antenna_rx) { @@ -1531,22 +1591,24 @@ static void rtw89_core_rx_process_phy_ppdu_iter(void *data, evm_pos = ant_pos; }
- ewma_rssi_add(&rtwsta->avg_rssi, phy_ppdu->rssi_avg); + ewma_rssi_add(&rtwsta_link->avg_rssi, phy_ppdu->rssi_avg);
if (ant_pos < ant_num) { - ewma_rssi_add(&rtwsta->rssi[ant_pos], phy_ppdu->rssi[0]); + ewma_rssi_add(&rtwsta_link->rssi[ant_pos], phy_ppdu->rssi[0]); } else { for (i = 0; i < rtwdev->chip->rf_path_num; i++) - ewma_rssi_add(&rtwsta->rssi[i], phy_ppdu->rssi[i]); + ewma_rssi_add(&rtwsta_link->rssi[i], phy_ppdu->rssi[i]); }
if (phy_ppdu->ofdm.has && (phy_ppdu->has_data || phy_ppdu->has_bcn)) { - ewma_snr_add(&rtwsta->avg_snr, phy_ppdu->ofdm.avg_snr); + ewma_snr_add(&rtwsta_link->avg_snr, phy_ppdu->ofdm.avg_snr); if (rtw89_get_data_rate_nss(rtwdev, phy_ppdu->rate) == 1) { - ewma_evm_add(&rtwsta->evm_1ss, phy_ppdu->ofdm.evm_min); + ewma_evm_add(&rtwsta_link->evm_1ss, phy_ppdu->ofdm.evm_min); } else { - ewma_evm_add(&rtwsta->evm_min[evm_pos], phy_ppdu->ofdm.evm_min); - ewma_evm_add(&rtwsta->evm_max[evm_pos], phy_ppdu->ofdm.evm_max); + ewma_evm_add(&rtwsta_link->evm_min[evm_pos], + phy_ppdu->ofdm.evm_min); + ewma_evm_add(&rtwsta_link->evm_max[evm_pos], + phy_ppdu->ofdm.evm_max); } } } @@ -1876,17 +1938,19 @@ struct rtw89_vif_rx_stats_iter_data { };
static void rtw89_stats_trigger_frame(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif, + struct rtw89_vif_link *rtwvif_link, + struct ieee80211_bss_conf *bss_conf, struct sk_buff *skb) { - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; struct ieee80211_trigger *tf = (struct ieee80211_trigger *)skb->data; + struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link); + struct rtw89_vif *rtwvif = rtwvif_link->rtwvif; u8 *pos, *end, type, tf_bw; u16 aid, tf_rua;
- if (!ether_addr_equal(vif->bss_conf.bssid, tf->ta) || - rtwvif->wifi_role != RTW89_WIFI_ROLE_STATION || - rtwvif->net_type == RTW89_NET_TYPE_NO_LINK) + if (!ether_addr_equal(bss_conf->bssid, tf->ta) || + rtwvif_link->wifi_role != RTW89_WIFI_ROLE_STATION || + rtwvif_link->net_type == RTW89_NET_TYPE_NO_LINK) return;
type = le64_get_bits(tf->common_info, IEEE80211_TRIGGER_TYPE_MASK); @@ -1915,7 +1979,7 @@ static void rtw89_stats_trigger_frame(struct rtw89_dev *rtwdev, rtwdev->stats.rx_tf_acc++; if (tf_bw == IEEE80211_TRIGGER_ULBW_160_80P80MHZ && rua <= NL80211_RATE_INFO_HE_RU_ALLOC_106) - rtwvif->pwr_diff_en = true; + rtwvif_link->pwr_diff_en = true; break; }
@@ -1986,7 +2050,7 @@ static void rtw89_core_cancel_6ghz_probe_tx(struct rtw89_dev *rtwdev, ieee80211_queue_work(rtwdev->hw, &rtwdev->cancel_6ghz_probe_work); }
-static void rtw89_vif_sync_bcn_tsf(struct rtw89_vif *rtwvif, +static void rtw89_vif_sync_bcn_tsf(struct rtw89_vif_link *rtwvif_link, struct ieee80211_hdr *hdr, size_t len) { struct ieee80211_mgmt *mgmt = (typeof(mgmt))hdr; @@ -1994,20 +2058,22 @@ static void rtw89_vif_sync_bcn_tsf(struct rtw89_vif *rtwvif, if (len < offsetof(typeof(*mgmt), u.beacon.variable)) return;
- WRITE_ONCE(rtwvif->sync_bcn_tsf, le64_to_cpu(mgmt->u.beacon.timestamp)); + WRITE_ONCE(rtwvif_link->sync_bcn_tsf, le64_to_cpu(mgmt->u.beacon.timestamp)); }
static void rtw89_vif_rx_stats_iter(void *data, u8 *mac, struct ieee80211_vif *vif) { - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; struct rtw89_vif_rx_stats_iter_data *iter_data = data; struct rtw89_dev *rtwdev = iter_data->rtwdev; + struct rtw89_vif *rtwvif = vif_to_rtwvif(vif); struct rtw89_pkt_stat *pkt_stat = &rtwdev->phystat.cur_pkt_stat; struct rtw89_rx_desc_info *desc_info = iter_data->desc_info; struct sk_buff *skb = iter_data->skb; struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data; struct rtw89_rx_phy_ppdu *phy_ppdu = iter_data->phy_ppdu; + struct ieee80211_bss_conf *bss_conf; + struct rtw89_vif_link *rtwvif_link; const u8 *bssid = iter_data->bssid;
if (rtwdev->scanning && @@ -2015,33 +2081,46 @@ static void rtw89_vif_rx_stats_iter(void *data, u8 *mac, ieee80211_is_probe_resp(hdr->frame_control))) rtw89_core_cancel_6ghz_probe_tx(rtwdev, skb);
- if (!vif->bss_conf.bssid) - return; + rcu_read_lock(); + + /* FIXME: For single link, taking link on HW-0 here is okay. But, when + * enabling multiple active links, we should determine the right link. + */ + rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0); + if (unlikely(!rtwvif_link)) + goto out; + + bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, false); + if (!bss_conf->bssid) + goto out;
if (ieee80211_is_trigger(hdr->frame_control)) { - rtw89_stats_trigger_frame(rtwdev, vif, skb); - return; + rtw89_stats_trigger_frame(rtwdev, rtwvif_link, bss_conf, skb); + goto out; }
- if (!ether_addr_equal(vif->bss_conf.bssid, bssid)) - return; + if (!ether_addr_equal(bss_conf->bssid, bssid)) + goto out;
if (ieee80211_is_beacon(hdr->frame_control)) { if (vif->type == NL80211_IFTYPE_STATION && !test_bit(RTW89_FLAG_WOWLAN, rtwdev->flags)) { - rtw89_vif_sync_bcn_tsf(rtwvif, hdr, skb->len); + rtw89_vif_sync_bcn_tsf(rtwvif_link, hdr, skb->len); rtw89_fw_h2c_rssi_offload(rtwdev, phy_ppdu); } pkt_stat->beacon_nr++; }
- if (!ether_addr_equal(vif->addr, hdr->addr1)) - return; + if (!ether_addr_equal(bss_conf->addr, hdr->addr1)) + goto out;
if (desc_info->data_rate < RTW89_HW_RATE_NR) pkt_stat->rx_rate_cnt[desc_info->data_rate]++;
rtw89_traffic_stats_accu(rtwdev, &rtwvif->stats, skb, false); + +out: + rcu_read_unlock(); }
static void rtw89_core_rx_stats(struct rtw89_dev *rtwdev, @@ -2432,15 +2511,23 @@ void rtw89_core_stats_sta_rx_status_iter(void *data, struct ieee80211_sta *sta) struct rtw89_core_iter_rx_status *iter_data = (struct rtw89_core_iter_rx_status *)data; struct ieee80211_rx_status *rx_status = iter_data->rx_status; - struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv; struct rtw89_rx_desc_info *desc_info = iter_data->desc_info; + struct rtw89_sta *rtwsta = sta_to_rtwsta(sta); + struct rtw89_sta_link *rtwsta_link; u8 mac_id = iter_data->mac_id;
- if (mac_id != rtwsta->mac_id) + /* FIXME: For single link, taking link on HW-0 here is okay. But, when + * enabling multiple active links, we should determine the right link. + */ + rtwsta_link = rtw89_sta_get_link_inst(rtwsta, 0); + if (unlikely(!rtwsta_link)) return;
- rtwsta->rx_status = *rx_status; - rtwsta->rx_hw_rate = desc_info->data_rate; + if (mac_id != rtwsta_link->mac_id) + return; + + rtwsta_link->rx_status = *rx_status; + rtwsta_link->rx_hw_rate = desc_info->data_rate; }
static void rtw89_core_stats_sta_rx_status(struct rtw89_dev *rtwdev, @@ -2546,6 +2633,10 @@ static enum rtw89_ps_mode rtw89_update_ps_mode(struct rtw89_dev *rtwdev) { const struct rtw89_chip_info *chip = rtwdev->chip;
+ /* FIXME: Fix __rtw89_enter_ps_mode() to consider MLO cases. */ + if (rtwdev->support_mlo) + return RTW89_PS_MODE_NONE; + if (rtw89_disable_ps_mode || !chip->ps_mode_supported || RTW89_CHK_FW_FEATURE(NO_DEEP_PS, &rtwdev->fw)) return RTW89_PS_MODE_NONE; @@ -2658,7 +2749,7 @@ static void rtw89_core_ba_work(struct work_struct *work) list_for_each_entry_safe(rtwtxq, tmp, &rtwdev->ba_list, list) { struct ieee80211_txq *txq = rtw89_txq_to_txq(rtwtxq); struct ieee80211_sta *sta = txq->sta; - struct rtw89_sta *rtwsta = sta ? (struct rtw89_sta *)sta->drv_priv : NULL; + struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(sta); u8 tid = txq->tid;
if (!sta) { @@ -2686,8 +2777,8 @@ static void rtw89_core_ba_work(struct work_struct *work) spin_unlock_bh(&rtwdev->ba_lock); }
-static void rtw89_core_free_sta_pending_ba(struct rtw89_dev *rtwdev, - struct ieee80211_sta *sta) +void rtw89_core_free_sta_pending_ba(struct rtw89_dev *rtwdev, + struct ieee80211_sta *sta) { struct rtw89_txq *rtwtxq, *tmp;
@@ -2701,8 +2792,8 @@ static void rtw89_core_free_sta_pending_ba(struct rtw89_dev *rtwdev, spin_unlock_bh(&rtwdev->ba_lock); }
-static void rtw89_core_free_sta_pending_forbid_ba(struct rtw89_dev *rtwdev, - struct ieee80211_sta *sta) +void rtw89_core_free_sta_pending_forbid_ba(struct rtw89_dev *rtwdev, + struct ieee80211_sta *sta) { struct rtw89_txq *rtwtxq, *tmp;
@@ -2718,10 +2809,10 @@ static void rtw89_core_free_sta_pending_forbid_ba(struct rtw89_dev *rtwdev, spin_unlock_bh(&rtwdev->ba_lock); }
-static void rtw89_core_free_sta_pending_roc_tx(struct rtw89_dev *rtwdev, - struct ieee80211_sta *sta) +void rtw89_core_free_sta_pending_roc_tx(struct rtw89_dev *rtwdev, + struct ieee80211_sta *sta) { - struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv; + struct rtw89_sta *rtwsta = sta_to_rtwsta(sta); struct sk_buff *skb, *tmp;
skb_queue_walk_safe(&rtwsta->roc_queue, skb, tmp) { @@ -2762,7 +2853,7 @@ static void rtw89_core_txq_check_agg(struct rtw89_dev *rtwdev, struct ieee80211_hw *hw = rtwdev->hw; struct ieee80211_txq *txq = rtw89_txq_to_txq(rtwtxq); struct ieee80211_sta *sta = txq->sta; - struct rtw89_sta *rtwsta = sta ? (struct rtw89_sta *)sta->drv_priv : NULL; + struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(sta);
if (test_bit(RTW89_TXQ_F_FORBID_BA, &rtwtxq->flags)) return; @@ -2838,10 +2929,19 @@ static bool rtw89_core_txq_agg_wait(struct rtw89_dev *rtwdev, bool *sched_txq, bool *reinvoke) { struct rtw89_txq *rtwtxq = (struct rtw89_txq *)txq->drv_priv; - struct ieee80211_sta *sta = txq->sta; - struct rtw89_sta *rtwsta = sta ? (struct rtw89_sta *)sta->drv_priv : NULL; + struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(txq->sta); + struct rtw89_sta_link *rtwsta_link;
- if (!sta || rtwsta->max_agg_wait <= 0) + if (!rtwsta) + return false; + + rtwsta_link = rtw89_sta_get_link_inst(rtwsta, 0); + if (unlikely(!rtwsta_link)) { + rtw89_err(rtwdev, "agg wait: find no link on HW-0\n"); + return false; + } + + if (rtwsta_link->max_agg_wait <= 0) return false;
if (rtwdev->stats.tx_tfc_lv <= RTW89_TFC_MID) @@ -2855,7 +2955,7 @@ static bool rtw89_core_txq_agg_wait(struct rtw89_dev *rtwdev, return false; }
- if (*frame_cnt == 1 && rtwtxq->wait_cnt < rtwsta->max_agg_wait) { + if (*frame_cnt == 1 && rtwtxq->wait_cnt < rtwsta_link->max_agg_wait) { *reinvoke = true; rtwtxq->wait_cnt++; return true; @@ -2879,7 +2979,7 @@ static void rtw89_core_txq_schedule(struct rtw89_dev *rtwdev, u8 ac, bool *reinv ieee80211_txq_schedule_start(hw, ac); while ((txq = ieee80211_next_txq(hw, ac))) { rtwtxq = (struct rtw89_txq *)txq->drv_priv; - rtwvif = (struct rtw89_vif *)txq->vif->drv_priv; + rtwvif = vif_to_rtwvif(txq->vif);
if (rtwvif->offchan) { ieee80211_return_txq(hw, txq, true); @@ -2955,16 +3055,23 @@ static void rtw89_forbid_ba_work(struct work_struct *w) static void rtw89_core_sta_pending_tx_iter(void *data, struct ieee80211_sta *sta) { - struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv; - struct rtw89_vif *rtwvif_target = data, *rtwvif = rtwsta->rtwvif; - struct rtw89_dev *rtwdev = rtwvif->rtwdev; + struct rtw89_sta *rtwsta = sta_to_rtwsta(sta); + struct rtw89_dev *rtwdev = rtwsta->rtwdev; + struct rtw89_vif *rtwvif = rtwsta->rtwvif; struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif); + struct rtw89_vif_link *target = data; + struct rtw89_vif_link *rtwvif_link; struct sk_buff *skb, *tmp; + unsigned int link_id; int qsel, ret;
- if (rtwvif->chanctx_idx != rtwvif_target->chanctx_idx) - return; + rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) + if (rtwvif_link->chanctx_idx == target->chanctx_idx) + goto bottom; + + return;
+bottom: if (skb_queue_len(&rtwsta->roc_queue) == 0) return;
@@ -2982,17 +3089,17 @@ static void rtw89_core_sta_pending_tx_iter(void *data, }
static void rtw89_core_handle_sta_pending_tx(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif) + struct rtw89_vif_link *rtwvif_link) { ieee80211_iterate_stations_atomic(rtwdev->hw, rtw89_core_sta_pending_tx_iter, - rtwvif); + rtwvif_link); }
static int rtw89_core_send_nullfunc(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, bool qos, bool ps) + struct rtw89_vif_link *rtwvif_link, bool qos, bool ps) { - struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif); + struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link); struct ieee80211_sta *sta; struct ieee80211_hdr *hdr; struct sk_buff *skb; @@ -3002,7 +3109,7 @@ static int rtw89_core_send_nullfunc(struct rtw89_dev *rtwdev, return 0;
rcu_read_lock(); - sta = ieee80211_find_sta(vif, vif->bss_conf.bssid); + sta = ieee80211_find_sta(vif, vif->cfg.ap_addr); if (!sta) { ret = -EINVAL; goto out; @@ -3040,27 +3147,43 @@ void rtw89_roc_start(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif) const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def; struct ieee80211_hw *hw = rtwdev->hw; struct rtw89_roc *roc = &rtwvif->roc; + struct rtw89_vif_link *rtwvif_link; struct cfg80211_chan_def roc_chan; - struct rtw89_vif *tmp; + struct rtw89_vif *tmp_vif; int ret;
lockdep_assert_held(&rtwdev->mutex);
rtw89_leave_ips_by_hwflags(rtwdev); rtw89_leave_lps(rtwdev); + + rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0); + if (unlikely(!rtwvif_link)) { + rtw89_err(rtwdev, "roc start: find no link on HW-0\n"); + return; + } + rtw89_chanctx_pause(rtwdev, RTW89_CHANCTX_PAUSE_REASON_ROC);
- ret = rtw89_core_send_nullfunc(rtwdev, rtwvif, true, true); + ret = rtw89_core_send_nullfunc(rtwdev, rtwvif_link, true, true); if (ret) rtw89_debug(rtwdev, RTW89_DBG_TXRX, "roc send null-1 failed: %d\n", ret);
- rtw89_for_each_rtwvif(rtwdev, tmp) - if (tmp->chanctx_idx == rtwvif->chanctx_idx) - tmp->offchan = true; + rtw89_for_each_rtwvif(rtwdev, tmp_vif) { + struct rtw89_vif_link *tmp_link; + unsigned int link_id; + + rtw89_vif_for_each_link(tmp_vif, tmp_link, link_id) { + if (tmp_link->chanctx_idx == rtwvif_link->chanctx_idx) { + tmp_vif->offchan = true; + break; + } + } + }
cfg80211_chandef_create(&roc_chan, &roc->chan, NL80211_CHAN_NO_HT); - rtw89_config_roc_chandef(rtwdev, rtwvif->chanctx_idx, &roc_chan); + rtw89_config_roc_chandef(rtwdev, rtwvif_link->chanctx_idx, &roc_chan); rtw89_set_channel(rtwdev); rtw89_write32_clr(rtwdev, rtw89_mac_reg_by_idx(rtwdev, mac->rx_fltr, RTW89_MAC_0), @@ -3077,7 +3200,8 @@ void rtw89_roc_end(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif) const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def; struct ieee80211_hw *hw = rtwdev->hw; struct rtw89_roc *roc = &rtwvif->roc; - struct rtw89_vif *tmp; + struct rtw89_vif_link *rtwvif_link; + struct rtw89_vif *tmp_vif; int ret;
lockdep_assert_held(&rtwdev->mutex); @@ -3087,24 +3211,29 @@ void rtw89_roc_end(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif) rtw89_leave_ips_by_hwflags(rtwdev); rtw89_leave_lps(rtwdev);
+ rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0); + if (unlikely(!rtwvif_link)) { + rtw89_err(rtwdev, "roc end: find no link on HW-0\n"); + return; + } + rtw89_write32_mask(rtwdev, rtw89_mac_reg_by_idx(rtwdev, mac->rx_fltr, RTW89_MAC_0), B_AX_RX_FLTR_CFG_MASK, rtwdev->hal.rx_fltr);
roc->state = RTW89_ROC_IDLE; - rtw89_config_roc_chandef(rtwdev, rtwvif->chanctx_idx, NULL); + rtw89_config_roc_chandef(rtwdev, rtwvif_link->chanctx_idx, NULL); rtw89_chanctx_proceed(rtwdev); - ret = rtw89_core_send_nullfunc(rtwdev, rtwvif, true, false); + ret = rtw89_core_send_nullfunc(rtwdev, rtwvif_link, true, false); if (ret) rtw89_debug(rtwdev, RTW89_DBG_TXRX, "roc send null-0 failed: %d\n", ret);
- rtw89_for_each_rtwvif(rtwdev, tmp) - if (tmp->chanctx_idx == rtwvif->chanctx_idx) - tmp->offchan = false; + rtw89_for_each_rtwvif(rtwdev, tmp_vif) + tmp_vif->offchan = false;
- rtw89_core_handle_sta_pending_tx(rtwdev, rtwvif); + rtw89_core_handle_sta_pending_tx(rtwdev, rtwvif_link); queue_work(rtwdev->txq_wq, &rtwdev->txq_work);
if (hw->conf.flags & IEEE80211_CONF_IDLE) @@ -3188,39 +3317,52 @@ static bool rtw89_traffic_stats_calc(struct rtw89_dev *rtwdev,
static bool rtw89_traffic_stats_track(struct rtw89_dev *rtwdev) { + struct rtw89_vif_link *rtwvif_link; struct rtw89_vif *rtwvif; + unsigned int link_id; bool tfc_changed;
tfc_changed = rtw89_traffic_stats_calc(rtwdev, &rtwdev->stats); + rtw89_for_each_rtwvif(rtwdev, rtwvif) { rtw89_traffic_stats_calc(rtwdev, &rtwvif->stats); - rtw89_fw_h2c_tp_offload(rtwdev, rtwvif); + + rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) + rtw89_fw_h2c_tp_offload(rtwdev, rtwvif_link); }
return tfc_changed; }
-static void rtw89_vif_enter_lps(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif) +static void rtw89_vif_enter_lps(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link) { - if ((rtwvif->wifi_role != RTW89_WIFI_ROLE_STATION && - rtwvif->wifi_role != RTW89_WIFI_ROLE_P2P_CLIENT) || - rtwvif->tdls_peer) + if (rtwvif_link->wifi_role != RTW89_WIFI_ROLE_STATION && + rtwvif_link->wifi_role != RTW89_WIFI_ROLE_P2P_CLIENT) return;
- if (rtwvif->offchan) - return; - - if (rtwvif->stats.tx_tfc_lv == RTW89_TFC_IDLE && - rtwvif->stats.rx_tfc_lv == RTW89_TFC_IDLE) - rtw89_enter_lps(rtwdev, rtwvif, true); + rtw89_enter_lps(rtwdev, rtwvif_link, true); }
static void rtw89_enter_lps_track(struct rtw89_dev *rtwdev) { + struct rtw89_vif_link *rtwvif_link; struct rtw89_vif *rtwvif; + unsigned int link_id;
- rtw89_for_each_rtwvif(rtwdev, rtwvif) - rtw89_vif_enter_lps(rtwdev, rtwvif); + rtw89_for_each_rtwvif(rtwdev, rtwvif) { + if (rtwvif->tdls_peer) + continue; + if (rtwvif->offchan) + continue; + + if (rtwvif->stats.tx_tfc_lv != RTW89_TFC_IDLE || + rtwvif->stats.rx_tfc_lv != RTW89_TFC_IDLE) + continue; + + rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) + rtw89_vif_enter_lps(rtwdev, rtwvif_link); + } }
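Aside (illustration, not part of the patch): the LPS rework keeps the TDLS, off-channel and traffic checks at the vif level and only performs the actual LPS entry per link. A condensed sketch of that split; example_vif_may_enter_lps is an assumed name:

static bool example_vif_may_enter_lps(const struct rtw89_vif *rtwvif)
{
	if (rtwvif->tdls_peer || rtwvif->offchan)
		return false;

	/* only idle interfaces are allowed to enter LPS */
	return rtwvif->stats.tx_tfc_lv == RTW89_TFC_IDLE &&
	       rtwvif->stats.rx_tfc_lv == RTW89_TFC_IDLE;
}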
static void rtw89_core_rfk_track(struct rtw89_dev *rtwdev) @@ -3234,14 +3376,16 @@ static void rtw89_core_rfk_track(struct rtw89_dev *rtwdev) rtw89_chip_rfk_track(rtwdev); }
-void rtw89_core_update_p2p_ps(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif) +void rtw89_core_update_p2p_ps(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, + struct ieee80211_bss_conf *bss_conf) { enum rtw89_entity_mode mode = rtw89_get_entity_mode(rtwdev);
if (mode == RTW89_ENTITY_MODE_MCC) rtw89_queue_chanctx_change(rtwdev, RTW89_CHANCTX_P2P_PS_CHANGE); else - rtw89_process_p2p_ps(rtwdev, vif); + rtw89_process_p2p_ps(rtwdev, rtwvif_link, bss_conf); }
void rtw89_traffic_stats_init(struct rtw89_dev *rtwdev, @@ -3326,7 +3470,8 @@ void rtw89_core_release_all_bits_map(unsigned long *addr, unsigned int nbits) }
int rtw89_core_acquire_sta_ba_entry(struct rtw89_dev *rtwdev, - struct rtw89_sta *rtwsta, u8 tid, u8 *cam_idx) + struct rtw89_sta_link *rtwsta_link, u8 tid, + u8 *cam_idx) { const struct rtw89_chip_info *chip = rtwdev->chip; struct rtw89_cam_info *cam_info = &rtwdev->cam_info; @@ -3363,7 +3508,7 @@ int rtw89_core_acquire_sta_ba_entry(struct rtw89_dev *rtwdev, }
entry->tid = tid; - list_add_tail(&entry->list, &rtwsta->ba_cam_list); + list_add_tail(&entry->list, &rtwsta_link->ba_cam_list);
*cam_idx = idx;
@@ -3371,7 +3516,8 @@ int rtw89_core_acquire_sta_ba_entry(struct rtw89_dev *rtwdev, }
int rtw89_core_release_sta_ba_entry(struct rtw89_dev *rtwdev, - struct rtw89_sta *rtwsta, u8 tid, u8 *cam_idx) + struct rtw89_sta_link *rtwsta_link, u8 tid, + u8 *cam_idx) { struct rtw89_cam_info *cam_info = &rtwdev->cam_info; struct rtw89_ba_cam_entry *entry = NULL, *tmp; @@ -3379,7 +3525,7 @@ int rtw89_core_release_sta_ba_entry(struct rtw89_dev *rtwdev,
lockdep_assert_held(&rtwdev->mutex);
- list_for_each_entry_safe(entry, tmp, &rtwsta->ba_cam_list, list) { + list_for_each_entry_safe(entry, tmp, &rtwsta_link->ba_cam_list, list) { if (entry->tid != tid) continue;
@@ -3396,24 +3542,25 @@ int rtw89_core_release_sta_ba_entry(struct rtw89_dev *rtwdev,
#define RTW89_TYPE_MAPPING(_type) \ case NL80211_IFTYPE_ ## _type: \ - rtwvif->wifi_role = RTW89_WIFI_ROLE_ ## _type; \ + rtwvif_link->wifi_role = RTW89_WIFI_ROLE_ ## _type; \ break -void rtw89_vif_type_mapping(struct ieee80211_vif *vif, bool assoc) +void rtw89_vif_type_mapping(struct rtw89_vif_link *rtwvif_link, bool assoc) { - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; + const struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link); + const struct ieee80211_bss_conf *bss_conf;
switch (vif->type) { case NL80211_IFTYPE_STATION: if (vif->p2p) - rtwvif->wifi_role = RTW89_WIFI_ROLE_P2P_CLIENT; + rtwvif_link->wifi_role = RTW89_WIFI_ROLE_P2P_CLIENT; else - rtwvif->wifi_role = RTW89_WIFI_ROLE_STATION; + rtwvif_link->wifi_role = RTW89_WIFI_ROLE_STATION; break; case NL80211_IFTYPE_AP: if (vif->p2p) - rtwvif->wifi_role = RTW89_WIFI_ROLE_P2P_GO; + rtwvif_link->wifi_role = RTW89_WIFI_ROLE_P2P_GO; else - rtwvif->wifi_role = RTW89_WIFI_ROLE_AP; + rtwvif_link->wifi_role = RTW89_WIFI_ROLE_AP; break; RTW89_TYPE_MAPPING(ADHOC); RTW89_TYPE_MAPPING(MONITOR); @@ -3426,23 +3573,27 @@ void rtw89_vif_type_mapping(struct ieee80211_vif *vif, bool assoc) switch (vif->type) { case NL80211_IFTYPE_AP: case NL80211_IFTYPE_MESH_POINT: - rtwvif->net_type = RTW89_NET_TYPE_AP_MODE; - rtwvif->self_role = RTW89_SELF_ROLE_AP; + rtwvif_link->net_type = RTW89_NET_TYPE_AP_MODE; + rtwvif_link->self_role = RTW89_SELF_ROLE_AP; break; case NL80211_IFTYPE_ADHOC: - rtwvif->net_type = RTW89_NET_TYPE_AD_HOC; - rtwvif->self_role = RTW89_SELF_ROLE_CLIENT; + rtwvif_link->net_type = RTW89_NET_TYPE_AD_HOC; + rtwvif_link->self_role = RTW89_SELF_ROLE_CLIENT; break; case NL80211_IFTYPE_STATION: if (assoc) { - rtwvif->net_type = RTW89_NET_TYPE_INFRA; - rtwvif->trigger = vif->bss_conf.he_support; + rtwvif_link->net_type = RTW89_NET_TYPE_INFRA; + + rcu_read_lock(); + bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, false); + rtwvif_link->trigger = bss_conf->he_support; + rcu_read_unlock(); } else { - rtwvif->net_type = RTW89_NET_TYPE_NO_LINK; - rtwvif->trigger = false; + rtwvif_link->net_type = RTW89_NET_TYPE_NO_LINK; + rtwvif_link->trigger = false; } - rtwvif->self_role = RTW89_SELF_ROLE_CLIENT; - rtwvif->addr_cam.sec_ent_mode = RTW89_ADDR_CAM_SEC_NORMAL; + rtwvif_link->self_role = RTW89_SELF_ROLE_CLIENT; + rtwvif_link->addr_cam.sec_ent_mode = RTW89_ADDR_CAM_SEC_NORMAL; break; case NL80211_IFTYPE_MONITOR: break; @@ -3452,137 +3603,110 @@ void rtw89_vif_type_mapping(struct ieee80211_vif *vif, bool assoc) } }
-int rtw89_core_sta_add(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif, - struct ieee80211_sta *sta) +int rtw89_core_sta_link_add(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link) { - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; - struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv; + const struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link); + const struct ieee80211_sta *sta = rtwsta_link_to_sta(rtwsta_link); struct rtw89_hal *hal = &rtwdev->hal; u8 ant_num = hal->ant_diversity ? 2 : rtwdev->chip->rf_path_num; int i; int ret;
- rtwsta->rtwdev = rtwdev; - rtwsta->rtwvif = rtwvif; - rtwsta->prev_rssi = 0; - INIT_LIST_HEAD(&rtwsta->ba_cam_list); - skb_queue_head_init(&rtwsta->roc_queue); - - for (i = 0; i < ARRAY_SIZE(sta->txq); i++) - rtw89_core_txq_init(rtwdev, sta->txq[i]); - - ewma_rssi_init(&rtwsta->avg_rssi); - ewma_snr_init(&rtwsta->avg_snr); - ewma_evm_init(&rtwsta->evm_1ss); + rtwsta_link->prev_rssi = 0; + INIT_LIST_HEAD(&rtwsta_link->ba_cam_list); + ewma_rssi_init(&rtwsta_link->avg_rssi); + ewma_snr_init(&rtwsta_link->avg_snr); + ewma_evm_init(&rtwsta_link->evm_1ss); for (i = 0; i < ant_num; i++) { - ewma_rssi_init(&rtwsta->rssi[i]); - ewma_evm_init(&rtwsta->evm_min[i]); - ewma_evm_init(&rtwsta->evm_max[i]); + ewma_rssi_init(&rtwsta_link->rssi[i]); + ewma_evm_init(&rtwsta_link->evm_min[i]); + ewma_evm_init(&rtwsta_link->evm_max[i]); }
if (vif->type == NL80211_IFTYPE_STATION && !sta->tdls) { - /* for station mode, assign the mac_id from itself */ - rtwsta->mac_id = rtwvif->mac_id; - /* must do rtw89_reg_6ghz_recalc() before rfk channel */ - ret = rtw89_reg_6ghz_recalc(rtwdev, rtwvif, true); + ret = rtw89_reg_6ghz_recalc(rtwdev, rtwvif_link, true); if (ret) return ret;
- rtw89_btc_ntfy_role_info(rtwdev, rtwvif, rtwsta, + rtw89_btc_ntfy_role_info(rtwdev, rtwvif_link, rtwsta_link, BTC_ROLE_MSTS_STA_CONN_START); - rtw89_chip_rfk_channel(rtwdev, rtwvif); + rtw89_chip_rfk_channel(rtwdev, rtwvif_link); } else if (vif->type == NL80211_IFTYPE_AP || sta->tdls) { - rtwsta->mac_id = rtw89_acquire_mac_id(rtwdev); - if (rtwsta->mac_id == RTW89_MAX_MAC_ID_NUM) - return -ENOSPC; - - ret = rtw89_mac_set_macid_pause(rtwdev, rtwsta->mac_id, false); + ret = rtw89_mac_set_macid_pause(rtwdev, rtwsta_link->mac_id, false); if (ret) { - rtw89_release_mac_id(rtwdev, rtwsta->mac_id); rtw89_warn(rtwdev, "failed to send h2c macid pause\n"); return ret; }
- ret = rtw89_fw_h2c_role_maintain(rtwdev, rtwvif, rtwsta, + ret = rtw89_fw_h2c_role_maintain(rtwdev, rtwvif_link, rtwsta_link, RTW89_ROLE_CREATE); if (ret) { - rtw89_release_mac_id(rtwdev, rtwsta->mac_id); rtw89_warn(rtwdev, "failed to send h2c role info\n"); return ret; }
- ret = rtw89_chip_h2c_default_cmac_tbl(rtwdev, rtwvif, rtwsta); + ret = rtw89_chip_h2c_default_cmac_tbl(rtwdev, rtwvif_link, rtwsta_link); if (ret) return ret;
- ret = rtw89_chip_h2c_default_dmac_tbl(rtwdev, rtwvif, rtwsta); + ret = rtw89_chip_h2c_default_dmac_tbl(rtwdev, rtwvif_link, rtwsta_link); if (ret) return ret; - - rtw89_queue_chanctx_change(rtwdev, RTW89_CHANCTX_REMOTE_STA_CHANGE); }
return 0; }
-int rtw89_core_sta_disassoc(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif, - struct ieee80211_sta *sta) +int rtw89_core_sta_link_disassoc(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link) { - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; - struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv; + const struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
if (vif->type == NL80211_IFTYPE_STATION) - rtw89_fw_h2c_set_bcn_fltr_cfg(rtwdev, vif, false); - - rtwdev->total_sta_assoc--; - if (sta->tdls) - rtwvif->tdls_peer--; - rtwsta->disassoc = true; + rtw89_fw_h2c_set_bcn_fltr_cfg(rtwdev, rtwvif_link, false);
return 0; }
-int rtw89_core_sta_disconnect(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif, - struct ieee80211_sta *sta) +int rtw89_core_sta_link_disconnect(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link) { - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; - struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv; + const struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link); + const struct ieee80211_sta *sta = rtwsta_link_to_sta(rtwsta_link); int ret;
- rtw89_mac_bf_monitor_calc(rtwdev, sta, true); - rtw89_mac_bf_disassoc(rtwdev, vif, sta); - rtw89_core_free_sta_pending_ba(rtwdev, sta); - rtw89_core_free_sta_pending_forbid_ba(rtwdev, sta); - rtw89_core_free_sta_pending_roc_tx(rtwdev, sta); + rtw89_mac_bf_monitor_calc(rtwdev, rtwsta_link, true); + rtw89_mac_bf_disassoc(rtwdev, rtwvif_link, rtwsta_link);
if (vif->type == NL80211_IFTYPE_AP || sta->tdls) - rtw89_cam_deinit_addr_cam(rtwdev, &rtwsta->addr_cam); + rtw89_cam_deinit_addr_cam(rtwdev, &rtwsta_link->addr_cam); if (sta->tdls) - rtw89_cam_deinit_bssid_cam(rtwdev, &rtwsta->bssid_cam); + rtw89_cam_deinit_bssid_cam(rtwdev, &rtwsta_link->bssid_cam);
if (vif->type == NL80211_IFTYPE_STATION && !sta->tdls) { - rtw89_vif_type_mapping(vif, false); - rtw89_fw_release_general_pkt_list_vif(rtwdev, rtwvif, true); + rtw89_vif_type_mapping(rtwvif_link, false); + rtw89_fw_release_general_pkt_list_vif(rtwdev, rtwvif_link, true); }
- ret = rtw89_chip_h2c_assoc_cmac_tbl(rtwdev, vif, sta); + ret = rtw89_chip_h2c_assoc_cmac_tbl(rtwdev, rtwvif_link, rtwsta_link); if (ret) { rtw89_warn(rtwdev, "failed to send h2c cmac table\n"); return ret; }
- ret = rtw89_fw_h2c_join_info(rtwdev, rtwvif, rtwsta, true); + ret = rtw89_fw_h2c_join_info(rtwdev, rtwvif_link, rtwsta_link, true); if (ret) { rtw89_warn(rtwdev, "failed to send h2c join info\n"); return ret; }
/* update cam aid mac_id net_type */ - ret = rtw89_fw_h2c_cam(rtwdev, rtwvif, rtwsta, NULL); + ret = rtw89_fw_h2c_cam(rtwdev, rtwvif_link, rtwsta_link, NULL); if (ret) { rtw89_warn(rtwdev, "failed to send h2c cam\n"); return ret; @@ -3591,106 +3715,114 @@ int rtw89_core_sta_disconnect(struct rtw89_dev *rtwdev, return ret; }
-int rtw89_core_sta_assoc(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif, - struct ieee80211_sta *sta) +int rtw89_core_sta_link_assoc(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link) { - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; - struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv; - struct rtw89_bssid_cam_entry *bssid_cam = rtw89_get_bssid_cam_of(rtwvif, rtwsta); + const struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link); + const struct ieee80211_sta *sta = rtwsta_link_to_sta(rtwsta_link); + struct rtw89_bssid_cam_entry *bssid_cam = rtw89_get_bssid_cam_of(rtwvif_link, + rtwsta_link); const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, - rtwvif->chanctx_idx); + rtwvif_link->chanctx_idx); int ret;
if (vif->type == NL80211_IFTYPE_AP || sta->tdls) { if (sta->tdls) { - ret = rtw89_cam_init_bssid_cam(rtwdev, rtwvif, bssid_cam, sta->addr); + struct ieee80211_link_sta *link_sta; + + rcu_read_lock(); + + link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true); + ret = rtw89_cam_init_bssid_cam(rtwdev, rtwvif_link, bssid_cam, + link_sta->addr); if (ret) { rtw89_warn(rtwdev, "failed to send h2c init bssid cam for TDLS\n"); + rcu_read_unlock(); return ret; } + + rcu_read_unlock(); }
- ret = rtw89_cam_init_addr_cam(rtwdev, &rtwsta->addr_cam, bssid_cam); + ret = rtw89_cam_init_addr_cam(rtwdev, &rtwsta_link->addr_cam, bssid_cam); if (ret) { rtw89_warn(rtwdev, "failed to send h2c init addr cam\n"); return ret; } }
- ret = rtw89_chip_h2c_assoc_cmac_tbl(rtwdev, vif, sta); + ret = rtw89_chip_h2c_assoc_cmac_tbl(rtwdev, rtwvif_link, rtwsta_link); if (ret) { rtw89_warn(rtwdev, "failed to send h2c cmac table\n"); return ret; }
- ret = rtw89_fw_h2c_join_info(rtwdev, rtwvif, rtwsta, false); + ret = rtw89_fw_h2c_join_info(rtwdev, rtwvif_link, rtwsta_link, false); if (ret) { rtw89_warn(rtwdev, "failed to send h2c join info\n"); return ret; }
/* update cam aid mac_id net_type */ - ret = rtw89_fw_h2c_cam(rtwdev, rtwvif, rtwsta, NULL); + ret = rtw89_fw_h2c_cam(rtwdev, rtwvif_link, rtwsta_link, NULL); if (ret) { rtw89_warn(rtwdev, "failed to send h2c cam\n"); return ret; }
- rtwdev->total_sta_assoc++; - if (sta->tdls) - rtwvif->tdls_peer++; - rtw89_phy_ra_assoc(rtwdev, sta); - rtw89_mac_bf_assoc(rtwdev, vif, sta); - rtw89_mac_bf_monitor_calc(rtwdev, sta, false); + rtw89_phy_ra_assoc(rtwdev, rtwsta_link); + rtw89_mac_bf_assoc(rtwdev, rtwvif_link, rtwsta_link); + rtw89_mac_bf_monitor_calc(rtwdev, rtwsta_link, false);
if (vif->type == NL80211_IFTYPE_STATION && !sta->tdls) { - struct ieee80211_bss_conf *bss_conf = &vif->bss_conf; + struct ieee80211_bss_conf *bss_conf;
+ rcu_read_lock(); + + bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true); if (bss_conf->he_support && !(bss_conf->he_oper.params & IEEE80211_HE_OPERATION_ER_SU_DISABLE)) - rtwsta->er_cap = true; + rtwsta_link->er_cap = true; + + rcu_read_unlock();
- rtw89_btc_ntfy_role_info(rtwdev, rtwvif, rtwsta, + rtw89_btc_ntfy_role_info(rtwdev, rtwvif_link, rtwsta_link, BTC_ROLE_MSTS_STA_CONN_END); - rtw89_core_get_no_ul_ofdma_htc(rtwdev, &rtwsta->htc_template, chan); - rtw89_phy_ul_tb_assoc(rtwdev, rtwvif); + rtw89_core_get_no_ul_ofdma_htc(rtwdev, &rtwsta_link->htc_template, chan); + rtw89_phy_ul_tb_assoc(rtwdev, rtwvif_link);
- ret = rtw89_fw_h2c_general_pkt(rtwdev, rtwvif, rtwsta->mac_id); + ret = rtw89_fw_h2c_general_pkt(rtwdev, rtwvif_link, rtwsta_link->mac_id); if (ret) { rtw89_warn(rtwdev, "failed to send h2c general packet\n"); return ret; }
- rtw89_fw_h2c_set_bcn_fltr_cfg(rtwdev, vif, true); + rtw89_fw_h2c_set_bcn_fltr_cfg(rtwdev, rtwvif_link, true); }
return ret; }
-int rtw89_core_sta_remove(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif, - struct ieee80211_sta *sta) +int rtw89_core_sta_link_remove(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link) { - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; - struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv; + const struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link); + const struct ieee80211_sta *sta = rtwsta_link_to_sta(rtwsta_link); int ret;
if (vif->type == NL80211_IFTYPE_STATION && !sta->tdls) { - rtw89_reg_6ghz_recalc(rtwdev, rtwvif, false); - rtw89_btc_ntfy_role_info(rtwdev, rtwvif, rtwsta, + rtw89_reg_6ghz_recalc(rtwdev, rtwvif_link, false); + rtw89_btc_ntfy_role_info(rtwdev, rtwvif_link, rtwsta_link, BTC_ROLE_MSTS_STA_DIS_CONN); } else if (vif->type == NL80211_IFTYPE_AP || sta->tdls) { - rtw89_release_mac_id(rtwdev, rtwsta->mac_id); - - ret = rtw89_fw_h2c_role_maintain(rtwdev, rtwvif, rtwsta, + ret = rtw89_fw_h2c_role_maintain(rtwdev, rtwvif_link, rtwsta_link, RTW89_ROLE_REMOVE); if (ret) { rtw89_warn(rtwdev, "failed to send h2c role info\n"); return ret; } - - rtw89_queue_chanctx_change(rtwdev, RTW89_CHANCTX_REMOTE_STA_CHANGE); }
return 0; @@ -4152,15 +4284,16 @@ static void rtw89_core_ppdu_sts_init(struct rtw89_dev *rtwdev) void rtw89_core_update_beacon_work(struct work_struct *work) { struct rtw89_dev *rtwdev; - struct rtw89_vif *rtwvif = container_of(work, struct rtw89_vif, - update_beacon_work); + struct rtw89_vif_link *rtwvif_link = container_of(work, struct rtw89_vif_link, + update_beacon_work);
- if (rtwvif->net_type != RTW89_NET_TYPE_AP_MODE) + if (rtwvif_link->net_type != RTW89_NET_TYPE_AP_MODE) return;
- rtwdev = rtwvif->rtwdev; + rtwdev = rtwvif_link->rtwvif->rtwdev; + mutex_lock(&rtwdev->mutex); - rtw89_chip_h2c_update_beacon(rtwdev, rtwvif); + rtw89_chip_h2c_update_beacon(rtwdev, rtwvif_link); mutex_unlock(&rtwdev->mutex); }
@@ -4355,6 +4488,168 @@ void rtw89_release_mac_id(struct rtw89_dev *rtwdev, u8 mac_id) clear_bit(mac_id, rtwdev->mac_id_map); }
+void rtw89_init_vif(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, + u8 mac_id, u8 port) +{ + const struct rtw89_chip_info *chip = rtwdev->chip; + u8 support_link_num = chip->support_link_num; + u8 support_mld_num = 0; + unsigned int link_id; + u8 index; + + bitmap_zero(rtwvif->links_inst_map, __RTW89_MLD_MAX_LINK_NUM); + for (link_id = 0; link_id < IEEE80211_MLD_MAX_NUM_LINKS; link_id++) + rtwvif->links[link_id] = NULL; + + rtwvif->rtwdev = rtwdev; + + if (rtwdev->support_mlo) { + rtwvif->links_inst_valid_num = support_link_num; + support_mld_num = chip->support_macid_num / support_link_num; + } else { + rtwvif->links_inst_valid_num = 1; + } + + for (index = 0; index < rtwvif->links_inst_valid_num; index++) { + struct rtw89_vif_link *inst = &rtwvif->links_inst[index]; + + inst->rtwvif = rtwvif; + inst->mac_id = mac_id + index * support_mld_num; + inst->mac_idx = RTW89_MAC_0 + index; + inst->phy_idx = RTW89_PHY_0 + index; + + /* multi-link use the same port id on different HW bands */ + inst->port = port; + } +} + +void rtw89_init_sta(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, + struct rtw89_sta *rtwsta, u8 mac_id) +{ + const struct rtw89_chip_info *chip = rtwdev->chip; + u8 support_link_num = chip->support_link_num; + u8 support_mld_num = 0; + unsigned int link_id; + u8 index; + + bitmap_zero(rtwsta->links_inst_map, __RTW89_MLD_MAX_LINK_NUM); + for (link_id = 0; link_id < IEEE80211_MLD_MAX_NUM_LINKS; link_id++) + rtwsta->links[link_id] = NULL; + + rtwsta->rtwdev = rtwdev; + rtwsta->rtwvif = rtwvif; + + if (rtwdev->support_mlo) { + rtwsta->links_inst_valid_num = support_link_num; + support_mld_num = chip->support_macid_num / support_link_num; + } else { + rtwsta->links_inst_valid_num = 1; + } + + for (index = 0; index < rtwsta->links_inst_valid_num; index++) { + struct rtw89_sta_link *inst = &rtwsta->links_inst[index]; + + inst->rtwvif_link = &rtwvif->links_inst[index]; + + inst->rtwsta = rtwsta; + inst->mac_id = mac_id + index * support_mld_num; + } +} + +struct rtw89_vif_link *rtw89_vif_set_link(struct rtw89_vif *rtwvif, + unsigned int link_id) +{ + struct rtw89_vif_link *rtwvif_link = rtwvif->links[link_id]; + u8 index; + int ret; + + if (rtwvif_link) + return rtwvif_link; + + index = find_first_zero_bit(rtwvif->links_inst_map, + rtwvif->links_inst_valid_num); + if (index == rtwvif->links_inst_valid_num) { + ret = -EBUSY; + goto err; + } + + rtwvif_link = &rtwvif->links_inst[index]; + rtwvif_link->link_id = link_id; + + set_bit(index, rtwvif->links_inst_map); + rtwvif->links[link_id] = rtwvif_link; + return rtwvif_link; + +err: + rtw89_err(rtwvif->rtwdev, "vif (link_id %u) failed to set link: %d\n", + link_id, ret); + return NULL; +} + +void rtw89_vif_unset_link(struct rtw89_vif *rtwvif, unsigned int link_id) +{ + struct rtw89_vif_link **container = &rtwvif->links[link_id]; + struct rtw89_vif_link *link = *container; + u8 index; + + if (!link) + return; + + index = rtw89_vif_link_inst_get_index(link); + clear_bit(index, rtwvif->links_inst_map); + *container = NULL; +} + +struct rtw89_sta_link *rtw89_sta_set_link(struct rtw89_sta *rtwsta, + unsigned int link_id) +{ + struct rtw89_vif *rtwvif = rtwsta->rtwvif; + struct rtw89_vif_link *rtwvif_link = rtwvif->links[link_id]; + struct rtw89_sta_link *rtwsta_link = rtwsta->links[link_id]; + u8 index; + int ret; + + if (rtwsta_link) + return rtwsta_link; + + if (!rtwvif_link) { + ret = -ENOLINK; + goto err; + } + + index = rtw89_vif_link_inst_get_index(rtwvif_link); + if (test_bit(index, rtwsta->links_inst_map)) { + ret = -EBUSY; 
+ goto err; + } + + rtwsta_link = &rtwsta->links_inst[index]; + rtwsta_link->link_id = link_id; + + set_bit(index, rtwsta->links_inst_map); + rtwsta->links[link_id] = rtwsta_link; + return rtwsta_link; + +err: + rtw89_err(rtwsta->rtwdev, "sta (link_id %u) failed to set link: %d\n", + link_id, ret); + return NULL; +} + +void rtw89_sta_unset_link(struct rtw89_sta *rtwsta, unsigned int link_id) +{ + struct rtw89_sta_link **container = &rtwsta->links[link_id]; + struct rtw89_sta_link *link = *container; + u8 index; + + if (!link) + return; + + index = rtw89_sta_link_inst_get_index(link); + clear_bit(index, rtwsta->links_inst_map); + *container = NULL; +} + int rtw89_core_init(struct rtw89_dev *rtwdev) { struct rtw89_btc *btc = &rtwdev->btc; @@ -4444,38 +4739,44 @@ void rtw89_core_deinit(struct rtw89_dev *rtwdev) } EXPORT_SYMBOL(rtw89_core_deinit);
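Aside (illustration, not part of the patch): a hedged sketch of how the new init/set-link helpers are meant to pair up when an interface comes up without MLO. The function name, the port choice and the error handling here are assumptions, not code from this series:

static int example_add_iface(struct rtw89_dev *rtwdev,
			     struct ieee80211_vif *vif)
{
	struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
	struct rtw89_vif_link *rtwvif_link;
	u8 mac_id = rtw89_acquire_mac_id(rtwdev);
	u8 port = 0;	/* assumed free port */

	if (mac_id == RTW89_MAX_MAC_ID_NUM)
		return -ENOSPC;

	/* set up the per-link instances, then activate link id 0 */
	rtw89_init_vif(rtwdev, rtwvif, mac_id, port);

	rtwvif_link = rtw89_vif_set_link(rtwvif, 0);
	if (!rtwvif_link) {
		rtw89_release_mac_id(rtwdev, mac_id);
		return -EBUSY;
	}

	return 0;
}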
-void rtw89_core_scan_start(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, +void rtw89_core_scan_start(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, const u8 *mac_addr, bool hw_scan) { const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, - rtwvif->chanctx_idx); + rtwvif_link->chanctx_idx);
rtwdev->scanning = true; rtw89_leave_lps(rtwdev); if (hw_scan) rtw89_leave_ips_by_hwflags(rtwdev);
- ether_addr_copy(rtwvif->mac_addr, mac_addr); + ether_addr_copy(rtwvif_link->mac_addr, mac_addr); rtw89_btc_ntfy_scan_start(rtwdev, RTW89_PHY_0, chan->band_type); - rtw89_chip_rfk_scan(rtwdev, rtwvif, true); + rtw89_chip_rfk_scan(rtwdev, rtwvif_link, true); rtw89_hci_recalc_int_mit(rtwdev); rtw89_phy_config_edcca(rtwdev, true);
- rtw89_fw_h2c_cam(rtwdev, rtwvif, NULL, mac_addr); + rtw89_fw_h2c_cam(rtwdev, rtwvif_link, NULL, mac_addr); }
void rtw89_core_scan_complete(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif, bool hw_scan) + struct rtw89_vif_link *rtwvif_link, bool hw_scan) { - struct rtw89_vif *rtwvif = vif ? (struct rtw89_vif *)vif->drv_priv : NULL; + struct ieee80211_bss_conf *bss_conf;
- if (!rtwvif) + if (!rtwvif_link) return;
- ether_addr_copy(rtwvif->mac_addr, vif->addr); - rtw89_fw_h2c_cam(rtwdev, rtwvif, NULL, NULL); + rcu_read_lock(); + + bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true); + ether_addr_copy(rtwvif_link->mac_addr, bss_conf->addr); + + rcu_read_unlock(); + + rtw89_fw_h2c_cam(rtwdev, rtwvif_link, NULL, NULL);
- rtw89_chip_rfk_scan(rtwdev, rtwvif, false); + rtw89_chip_rfk_scan(rtwdev, rtwvif_link, false); rtw89_btc_ntfy_scan_finish(rtwdev, RTW89_PHY_0); rtw89_phy_config_edcca(rtwdev, false);
@@ -4688,17 +4989,39 @@ int rtw89_chip_info_setup(struct rtw89_dev *rtwdev) } EXPORT_SYMBOL(rtw89_chip_info_setup);
+void rtw89_chip_cfg_txpwr_ul_tb_offset(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link) +{ + struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link); + const struct rtw89_chip_info *chip = rtwdev->chip; + struct ieee80211_bss_conf *bss_conf; + + rcu_read_lock(); + + bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, false); + if (!bss_conf->he_support || !vif->cfg.assoc) { + rcu_read_unlock(); + return; + } + + rcu_read_unlock(); + + if (chip->ops->set_txpwr_ul_tb_offset) + chip->ops->set_txpwr_ul_tb_offset(rtwdev, 0, rtwvif_link->mac_idx); +} + static int rtw89_core_register_hw(struct rtw89_dev *rtwdev) { const struct rtw89_chip_info *chip = rtwdev->chip; + u8 n = rtwdev->support_mlo ? chip->support_link_num : 1; struct ieee80211_hw *hw = rtwdev->hw; struct rtw89_efuse *efuse = &rtwdev->efuse; struct rtw89_hal *hal = &rtwdev->hal; int ret; int tx_headroom = IEEE80211_HT_CTL_LEN;
- hw->vif_data_size = sizeof(struct rtw89_vif); - hw->sta_data_size = sizeof(struct rtw89_sta); + hw->vif_data_size = struct_size_t(struct rtw89_vif, links_inst, n); + hw->sta_data_size = struct_size_t(struct rtw89_sta, links_inst, n); hw->txq_data_size = sizeof(struct rtw89_txq); hw->chanctx_data_size = sizeof(struct rtw89_chanctx_cfg);
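Aside (illustration, not part of the patch): struct_size_t() sizes a structure with a trailing flexible array, so mac80211 now reserves enough drv_priv space for however many link instances the chip supports. Roughly, and hedged as an approximation:

static size_t example_vif_priv_size(const struct rtw89_dev *rtwdev)
{
	const struct rtw89_chip_info *chip = rtwdev->chip;
	u8 n = rtwdev->support_mlo ? chip->support_link_num : 1;

	/* roughly sizeof(struct rtw89_vif)
	 *	+ n * sizeof(struct rtw89_vif_link),
	 * saturating on overflow */
	return struct_size_t(struct rtw89_vif, links_inst, n);
}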
diff --git a/drivers/net/wireless/realtek/rtw89/core.h b/drivers/net/wireless/realtek/rtw89/core.h
index 4ed9034fdb46..de33320b1354 100644
--- a/drivers/net/wireless/realtek/rtw89/core.h
+++ b/drivers/net/wireless/realtek/rtw89/core.h
@@ -829,6 +829,8 @@ enum rtw89_phy_idx {
 	RTW89_PHY_MAX
 };
+#define __RTW89_MLD_MAX_LINK_NUM 2 + enum rtw89_chanctx_idx { RTW89_CHANCTX_0 = 0, RTW89_CHANCTX_1 = 1, @@ -1166,8 +1168,8 @@ struct rtw89_core_tx_request { enum rtw89_core_tx_type tx_type;
struct sk_buff *skb; - struct ieee80211_vif *vif; - struct ieee80211_sta *sta; + struct rtw89_vif_link *rtwvif_link; + struct rtw89_sta_link *rtwsta_link; struct rtw89_tx_desc_info desc_info; };
@@ -3354,12 +3356,13 @@ struct rtw89_sec_cam_entry { u8 key[32]; };
-struct rtw89_sta { +struct rtw89_sta_link { + struct rtw89_sta *rtwsta; + unsigned int link_id; + u8 mac_id; - bool disassoc; bool er_cap; - struct rtw89_dev *rtwdev; - struct rtw89_vif *rtwvif; + struct rtw89_vif_link *rtwvif_link; struct rtw89_ra_info ra; struct rtw89_ra_report ra_report; int max_agg_wait; @@ -3370,15 +3373,12 @@ struct rtw89_sta { struct ewma_evm evm_1ss; struct ewma_evm evm_min[RF_PATH_MAX]; struct ewma_evm evm_max[RF_PATH_MAX]; - struct rtw89_ampdu_params ampdu_params[IEEE80211_NUM_TIDS]; - DECLARE_BITMAP(ampdu_map, IEEE80211_NUM_TIDS); struct ieee80211_rx_status rx_status; u16 rx_hw_rate; __le32 htc_template; struct rtw89_addr_cam_entry addr_cam; /* AP mode or TDLS peer only */ struct rtw89_bssid_cam_entry bssid_cam; /* TDLS peer only */ struct list_head ba_cam_list; - struct sk_buff_head roc_queue;
bool use_cfg_mask; struct cfg80211_bitrate_mask mask; @@ -3460,10 +3460,10 @@ struct rtw89_p2p_noa_setter { u8 noa_index; };
-struct rtw89_vif { - struct list_head list; - struct rtw89_dev *rtwdev; - struct rtw89_roc roc; +struct rtw89_vif_link { + struct rtw89_vif *rtwvif; + unsigned int link_id; + bool chanctx_assigned; /* only valid when running with chanctx_ops */ enum rtw89_chanctx_idx chanctx_idx; enum rtw89_reg_6ghz_power reg_6ghz_power; @@ -3473,7 +3473,6 @@ struct rtw89_vif { u8 port; u8 mac_addr[ETH_ALEN]; u8 bssid[ETH_ALEN]; - __be32 ip_addr; u8 phy_idx; u8 mac_idx; u8 net_type; @@ -3484,7 +3483,6 @@ struct rtw89_vif { u8 hit_rule; u8 last_noa_nr; u64 sync_bcn_tsf; - bool offchan; bool trigger; bool lsig_txop; u8 tgt_ind; @@ -3498,15 +3496,11 @@ struct rtw89_vif { bool pre_pwr_diff_en; bool pwr_diff_en; u8 def_tri_idx; - u32 tdls_peer; struct work_struct update_beacon_work; struct rtw89_addr_cam_entry addr_cam; struct rtw89_bssid_cam_entry bssid_cam; struct ieee80211_tx_queue_params tx_params[IEEE80211_NUM_ACS]; - struct rtw89_traffic_stats stats; struct rtw89_phy_rate_pattern rate_pattern; - struct cfg80211_scan_request *scan_req; - struct ieee80211_scan_ies *scan_ies; struct list_head general_pkt_list; struct rtw89_p2p_noa_setter p2p_noa; }; @@ -3599,11 +3593,11 @@ struct rtw89_chip_ops { void (*rfk_hw_init)(struct rtw89_dev *rtwdev); void (*rfk_init)(struct rtw89_dev *rtwdev); void (*rfk_init_late)(struct rtw89_dev *rtwdev); - void (*rfk_channel)(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif); + void (*rfk_channel)(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link); void (*rfk_band_changed)(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy_idx, const struct rtw89_chan *chan); - void (*rfk_scan)(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, + void (*rfk_scan)(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, bool start); void (*rfk_track)(struct rtw89_dev *rtwdev); void (*power_trim)(struct rtw89_dev *rtwdev); @@ -3646,23 +3640,25 @@ struct rtw89_chip_ops { u32 *tx_en, enum rtw89_sch_tx_sel sel); int (*resume_sch_tx)(struct rtw89_dev *rtwdev, u8 mac_idx, u32 tx_en); int (*h2c_dctl_sec_cam)(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, - struct rtw89_sta *rtwsta); + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link); int (*h2c_default_cmac_tbl)(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, - struct rtw89_sta *rtwsta); + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link); int (*h2c_assoc_cmac_tbl)(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif, - struct ieee80211_sta *sta); + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link); int (*h2c_ampdu_cmac_tbl)(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif, - struct ieee80211_sta *sta); + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link); int (*h2c_default_dmac_tbl)(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, - struct rtw89_sta *rtwsta); + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link); int (*h2c_update_beacon)(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif); - int (*h2c_ba_cam)(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta, + struct rtw89_vif_link *rtwvif_link); + int (*h2c_ba_cam)(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link, bool valid, struct ieee80211_ampdu_params *params);
void (*btc_set_rfe)(struct rtw89_dev *rtwdev); @@ -5196,7 +5192,7 @@ struct rtw89_early_h2c { };
struct rtw89_hw_scan_info { - struct ieee80211_vif *scanning_vif; + struct rtw89_vif_link *scanning_vif; struct list_head pkt_list[NUM_NL80211_BANDS]; struct rtw89_chan op_chan; bool abort; @@ -5371,7 +5367,7 @@ struct rtw89_wow_aoac_report { };
struct rtw89_wow_param { - struct ieee80211_vif *wow_vif; + struct rtw89_vif_link *rtwvif_link; DECLARE_BITMAP(flags, RTW89_WOW_FLAG_NUM); struct rtw89_wow_cam_info patterns[RTW89_MAX_PATTERN_NUM]; struct rtw89_wow_key_info key_info; @@ -5408,7 +5404,7 @@ struct rtw89_mcc_policy { };
struct rtw89_mcc_role { - struct rtw89_vif *rtwvif; + struct rtw89_vif_link *rtwvif_link; struct rtw89_mcc_policy policy; struct rtw89_mcc_limit limit;
@@ -5608,6 +5604,121 @@ struct rtw89_dev { u8 priv[] __aligned(sizeof(void *)); };
+struct rtw89_vif { + struct rtw89_dev *rtwdev; + struct list_head list; + + u8 mac_addr[ETH_ALEN]; + __be32 ip_addr; + + struct rtw89_traffic_stats stats; + u32 tdls_peer; + + struct ieee80211_scan_ies *scan_ies; + struct cfg80211_scan_request *scan_req; + + struct rtw89_roc roc; + bool offchan; + + u8 links_inst_valid_num; + DECLARE_BITMAP(links_inst_map, __RTW89_MLD_MAX_LINK_NUM); + struct rtw89_vif_link *links[IEEE80211_MLD_MAX_NUM_LINKS]; + struct rtw89_vif_link links_inst[] __counted_by(links_inst_valid_num); +}; + +static inline bool rtw89_vif_assign_link_is_valid(struct rtw89_vif_link **rtwvif_link, + const struct rtw89_vif *rtwvif, + unsigned int link_id) +{ + *rtwvif_link = rtwvif->links[link_id]; + return !!*rtwvif_link; +} + +#define rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) \ + for (link_id = 0; link_id < IEEE80211_MLD_MAX_NUM_LINKS; link_id++) \ + if (rtw89_vif_assign_link_is_valid(&(rtwvif_link), rtwvif, link_id)) + +struct rtw89_sta { + struct rtw89_dev *rtwdev; + struct rtw89_vif *rtwvif; + + bool disassoc; + + struct sk_buff_head roc_queue; + + struct rtw89_ampdu_params ampdu_params[IEEE80211_NUM_TIDS]; + DECLARE_BITMAP(ampdu_map, IEEE80211_NUM_TIDS); + + u8 links_inst_valid_num; + DECLARE_BITMAP(links_inst_map, __RTW89_MLD_MAX_LINK_NUM); + struct rtw89_sta_link *links[IEEE80211_MLD_MAX_NUM_LINKS]; + struct rtw89_sta_link links_inst[] __counted_by(links_inst_valid_num); +}; + +static inline bool rtw89_sta_assign_link_is_valid(struct rtw89_sta_link **rtwsta_link, + const struct rtw89_sta *rtwsta, + unsigned int link_id) +{ + *rtwsta_link = rtwsta->links[link_id]; + return !!*rtwsta_link; +} + +#define rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) \ + for (link_id = 0; link_id < IEEE80211_MLD_MAX_NUM_LINKS; link_id++) \ + if (rtw89_sta_assign_link_is_valid(&(rtwsta_link), rtwsta, link_id)) + +static inline u8 rtw89_vif_get_main_macid(struct rtw89_vif *rtwvif) +{ + /* const after init, so no need to check if active first */ + return rtwvif->links_inst[0].mac_id; +} + +static inline u8 rtw89_vif_get_main_port(struct rtw89_vif *rtwvif) +{ + /* const after init, so no need to check if active first */ + return rtwvif->links_inst[0].port; +} + +static inline struct rtw89_vif_link * +rtw89_vif_get_link_inst(struct rtw89_vif *rtwvif, u8 index) +{ + if (index >= rtwvif->links_inst_valid_num || + !test_bit(index, rtwvif->links_inst_map)) + return NULL; + return &rtwvif->links_inst[index]; +} + +static inline +u8 rtw89_vif_link_inst_get_index(struct rtw89_vif_link *rtwvif_link) +{ + struct rtw89_vif *rtwvif = rtwvif_link->rtwvif; + + return rtwvif_link - rtwvif->links_inst; +} + +static inline u8 rtw89_sta_get_main_macid(struct rtw89_sta *rtwsta) +{ + /* const after init, so no need to check if active first */ + return rtwsta->links_inst[0].mac_id; +} + +static inline struct rtw89_sta_link * +rtw89_sta_get_link_inst(struct rtw89_sta *rtwsta, u8 index) +{ + if (index >= rtwsta->links_inst_valid_num || + !test_bit(index, rtwsta->links_inst_map)) + return NULL; + return &rtwsta->links_inst[index]; +} + +static inline +u8 rtw89_sta_link_inst_get_index(struct rtw89_sta_link *rtwsta_link) +{ + struct rtw89_sta *rtwsta = rtwsta_link->rtwsta; + + return rtwsta_link - rtwsta->links_inst; +} + static inline int rtw89_hci_tx_write(struct rtw89_dev *rtwdev, struct rtw89_core_tx_request *tx_req) { @@ -5972,9 +6083,26 @@ static inline struct ieee80211_vif *rtwvif_to_vif_safe(struct rtw89_vif *rtwvif) return rtwvif ? rtwvif_to_vif(rtwvif) : NULL; }
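Aside (illustration, not part of the patch): with rtw89_vif now holding the link instances, a typical walk over every active link of every interface combines the existing rtw89_for_each_rtwvif() with the new per-link iterator. A hedged sketch; the function name and the debug print are illustrative only, and the caller is assumed to hold rtwdev->mutex as the vif list iterator requires:

static void example_dump_active_links(struct rtw89_dev *rtwdev)
{
	struct rtw89_vif_link *rtwvif_link;
	struct rtw89_vif *rtwvif;
	unsigned int link_id;

	rtw89_for_each_rtwvif(rtwdev, rtwvif)
		rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
			/* debug mask reused from this patch for the sketch */
			rtw89_debug(rtwdev, RTW89_DBG_TXRX,
				    "vif %pM link %u -> mac_id %u\n",
				    rtwvif->mac_addr, link_id,
				    rtwvif_link->mac_id);
}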
+static inline +struct ieee80211_vif *rtwvif_link_to_vif(struct rtw89_vif_link *rtwvif_link) +{ + return rtwvif_to_vif(rtwvif_link->rtwvif); +} + +static inline +struct ieee80211_vif *rtwvif_link_to_vif_safe(struct rtw89_vif_link *rtwvif_link) +{ + return rtwvif_link ? rtwvif_link_to_vif(rtwvif_link) : NULL; +} + +static inline struct rtw89_vif *vif_to_rtwvif(struct ieee80211_vif *vif) +{ + return (struct rtw89_vif *)vif->drv_priv; +} + static inline struct rtw89_vif *vif_to_rtwvif_safe(struct ieee80211_vif *vif) { - return vif ? (struct rtw89_vif *)vif->drv_priv : NULL; + return vif ? vif_to_rtwvif(vif) : NULL; }
static inline struct ieee80211_sta *rtwsta_to_sta(struct rtw89_sta *rtwsta) @@ -5989,11 +6117,88 @@ static inline struct ieee80211_sta *rtwsta_to_sta_safe(struct rtw89_sta *rtwsta) return rtwsta ? rtwsta_to_sta(rtwsta) : NULL; }
+static inline +struct ieee80211_sta *rtwsta_link_to_sta(struct rtw89_sta_link *rtwsta_link) +{ + return rtwsta_to_sta(rtwsta_link->rtwsta); +} + +static inline +struct ieee80211_sta *rtwsta_link_to_sta_safe(struct rtw89_sta_link *rtwsta_link) +{ + return rtwsta_link ? rtwsta_link_to_sta(rtwsta_link) : NULL; +} + +static inline struct rtw89_sta *sta_to_rtwsta(struct ieee80211_sta *sta) +{ + return (struct rtw89_sta *)sta->drv_priv; +} + static inline struct rtw89_sta *sta_to_rtwsta_safe(struct ieee80211_sta *sta) { - return sta ? (struct rtw89_sta *)sta->drv_priv : NULL; + return sta ? sta_to_rtwsta(sta) : NULL; }
+static inline struct ieee80211_bss_conf * +__rtw89_vif_rcu_dereference_link(struct rtw89_vif_link *rtwvif_link, bool *nolink) +{ + struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link); + struct ieee80211_bss_conf *bss_conf; + + bss_conf = rcu_dereference(vif->link_conf[rtwvif_link->link_id]); + if (unlikely(!bss_conf)) { + *nolink = true; + return &vif->bss_conf; + } + + *nolink = false; + return bss_conf; +} + +#define rtw89_vif_rcu_dereference_link(rtwvif_link, assert) \ +({ \ + typeof(rtwvif_link) p = rtwvif_link; \ + struct ieee80211_bss_conf *bss_conf; \ + bool nolink; \ + \ + bss_conf = __rtw89_vif_rcu_dereference_link(p, &nolink); \ + if (unlikely(nolink) && (assert)) \ + rtw89_err(p->rtwvif->rtwdev, \ + "%s: cannot find exact bss_conf for link_id %u\n",\ + __func__, p->link_id); \ + bss_conf; \ +}) + +static inline struct ieee80211_link_sta * +__rtw89_sta_rcu_dereference_link(struct rtw89_sta_link *rtwsta_link, bool *nolink) +{ + struct ieee80211_sta *sta = rtwsta_link_to_sta(rtwsta_link); + struct ieee80211_link_sta *link_sta; + + link_sta = rcu_dereference(sta->link[rtwsta_link->link_id]); + if (unlikely(!link_sta)) { + *nolink = true; + return &sta->deflink; + } + + *nolink = false; + return link_sta; +} + +#define rtw89_sta_rcu_dereference_link(rtwsta_link, assert) \ +({ \ + typeof(rtwsta_link) p = rtwsta_link; \ + struct ieee80211_link_sta *link_sta; \ + bool nolink; \ + \ + link_sta = __rtw89_sta_rcu_dereference_link(p, &nolink); \ + if (unlikely(nolink) && (assert)) \ + rtw89_err(p->rtwsta->rtwdev, \ + "%s: cannot find exact link_sta for link_id %u\n",\ + __func__, p->link_id); \ + link_sta; \ +}) + static inline u8 rtw89_hw_to_rate_info_bw(enum rtw89_bandwidth hw_bw) { if (hw_bw == RTW89_CHANNEL_WIDTH_160) @@ -6078,29 +6283,29 @@ enum nl80211_he_ru_alloc rtw89_he_rua_to_ru_alloc(u16 rua) }
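Aside (illustration, not part of the patch): the new rcu_dereference helpers must be called inside an RCU read-side section, and the boolean argument only controls whether falling back to the default bss_conf/deflink is logged as an error. A minimal usage sketch mirroring the assoc path above; example_link_he_support is an assumed name:

static bool example_link_he_support(struct rtw89_vif_link *rtwvif_link)
{
	struct ieee80211_bss_conf *bss_conf;
	bool he;

	rcu_read_lock();
	bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
	he = bss_conf->he_support;
	rcu_read_unlock();

	return he;
}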
static inline -struct rtw89_addr_cam_entry *rtw89_get_addr_cam_of(struct rtw89_vif *rtwvif, - struct rtw89_sta *rtwsta) +struct rtw89_addr_cam_entry *rtw89_get_addr_cam_of(struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link) { - if (rtwsta) { - struct ieee80211_sta *sta = rtwsta_to_sta(rtwsta); + if (rtwsta_link) { + struct ieee80211_sta *sta = rtwsta_link_to_sta(rtwsta_link);
- if (rtwvif->net_type == RTW89_NET_TYPE_AP_MODE || sta->tdls) - return &rtwsta->addr_cam; + if (rtwvif_link->net_type == RTW89_NET_TYPE_AP_MODE || sta->tdls) + return &rtwsta_link->addr_cam; } - return &rtwvif->addr_cam; + return &rtwvif_link->addr_cam; }
static inline -struct rtw89_bssid_cam_entry *rtw89_get_bssid_cam_of(struct rtw89_vif *rtwvif, - struct rtw89_sta *rtwsta) +struct rtw89_bssid_cam_entry *rtw89_get_bssid_cam_of(struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link) { - if (rtwsta) { - struct ieee80211_sta *sta = rtwsta_to_sta(rtwsta); + if (rtwsta_link) { + struct ieee80211_sta *sta = rtwsta_link_to_sta(rtwsta_link);
if (sta->tdls) - return &rtwsta->bssid_cam; + return &rtwsta_link->bssid_cam; } - return &rtwvif->bssid_cam; + return &rtwvif_link->bssid_cam; }
static inline @@ -6159,11 +6364,10 @@ const struct rtw89_chan_rcd *rtw89_chan_rcd_get(struct rtw89_dev *rtwdev, static inline const struct rtw89_chan *rtw89_scan_chan_get(struct rtw89_dev *rtwdev) { - struct ieee80211_vif *vif = rtwdev->scan_info.scanning_vif; - struct rtw89_vif *rtwvif = vif_to_rtwvif_safe(vif); + struct rtw89_vif_link *rtwvif_link = rtwdev->scan_info.scanning_vif;
- if (rtwvif) - return rtw89_chan_get(rtwdev, rtwvif->chanctx_idx); + if (rtwvif_link) + return rtw89_chan_get(rtwdev, rtwvif_link->chanctx_idx); else return rtw89_chan_get(rtwdev, RTW89_CHANCTX_0); } @@ -6240,12 +6444,12 @@ static inline void rtw89_chip_rfk_init_late(struct rtw89_dev *rtwdev) }
static inline void rtw89_chip_rfk_channel(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif) + struct rtw89_vif_link *rtwvif_link) { const struct rtw89_chip_info *chip = rtwdev->chip;
if (chip->ops->rfk_channel) - chip->ops->rfk_channel(rtwdev, rtwvif); + chip->ops->rfk_channel(rtwdev, rtwvif_link); }
static inline void rtw89_chip_rfk_band_changed(struct rtw89_dev *rtwdev, @@ -6259,12 +6463,12 @@ static inline void rtw89_chip_rfk_band_changed(struct rtw89_dev *rtwdev, }
static inline void rtw89_chip_rfk_scan(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, bool start) + struct rtw89_vif_link *rtwvif_link, bool start) { const struct rtw89_chip_info *chip = rtwdev->chip;
if (chip->ops->rfk_scan) - chip->ops->rfk_scan(rtwdev, rtwvif, start); + chip->ops->rfk_scan(rtwdev, rtwvif_link, start); }
static inline void rtw89_chip_rfk_track(struct rtw89_dev *rtwdev) @@ -6347,20 +6551,6 @@ static inline void rtw89_chip_cfg_txrx_path(struct rtw89_dev *rtwdev) chip->ops->cfg_txrx_path(rtwdev); }
-static inline -void rtw89_chip_cfg_txpwr_ul_tb_offset(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif) -{ - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; - const struct rtw89_chip_info *chip = rtwdev->chip; - - if (!vif->bss_conf.he_support || !vif->cfg.assoc) - return; - - if (chip->ops->set_txpwr_ul_tb_offset) - chip->ops->set_txpwr_ul_tb_offset(rtwdev, 0, rtwvif->mac_idx); -} - static inline void rtw89_chip_digital_pwr_comp(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy_idx) { @@ -6457,14 +6647,14 @@ int rtw89_chip_resume_sch_tx(struct rtw89_dev *rtwdev, u8 mac_idx, u32 tx_en)
static inline int rtw89_chip_h2c_dctl_sec_cam(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, - struct rtw89_sta *rtwsta) + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link) { const struct rtw89_chip_info *chip = rtwdev->chip;
if (!chip->ops->h2c_dctl_sec_cam) return 0; - return chip->ops->h2c_dctl_sec_cam(rtwdev, rtwvif, rtwsta); + return chip->ops->h2c_dctl_sec_cam(rtwdev, rtwvif_link, rtwsta_link); }
static inline u8 *get_hdr_bssid(struct ieee80211_hdr *hdr) @@ -6479,13 +6669,14 @@ static inline u8 *get_hdr_bssid(struct ieee80211_hdr *hdr) return hdr->addr3; }
-static inline bool rtw89_sta_has_beamformer_cap(struct ieee80211_sta *sta) +static inline +bool rtw89_sta_has_beamformer_cap(struct ieee80211_link_sta *link_sta) { - if ((sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_MU_BEAMFORMER_CAPABLE) || - (sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE) || - (sta->deflink.he_cap.he_cap_elem.phy_cap_info[3] & + if ((link_sta->vht_cap.cap & IEEE80211_VHT_CAP_MU_BEAMFORMER_CAPABLE) || + (link_sta->vht_cap.cap & IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE) || + (link_sta->he_cap.he_cap_elem.phy_cap_info[3] & IEEE80211_HE_PHY_CAP3_SU_BEAMFORMER) || - (sta->deflink.he_cap.he_cap_elem.phy_cap_info[4] & + (link_sta->he_cap.he_cap_elem.phy_cap_info[4] & IEEE80211_HE_PHY_CAP4_MU_BEAMFORMER)) return true; return false; @@ -6605,21 +6796,21 @@ void rtw89_core_napi_start(struct rtw89_dev *rtwdev); void rtw89_core_napi_stop(struct rtw89_dev *rtwdev); int rtw89_core_napi_init(struct rtw89_dev *rtwdev); void rtw89_core_napi_deinit(struct rtw89_dev *rtwdev); -int rtw89_core_sta_add(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif, - struct ieee80211_sta *sta); -int rtw89_core_sta_assoc(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif, - struct ieee80211_sta *sta); -int rtw89_core_sta_disassoc(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif, - struct ieee80211_sta *sta); -int rtw89_core_sta_disconnect(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif, - struct ieee80211_sta *sta); -int rtw89_core_sta_remove(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif, - struct ieee80211_sta *sta); +int rtw89_core_sta_link_add(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link); +int rtw89_core_sta_link_assoc(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link); +int rtw89_core_sta_link_disassoc(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link); +int rtw89_core_sta_link_disconnect(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link); +int rtw89_core_sta_link_remove(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link); void rtw89_core_set_tid_config(struct rtw89_dev *rtwdev, struct ieee80211_sta *sta, struct cfg80211_tid_config *tid_config); @@ -6635,22 +6826,40 @@ struct rtw89_dev *rtw89_alloc_ieee80211_hw(struct device *device, void rtw89_free_ieee80211_hw(struct rtw89_dev *rtwdev); u8 rtw89_acquire_mac_id(struct rtw89_dev *rtwdev); void rtw89_release_mac_id(struct rtw89_dev *rtwdev, u8 mac_id); +void rtw89_init_vif(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, + u8 mac_id, u8 port); +void rtw89_init_sta(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, + struct rtw89_sta *rtwsta, u8 mac_id); +struct rtw89_vif_link *rtw89_vif_set_link(struct rtw89_vif *rtwvif, + unsigned int link_id); +void rtw89_vif_unset_link(struct rtw89_vif *rtwvif, unsigned int link_id); +struct rtw89_sta_link *rtw89_sta_set_link(struct rtw89_sta *rtwsta, + unsigned int link_id); +void rtw89_sta_unset_link(struct rtw89_sta *rtwsta, unsigned int link_id); void rtw89_core_set_chip_txpwr(struct rtw89_dev *rtwdev); void rtw89_get_default_chandef(struct cfg80211_chan_def *chandef); void rtw89_get_channel_params(const struct cfg80211_chan_def *chandef, struct rtw89_chan *chan); int rtw89_set_channel(struct rtw89_dev *rtwdev); -void rtw89_get_channel(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, - struct rtw89_chan 
*chan); u8 rtw89_core_acquire_bit_map(unsigned long *addr, unsigned long size); void rtw89_core_release_bit_map(unsigned long *addr, u8 bit); void rtw89_core_release_all_bits_map(unsigned long *addr, unsigned int nbits); int rtw89_core_acquire_sta_ba_entry(struct rtw89_dev *rtwdev, - struct rtw89_sta *rtwsta, u8 tid, u8 *cam_idx); + struct rtw89_sta_link *rtwsta_link, u8 tid, + u8 *cam_idx); int rtw89_core_release_sta_ba_entry(struct rtw89_dev *rtwdev, - struct rtw89_sta *rtwsta, u8 tid, u8 *cam_idx); -void rtw89_vif_type_mapping(struct ieee80211_vif *vif, bool assoc); + struct rtw89_sta_link *rtwsta_link, u8 tid, + u8 *cam_idx); +void rtw89_core_free_sta_pending_ba(struct rtw89_dev *rtwdev, + struct ieee80211_sta *sta); +void rtw89_core_free_sta_pending_forbid_ba(struct rtw89_dev *rtwdev, + struct ieee80211_sta *sta); +void rtw89_core_free_sta_pending_roc_tx(struct rtw89_dev *rtwdev, + struct ieee80211_sta *sta); +void rtw89_vif_type_mapping(struct rtw89_vif_link *rtwvif_link, bool assoc); int rtw89_chip_info_setup(struct rtw89_dev *rtwdev); +void rtw89_chip_cfg_txpwr_ul_tb_offset(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link); bool rtw89_ra_report_to_bitrate(struct rtw89_dev *rtwdev, u8 rpt_rate, u16 *bitrate); int rtw89_regd_setup(struct rtw89_dev *rtwdev); int rtw89_regd_init(struct rtw89_dev *rtwdev, @@ -6667,13 +6876,15 @@ void rtw89_core_update_beacon_work(struct work_struct *work); void rtw89_roc_work(struct work_struct *work); void rtw89_roc_start(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif); void rtw89_roc_end(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif); -void rtw89_core_scan_start(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, +void rtw89_core_scan_start(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, const u8 *mac_addr, bool hw_scan); void rtw89_core_scan_complete(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif, bool hw_scan); -int rtw89_reg_6ghz_recalc(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, + struct rtw89_vif_link *rtwvif_link, bool hw_scan); +int rtw89_reg_6ghz_recalc(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, bool active); -void rtw89_core_update_p2p_ps(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif); +void rtw89_core_update_p2p_ps(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, + struct ieee80211_bss_conf *bss_conf); void rtw89_core_ntfy_btc_event(struct rtw89_dev *rtwdev, enum rtw89_btc_hmsg event);
 #endif
diff --git a/drivers/net/wireless/realtek/rtw89/debug.c b/drivers/net/wireless/realtek/rtw89/debug.c
index 29f85210f919..7391f131229a 100644
--- a/drivers/net/wireless/realtek/rtw89/debug.c
+++ b/drivers/net/wireless/realtek/rtw89/debug.c
@@ -3506,7 +3506,9 @@ static ssize_t rtw89_debug_priv_fw_log_manual_set(struct file *filp,
 	return count;
 }
-static void rtw89_sta_info_get_iter(void *data, struct ieee80211_sta *sta) +static void rtw89_sta_link_info_get_iter(struct seq_file *m, + struct rtw89_dev *rtwdev, + struct rtw89_sta_link *rtwsta_link) { static const char * const he_gi_str[] = { [NL80211_RATE_INFO_HE_GI_0_8] = "0.8", @@ -3518,20 +3520,26 @@ static void rtw89_sta_info_get_iter(void *data, struct ieee80211_sta *sta) [NL80211_RATE_INFO_EHT_GI_1_6] = "1.6", [NL80211_RATE_INFO_EHT_GI_3_2] = "3.2", }; - struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv; - struct rate_info *rate = &rtwsta->ra_report.txrate; - struct ieee80211_rx_status *status = &rtwsta->rx_status; - struct seq_file *m = (struct seq_file *)data; - struct rtw89_dev *rtwdev = rtwsta->rtwdev; + struct rate_info *rate = &rtwsta_link->ra_report.txrate; + struct ieee80211_rx_status *status = &rtwsta_link->rx_status; struct rtw89_hal *hal = &rtwdev->hal; u8 ant_num = hal->ant_diversity ? 2 : rtwdev->chip->rf_path_num; bool ant_asterisk = hal->tx_path_diversity || hal->ant_diversity; + struct ieee80211_link_sta *link_sta; u8 evm_min, evm_max, evm_1ss; + u16 max_rc_amsdu_len; u8 rssi; u8 snr; int i;
- seq_printf(m, "TX rate [%d]: ", rtwsta->mac_id); + rcu_read_lock(); + + link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true); + max_rc_amsdu_len = link_sta->agg.max_rc_amsdu_len; + + rcu_read_unlock(); + + seq_printf(m, "TX rate [%u, %u]: ", rtwsta_link->mac_id, rtwsta_link->link_id);
if (rate->flags & RATE_INFO_FLAGS_MCS) seq_printf(m, "HT MCS-%d%s", rate->mcs, @@ -3549,13 +3557,13 @@ static void rtw89_sta_info_get_iter(void *data, struct ieee80211_sta *sta) eht_gi_str[rate->eht_gi] : "N/A"); else seq_printf(m, "Legacy %d", rate->legacy); - seq_printf(m, "%s", rtwsta->ra_report.might_fallback_legacy ? " FB_G" : ""); + seq_printf(m, "%s", rtwsta_link->ra_report.might_fallback_legacy ? " FB_G" : ""); seq_printf(m, " BW:%u", rtw89_rate_info_bw_to_mhz(rate->bw)); - seq_printf(m, "\t(hw_rate=0x%x)", rtwsta->ra_report.hw_rate); - seq_printf(m, "\t==> agg_wait=%d (%d)\n", rtwsta->max_agg_wait, - sta->deflink.agg.max_rc_amsdu_len); + seq_printf(m, " (hw_rate=0x%x)", rtwsta_link->ra_report.hw_rate); + seq_printf(m, " ==> agg_wait=%d (%d)\n", rtwsta_link->max_agg_wait, + max_rc_amsdu_len);
- seq_printf(m, "RX rate [%d]: ", rtwsta->mac_id); + seq_printf(m, "RX rate [%u, %u]: ", rtwsta_link->mac_id, rtwsta_link->link_id);
switch (status->encoding) { case RX_ENC_LEGACY: @@ -3582,24 +3590,24 @@ static void rtw89_sta_info_get_iter(void *data, struct ieee80211_sta *sta) break; } seq_printf(m, " BW:%u", rtw89_rate_info_bw_to_mhz(status->bw)); - seq_printf(m, "\t(hw_rate=0x%x)\n", rtwsta->rx_hw_rate); + seq_printf(m, " (hw_rate=0x%x)\n", rtwsta_link->rx_hw_rate);
- rssi = ewma_rssi_read(&rtwsta->avg_rssi); + rssi = ewma_rssi_read(&rtwsta_link->avg_rssi); seq_printf(m, "RSSI: %d dBm (raw=%d, prev=%d) [", - RTW89_RSSI_RAW_TO_DBM(rssi), rssi, rtwsta->prev_rssi); + RTW89_RSSI_RAW_TO_DBM(rssi), rssi, rtwsta_link->prev_rssi); for (i = 0; i < ant_num; i++) { - rssi = ewma_rssi_read(&rtwsta->rssi[i]); + rssi = ewma_rssi_read(&rtwsta_link->rssi[i]); seq_printf(m, "%d%s%s", RTW89_RSSI_RAW_TO_DBM(rssi), ant_asterisk && (hal->antenna_tx & BIT(i)) ? "*" : "", i + 1 == ant_num ? "" : ", "); } seq_puts(m, "]\n");
- evm_1ss = ewma_evm_read(&rtwsta->evm_1ss); + evm_1ss = ewma_evm_read(&rtwsta_link->evm_1ss); seq_printf(m, "EVM: [%2u.%02u, ", evm_1ss >> 2, (evm_1ss & 0x3) * 25); for (i = 0; i < (hal->ant_diversity ? 2 : 1); i++) { - evm_min = ewma_evm_read(&rtwsta->evm_min[i]); - evm_max = ewma_evm_read(&rtwsta->evm_max[i]); + evm_min = ewma_evm_read(&rtwsta_link->evm_min[i]); + evm_max = ewma_evm_read(&rtwsta_link->evm_max[i]);
seq_printf(m, "%s(%2u.%02u, %2u.%02u)", i == 0 ? "" : " ", evm_min >> 2, (evm_min & 0x3) * 25, @@ -3607,10 +3615,22 @@ static void rtw89_sta_info_get_iter(void *data, struct ieee80211_sta *sta) } seq_puts(m, "]\t");
- snr = ewma_snr_read(&rtwsta->avg_snr); + snr = ewma_snr_read(&rtwsta_link->avg_snr); seq_printf(m, "SNR: %u\n", snr); }
+static void rtw89_sta_info_get_iter(void *data, struct ieee80211_sta *sta) +{ + struct seq_file *m = (struct seq_file *)data; + struct rtw89_sta *rtwsta = sta_to_rtwsta(sta); + struct rtw89_dev *rtwdev = rtwsta->rtwdev; + struct rtw89_sta_link *rtwsta_link; + unsigned int link_id; + + rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) + rtw89_sta_link_info_get_iter(m, rtwdev, rtwsta_link); +} + static void rtw89_debug_append_rx_rate(struct seq_file *m, struct rtw89_pkt_stat *pkt_stat, enum rtw89_hw_rate first_rate, int len) @@ -3737,28 +3757,41 @@ static void rtw89_dump_pkt_offload(struct seq_file *m, struct list_head *pkt_lis seq_puts(m, "\n"); }
+static void rtw89_vif_link_ids_get(struct seq_file *m, u8 *mac, + struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link) +{ + struct rtw89_bssid_cam_entry *bssid_cam = &rtwvif_link->bssid_cam; + + seq_printf(m, " [%u] %pM\n", rtwvif_link->mac_id, rtwvif_link->mac_addr); + seq_printf(m, "\tlink_id=%u\n", rtwvif_link->link_id); + seq_printf(m, "\tbssid_cam_idx=%u\n", bssid_cam->bssid_cam_idx); + rtw89_dump_addr_cam(m, rtwdev, &rtwvif_link->addr_cam); + rtw89_dump_pkt_offload(m, &rtwvif_link->general_pkt_list, + "\tpkt_ofld[GENERAL]: "); +} + static void rtw89_vif_ids_get_iter(void *data, u8 *mac, struct ieee80211_vif *vif) { - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; - struct rtw89_dev *rtwdev = rtwvif->rtwdev; struct seq_file *m = (struct seq_file *)data; - struct rtw89_bssid_cam_entry *bssid_cam = &rtwvif->bssid_cam; + struct rtw89_vif *rtwvif = vif_to_rtwvif(vif); + struct rtw89_dev *rtwdev = rtwvif->rtwdev; + struct rtw89_vif_link *rtwvif_link; + unsigned int link_id;
- seq_printf(m, "VIF [%d] %pM\n", rtwvif->mac_id, rtwvif->mac_addr); - seq_printf(m, "\tbssid_cam_idx=%u\n", bssid_cam->bssid_cam_idx); - rtw89_dump_addr_cam(m, rtwdev, &rtwvif->addr_cam); - rtw89_dump_pkt_offload(m, &rtwvif->general_pkt_list, "\tpkt_ofld[GENERAL]: "); + seq_printf(m, "VIF %pM\n", rtwvif->mac_addr); + rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) + rtw89_vif_link_ids_get(m, mac, rtwdev, rtwvif_link); }
-static void rtw89_dump_ba_cam(struct seq_file *m, struct rtw89_sta *rtwsta) +static void rtw89_dump_ba_cam(struct seq_file *m, struct rtw89_dev *rtwdev, + struct rtw89_sta_link *rtwsta_link) { - struct rtw89_vif *rtwvif = rtwsta->rtwvif; - struct rtw89_dev *rtwdev = rtwvif->rtwdev; struct rtw89_ba_cam_entry *entry; bool first = true;
- list_for_each_entry(entry, &rtwsta->ba_cam_list, list) { + list_for_each_entry(entry, &rtwsta_link->ba_cam_list, list) { if (first) { seq_puts(m, "\tba_cam "); first = false; @@ -3771,16 +3804,36 @@ static void rtw89_dump_ba_cam(struct seq_file *m, struct rtw89_sta *rtwsta) seq_puts(m, "\n"); }
+static void rtw89_sta_link_ids_get(struct seq_file *m, + struct rtw89_dev *rtwdev, + struct rtw89_sta_link *rtwsta_link) +{ + struct ieee80211_link_sta *link_sta; + + rcu_read_lock(); + + link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true); + + seq_printf(m, " [%u] %pM\n", rtwsta_link->mac_id, link_sta->addr); + + rcu_read_unlock(); + + seq_printf(m, "\tlink_id=%u\n", rtwsta_link->link_id); + rtw89_dump_addr_cam(m, rtwdev, &rtwsta_link->addr_cam); + rtw89_dump_ba_cam(m, rtwdev, rtwsta_link); +} + static void rtw89_sta_ids_get_iter(void *data, struct ieee80211_sta *sta) { - struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv; - struct rtw89_dev *rtwdev = rtwsta->rtwdev; struct seq_file *m = (struct seq_file *)data; + struct rtw89_sta *rtwsta = sta_to_rtwsta(sta); + struct rtw89_dev *rtwdev = rtwsta->rtwdev; + struct rtw89_sta_link *rtwsta_link; + unsigned int link_id;
- seq_printf(m, "STA [%d] %pM %s\n", rtwsta->mac_id, sta->addr, - sta->tdls ? "(TDLS)" : ""); - rtw89_dump_addr_cam(m, rtwdev, &rtwsta->addr_cam); - rtw89_dump_ba_cam(m, rtwsta); + seq_printf(m, "STA %pM %s\n", sta->addr, sta->tdls ? "(TDLS)" : ""); + rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) + rtw89_sta_link_ids_get(m, rtwdev, rtwsta_link); }
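Both ID dumps keep the old mac80211 iterator callback shape (one per vif, one per sta); only the bodies now fan out over links. How such iterators are typically driven from a debugfs read handler is sketched below; whether this driver's stations handler is wired exactly this way is an assumption, though both mac80211 helpers do exist with these signatures:

	static int example_stations_get(struct seq_file *m, struct rtw89_dev *rtwdev)
	{
		/* walk every active interface, printing per-link VIF IDs */
		ieee80211_iterate_active_interfaces_atomic(rtwdev->hw,
							   IEEE80211_IFACE_ITER_NORMAL,
							   rtw89_vif_ids_get_iter, m);
		/* then every known station, printing per-link STA IDs */
		ieee80211_iterate_stations_atomic(rtwdev->hw,
						  rtw89_sta_ids_get_iter, m);
		return 0;
	}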
static int rtw89_debug_priv_stations_get(struct seq_file *m, void *v) diff --git a/drivers/net/wireless/realtek/rtw89/fw.c b/drivers/net/wireless/realtek/rtw89/fw.c index d9b0e7ebe619..13a7c39ceb6f 100644 --- a/drivers/net/wireless/realtek/rtw89/fw.c +++ b/drivers/net/wireless/realtek/rtw89/fw.c @@ -1741,8 +1741,8 @@ void rtw89_fw_log_dump(struct rtw89_dev *rtwdev, u8 *buf, u32 len) }
#define H2C_CAM_LEN 60 -int rtw89_fw_h2c_cam(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, - struct rtw89_sta *rtwsta, const u8 *scan_mac_addr) +int rtw89_fw_h2c_cam(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link, const u8 *scan_mac_addr) { struct sk_buff *skb; int ret; @@ -1753,8 +1753,9 @@ int rtw89_fw_h2c_cam(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, return -ENOMEM; } skb_put(skb, H2C_CAM_LEN); - rtw89_cam_fill_addr_cam_info(rtwdev, rtwvif, rtwsta, scan_mac_addr, skb->data); - rtw89_cam_fill_bssid_cam_info(rtwdev, rtwvif, rtwsta, skb->data); + rtw89_cam_fill_addr_cam_info(rtwdev, rtwvif_link, rtwsta_link, scan_mac_addr, + skb->data); + rtw89_cam_fill_bssid_cam_info(rtwdev, rtwvif_link, rtwsta_link, skb->data);
rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C, H2C_CAT_MAC, @@ -1776,8 +1777,8 @@ int rtw89_fw_h2c_cam(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, }
int rtw89_fw_h2c_dctl_sec_cam_v1(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, - struct rtw89_sta *rtwsta) + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link) { struct rtw89_h2c_dctlinfo_ud_v1 *h2c; u32 len = sizeof(*h2c); @@ -1792,7 +1793,7 @@ int rtw89_fw_h2c_dctl_sec_cam_v1(struct rtw89_dev *rtwdev, skb_put(skb, len); h2c = (struct rtw89_h2c_dctlinfo_ud_v1 *)skb->data;
- rtw89_cam_fill_dctl_sec_cam_info_v1(rtwdev, rtwvif, rtwsta, h2c); + rtw89_cam_fill_dctl_sec_cam_info_v1(rtwdev, rtwvif_link, rtwsta_link, h2c);
rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C, H2C_CAT_MAC, @@ -1815,8 +1816,8 @@ int rtw89_fw_h2c_dctl_sec_cam_v1(struct rtw89_dev *rtwdev, EXPORT_SYMBOL(rtw89_fw_h2c_dctl_sec_cam_v1);
int rtw89_fw_h2c_dctl_sec_cam_v2(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, - struct rtw89_sta *rtwsta) + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link) { struct rtw89_h2c_dctlinfo_ud_v2 *h2c; u32 len = sizeof(*h2c); @@ -1831,7 +1832,7 @@ int rtw89_fw_h2c_dctl_sec_cam_v2(struct rtw89_dev *rtwdev, skb_put(skb, len); h2c = (struct rtw89_h2c_dctlinfo_ud_v2 *)skb->data;
- rtw89_cam_fill_dctl_sec_cam_info_v2(rtwdev, rtwvif, rtwsta, h2c); + rtw89_cam_fill_dctl_sec_cam_info_v2(rtwdev, rtwvif_link, rtwsta_link, h2c);
rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C, H2C_CAT_MAC, @@ -1854,10 +1855,10 @@ int rtw89_fw_h2c_dctl_sec_cam_v2(struct rtw89_dev *rtwdev, EXPORT_SYMBOL(rtw89_fw_h2c_dctl_sec_cam_v2);
int rtw89_fw_h2c_default_dmac_tbl_v2(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, - struct rtw89_sta *rtwsta) + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link) { - u8 mac_id = rtwsta ? rtwsta->mac_id : rtwvif->mac_id; + u8 mac_id = rtwsta_link ? rtwsta_link->mac_id : rtwvif_link->mac_id; struct rtw89_h2c_dctlinfo_ud_v2 *h2c; u32 len = sizeof(*h2c); struct sk_buff *skb; @@ -1908,21 +1909,24 @@ int rtw89_fw_h2c_default_dmac_tbl_v2(struct rtw89_dev *rtwdev, } EXPORT_SYMBOL(rtw89_fw_h2c_default_dmac_tbl_v2);
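A pattern repeated throughout the H2C helpers in this series: when no station link is supplied, the command falls back to the VIF link's mac_id, exactly as the old code fell back from rtwsta to rtwvif. In isolation (example_* name is illustrative):

	static u8 example_h2c_mac_id(struct rtw89_vif_link *rtwvif_link,
				     struct rtw89_sta_link *rtwsta_link)
	{
		/* station link wins when present; otherwise address the vif link */
		return rtwsta_link ? rtwsta_link->mac_id : rtwvif_link->mac_id;
	}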
-int rtw89_fw_h2c_ba_cam(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta, +int rtw89_fw_h2c_ba_cam(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link, bool valid, struct ieee80211_ampdu_params *params) { const struct rtw89_chip_info *chip = rtwdev->chip; - struct rtw89_vif *rtwvif = rtwsta->rtwvif; struct rtw89_h2c_ba_cam *h2c; - u8 macid = rtwsta->mac_id; + u8 macid = rtwsta_link->mac_id; u32 len = sizeof(*h2c); struct sk_buff *skb; u8 entry_idx; int ret;
ret = valid ? - rtw89_core_acquire_sta_ba_entry(rtwdev, rtwsta, params->tid, &entry_idx) : - rtw89_core_release_sta_ba_entry(rtwdev, rtwsta, params->tid, &entry_idx); + rtw89_core_acquire_sta_ba_entry(rtwdev, rtwsta_link, params->tid, + &entry_idx) : + rtw89_core_release_sta_ba_entry(rtwdev, rtwsta_link, params->tid, + &entry_idx); if (ret) { /* it still works even if we don't have static BA CAM, because * hardware can create dynamic BA CAM automatically. @@ -1960,7 +1964,8 @@ int rtw89_fw_h2c_ba_cam(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta,
if (chip->bacam_ver == RTW89_BACAM_V0_EXT) { h2c->w1 |= le32_encode_bits(1, RTW89_H2C_BA_CAM_W1_STD_EN) | - le32_encode_bits(rtwvif->mac_idx, RTW89_H2C_BA_CAM_W1_BAND); + le32_encode_bits(rtwvif_link->mac_idx, + RTW89_H2C_BA_CAM_W1_BAND); }
end: @@ -2039,13 +2044,14 @@ void rtw89_fw_h2c_init_dynamic_ba_cam_v0_ext(struct rtw89_dev *rtwdev) } }
-int rtw89_fw_h2c_ba_cam_v1(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta, +int rtw89_fw_h2c_ba_cam_v1(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link, bool valid, struct ieee80211_ampdu_params *params) { const struct rtw89_chip_info *chip = rtwdev->chip; - struct rtw89_vif *rtwvif = rtwsta->rtwvif; struct rtw89_h2c_ba_cam_v1 *h2c; - u8 macid = rtwsta->mac_id; + u8 macid = rtwsta_link->mac_id; u32 len = sizeof(*h2c); struct sk_buff *skb; u8 entry_idx; @@ -2053,8 +2059,10 @@ int rtw89_fw_h2c_ba_cam_v1(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta, int ret;
ret = valid ? - rtw89_core_acquire_sta_ba_entry(rtwdev, rtwsta, params->tid, &entry_idx) : - rtw89_core_release_sta_ba_entry(rtwdev, rtwsta, params->tid, &entry_idx); + rtw89_core_acquire_sta_ba_entry(rtwdev, rtwsta_link, params->tid, + &entry_idx) : + rtw89_core_release_sta_ba_entry(rtwdev, rtwsta_link, params->tid, + &entry_idx); if (ret) { /* it still works even if we don't have static BA CAM, because * hardware can create dynamic BA CAM automatically. @@ -2092,7 +2100,8 @@ int rtw89_fw_h2c_ba_cam_v1(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta, entry_idx += chip->bacam_dynamic_num; /* std entry right after dynamic ones */ h2c->w1 = le32_encode_bits(entry_idx, RTW89_H2C_BA_CAM_V1_W1_ENTRY_IDX_MASK) | le32_encode_bits(1, RTW89_H2C_BA_CAM_V1_W1_STD_ENTRY_EN) | - le32_encode_bits(!!rtwvif->mac_idx, RTW89_H2C_BA_CAM_V1_W1_BAND_SEL); + le32_encode_bits(!!rtwvif_link->mac_idx, + RTW89_H2C_BA_CAM_V1_W1_BAND_SEL);
rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C, H2C_CAT_MAC, @@ -2197,15 +2206,14 @@ int rtw89_fw_h2c_fw_log(struct rtw89_dev *rtwdev, bool enable) }
static struct sk_buff *rtw89_eapol_get(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif) + struct rtw89_vif_link *rtwvif_link) { static const u8 gtkbody[] = {0xAA, 0xAA, 0x03, 0x00, 0x00, 0x00, 0x88, 0x8E, 0x01, 0x03, 0x00, 0x5F, 0x02, 0x03}; - struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif); - struct ieee80211_bss_conf *bss_conf = &vif->bss_conf; u8 sec_hdr_len = rtw89_wow_get_sec_hdr_len(rtwdev); struct rtw89_wow_param *rtw_wow = &rtwdev->wow; struct rtw89_eapol_2_of_2 *eapol_pkt; + struct ieee80211_bss_conf *bss_conf; struct ieee80211_hdr_3addr *hdr; struct sk_buff *skb; u8 key_des_ver; @@ -2227,10 +2235,17 @@ static struct sk_buff *rtw89_eapol_get(struct rtw89_dev *rtwdev, hdr->frame_control = cpu_to_le16(IEEE80211_FTYPE_DATA | IEEE80211_FCTL_TODS | IEEE80211_FCTL_PROTECTED); + + rcu_read_lock(); + + bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true); + ether_addr_copy(hdr->addr1, bss_conf->bssid); - ether_addr_copy(hdr->addr2, vif->addr); + ether_addr_copy(hdr->addr2, bss_conf->addr); ether_addr_copy(hdr->addr3, bss_conf->bssid);
+ rcu_read_unlock(); + skb_put_zero(skb, sec_hdr_len);
eapol_pkt = skb_put_zero(skb, sizeof(*eapol_pkt)); @@ -2241,11 +2256,10 @@ static struct sk_buff *rtw89_eapol_get(struct rtw89_dev *rtwdev, }
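For the offloaded EAPOL template above (and the SA Query template just below), the addresses no longer come from vif->addr and a cached bss_conf pointer; they are read from the link's bss_conf under RCU, so each link uses its own address. A sketch of just that address fill, assuming hdr already points at the 3-address header of the template skb:

	static void example_fill_template_addrs(struct ieee80211_hdr_3addr *hdr,
						struct rtw89_vif_link *rtwvif_link)
	{
		struct ieee80211_bss_conf *bss_conf;

		rcu_read_lock();

		bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);

		/* addr1/addr3 = the link's BSSID, addr2 = the link's own address */
		ether_addr_copy(hdr->addr1, bss_conf->bssid);
		ether_addr_copy(hdr->addr2, bss_conf->addr);
		ether_addr_copy(hdr->addr3, bss_conf->bssid);

		rcu_read_unlock();
	}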
static struct sk_buff *rtw89_sa_query_get(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif) + struct rtw89_vif_link *rtwvif_link) { - struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif); - struct ieee80211_bss_conf *bss_conf = &vif->bss_conf; u8 sec_hdr_len = rtw89_wow_get_sec_hdr_len(rtwdev); + struct ieee80211_bss_conf *bss_conf; struct ieee80211_hdr_3addr *hdr; struct rtw89_sa_query *sa_query; struct sk_buff *skb; @@ -2258,10 +2272,17 @@ static struct sk_buff *rtw89_sa_query_get(struct rtw89_dev *rtwdev, hdr->frame_control = cpu_to_le16(IEEE80211_FTYPE_MGMT | IEEE80211_STYPE_ACTION | IEEE80211_FCTL_PROTECTED); + + rcu_read_lock(); + + bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true); + ether_addr_copy(hdr->addr1, bss_conf->bssid); - ether_addr_copy(hdr->addr2, vif->addr); + ether_addr_copy(hdr->addr2, bss_conf->addr); ether_addr_copy(hdr->addr3, bss_conf->bssid);
+ rcu_read_unlock(); + skb_put_zero(skb, sec_hdr_len);
sa_query = skb_put_zero(skb, sizeof(*sa_query)); @@ -2272,8 +2293,9 @@ static struct sk_buff *rtw89_sa_query_get(struct rtw89_dev *rtwdev, }
static struct sk_buff *rtw89_arp_response_get(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif) + struct rtw89_vif_link *rtwvif_link) { + struct rtw89_vif *rtwvif = rtwvif_link->rtwvif; u8 sec_hdr_len = rtw89_wow_get_sec_hdr_len(rtwdev); struct rtw89_wow_param *rtw_wow = &rtwdev->wow; struct ieee80211_hdr_3addr *hdr; @@ -2295,9 +2317,9 @@ static struct sk_buff *rtw89_arp_response_get(struct rtw89_dev *rtwdev, fc = cpu_to_le16(IEEE80211_FTYPE_DATA | IEEE80211_FCTL_TODS);
hdr->frame_control = fc; - ether_addr_copy(hdr->addr1, rtwvif->bssid); - ether_addr_copy(hdr->addr2, rtwvif->mac_addr); - ether_addr_copy(hdr->addr3, rtwvif->bssid); + ether_addr_copy(hdr->addr1, rtwvif_link->bssid); + ether_addr_copy(hdr->addr2, rtwvif_link->mac_addr); + ether_addr_copy(hdr->addr3, rtwvif_link->bssid);
skb_put_zero(skb, sec_hdr_len);
@@ -2312,18 +2334,18 @@ static struct sk_buff *rtw89_arp_response_get(struct rtw89_dev *rtwdev, arp_hdr->ar_pln = 4; arp_hdr->ar_op = htons(ARPOP_REPLY);
- ether_addr_copy(arp_skb->sender_hw, rtwvif->mac_addr); + ether_addr_copy(arp_skb->sender_hw, rtwvif_link->mac_addr); arp_skb->sender_ip = rtwvif->ip_addr;
return skb; }
static int rtw89_fw_h2c_add_general_pkt(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, + struct rtw89_vif_link *rtwvif_link, enum rtw89_fw_pkt_ofld_type type, u8 *id) { - struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif); + struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link); struct rtw89_pktofld_info *info; struct sk_buff *skb; int ret; @@ -2346,13 +2368,13 @@ static int rtw89_fw_h2c_add_general_pkt(struct rtw89_dev *rtwdev, skb = ieee80211_nullfunc_get(rtwdev->hw, vif, -1, true); break; case RTW89_PKT_OFLD_TYPE_EAPOL_KEY: - skb = rtw89_eapol_get(rtwdev, rtwvif); + skb = rtw89_eapol_get(rtwdev, rtwvif_link); break; case RTW89_PKT_OFLD_TYPE_SA_QUERY: - skb = rtw89_sa_query_get(rtwdev, rtwvif); + skb = rtw89_sa_query_get(rtwdev, rtwvif_link); break; case RTW89_PKT_OFLD_TYPE_ARP_RSP: - skb = rtw89_arp_response_get(rtwdev, rtwvif); + skb = rtw89_arp_response_get(rtwdev, rtwvif_link); break; default: goto err; @@ -2367,7 +2389,7 @@ static int rtw89_fw_h2c_add_general_pkt(struct rtw89_dev *rtwdev, if (ret) goto err;
- list_add_tail(&info->list, &rtwvif->general_pkt_list); + list_add_tail(&info->list, &rtwvif_link->general_pkt_list); *id = info->id; return 0;
@@ -2377,9 +2399,10 @@ static int rtw89_fw_h2c_add_general_pkt(struct rtw89_dev *rtwdev, }
void rtw89_fw_release_general_pkt_list_vif(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, bool notify_fw) + struct rtw89_vif_link *rtwvif_link, + bool notify_fw) { - struct list_head *pkt_list = &rtwvif->general_pkt_list; + struct list_head *pkt_list = &rtwvif_link->general_pkt_list; struct rtw89_pktofld_info *info, *tmp;
list_for_each_entry_safe(info, tmp, pkt_list, list) { @@ -2394,16 +2417,20 @@ void rtw89_fw_release_general_pkt_list_vif(struct rtw89_dev *rtwdev,
void rtw89_fw_release_general_pkt_list(struct rtw89_dev *rtwdev, bool notify_fw) { + struct rtw89_vif_link *rtwvif_link; struct rtw89_vif *rtwvif; + unsigned int link_id;
rtw89_for_each_rtwvif(rtwdev, rtwvif) - rtw89_fw_release_general_pkt_list_vif(rtwdev, rtwvif, notify_fw); + rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) + rtw89_fw_release_general_pkt_list_vif(rtwdev, rtwvif_link, + notify_fw); }
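general_pkt_list moves from the vif to each vif link, so releasing offloaded packets becomes a three-level walk: vifs, then links, then list entries. A minimal sketch of the innermost step, assuming entries are unlinked and freed as in the original per-vif code (the optional per-entry firmware notification controlled by notify_fw is omitted):

	static void example_flush_link_pkt_list(struct rtw89_vif_link *rtwvif_link)
	{
		struct rtw89_pktofld_info *info, *tmp;

		/* safe variant: entries are deleted while iterating */
		list_for_each_entry_safe(info, tmp,
					 &rtwvif_link->general_pkt_list, list) {
			list_del(&info->list);
			kfree(info);
		}
	}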
#define H2C_GENERAL_PKT_LEN 6 #define H2C_GENERAL_PKT_ID_UND 0xff int rtw89_fw_h2c_general_pkt(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, u8 macid) + struct rtw89_vif_link *rtwvif_link, u8 macid) { u8 pkt_id_ps_poll = H2C_GENERAL_PKT_ID_UND; u8 pkt_id_null = H2C_GENERAL_PKT_ID_UND; @@ -2411,11 +2438,11 @@ int rtw89_fw_h2c_general_pkt(struct rtw89_dev *rtwdev, struct sk_buff *skb; int ret;
- rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif, + rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif_link, RTW89_PKT_OFLD_TYPE_PS_POLL, &pkt_id_ps_poll); - rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif, + rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif_link, RTW89_PKT_OFLD_TYPE_NULL_DATA, &pkt_id_null); - rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif, + rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif_link, RTW89_PKT_OFLD_TYPE_QOS_NULL, &pkt_id_qos_null);
skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, H2C_GENERAL_PKT_LEN); @@ -2494,10 +2521,10 @@ int rtw89_fw_h2c_lps_parm(struct rtw89_dev *rtwdev, return ret; }
-int rtw89_fw_h2c_lps_ch_info(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif) +int rtw89_fw_h2c_lps_ch_info(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link) { const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, - rtwvif->chanctx_idx); + rtwvif_link->chanctx_idx); const struct rtw89_chip_info *chip = rtwdev->chip; struct rtw89_h2c_lps_ch_info *h2c; u32 len = sizeof(*h2c); @@ -2546,13 +2573,14 @@ int rtw89_fw_h2c_lps_ch_info(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif) }
#define H2C_P2P_ACT_LEN 20 -int rtw89_fw_h2c_p2p_act(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif, +int rtw89_fw_h2c_p2p_act(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, + struct ieee80211_bss_conf *bss_conf, struct ieee80211_p2p_noa_desc *desc, u8 act, u8 noa_id) { - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; - bool p2p_type_gc = rtwvif->wifi_role == RTW89_WIFI_ROLE_P2P_CLIENT; - u8 ctwindow_oppps = vif->bss_conf.p2p_noa_attr.oppps_ctwindow; + bool p2p_type_gc = rtwvif_link->wifi_role == RTW89_WIFI_ROLE_P2P_CLIENT; + u8 ctwindow_oppps = bss_conf->p2p_noa_attr.oppps_ctwindow; struct sk_buff *skb; u8 *cmd; int ret; @@ -2565,7 +2593,7 @@ int rtw89_fw_h2c_p2p_act(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif, skb_put(skb, H2C_P2P_ACT_LEN); cmd = skb->data;
- RTW89_SET_FWCMD_P2P_MACID(cmd, rtwvif->mac_id); + RTW89_SET_FWCMD_P2P_MACID(cmd, rtwvif_link->mac_id); RTW89_SET_FWCMD_P2P_P2PID(cmd, 0); RTW89_SET_FWCMD_P2P_NOAID(cmd, noa_id); RTW89_SET_FWCMD_P2P_ACT(cmd, act); @@ -2622,11 +2650,11 @@ static void __rtw89_fw_h2c_set_tx_path(struct rtw89_dev *rtwdev,
#define H2C_CMC_TBL_LEN 68 int rtw89_fw_h2c_default_cmac_tbl(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, - struct rtw89_sta *rtwsta) + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link) { const struct rtw89_chip_info *chip = rtwdev->chip; - u8 macid = rtwsta ? rtwsta->mac_id : rtwvif->mac_id; + u8 macid = rtwsta_link ? rtwsta_link->mac_id : rtwvif_link->mac_id; struct sk_buff *skb; int ret;
@@ -2648,7 +2676,7 @@ int rtw89_fw_h2c_default_cmac_tbl(struct rtw89_dev *rtwdev, } SET_CMC_TBL_DOPPLER_CTRL(skb->data, 0); SET_CMC_TBL_TXPWR_TOLERENCE(skb->data, 0); - if (rtwvif->net_type == RTW89_NET_TYPE_AP_MODE) + if (rtwvif_link->net_type == RTW89_NET_TYPE_AP_MODE) SET_CMC_TBL_DATA_DCM(skb->data, 0);
rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C, @@ -2671,10 +2699,10 @@ int rtw89_fw_h2c_default_cmac_tbl(struct rtw89_dev *rtwdev, EXPORT_SYMBOL(rtw89_fw_h2c_default_cmac_tbl);
int rtw89_fw_h2c_default_cmac_tbl_g7(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, - struct rtw89_sta *rtwsta) + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link) { - u8 mac_id = rtwsta ? rtwsta->mac_id : rtwvif->mac_id; + u8 mac_id = rtwsta_link ? rtwsta_link->mac_id : rtwvif_link->mac_id; struct rtw89_h2c_cctlinfo_ud_g7 *h2c; u32 len = sizeof(*h2c); struct sk_buff *skb; @@ -2755,24 +2783,25 @@ int rtw89_fw_h2c_default_cmac_tbl_g7(struct rtw89_dev *rtwdev, EXPORT_SYMBOL(rtw89_fw_h2c_default_cmac_tbl_g7);
static void __get_sta_he_pkt_padding(struct rtw89_dev *rtwdev, - struct ieee80211_sta *sta, u8 *pads) + struct ieee80211_link_sta *link_sta, + u8 *pads) { bool ppe_th; u8 ppe16, ppe8; - u8 nss = min(sta->deflink.rx_nss, rtwdev->hal.tx_nss) - 1; - u8 ppe_thres_hdr = sta->deflink.he_cap.ppe_thres[0]; + u8 nss = min(link_sta->rx_nss, rtwdev->hal.tx_nss) - 1; + u8 ppe_thres_hdr = link_sta->he_cap.ppe_thres[0]; u8 ru_bitmap; u8 n, idx, sh; u16 ppe; int i;
ppe_th = FIELD_GET(IEEE80211_HE_PHY_CAP6_PPE_THRESHOLD_PRESENT, - sta->deflink.he_cap.he_cap_elem.phy_cap_info[6]); + link_sta->he_cap.he_cap_elem.phy_cap_info[6]); if (!ppe_th) { u8 pad;
pad = FIELD_GET(IEEE80211_HE_PHY_CAP9_NOMINAL_PKT_PADDING_MASK, - sta->deflink.he_cap.he_cap_elem.phy_cap_info[9]); + link_sta->he_cap.he_cap_elem.phy_cap_info[9]);
for (i = 0; i < RTW89_PPE_BW_NUM; i++) pads[i] = pad; @@ -2794,7 +2823,7 @@ static void __get_sta_he_pkt_padding(struct rtw89_dev *rtwdev, sh = n & 7; n += IEEE80211_PPE_THRES_INFO_PPET_SIZE * 2;
- ppe = le16_to_cpu(*((__le16 *)&sta->deflink.he_cap.ppe_thres[idx])); + ppe = le16_to_cpu(*((__le16 *)&link_sta->he_cap.ppe_thres[idx])); ppe16 = (ppe >> sh) & IEEE80211_PPE_THRES_NSS_MASK; sh += IEEE80211_PPE_THRES_INFO_PPET_SIZE; ppe8 = (ppe >> sh) & IEEE80211_PPE_THRES_NSS_MASK; @@ -2809,23 +2838,35 @@ static void __get_sta_he_pkt_padding(struct rtw89_dev *rtwdev, }
int rtw89_fw_h2c_assoc_cmac_tbl(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif, - struct ieee80211_sta *sta) + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link) { + struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link); const struct rtw89_chip_info *chip = rtwdev->chip; - struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(sta); - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, - rtwvif->chanctx_idx); + rtwvif_link->chanctx_idx); + struct ieee80211_link_sta *link_sta; struct sk_buff *skb; u8 pads[RTW89_PPE_BW_NUM]; - u8 mac_id = rtwsta ? rtwsta->mac_id : rtwvif->mac_id; + u8 mac_id = rtwsta_link ? rtwsta_link->mac_id : rtwvif_link->mac_id; u16 lowest_rate; int ret;
memset(pads, 0, sizeof(pads)); - if (sta && sta->deflink.he_cap.has_he) - __get_sta_he_pkt_padding(rtwdev, sta, pads); + + skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, H2C_CMC_TBL_LEN); + if (!skb) { + rtw89_err(rtwdev, "failed to alloc skb for fw dl\n"); + return -ENOMEM; + } + + rcu_read_lock(); + + if (rtwsta_link) + link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true); + + if (rtwsta_link && link_sta->he_cap.has_he) + __get_sta_he_pkt_padding(rtwdev, link_sta, pads);
if (vif->p2p) lowest_rate = RTW89_HW_RATE_OFDM6; @@ -2834,11 +2875,6 @@ int rtw89_fw_h2c_assoc_cmac_tbl(struct rtw89_dev *rtwdev, else lowest_rate = RTW89_HW_RATE_OFDM6;
- skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, H2C_CMC_TBL_LEN); - if (!skb) { - rtw89_err(rtwdev, "failed to alloc skb for fw dl\n"); - return -ENOMEM; - } skb_put(skb, H2C_CMC_TBL_LEN); SET_CTRL_INFO_MACID(skb->data, mac_id); SET_CTRL_INFO_OPERATION(skb->data, 1); @@ -2851,7 +2887,7 @@ int rtw89_fw_h2c_assoc_cmac_tbl(struct rtw89_dev *rtwdev, SET_CMC_TBL_ULDL(skb->data, 1); else SET_CMC_TBL_ULDL(skb->data, 0); - SET_CMC_TBL_MULTI_PORT_ID(skb->data, rtwvif->port); + SET_CMC_TBL_MULTI_PORT_ID(skb->data, rtwvif_link->port); if (chip->h2c_cctl_func_id == H2C_FUNC_MAC_CCTLINFO_UD_V1) { SET_CMC_TBL_NOMINAL_PKT_PADDING_V1(skb->data, pads[RTW89_CHANNEL_WIDTH_20]); SET_CMC_TBL_NOMINAL_PKT_PADDING40_V1(skb->data, pads[RTW89_CHANNEL_WIDTH_40]); @@ -2863,12 +2899,14 @@ int rtw89_fw_h2c_assoc_cmac_tbl(struct rtw89_dev *rtwdev, SET_CMC_TBL_NOMINAL_PKT_PADDING80(skb->data, pads[RTW89_CHANNEL_WIDTH_80]); SET_CMC_TBL_NOMINAL_PKT_PADDING160(skb->data, pads[RTW89_CHANNEL_WIDTH_160]); } - if (sta) + if (rtwsta_link) SET_CMC_TBL_BSR_QUEUE_SIZE_FORMAT(skb->data, - sta->deflink.he_cap.has_he); - if (rtwvif->net_type == RTW89_NET_TYPE_AP_MODE) + link_sta->he_cap.has_he); + if (rtwvif_link->net_type == RTW89_NET_TYPE_AP_MODE) SET_CMC_TBL_DATA_DCM(skb->data, 0);
+ rcu_read_unlock(); + rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C, H2C_CAT_MAC, H2C_CL_MAC_FR_EXCHG, chip->h2c_cctl_func_id, 0, 1, @@ -2889,9 +2927,10 @@ int rtw89_fw_h2c_assoc_cmac_tbl(struct rtw89_dev *rtwdev, EXPORT_SYMBOL(rtw89_fw_h2c_assoc_cmac_tbl);
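Note the reordering in rtw89_fw_h2c_assoc_cmac_tbl(): the H2C skb allocation, which may sleep, is hoisted in front of the new RCU read-side section, and that section only reads HE capability bits from the link_sta. The G7 variant further down follows the same shape. A condensed sketch of the ordering, using the helper names from the hunk (error handling and the actual table fill are elided; the send path is a placeholder):

	static int example_assoc_cmac_order(struct rtw89_dev *rtwdev,
					    struct rtw89_sta_link *rtwsta_link)
	{
		struct ieee80211_link_sta *link_sta;
		struct sk_buff *skb;
		bool has_he = false;

		/* may sleep, so it must happen before rcu_read_lock() */
		skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, H2C_CMC_TBL_LEN);
		if (!skb)
			return -ENOMEM;

		rcu_read_lock();

		if (rtwsta_link) {
			link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true);
			has_he = link_sta->he_cap.has_he;
		}

		rcu_read_unlock();

		/* ... fill skb->data based on has_he and hand the skb to the FW ... */
		dev_kfree_skb_any(skb);	/* placeholder for the real send path */
		return 0;
	}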
static void __get_sta_eht_pkt_padding(struct rtw89_dev *rtwdev, - struct ieee80211_sta *sta, u8 *pads) + struct ieee80211_link_sta *link_sta, + u8 *pads) { - u8 nss = min(sta->deflink.rx_nss, rtwdev->hal.tx_nss) - 1; + u8 nss = min(link_sta->rx_nss, rtwdev->hal.tx_nss) - 1; u16 ppe_thres_hdr; u8 ppe16, ppe8; u8 n, idx, sh; @@ -2900,12 +2939,12 @@ static void __get_sta_eht_pkt_padding(struct rtw89_dev *rtwdev, u16 ppe; int i;
- ppe_th = !!u8_get_bits(sta->deflink.eht_cap.eht_cap_elem.phy_cap_info[5], + ppe_th = !!u8_get_bits(link_sta->eht_cap.eht_cap_elem.phy_cap_info[5], IEEE80211_EHT_PHY_CAP5_PPE_THRESHOLD_PRESENT); if (!ppe_th) { u8 pad;
- pad = u8_get_bits(sta->deflink.eht_cap.eht_cap_elem.phy_cap_info[5], + pad = u8_get_bits(link_sta->eht_cap.eht_cap_elem.phy_cap_info[5], IEEE80211_EHT_PHY_CAP5_COMMON_NOMINAL_PKT_PAD_MASK);
for (i = 0; i < RTW89_PPE_BW_NUM; i++) @@ -2914,7 +2953,7 @@ static void __get_sta_eht_pkt_padding(struct rtw89_dev *rtwdev, return; }
- ppe_thres_hdr = get_unaligned_le16(sta->deflink.eht_cap.eht_ppe_thres); + ppe_thres_hdr = get_unaligned_le16(link_sta->eht_cap.eht_ppe_thres); ru_bitmap = u16_get_bits(ppe_thres_hdr, IEEE80211_EHT_PPE_THRES_RU_INDEX_BITMASK_MASK); n = hweight8(ru_bitmap); @@ -2931,7 +2970,7 @@ static void __get_sta_eht_pkt_padding(struct rtw89_dev *rtwdev, sh = n & 7; n += IEEE80211_EHT_PPE_THRES_INFO_PPET_SIZE * 2;
- ppe = get_unaligned_le16(sta->deflink.eht_cap.eht_ppe_thres + idx); + ppe = get_unaligned_le16(link_sta->eht_cap.eht_ppe_thres + idx); ppe16 = (ppe >> sh) & IEEE80211_PPE_THRES_NSS_MASK; sh += IEEE80211_EHT_PPE_THRES_INFO_PPET_SIZE; ppe8 = (ppe >> sh) & IEEE80211_PPE_THRES_NSS_MASK; @@ -2946,14 +2985,15 @@ static void __get_sta_eht_pkt_padding(struct rtw89_dev *rtwdev, }
int rtw89_fw_h2c_assoc_cmac_tbl_g7(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif, - struct ieee80211_sta *sta) + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link) { - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; - struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(sta); - const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, rtwvif->chanctx_idx); - u8 mac_id = rtwsta ? rtwsta->mac_id : rtwvif->mac_id; + struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link); + const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, rtwvif_link->chanctx_idx); + u8 mac_id = rtwsta_link ? rtwsta_link->mac_id : rtwvif_link->mac_id; struct rtw89_h2c_cctlinfo_ud_g7 *h2c; + struct ieee80211_bss_conf *bss_conf; + struct ieee80211_link_sta *link_sta; u8 pads[RTW89_PPE_BW_NUM]; u32 len = sizeof(*h2c); struct sk_buff *skb; @@ -2961,11 +3001,24 @@ int rtw89_fw_h2c_assoc_cmac_tbl_g7(struct rtw89_dev *rtwdev, int ret;
memset(pads, 0, sizeof(pads)); - if (sta) { - if (sta->deflink.eht_cap.has_eht) - __get_sta_eht_pkt_padding(rtwdev, sta, pads); - else if (sta->deflink.he_cap.has_he) - __get_sta_he_pkt_padding(rtwdev, sta, pads); + + skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, len); + if (!skb) { + rtw89_err(rtwdev, "failed to alloc skb for cmac g7\n"); + return -ENOMEM; + } + + rcu_read_lock(); + + bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true); + + if (rtwsta_link) { + link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true); + + if (link_sta->eht_cap.has_eht) + __get_sta_eht_pkt_padding(rtwdev, link_sta, pads); + else if (link_sta->he_cap.has_he) + __get_sta_he_pkt_padding(rtwdev, link_sta, pads); }
if (vif->p2p) @@ -2975,11 +3028,6 @@ int rtw89_fw_h2c_assoc_cmac_tbl_g7(struct rtw89_dev *rtwdev, else lowest_rate = RTW89_HW_RATE_OFDM6;
- skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, len); - if (!skb) { - rtw89_err(rtwdev, "failed to alloc skb for cmac g7\n"); - return -ENOMEM; - } skb_put(skb, len); h2c = (struct rtw89_h2c_cctlinfo_ud_g7 *)skb->data;
@@ -3000,16 +3048,16 @@ int rtw89_fw_h2c_assoc_cmac_tbl_g7(struct rtw89_dev *rtwdev, h2c->w3 = le32_encode_bits(0, CCTLINFO_G7_W3_RTS_TXCNT_LMT_SEL); h2c->m3 = cpu_to_le32(CCTLINFO_G7_W3_RTS_TXCNT_LMT_SEL);
- h2c->w4 = le32_encode_bits(rtwvif->port, CCTLINFO_G7_W4_MULTI_PORT_ID); + h2c->w4 = le32_encode_bits(rtwvif_link->port, CCTLINFO_G7_W4_MULTI_PORT_ID); h2c->m4 = cpu_to_le32(CCTLINFO_G7_W4_MULTI_PORT_ID);
- if (rtwvif->net_type == RTW89_NET_TYPE_AP_MODE) { + if (rtwvif_link->net_type == RTW89_NET_TYPE_AP_MODE) { h2c->w4 |= le32_encode_bits(0, CCTLINFO_G7_W4_DATA_DCM); h2c->m4 |= cpu_to_le32(CCTLINFO_G7_W4_DATA_DCM); }
- if (vif->bss_conf.eht_support) { - u16 punct = vif->bss_conf.chanreq.oper.punctured; + if (bss_conf->eht_support) { + u16 punct = bss_conf->chanreq.oper.punctured;
h2c->w4 |= le32_encode_bits(~punct, CCTLINFO_G7_W4_ACT_SUBCH_CBW); @@ -3036,12 +3084,14 @@ int rtw89_fw_h2c_assoc_cmac_tbl_g7(struct rtw89_dev *rtwdev, CCTLINFO_G7_W6_ULDL); h2c->m6 = cpu_to_le32(CCTLINFO_G7_W6_ULDL);
- if (sta) { - h2c->w8 = le32_encode_bits(sta->deflink.he_cap.has_he, + if (rtwsta_link) { + h2c->w8 = le32_encode_bits(link_sta->he_cap.has_he, CCTLINFO_G7_W8_BSR_QUEUE_SIZE_FORMAT); h2c->m8 = cpu_to_le32(CCTLINFO_G7_W8_BSR_QUEUE_SIZE_FORMAT); }
+ rcu_read_unlock(); + rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C, H2C_CAT_MAC, H2C_CL_MAC_FR_EXCHG, H2C_FUNC_MAC_CCTLINFO_UD_G7, 0, 1, @@ -3062,10 +3112,10 @@ int rtw89_fw_h2c_assoc_cmac_tbl_g7(struct rtw89_dev *rtwdev, EXPORT_SYMBOL(rtw89_fw_h2c_assoc_cmac_tbl_g7);
int rtw89_fw_h2c_ampdu_cmac_tbl_g7(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif, - struct ieee80211_sta *sta) + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link) { - struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv; + struct rtw89_sta *rtwsta = rtwsta_link->rtwsta; struct rtw89_h2c_cctlinfo_ud_g7 *h2c; u32 len = sizeof(*h2c); struct sk_buff *skb; @@ -3102,7 +3152,7 @@ int rtw89_fw_h2c_ampdu_cmac_tbl_g7(struct rtw89_dev *rtwdev, else if (agg_num > 0x200 && agg_num <= 0x400) ba_bmap = 5;
- h2c->c0 = le32_encode_bits(rtwsta->mac_id, CCTLINFO_G7_C0_MACID) | + h2c->c0 = le32_encode_bits(rtwsta_link->mac_id, CCTLINFO_G7_C0_MACID) | le32_encode_bits(1, CCTLINFO_G7_C0_OP);
h2c->w3 = le32_encode_bits(ba_bmap, CCTLINFO_G7_W3_BA_BMAP); @@ -3128,7 +3178,7 @@ int rtw89_fw_h2c_ampdu_cmac_tbl_g7(struct rtw89_dev *rtwdev, EXPORT_SYMBOL(rtw89_fw_h2c_ampdu_cmac_tbl_g7);
int rtw89_fw_h2c_txtime_cmac_tbl(struct rtw89_dev *rtwdev, - struct rtw89_sta *rtwsta) + struct rtw89_sta_link *rtwsta_link) { const struct rtw89_chip_info *chip = rtwdev->chip; struct sk_buff *skb; @@ -3140,15 +3190,15 @@ int rtw89_fw_h2c_txtime_cmac_tbl(struct rtw89_dev *rtwdev, return -ENOMEM; } skb_put(skb, H2C_CMC_TBL_LEN); - SET_CTRL_INFO_MACID(skb->data, rtwsta->mac_id); + SET_CTRL_INFO_MACID(skb->data, rtwsta_link->mac_id); SET_CTRL_INFO_OPERATION(skb->data, 1); - if (rtwsta->cctl_tx_time) { + if (rtwsta_link->cctl_tx_time) { SET_CMC_TBL_AMPDU_TIME_SEL(skb->data, 1); - SET_CMC_TBL_AMPDU_MAX_TIME(skb->data, rtwsta->ampdu_max_time); + SET_CMC_TBL_AMPDU_MAX_TIME(skb->data, rtwsta_link->ampdu_max_time); } - if (rtwsta->cctl_tx_retry_limit) { + if (rtwsta_link->cctl_tx_retry_limit) { SET_CMC_TBL_DATA_TXCNT_LMT_SEL(skb->data, 1); - SET_CMC_TBL_DATA_TX_CNT_LMT(skb->data, rtwsta->data_tx_cnt_lmt); + SET_CMC_TBL_DATA_TX_CNT_LMT(skb->data, rtwsta_link->data_tx_cnt_lmt); }
rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C, @@ -3170,7 +3220,7 @@ int rtw89_fw_h2c_txtime_cmac_tbl(struct rtw89_dev *rtwdev, }
int rtw89_fw_h2c_txpath_cmac_tbl(struct rtw89_dev *rtwdev, - struct rtw89_sta *rtwsta) + struct rtw89_sta_link *rtwsta_link) { const struct rtw89_chip_info *chip = rtwdev->chip; struct sk_buff *skb; @@ -3185,7 +3235,7 @@ int rtw89_fw_h2c_txpath_cmac_tbl(struct rtw89_dev *rtwdev, return -ENOMEM; } skb_put(skb, H2C_CMC_TBL_LEN); - SET_CTRL_INFO_MACID(skb->data, rtwsta->mac_id); + SET_CTRL_INFO_MACID(skb->data, rtwsta_link->mac_id); SET_CTRL_INFO_OPERATION(skb->data, 1);
__rtw89_fw_h2c_set_tx_path(rtwdev, skb); @@ -3209,11 +3259,11 @@ int rtw89_fw_h2c_txpath_cmac_tbl(struct rtw89_dev *rtwdev, }
int rtw89_fw_h2c_update_beacon(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif) + struct rtw89_vif_link *rtwvif_link) { const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, - rtwvif->chanctx_idx); - struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif); + rtwvif_link->chanctx_idx); + struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link); struct rtw89_h2c_bcn_upd *h2c; struct sk_buff *skb_beacon; struct ieee80211_hdr *hdr; @@ -3240,7 +3290,7 @@ int rtw89_fw_h2c_update_beacon(struct rtw89_dev *rtwdev, return -ENOMEM; }
- noa_len = rtw89_p2p_noa_fetch(rtwvif, &noa_data); + noa_len = rtw89_p2p_noa_fetch(rtwvif_link, &noa_data); if (noa_len && (noa_len <= skb_tailroom(skb_beacon) || pskb_expand_head(skb_beacon, 0, noa_len, GFP_KERNEL) == 0)) { @@ -3260,11 +3310,11 @@ int rtw89_fw_h2c_update_beacon(struct rtw89_dev *rtwdev, skb_put(skb, len); h2c = (struct rtw89_h2c_bcn_upd *)skb->data;
- h2c->w0 = le32_encode_bits(rtwvif->port, RTW89_H2C_BCN_UPD_W0_PORT) | + h2c->w0 = le32_encode_bits(rtwvif_link->port, RTW89_H2C_BCN_UPD_W0_PORT) | le32_encode_bits(0, RTW89_H2C_BCN_UPD_W0_MBSSID) | - le32_encode_bits(rtwvif->mac_idx, RTW89_H2C_BCN_UPD_W0_BAND) | + le32_encode_bits(rtwvif_link->mac_idx, RTW89_H2C_BCN_UPD_W0_BAND) | le32_encode_bits(tim_offset | BIT(7), RTW89_H2C_BCN_UPD_W0_GRP_IE_OFST); - h2c->w1 = le32_encode_bits(rtwvif->mac_id, RTW89_H2C_BCN_UPD_W1_MACID) | + h2c->w1 = le32_encode_bits(rtwvif_link->mac_id, RTW89_H2C_BCN_UPD_W1_MACID) | le32_encode_bits(RTW89_MGMT_HW_SSN_SEL, RTW89_H2C_BCN_UPD_W1_SSN_SEL) | le32_encode_bits(RTW89_MGMT_HW_SEQ_MODE, RTW89_H2C_BCN_UPD_W1_SSN_MODE) | le32_encode_bits(beacon_rate, RTW89_H2C_BCN_UPD_W1_RATE); @@ -3289,10 +3339,10 @@ int rtw89_fw_h2c_update_beacon(struct rtw89_dev *rtwdev, EXPORT_SYMBOL(rtw89_fw_h2c_update_beacon);
int rtw89_fw_h2c_update_beacon_be(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif) + struct rtw89_vif_link *rtwvif_link) { - const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, rtwvif->chanctx_idx); - struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif); + const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, rtwvif_link->chanctx_idx); + struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link); struct rtw89_h2c_bcn_upd_be *h2c; struct sk_buff *skb_beacon; struct ieee80211_hdr *hdr; @@ -3319,7 +3369,7 @@ int rtw89_fw_h2c_update_beacon_be(struct rtw89_dev *rtwdev, return -ENOMEM; }
- noa_len = rtw89_p2p_noa_fetch(rtwvif, &noa_data); + noa_len = rtw89_p2p_noa_fetch(rtwvif_link, &noa_data); if (noa_len && (noa_len <= skb_tailroom(skb_beacon) || pskb_expand_head(skb_beacon, 0, noa_len, GFP_KERNEL) == 0)) { @@ -3339,11 +3389,11 @@ int rtw89_fw_h2c_update_beacon_be(struct rtw89_dev *rtwdev, skb_put(skb, len); h2c = (struct rtw89_h2c_bcn_upd_be *)skb->data;
- h2c->w0 = le32_encode_bits(rtwvif->port, RTW89_H2C_BCN_UPD_BE_W0_PORT) | + h2c->w0 = le32_encode_bits(rtwvif_link->port, RTW89_H2C_BCN_UPD_BE_W0_PORT) | le32_encode_bits(0, RTW89_H2C_BCN_UPD_BE_W0_MBSSID) | - le32_encode_bits(rtwvif->mac_idx, RTW89_H2C_BCN_UPD_BE_W0_BAND) | + le32_encode_bits(rtwvif_link->mac_idx, RTW89_H2C_BCN_UPD_BE_W0_BAND) | le32_encode_bits(tim_offset | BIT(7), RTW89_H2C_BCN_UPD_BE_W0_GRP_IE_OFST); - h2c->w1 = le32_encode_bits(rtwvif->mac_id, RTW89_H2C_BCN_UPD_BE_W1_MACID) | + h2c->w1 = le32_encode_bits(rtwvif_link->mac_id, RTW89_H2C_BCN_UPD_BE_W1_MACID) | le32_encode_bits(RTW89_MGMT_HW_SSN_SEL, RTW89_H2C_BCN_UPD_BE_W1_SSN_SEL) | le32_encode_bits(RTW89_MGMT_HW_SEQ_MODE, RTW89_H2C_BCN_UPD_BE_W1_SSN_MODE) | le32_encode_bits(beacon_rate, RTW89_H2C_BCN_UPD_BE_W1_RATE); @@ -3373,22 +3423,22 @@ EXPORT_SYMBOL(rtw89_fw_h2c_update_beacon_be);
#define H2C_ROLE_MAINTAIN_LEN 4 int rtw89_fw_h2c_role_maintain(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, - struct rtw89_sta *rtwsta, + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link, enum rtw89_upd_mode upd_mode) { struct sk_buff *skb; - u8 mac_id = rtwsta ? rtwsta->mac_id : rtwvif->mac_id; + u8 mac_id = rtwsta_link ? rtwsta_link->mac_id : rtwvif_link->mac_id; u8 self_role; int ret;
- if (rtwvif->net_type == RTW89_NET_TYPE_AP_MODE) { - if (rtwsta) + if (rtwvif_link->net_type == RTW89_NET_TYPE_AP_MODE) { + if (rtwsta_link) self_role = RTW89_SELF_ROLE_AP_CLIENT; else - self_role = rtwvif->self_role; + self_role = rtwvif_link->self_role; } else { - self_role = rtwvif->self_role; + self_role = rtwvif_link->self_role; }
skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, H2C_ROLE_MAINTAIN_LEN); @@ -3400,7 +3450,7 @@ int rtw89_fw_h2c_role_maintain(struct rtw89_dev *rtwdev, SET_FWROLE_MAINTAIN_MACID(skb->data, mac_id); SET_FWROLE_MAINTAIN_SELF_ROLE(skb->data, self_role); SET_FWROLE_MAINTAIN_UPD_MODE(skb->data, upd_mode); - SET_FWROLE_MAINTAIN_WIFI_ROLE(skb->data, rtwvif->wifi_role); + SET_FWROLE_MAINTAIN_WIFI_ROLE(skb->data, rtwvif_link->wifi_role);
rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C, H2C_CAT_MAC, H2C_CL_MAC_MEDIA_RPT, @@ -3421,39 +3471,53 @@ int rtw89_fw_h2c_role_maintain(struct rtw89_dev *rtwdev, }
static enum rtw89_fw_sta_type -rtw89_fw_get_sta_type(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, - struct rtw89_sta *rtwsta) +rtw89_fw_get_sta_type(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link) { - struct ieee80211_sta *sta = rtwsta_to_sta_safe(rtwsta); - struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif); + struct ieee80211_bss_conf *bss_conf; + struct ieee80211_link_sta *link_sta; + enum rtw89_fw_sta_type type; + + rcu_read_lock();
- if (!sta) + if (!rtwsta_link) goto by_vif;
- if (sta->deflink.eht_cap.has_eht) - return RTW89_FW_BE_STA; - else if (sta->deflink.he_cap.has_he) - return RTW89_FW_AX_STA; + link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true); + + if (link_sta->eht_cap.has_eht) + type = RTW89_FW_BE_STA; + else if (link_sta->he_cap.has_he) + type = RTW89_FW_AX_STA; else - return RTW89_FW_N_AC_STA; + type = RTW89_FW_N_AC_STA; + + goto out;
by_vif: - if (vif->bss_conf.eht_support) - return RTW89_FW_BE_STA; - else if (vif->bss_conf.he_support) - return RTW89_FW_AX_STA; + bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true); + + if (bss_conf->eht_support) + type = RTW89_FW_BE_STA; + else if (bss_conf->he_support) + type = RTW89_FW_AX_STA; else - return RTW89_FW_N_AC_STA; + type = RTW89_FW_N_AC_STA; + +out: + rcu_read_unlock(); + + return type; }
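rtw89_fw_get_sta_type() is restructured from early returns into a single exit so rcu_read_unlock() is reached on every path, preferring the station link's capabilities over the vif link's bss_conf. The equivalent logic, condensed (names per the hunk above, example_* wrapper is illustrative):

	static enum rtw89_fw_sta_type
	example_get_sta_type(struct rtw89_vif_link *rtwvif_link,
			     struct rtw89_sta_link *rtwsta_link)
	{
		struct ieee80211_bss_conf *bss_conf;
		struct ieee80211_link_sta *link_sta;
		enum rtw89_fw_sta_type type;

		rcu_read_lock();

		if (!rtwsta_link)
			goto by_vif;

		link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true);
		type = link_sta->eht_cap.has_eht ? RTW89_FW_BE_STA :
		       link_sta->he_cap.has_he ? RTW89_FW_AX_STA : RTW89_FW_N_AC_STA;
		goto out;

	by_vif:
		bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
		type = bss_conf->eht_support ? RTW89_FW_BE_STA :
		       bss_conf->he_support ? RTW89_FW_AX_STA : RTW89_FW_N_AC_STA;

	out:
		rcu_read_unlock();
		return type;
	}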
-int rtw89_fw_h2c_join_info(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, - struct rtw89_sta *rtwsta, bool dis_conn) +int rtw89_fw_h2c_join_info(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link, bool dis_conn) { struct sk_buff *skb; - u8 mac_id = rtwsta ? rtwsta->mac_id : rtwvif->mac_id; - u8 self_role = rtwvif->self_role; + u8 mac_id = rtwsta_link ? rtwsta_link->mac_id : rtwvif_link->mac_id; + u8 self_role = rtwvif_link->self_role; enum rtw89_fw_sta_type sta_type; - u8 net_type = rtwvif->net_type; + u8 net_type = rtwvif_link->net_type; struct rtw89_h2c_join_v1 *h2c_v1; struct rtw89_h2c_join *h2c; u32 len = sizeof(*h2c); @@ -3465,7 +3529,7 @@ int rtw89_fw_h2c_join_info(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, format_v1 = true; }
- if (net_type == RTW89_NET_TYPE_AP_MODE && rtwsta) { + if (net_type == RTW89_NET_TYPE_AP_MODE && rtwsta_link) { self_role = RTW89_SELF_ROLE_AP_CLIENT; net_type = dis_conn ? RTW89_NET_TYPE_NO_LINK : net_type; } @@ -3480,16 +3544,17 @@ int rtw89_fw_h2c_join_info(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
h2c->w0 = le32_encode_bits(mac_id, RTW89_H2C_JOININFO_W0_MACID) | le32_encode_bits(dis_conn, RTW89_H2C_JOININFO_W0_OP) | - le32_encode_bits(rtwvif->mac_idx, RTW89_H2C_JOININFO_W0_BAND) | - le32_encode_bits(rtwvif->wmm, RTW89_H2C_JOININFO_W0_WMM) | - le32_encode_bits(rtwvif->trigger, RTW89_H2C_JOININFO_W0_TGR) | + le32_encode_bits(rtwvif_link->mac_idx, RTW89_H2C_JOININFO_W0_BAND) | + le32_encode_bits(rtwvif_link->wmm, RTW89_H2C_JOININFO_W0_WMM) | + le32_encode_bits(rtwvif_link->trigger, RTW89_H2C_JOININFO_W0_TGR) | le32_encode_bits(0, RTW89_H2C_JOININFO_W0_ISHESTA) | le32_encode_bits(0, RTW89_H2C_JOININFO_W0_DLBW) | le32_encode_bits(0, RTW89_H2C_JOININFO_W0_TF_MAC_PAD) | le32_encode_bits(0, RTW89_H2C_JOININFO_W0_DL_T_PE) | - le32_encode_bits(rtwvif->port, RTW89_H2C_JOININFO_W0_PORT_ID) | + le32_encode_bits(rtwvif_link->port, RTW89_H2C_JOININFO_W0_PORT_ID) | le32_encode_bits(net_type, RTW89_H2C_JOININFO_W0_NET_TYPE) | - le32_encode_bits(rtwvif->wifi_role, RTW89_H2C_JOININFO_W0_WIFI_ROLE) | + le32_encode_bits(rtwvif_link->wifi_role, + RTW89_H2C_JOININFO_W0_WIFI_ROLE) | le32_encode_bits(self_role, RTW89_H2C_JOININFO_W0_SELF_ROLE);
if (!format_v1) @@ -3497,7 +3562,7 @@ int rtw89_fw_h2c_join_info(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
h2c_v1 = (struct rtw89_h2c_join_v1 *)skb->data;
- sta_type = rtw89_fw_get_sta_type(rtwdev, rtwvif, rtwsta); + sta_type = rtw89_fw_get_sta_type(rtwdev, rtwvif_link, rtwsta_link);
h2c_v1->w1 = le32_encode_bits(sta_type, RTW89_H2C_JOININFO_W1_STA_TYPE); h2c_v1->w2 = 0; @@ -3618,7 +3683,7 @@ int rtw89_fw_h2c_macid_pause(struct rtw89_dev *rtwdev, u8 sh, u8 grp, }
#define H2C_EDCA_LEN 12 -int rtw89_fw_h2c_set_edca(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, +int rtw89_fw_h2c_set_edca(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, u8 ac, u32 val) { struct sk_buff *skb; @@ -3631,7 +3696,7 @@ int rtw89_fw_h2c_set_edca(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, } skb_put(skb, H2C_EDCA_LEN); RTW89_SET_EDCA_SEL(skb->data, 0); - RTW89_SET_EDCA_BAND(skb->data, rtwvif->mac_idx); + RTW89_SET_EDCA_BAND(skb->data, rtwvif_link->mac_idx); RTW89_SET_EDCA_WMM(skb->data, 0); RTW89_SET_EDCA_AC(skb->data, ac); RTW89_SET_EDCA_PARAM(skb->data, val); @@ -3655,7 +3720,8 @@ int rtw89_fw_h2c_set_edca(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, }
#define H2C_TSF32_TOGL_LEN 4 -int rtw89_fw_h2c_tsf32_toggle(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, +int rtw89_fw_h2c_tsf32_toggle(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, bool en) { struct sk_buff *skb; @@ -3671,9 +3737,9 @@ int rtw89_fw_h2c_tsf32_toggle(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif skb_put(skb, H2C_TSF32_TOGL_LEN); cmd = skb->data;
- RTW89_SET_FWCMD_TSF32_TOGL_BAND(cmd, rtwvif->mac_idx); + RTW89_SET_FWCMD_TSF32_TOGL_BAND(cmd, rtwvif_link->mac_idx); RTW89_SET_FWCMD_TSF32_TOGL_EN(cmd, en); - RTW89_SET_FWCMD_TSF32_TOGL_PORT(cmd, rtwvif->port); + RTW89_SET_FWCMD_TSF32_TOGL_PORT(cmd, rtwvif_link->port); RTW89_SET_FWCMD_TSF32_TOGL_EARLY(cmd, early_us);
rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C, @@ -3727,11 +3793,10 @@ int rtw89_fw_h2c_set_ofld_cfg(struct rtw89_dev *rtwdev) }
int rtw89_fw_h2c_set_bcn_fltr_cfg(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif, + struct rtw89_vif_link *rtwvif_link, bool connect) { - struct rtw89_vif *rtwvif = vif_to_rtwvif_safe(vif); - struct ieee80211_bss_conf *bss_conf = vif ? &vif->bss_conf : NULL; + struct ieee80211_bss_conf *bss_conf; s32 thold = RTW89_DEFAULT_CQM_THOLD; u32 hyst = RTW89_DEFAULT_CQM_HYST; struct rtw89_h2c_bcnfltr *h2c; @@ -3742,9 +3807,20 @@ int rtw89_fw_h2c_set_bcn_fltr_cfg(struct rtw89_dev *rtwdev, if (!RTW89_CHK_FW_FEATURE(BEACON_FILTER, &rtwdev->fw)) return -EINVAL;
- if (!rtwvif || !bss_conf || rtwvif->net_type != RTW89_NET_TYPE_INFRA) + if (!rtwvif_link || rtwvif_link->net_type != RTW89_NET_TYPE_INFRA) return -EINVAL;
+ rcu_read_lock(); + + bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, false); + + if (bss_conf->cqm_rssi_hyst) + hyst = bss_conf->cqm_rssi_hyst; + if (bss_conf->cqm_rssi_thold) + thold = bss_conf->cqm_rssi_thold; + + rcu_read_unlock(); + skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, len); if (!skb) { rtw89_err(rtwdev, "failed to alloc skb for h2c bcn filter\n"); @@ -3754,11 +3830,6 @@ int rtw89_fw_h2c_set_bcn_fltr_cfg(struct rtw89_dev *rtwdev, skb_put(skb, len); h2c = (struct rtw89_h2c_bcnfltr *)skb->data;
- if (bss_conf->cqm_rssi_hyst) - hyst = bss_conf->cqm_rssi_hyst; - if (bss_conf->cqm_rssi_thold) - thold = bss_conf->cqm_rssi_thold; - h2c->w0 = le32_encode_bits(connect, RTW89_H2C_BCNFLTR_W0_MON_RSSI) | le32_encode_bits(connect, RTW89_H2C_BCNFLTR_W0_MON_BCN) | le32_encode_bits(connect, RTW89_H2C_BCNFLTR_W0_MON_EN) | @@ -3768,7 +3839,7 @@ int rtw89_fw_h2c_set_bcn_fltr_cfg(struct rtw89_dev *rtwdev, le32_encode_bits(hyst, RTW89_H2C_BCNFLTR_W0_RSSI_HYST) | le32_encode_bits(thold + MAX_RSSI, RTW89_H2C_BCNFLTR_W0_RSSI_THRESHOLD) | - le32_encode_bits(rtwvif->mac_id, RTW89_H2C_BCNFLTR_W0_MAC_ID); + le32_encode_bits(rtwvif_link->mac_id, RTW89_H2C_BCNFLTR_W0_MAC_ID);
rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C, H2C_CAT_MAC, H2C_CL_MAC_FW_OFLD, @@ -3833,15 +3904,16 @@ int rtw89_fw_h2c_rssi_offload(struct rtw89_dev *rtwdev, return ret; }
-int rtw89_fw_h2c_tp_offload(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif) +int rtw89_fw_h2c_tp_offload(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link) { + struct rtw89_vif *rtwvif = rtwvif_link->rtwvif; struct rtw89_traffic_stats *stats = &rtwvif->stats; struct rtw89_h2c_ofld *h2c; u32 len = sizeof(*h2c); struct sk_buff *skb; int ret;
- if (rtwvif->net_type != RTW89_NET_TYPE_INFRA) + if (rtwvif_link->net_type != RTW89_NET_TYPE_INFRA) return -EINVAL;
skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, len); @@ -3853,7 +3925,7 @@ int rtw89_fw_h2c_tp_offload(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif) skb_put(skb, len); h2c = (struct rtw89_h2c_ofld *)skb->data;
- h2c->w0 = le32_encode_bits(rtwvif->mac_id, RTW89_H2C_OFLD_W0_MAC_ID) | + h2c->w0 = le32_encode_bits(rtwvif_link->mac_id, RTW89_H2C_OFLD_W0_MAC_ID) | le32_encode_bits(stats->tx_throughput, RTW89_H2C_OFLD_W0_TX_TP) | le32_encode_bits(stats->rx_throughput, RTW89_H2C_OFLD_W0_RX_TP);
@@ -4858,7 +4930,7 @@ int rtw89_fw_h2c_scan_list_offload_be(struct rtw89_dev *rtwdev, int ch_num, #define RTW89_SCAN_DELAY_TSF_UNIT 104800 int rtw89_fw_h2c_scan_offload_ax(struct rtw89_dev *rtwdev, struct rtw89_scan_option *option, - struct rtw89_vif *rtwvif, + struct rtw89_vif_link *rtwvif_link, bool wowlan) { struct rtw89_wait_info *wait = &rtwdev->mac.fw_ofld_wait; @@ -4880,7 +4952,7 @@ int rtw89_fw_h2c_scan_offload_ax(struct rtw89_dev *rtwdev, h2c = (struct rtw89_h2c_scanofld *)skb->data;
if (option->delay) { - ret = rtw89_mac_port_get_tsf(rtwdev, rtwvif, &tsf); + ret = rtw89_mac_port_get_tsf(rtwdev, rtwvif_link, &tsf); if (ret) { rtw89_warn(rtwdev, "NLO failed to get port tsf: %d\n", ret); scan_mode = RTW89_SCAN_IMMEDIATE; @@ -4890,8 +4962,8 @@ int rtw89_fw_h2c_scan_offload_ax(struct rtw89_dev *rtwdev, } }
- h2c->w0 = le32_encode_bits(rtwvif->mac_id, RTW89_H2C_SCANOFLD_W0_MACID) | - le32_encode_bits(rtwvif->port, RTW89_H2C_SCANOFLD_W0_PORT_ID) | + h2c->w0 = le32_encode_bits(rtwvif_link->mac_id, RTW89_H2C_SCANOFLD_W0_MACID) | + le32_encode_bits(rtwvif_link->port, RTW89_H2C_SCANOFLD_W0_PORT_ID) | le32_encode_bits(RTW89_PHY_0, RTW89_H2C_SCANOFLD_W0_BAND) | le32_encode_bits(option->enable, RTW89_H2C_SCANOFLD_W0_OPERATION);
@@ -4963,9 +5035,10 @@ static void rtw89_scan_get_6g_disabled_chan(struct rtw89_dev *rtwdev,
int rtw89_fw_h2c_scan_offload_be(struct rtw89_dev *rtwdev, struct rtw89_scan_option *option, - struct rtw89_vif *rtwvif, + struct rtw89_vif_link *rtwvif_link, bool wowlan) { + struct rtw89_vif *rtwvif = rtwvif_link->rtwvif; struct rtw89_hw_scan_info *scan_info = &rtwdev->scan_info; struct rtw89_wait_info *wait = &rtwdev->mac.fw_ofld_wait; struct cfg80211_scan_request *req = rtwvif->scan_req; @@ -5016,8 +5089,8 @@ int rtw89_fw_h2c_scan_offload_be(struct rtw89_dev *rtwdev, le32_encode_bits(option->repeat, RTW89_H2C_SCANOFLD_BE_W0_REPEAT) | le32_encode_bits(true, RTW89_H2C_SCANOFLD_BE_W0_NOTIFY_END) | le32_encode_bits(true, RTW89_H2C_SCANOFLD_BE_W0_LEARN_CH) | - le32_encode_bits(rtwvif->mac_id, RTW89_H2C_SCANOFLD_BE_W0_MACID) | - le32_encode_bits(rtwvif->port, RTW89_H2C_SCANOFLD_BE_W0_PORT) | + le32_encode_bits(rtwvif_link->mac_id, RTW89_H2C_SCANOFLD_BE_W0_MACID) | + le32_encode_bits(rtwvif_link->port, RTW89_H2C_SCANOFLD_BE_W0_PORT) | le32_encode_bits(option->band, RTW89_H2C_SCANOFLD_BE_W0_BAND);
h2c->w1 = le32_encode_bits(option->num_macc_role, RTW89_H2C_SCANOFLD_BE_W1_NUM_MACC_ROLE) | @@ -5082,11 +5155,11 @@ int rtw89_fw_h2c_scan_offload_be(struct rtw89_dev *rtwdev,
for (i = 0; i < option->num_opch; i++) { opch = ptr; - opch->w0 = le32_encode_bits(rtwvif->mac_id, + opch->w0 = le32_encode_bits(rtwvif_link->mac_id, RTW89_H2C_SCANOFLD_BE_OPCH_W0_MACID) | le32_encode_bits(option->band, RTW89_H2C_SCANOFLD_BE_OPCH_W0_BAND) | - le32_encode_bits(rtwvif->port, + le32_encode_bits(rtwvif_link->port, RTW89_H2C_SCANOFLD_BE_OPCH_W0_PORT) | le32_encode_bits(RTW89_SCAN_OPMODE_INTV, RTW89_H2C_SCANOFLD_BE_OPCH_W0_POLICY) | @@ -5871,12 +5944,10 @@ static void rtw89_release_pkt_list(struct rtw89_dev *rtwdev) }
static bool rtw89_is_6ghz_wildcard_probe_req(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, + struct cfg80211_scan_request *req, struct rtw89_pktofld_info *info, enum nl80211_band band, u8 ssid_idx) { - struct cfg80211_scan_request *req = rtwvif->scan_req; - if (band != NL80211_BAND_6GHZ) return false;
@@ -5892,11 +5963,13 @@ static bool rtw89_is_6ghz_wildcard_probe_req(struct rtw89_dev *rtwdev, }
static int rtw89_append_probe_req_ie(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, + struct rtw89_vif_link *rtwvif_link, struct sk_buff *skb, u8 ssid_idx) { struct rtw89_hw_scan_info *scan_info = &rtwdev->scan_info; + struct rtw89_vif *rtwvif = rtwvif_link->rtwvif; struct ieee80211_scan_ies *ies = rtwvif->scan_ies; + struct cfg80211_scan_request *req = rtwvif->scan_req; struct rtw89_pktofld_info *info; struct sk_buff *new; int ret = 0; @@ -5921,8 +5994,7 @@ static int rtw89_append_probe_req_ie(struct rtw89_dev *rtwdev, goto out; }
- rtw89_is_6ghz_wildcard_probe_req(rtwdev, rtwvif, info, band, - ssid_idx); + rtw89_is_6ghz_wildcard_probe_req(rtwdev, req, info, band, ssid_idx);
ret = rtw89_fw_h2c_add_pkt_offload(rtwdev, &info->id, new); if (ret) { @@ -5939,22 +6011,23 @@ static int rtw89_append_probe_req_ie(struct rtw89_dev *rtwdev, }
static int rtw89_hw_scan_update_probe_req(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif) + struct rtw89_vif_link *rtwvif_link) { + struct rtw89_vif *rtwvif = rtwvif_link->rtwvif; struct cfg80211_scan_request *req = rtwvif->scan_req; struct sk_buff *skb; u8 num = req->n_ssids, i; int ret;
for (i = 0; i < num; i++) { - skb = ieee80211_probereq_get(rtwdev->hw, rtwvif->mac_addr, + skb = ieee80211_probereq_get(rtwdev->hw, rtwvif_link->mac_addr, req->ssids[i].ssid, req->ssids[i].ssid_len, req->ie_len); if (!skb) return -ENOMEM;
- ret = rtw89_append_probe_req_ie(rtwdev, rtwvif, skb, i); + ret = rtw89_append_probe_req_ie(rtwdev, rtwvif_link, skb, i); kfree_skb(skb);
if (ret) @@ -5965,13 +6038,12 @@ static int rtw89_hw_scan_update_probe_req(struct rtw89_dev *rtwdev, }
static int rtw89_update_6ghz_rnr_chan(struct rtw89_dev *rtwdev, + struct ieee80211_scan_ies *ies, struct cfg80211_scan_request *req, struct rtw89_mac_chinfo *ch_info) { - struct ieee80211_vif *vif = rtwdev->scan_info.scanning_vif; + struct rtw89_vif_link *rtwvif_link = rtwdev->scan_info.scanning_vif; struct list_head *pkt_list = rtwdev->scan_info.pkt_list; - struct rtw89_vif *rtwvif = vif_to_rtwvif_safe(vif); - struct ieee80211_scan_ies *ies = rtwvif->scan_ies; struct cfg80211_scan_6ghz_params *params; struct rtw89_pktofld_info *info, *tmp; struct ieee80211_hdr *hdr; @@ -6000,7 +6072,7 @@ static int rtw89_update_6ghz_rnr_chan(struct rtw89_dev *rtwdev, if (found) continue;
- skb = ieee80211_probereq_get(rtwdev->hw, rtwvif->mac_addr, + skb = ieee80211_probereq_get(rtwdev->hw, rtwvif_link->mac_addr, NULL, 0, req->ie_len); skb_put_data(skb, ies->ies[NL80211_BAND_6GHZ], ies->len[NL80211_BAND_6GHZ]); skb_put_data(skb, ies->common_ies, ies->common_ie_len); @@ -6090,8 +6162,9 @@ static void rtw89_hw_scan_add_chan(struct rtw89_dev *rtwdev, int chan_type, struct rtw89_mac_chinfo *ch_info) { struct rtw89_hw_scan_info *scan_info = &rtwdev->scan_info; - struct ieee80211_vif *vif = rtwdev->scan_info.scanning_vif; - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; + struct rtw89_vif_link *rtwvif_link = rtwdev->scan_info.scanning_vif; + struct rtw89_vif *rtwvif = rtwvif_link->rtwvif; + struct ieee80211_scan_ies *ies = rtwvif->scan_ies; struct cfg80211_scan_request *req = rtwvif->scan_req; struct rtw89_chan *op = &rtwdev->scan_info.op_chan; struct rtw89_pktofld_info *info; @@ -6117,7 +6190,7 @@ static void rtw89_hw_scan_add_chan(struct rtw89_dev *rtwdev, int chan_type, } }
- ret = rtw89_update_6ghz_rnr_chan(rtwdev, req, ch_info); + ret = rtw89_update_6ghz_rnr_chan(rtwdev, ies, req, ch_info); if (ret) rtw89_warn(rtwdev, "RNR fails: %d\n", ret);
@@ -6207,8 +6280,8 @@ static void rtw89_hw_scan_add_chan_be(struct rtw89_dev *rtwdev, int chan_type, struct rtw89_mac_chinfo_be *ch_info) { struct rtw89_hw_scan_info *scan_info = &rtwdev->scan_info; - struct ieee80211_vif *vif = rtwdev->scan_info.scanning_vif; - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; + struct rtw89_vif_link *rtwvif_link = rtwdev->scan_info.scanning_vif; + struct rtw89_vif *rtwvif = rtwvif_link->rtwvif; struct cfg80211_scan_request *req = rtwvif->scan_req; struct rtw89_pktofld_info *info; u8 band, probe_count = 0, i; @@ -6265,7 +6338,7 @@ static void rtw89_hw_scan_add_chan_be(struct rtw89_dev *rtwdev, int chan_type, }
int rtw89_pno_scan_add_chan_list_ax(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif) + struct rtw89_vif_link *rtwvif_link) { struct rtw89_wow_param *rtw_wow = &rtwdev->wow; struct cfg80211_sched_scan_request *nd_config = rtw_wow->nd_config; @@ -6315,8 +6388,9 @@ int rtw89_pno_scan_add_chan_list_ax(struct rtw89_dev *rtwdev, }
int rtw89_hw_scan_add_chan_list_ax(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, bool connected) + struct rtw89_vif_link *rtwvif_link, bool connected) { + struct rtw89_vif *rtwvif = rtwvif_link->rtwvif; struct cfg80211_scan_request *req = rtwvif->scan_req; struct rtw89_mac_chinfo *ch_info, *tmp; struct ieee80211_channel *channel; @@ -6392,7 +6466,7 @@ int rtw89_hw_scan_add_chan_list_ax(struct rtw89_dev *rtwdev, }
int rtw89_pno_scan_add_chan_list_be(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif) + struct rtw89_vif_link *rtwvif_link) { struct rtw89_wow_param *rtw_wow = &rtwdev->wow; struct cfg80211_sched_scan_request *nd_config = rtw_wow->nd_config; @@ -6444,8 +6518,9 @@ int rtw89_pno_scan_add_chan_list_be(struct rtw89_dev *rtwdev, }
int rtw89_hw_scan_add_chan_list_be(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, bool connected) + struct rtw89_vif_link *rtwvif_link, bool connected) { + struct rtw89_vif *rtwvif = rtwvif_link->rtwvif; struct cfg80211_scan_request *req = rtwvif->scan_req; struct rtw89_mac_chinfo_be *ch_info, *tmp; struct ieee80211_channel *channel; @@ -6503,45 +6578,50 @@ int rtw89_hw_scan_add_chan_list_be(struct rtw89_dev *rtwdev, }
static int rtw89_hw_scan_prehandle(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, bool connected) + struct rtw89_vif_link *rtwvif_link, bool connected) { const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def; int ret;
- ret = rtw89_hw_scan_update_probe_req(rtwdev, rtwvif); + ret = rtw89_hw_scan_update_probe_req(rtwdev, rtwvif_link); if (ret) { rtw89_err(rtwdev, "Update probe request failed\n"); goto out; } - ret = mac->add_chan_list(rtwdev, rtwvif, connected); + ret = mac->add_chan_list(rtwdev, rtwvif_link, connected); out: return ret; }
-void rtw89_hw_scan_start(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif, +void rtw89_hw_scan_start(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, struct ieee80211_scan_request *scan_req) { - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def; struct cfg80211_scan_request *req = &scan_req->req; + const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, + rtwvif_link->chanctx_idx); + struct rtw89_vif *rtwvif = rtwvif_link->rtwvif; u32 rx_fltr = rtwdev->hal.rx_fltr; u8 mac_addr[ETH_ALEN];
- rtw89_get_channel(rtwdev, rtwvif, &rtwdev->scan_info.op_chan); - rtwdev->scan_info.scanning_vif = vif; + /* clone op and keep it during scan */ + rtwdev->scan_info.op_chan = *chan; + + rtwdev->scan_info.scanning_vif = rtwvif_link; rtwdev->scan_info.last_chan_idx = 0; rtwdev->scan_info.abort = false; rtwvif->scan_ies = &scan_req->ies; rtwvif->scan_req = req; ieee80211_stop_queues(rtwdev->hw); - rtw89_mac_port_cfg_rx_sync(rtwdev, rtwvif, false); + rtw89_mac_port_cfg_rx_sync(rtwdev, rtwvif_link, false);
if (req->flags & NL80211_SCAN_FLAG_RANDOM_ADDR) get_random_mask_addr(mac_addr, req->mac_addr, req->mac_addr_mask); else - ether_addr_copy(mac_addr, vif->addr); - rtw89_core_scan_start(rtwdev, rtwvif, mac_addr, true); + ether_addr_copy(mac_addr, rtwvif_link->mac_addr); + rtw89_core_scan_start(rtwdev, rtwvif_link, mac_addr, true);
rx_fltr &= ~B_AX_A_BCN_CHK_EN; rx_fltr &= ~B_AX_A_BC; @@ -6554,28 +6634,33 @@ void rtw89_hw_scan_start(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif, rtw89_chanctx_pause(rtwdev, RTW89_CHANCTX_PAUSE_REASON_HW_SCAN); }
-void rtw89_hw_scan_complete(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif, +void rtw89_hw_scan_complete(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, bool aborted) { const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def; struct rtw89_hw_scan_info *scan_info = &rtwdev->scan_info; - struct rtw89_vif *rtwvif = vif_to_rtwvif_safe(vif); struct cfg80211_scan_info info = { .aborted = aborted, }; + struct rtw89_vif *rtwvif;
- if (!vif) + if (!rtwvif_link) return;
+ rtw89_chanctx_proceed(rtwdev); + + rtwvif = rtwvif_link->rtwvif; + rtw89_write32_mask(rtwdev, rtw89_mac_reg_by_idx(rtwdev, mac->rx_fltr, RTW89_MAC_0), B_AX_RX_FLTR_CFG_MASK, rtwdev->hal.rx_fltr);
- rtw89_core_scan_complete(rtwdev, vif, true); + rtw89_core_scan_complete(rtwdev, rtwvif_link, true); ieee80211_scan_completed(rtwdev->hw, &info); ieee80211_wake_queues(rtwdev->hw); - rtw89_mac_port_cfg_rx_sync(rtwdev, rtwvif, true); + rtw89_mac_port_cfg_rx_sync(rtwdev, rtwvif_link, true); rtw89_mac_enable_beacon_for_ap_vifs(rtwdev, true);
rtw89_release_pkt_list(rtwdev); @@ -6584,18 +6669,17 @@ void rtw89_hw_scan_complete(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif, scan_info->last_chan_idx = 0; scan_info->scanning_vif = NULL; scan_info->abort = false; - - rtw89_chanctx_proceed(rtwdev); }
-void rtw89_hw_scan_abort(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif) +void rtw89_hw_scan_abort(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link) { struct rtw89_hw_scan_info *scan_info = &rtwdev->scan_info; int ret;
scan_info->abort = true;
- ret = rtw89_hw_scan_offload(rtwdev, vif, false); + ret = rtw89_hw_scan_offload(rtwdev, rtwvif_link, false); if (ret) rtw89_warn(rtwdev, "rtw89_hw_scan_offload failed ret %d\n", ret);
@@ -6604,40 +6688,43 @@ void rtw89_hw_scan_abort(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif) * RTW89_SCAN_END_SCAN_NOTIFY, so that ieee80211_stop() can flush scan * work properly. */ - rtw89_hw_scan_complete(rtwdev, vif, true); + rtw89_hw_scan_complete(rtwdev, rtwvif_link, true); }
static bool rtw89_is_any_vif_connected_or_connecting(struct rtw89_dev *rtwdev) { + struct rtw89_vif_link *rtwvif_link; struct rtw89_vif *rtwvif; + unsigned int link_id;
rtw89_for_each_rtwvif(rtwdev, rtwvif) { - /* This variable implies connected or during attempt to connect */ - if (!is_zero_ether_addr(rtwvif->bssid)) - return true; + rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) { + /* This variable implies connected or during attempt to connect */ + if (!is_zero_ether_addr(rtwvif_link->bssid)) + return true; + } }
return false; }
-int rtw89_hw_scan_offload(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif, +int rtw89_hw_scan_offload(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, bool enable) { const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def; struct rtw89_scan_option opt = {0}; - struct rtw89_vif *rtwvif; bool connected; int ret = 0;
- rtwvif = vif ? (struct rtw89_vif *)vif->drv_priv : NULL; - if (!rtwvif) + if (!rtwvif_link) return -EINVAL;
connected = rtw89_is_any_vif_connected_or_connecting(rtwdev); opt.enable = enable; opt.target_ch_mode = connected; if (enable) { - ret = rtw89_hw_scan_prehandle(rtwdev, rtwvif, connected); + ret = rtw89_hw_scan_prehandle(rtwdev, rtwvif_link, connected); if (ret) goto out; } @@ -6652,7 +6739,7 @@ int rtw89_hw_scan_offload(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif, opt.opch_end = connected ? 0 : RTW89_CHAN_INVALID; }
- ret = mac->scan_offload(rtwdev, &opt, rtwvif, false); + ret = mac->scan_offload(rtwdev, &opt, rtwvif_link, false); out: return ret; } @@ -6758,7 +6845,7 @@ int rtw89_fw_h2c_pkt_drop(struct rtw89_dev *rtwdev, }
#define H2C_KEEP_ALIVE_LEN 4 -int rtw89_fw_h2c_keep_alive(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, +int rtw89_fw_h2c_keep_alive(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, bool enable) { struct sk_buff *skb; @@ -6766,7 +6853,7 @@ int rtw89_fw_h2c_keep_alive(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, int ret;
if (enable) { - ret = rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif, + ret = rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif_link, RTW89_PKT_OFLD_TYPE_NULL_DATA, &pkt_id); if (ret) @@ -6784,7 +6871,7 @@ int rtw89_fw_h2c_keep_alive(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, RTW89_SET_KEEP_ALIVE_ENABLE(skb->data, enable); RTW89_SET_KEEP_ALIVE_PKT_NULL_ID(skb->data, pkt_id); RTW89_SET_KEEP_ALIVE_PERIOD(skb->data, 5); - RTW89_SET_KEEP_ALIVE_MACID(skb->data, rtwvif->mac_id); + RTW89_SET_KEEP_ALIVE_MACID(skb->data, rtwvif_link->mac_id);
rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C, H2C_CAT_MAC, @@ -6806,7 +6893,7 @@ int rtw89_fw_h2c_keep_alive(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, return ret; }
-int rtw89_fw_h2c_arp_offload(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, +int rtw89_fw_h2c_arp_offload(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, bool enable) { struct rtw89_h2c_arp_offload *h2c; @@ -6816,7 +6903,7 @@ int rtw89_fw_h2c_arp_offload(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, int ret;
if (enable) { - ret = rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif, + ret = rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif_link, RTW89_PKT_OFLD_TYPE_ARP_RSP, &pkt_id); if (ret) @@ -6834,7 +6921,7 @@ int rtw89_fw_h2c_arp_offload(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
h2c->w0 = le32_encode_bits(enable, RTW89_H2C_ARP_OFFLOAD_W0_ENABLE) | le32_encode_bits(0, RTW89_H2C_ARP_OFFLOAD_W0_ACTION) | - le32_encode_bits(rtwvif->mac_id, RTW89_H2C_ARP_OFFLOAD_W0_MACID) | + le32_encode_bits(rtwvif_link->mac_id, RTW89_H2C_ARP_OFFLOAD_W0_MACID) | le32_encode_bits(pkt_id, RTW89_H2C_ARP_OFFLOAD_W0_PKT_ID);
rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C, @@ -6859,11 +6946,11 @@ int rtw89_fw_h2c_arp_offload(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
#define H2C_DISCONNECT_DETECT_LEN 8 int rtw89_fw_h2c_disconnect_detect(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, bool enable) + struct rtw89_vif_link *rtwvif_link, bool enable) { struct rtw89_wow_param *rtw_wow = &rtwdev->wow; struct sk_buff *skb; - u8 macid = rtwvif->mac_id; + u8 macid = rtwvif_link->mac_id; int ret;
skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, H2C_DISCONNECT_DETECT_LEN); @@ -6902,7 +6989,7 @@ int rtw89_fw_h2c_disconnect_detect(struct rtw89_dev *rtwdev, return ret; }
-int rtw89_fw_h2c_cfg_pno(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, +int rtw89_fw_h2c_cfg_pno(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, bool enable) { struct rtw89_wow_param *rtw_wow = &rtwdev->wow; @@ -6923,7 +7010,7 @@ int rtw89_fw_h2c_cfg_pno(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
h2c->w0 = le32_encode_bits(enable, RTW89_H2C_NLO_W0_ENABLE) | le32_encode_bits(enable, RTW89_H2C_NLO_W0_IGNORE_CIPHER) | - le32_encode_bits(rtwvif->mac_id, RTW89_H2C_NLO_W0_MACID); + le32_encode_bits(rtwvif_link->mac_id, RTW89_H2C_NLO_W0_MACID);
if (enable) { h2c->nlo_cnt = nd_config->n_match_sets; @@ -6953,12 +7040,12 @@ int rtw89_fw_h2c_cfg_pno(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, return ret; }
-int rtw89_fw_h2c_wow_global(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, +int rtw89_fw_h2c_wow_global(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, bool enable) { struct rtw89_wow_param *rtw_wow = &rtwdev->wow; struct rtw89_h2c_wow_global *h2c; - u8 macid = rtwvif->mac_id; + u8 macid = rtwvif_link->mac_id; u32 len = sizeof(*h2c); struct sk_buff *skb; int ret; @@ -7002,12 +7089,12 @@ int rtw89_fw_h2c_wow_global(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
#define H2C_WAKEUP_CTRL_LEN 4 int rtw89_fw_h2c_wow_wakeup_ctrl(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, + struct rtw89_vif_link *rtwvif_link, bool enable) { struct rtw89_wow_param *rtw_wow = &rtwdev->wow; struct sk_buff *skb; - u8 macid = rtwvif->mac_id; + u8 macid = rtwvif_link->mac_id; int ret;
skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, H2C_WAKEUP_CTRL_LEN); @@ -7100,13 +7187,13 @@ int rtw89_fw_wow_cam_update(struct rtw89_dev *rtwdev, }
int rtw89_fw_h2c_wow_gtk_ofld(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, + struct rtw89_vif_link *rtwvif_link, bool enable) { struct rtw89_wow_param *rtw_wow = &rtwdev->wow; struct rtw89_wow_gtk_info *gtk_info = &rtw_wow->gtk_info; struct rtw89_h2c_wow_gtk_ofld *h2c; - u8 macid = rtwvif->mac_id; + u8 macid = rtwvif_link->mac_id; u32 len = sizeof(*h2c); u8 pkt_id_sa_query = 0; struct sk_buff *skb; @@ -7128,14 +7215,14 @@ int rtw89_fw_h2c_wow_gtk_ofld(struct rtw89_dev *rtwdev, if (!enable) goto hdr;
- ret = rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif, + ret = rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif_link, RTW89_PKT_OFLD_TYPE_EAPOL_KEY, &pkt_id_eapol); if (ret) goto fail;
if (gtk_info->igtk_keyid) { - ret = rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif, + ret = rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif_link, RTW89_PKT_OFLD_TYPE_SA_QUERY, &pkt_id_sa_query); if (ret) @@ -7173,7 +7260,7 @@ int rtw89_fw_h2c_wow_gtk_ofld(struct rtw89_dev *rtwdev, return ret; }
-int rtw89_fw_h2c_fwips(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, +int rtw89_fw_h2c_fwips(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, bool enable) { struct rtw89_wait_info *wait = &rtwdev->mac.ps_wait; @@ -7189,7 +7276,7 @@ int rtw89_fw_h2c_fwips(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, skb_put(skb, len); h2c = (struct rtw89_h2c_fwips *)skb->data;
- h2c->w0 = le32_encode_bits(rtwvif->mac_id, RTW89_H2C_FW_IPS_W0_MACID) | + h2c->w0 = le32_encode_bits(rtwvif_link->mac_id, RTW89_H2C_FW_IPS_W0_MACID) | le32_encode_bits(enable, RTW89_H2C_FW_IPS_W0_ENABLE);
rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C, diff --git a/drivers/net/wireless/realtek/rtw89/fw.h b/drivers/net/wireless/realtek/rtw89/fw.h index ad47e77d740b..ccbbc43f33fe 100644 --- a/drivers/net/wireless/realtek/rtw89/fw.h +++ b/drivers/net/wireless/realtek/rtw89/fw.h @@ -4404,59 +4404,59 @@ void rtw89_h2c_pkt_set_hdr(struct rtw89_dev *rtwdev, struct sk_buff *skb, u8 type, u8 cat, u8 class, u8 func, bool rack, bool dack, u32 len); int rtw89_fw_h2c_default_cmac_tbl(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, - struct rtw89_sta *rtwsta); + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link); int rtw89_fw_h2c_default_cmac_tbl_g7(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, - struct rtw89_sta *rtwsta); + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link); int rtw89_fw_h2c_default_dmac_tbl_v2(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, - struct rtw89_sta *rtwsta); + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link); int rtw89_fw_h2c_assoc_cmac_tbl(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif, - struct ieee80211_sta *sta); + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link); int rtw89_fw_h2c_assoc_cmac_tbl_g7(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif, - struct ieee80211_sta *sta); + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link); int rtw89_fw_h2c_ampdu_cmac_tbl_g7(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif, - struct ieee80211_sta *sta); + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link); int rtw89_fw_h2c_txtime_cmac_tbl(struct rtw89_dev *rtwdev, - struct rtw89_sta *rtwsta); + struct rtw89_sta_link *rtwsta_link); int rtw89_fw_h2c_txpath_cmac_tbl(struct rtw89_dev *rtwdev, - struct rtw89_sta *rtwsta); + struct rtw89_sta_link *rtwsta_link); int rtw89_fw_h2c_update_beacon(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif); + struct rtw89_vif_link *rtwvif_link); int rtw89_fw_h2c_update_beacon_be(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif); -int rtw89_fw_h2c_cam(struct rtw89_dev *rtwdev, struct rtw89_vif *vif, - struct rtw89_sta *rtwsta, const u8 *scan_mac_addr); + struct rtw89_vif_link *rtwvif_link); +int rtw89_fw_h2c_cam(struct rtw89_dev *rtwdev, struct rtw89_vif_link *vif, + struct rtw89_sta_link *rtwsta_link, const u8 *scan_mac_addr); int rtw89_fw_h2c_dctl_sec_cam_v1(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, - struct rtw89_sta *rtwsta); + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link); int rtw89_fw_h2c_dctl_sec_cam_v2(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, - struct rtw89_sta *rtwsta); + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link); void rtw89_fw_c2h_irqsafe(struct rtw89_dev *rtwdev, struct sk_buff *c2h); void rtw89_fw_c2h_work(struct work_struct *work); int rtw89_fw_h2c_role_maintain(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, - struct rtw89_sta *rtwsta, + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link, enum rtw89_upd_mode upd_mode); -int rtw89_fw_h2c_join_info(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, - struct rtw89_sta *rtwsta, bool dis_conn); +int rtw89_fw_h2c_join_info(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link, bool dis_conn); int rtw89_fw_h2c_notify_dbcc(struct rtw89_dev *rtwdev, bool en); int rtw89_fw_h2c_macid_pause(struct rtw89_dev *rtwdev, u8 sh, u8 grp, bool pause); -int 
rtw89_fw_h2c_set_edca(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, +int rtw89_fw_h2c_set_edca(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, u8 ac, u32 val); int rtw89_fw_h2c_set_ofld_cfg(struct rtw89_dev *rtwdev); int rtw89_fw_h2c_set_bcn_fltr_cfg(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif, + struct rtw89_vif_link *rtwvif_link, bool connect); int rtw89_fw_h2c_rssi_offload(struct rtw89_dev *rtwdev, struct rtw89_rx_phy_ppdu *phy_ppdu); -int rtw89_fw_h2c_tp_offload(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif); +int rtw89_fw_h2c_tp_offload(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link); int rtw89_fw_h2c_ra(struct rtw89_dev *rtwdev, struct rtw89_ra_info *ra, bool csi); int rtw89_fw_h2c_cxdrv_init(struct rtw89_dev *rtwdev, u8 type); int rtw89_fw_h2c_cxdrv_init_v7(struct rtw89_dev *rtwdev, u8 type); @@ -4478,11 +4478,11 @@ int rtw89_fw_h2c_scan_list_offload_be(struct rtw89_dev *rtwdev, int ch_num, struct list_head *chan_list); int rtw89_fw_h2c_scan_offload_ax(struct rtw89_dev *rtwdev, struct rtw89_scan_option *opt, - struct rtw89_vif *vif, + struct rtw89_vif_link *vif, bool wowlan); int rtw89_fw_h2c_scan_offload_be(struct rtw89_dev *rtwdev, struct rtw89_scan_option *opt, - struct rtw89_vif *vif, + struct rtw89_vif_link *vif, bool wowlan); int rtw89_fw_h2c_rf_reg(struct rtw89_dev *rtwdev, struct rtw89_fw_h2c_rf_reg_info *info, @@ -4508,14 +4508,19 @@ int rtw89_fw_h2c_raw_with_hdr(struct rtw89_dev *rtwdev, int rtw89_fw_h2c_raw(struct rtw89_dev *rtwdev, const u8 *buf, u16 len); void rtw89_fw_send_all_early_h2c(struct rtw89_dev *rtwdev); void rtw89_fw_free_all_early_h2c(struct rtw89_dev *rtwdev); -int rtw89_fw_h2c_general_pkt(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, +int rtw89_fw_h2c_general_pkt(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, u8 macid); void rtw89_fw_release_general_pkt_list_vif(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, bool notify_fw); + struct rtw89_vif_link *rtwvif_link, + bool notify_fw); void rtw89_fw_release_general_pkt_list(struct rtw89_dev *rtwdev, bool notify_fw); -int rtw89_fw_h2c_ba_cam(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta, +int rtw89_fw_h2c_ba_cam(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link, bool valid, struct ieee80211_ampdu_params *params); -int rtw89_fw_h2c_ba_cam_v1(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta, +int rtw89_fw_h2c_ba_cam_v1(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link, bool valid, struct ieee80211_ampdu_params *params); void rtw89_fw_h2c_init_dynamic_ba_cam_v0_ext(struct rtw89_dev *rtwdev); int rtw89_fw_h2c_init_ba_cam_users(struct rtw89_dev *rtwdev, u8 users, @@ -4524,8 +4529,8 @@ int rtw89_fw_h2c_init_ba_cam_users(struct rtw89_dev *rtwdev, u8 users, int rtw89_fw_h2c_lps_parm(struct rtw89_dev *rtwdev, struct rtw89_lps_parm *lps_param); int rtw89_fw_h2c_lps_ch_info(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif); -int rtw89_fw_h2c_fwips(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, + struct rtw89_vif_link *rtwvif_link); +int rtw89_fw_h2c_fwips(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, bool enable); struct sk_buff *rtw89_fw_h2c_alloc_skb_with_hdr(struct rtw89_dev *rtwdev, u32 len); struct sk_buff *rtw89_fw_h2c_alloc_skb_no_hdr(struct rtw89_dev *rtwdev, u32 len); @@ -4534,49 +4539,56 @@ int rtw89_fw_msg_reg(struct rtw89_dev *rtwdev, struct rtw89_mac_c2h_info *c2h_info); int rtw89_fw_h2c_fw_log(struct 
rtw89_dev *rtwdev, bool enable); void rtw89_fw_st_dbg_dump(struct rtw89_dev *rtwdev); -void rtw89_hw_scan_start(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif, - struct ieee80211_scan_request *req); -void rtw89_hw_scan_complete(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif, +void rtw89_hw_scan_start(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, + struct ieee80211_scan_request *scan_req); +void rtw89_hw_scan_complete(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, bool aborted); -int rtw89_hw_scan_offload(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif, +int rtw89_hw_scan_offload(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, bool enable); -void rtw89_hw_scan_abort(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif); +void rtw89_hw_scan_abort(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link); int rtw89_hw_scan_add_chan_list_ax(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, bool connected); + struct rtw89_vif_link *rtwvif_link, bool connected); int rtw89_pno_scan_add_chan_list_ax(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif); + struct rtw89_vif_link *rtwvif_link); int rtw89_hw_scan_add_chan_list_be(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, bool connected); + struct rtw89_vif_link *rtwvif_link, bool connected); int rtw89_pno_scan_add_chan_list_be(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif); + struct rtw89_vif_link *rtwvif_link); int rtw89_fw_h2c_trigger_cpu_exception(struct rtw89_dev *rtwdev); int rtw89_fw_h2c_pkt_drop(struct rtw89_dev *rtwdev, const struct rtw89_pkt_drop_params *params); -int rtw89_fw_h2c_p2p_act(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif, +int rtw89_fw_h2c_p2p_act(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, + struct ieee80211_bss_conf *bss_conf, struct ieee80211_p2p_noa_desc *desc, u8 act, u8 noa_id); -int rtw89_fw_h2c_tsf32_toggle(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, +int rtw89_fw_h2c_tsf32_toggle(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, bool en); -int rtw89_fw_h2c_wow_global(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, +int rtw89_fw_h2c_wow_global(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, bool enable); int rtw89_fw_h2c_wow_wakeup_ctrl(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, bool enable); -int rtw89_fw_h2c_cfg_pno(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, + struct rtw89_vif_link *rtwvif_link, bool enable); +int rtw89_fw_h2c_cfg_pno(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, bool enable); -int rtw89_fw_h2c_keep_alive(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, +int rtw89_fw_h2c_keep_alive(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, bool enable); int rtw89_fw_h2c_arp_offload(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, bool enable); + struct rtw89_vif_link *rtwvif_link, bool enable); int rtw89_fw_h2c_disconnect_detect(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, bool enable); -int rtw89_fw_h2c_wow_global(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, + struct rtw89_vif_link *rtwvif_link, bool enable); +int rtw89_fw_h2c_wow_global(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, bool enable); int rtw89_fw_h2c_wow_wakeup_ctrl(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, bool enable); + struct rtw89_vif_link *rtwvif_link, bool enable); int rtw89_fw_wow_cam_update(struct rtw89_dev *rtwdev, struct rtw89_wow_cam_info *cam_info); int 
rtw89_fw_h2c_wow_gtk_ofld(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, + struct rtw89_vif_link *rtwvif_link, bool enable); int rtw89_fw_h2c_wow_request_aoac(struct rtw89_dev *rtwdev); int rtw89_fw_h2c_add_mcc(struct rtw89_dev *rtwdev, @@ -4621,51 +4633,73 @@ static inline void rtw89_fw_h2c_init_ba_cam(struct rtw89_dev *rtwdev) }
static inline int rtw89_chip_h2c_default_cmac_tbl(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, - struct rtw89_sta *rtwsta) + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link) { const struct rtw89_chip_info *chip = rtwdev->chip;
- return chip->ops->h2c_default_cmac_tbl(rtwdev, rtwvif, rtwsta); + return chip->ops->h2c_default_cmac_tbl(rtwdev, rtwvif_link, rtwsta_link); }
static inline int rtw89_chip_h2c_default_dmac_tbl(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, - struct rtw89_sta *rtwsta) + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link) { const struct rtw89_chip_info *chip = rtwdev->chip;
if (chip->ops->h2c_default_dmac_tbl) - return chip->ops->h2c_default_dmac_tbl(rtwdev, rtwvif, rtwsta); + return chip->ops->h2c_default_dmac_tbl(rtwdev, rtwvif_link, rtwsta_link);
return 0; }
static inline int rtw89_chip_h2c_update_beacon(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif) + struct rtw89_vif_link *rtwvif_link) { const struct rtw89_chip_info *chip = rtwdev->chip;
- return chip->ops->h2c_update_beacon(rtwdev, rtwvif); + return chip->ops->h2c_update_beacon(rtwdev, rtwvif_link); }
static inline int rtw89_chip_h2c_assoc_cmac_tbl(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif, - struct ieee80211_sta *sta) + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link) { const struct rtw89_chip_info *chip = rtwdev->chip;
- return chip->ops->h2c_assoc_cmac_tbl(rtwdev, vif, sta); + return chip->ops->h2c_assoc_cmac_tbl(rtwdev, rtwvif_link, rtwsta_link); }
-static inline int rtw89_chip_h2c_ampdu_cmac_tbl(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif, - struct ieee80211_sta *sta) +static inline +int rtw89_chip_h2c_ampdu_link_cmac_tbl(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link) { const struct rtw89_chip_info *chip = rtwdev->chip;
if (chip->ops->h2c_ampdu_cmac_tbl) - return chip->ops->h2c_ampdu_cmac_tbl(rtwdev, vif, sta); + return chip->ops->h2c_ampdu_cmac_tbl(rtwdev, rtwvif_link, + rtwsta_link); + + return 0; +} + +static inline int rtw89_chip_h2c_ampdu_cmac_tbl(struct rtw89_dev *rtwdev, + struct rtw89_vif *rtwvif, + struct rtw89_sta *rtwsta) +{ + struct rtw89_vif_link *rtwvif_link; + struct rtw89_sta_link *rtwsta_link; + unsigned int link_id; + int ret; + + rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) { + rtwvif_link = rtwsta_link->rtwvif_link; + ret = rtw89_chip_h2c_ampdu_link_cmac_tbl(rtwdev, rtwvif_link, + rtwsta_link); + if (ret) + return ret; + }
return 0; } @@ -4675,8 +4709,20 @@ int rtw89_chip_h2c_ba_cam(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta, bool valid, struct ieee80211_ampdu_params *params) { const struct rtw89_chip_info *chip = rtwdev->chip; + struct rtw89_vif_link *rtwvif_link; + struct rtw89_sta_link *rtwsta_link; + unsigned int link_id; + int ret; + + rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) { + rtwvif_link = rtwsta_link->rtwvif_link; + ret = chip->ops->h2c_ba_cam(rtwdev, rtwvif_link, rtwsta_link, + valid, params); + if (ret) + return ret; + }
- return chip->ops->h2c_ba_cam(rtwdev, rtwsta, valid, params); + return 0; }
/* must consider compatibility; don't insert new in the mid */ diff --git a/drivers/net/wireless/realtek/rtw89/mac.c b/drivers/net/wireless/realtek/rtw89/mac.c index c70a23a763b0..4e15d539e3d1 100644 --- a/drivers/net/wireless/realtek/rtw89/mac.c +++ b/drivers/net/wireless/realtek/rtw89/mac.c @@ -4076,17 +4076,17 @@ static const struct rtw89_port_reg rtw89_port_base_ax = { };
static void rtw89_mac_check_packet_ctrl(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, u8 type) + struct rtw89_vif_link *rtwvif_link, u8 type) { const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def; const struct rtw89_port_reg *p = mac->port_base; - u8 mask = B_AX_PTCL_DBG_INFO_MASK_BY_PORT(rtwvif->port); + u8 mask = B_AX_PTCL_DBG_INFO_MASK_BY_PORT(rtwvif_link->port); u32 reg_info, reg_ctrl; u32 val; int ret;
- reg_info = rtw89_mac_reg_by_idx(rtwdev, p->ptcl_dbg_info, rtwvif->mac_idx); - reg_ctrl = rtw89_mac_reg_by_idx(rtwdev, p->ptcl_dbg, rtwvif->mac_idx); + reg_info = rtw89_mac_reg_by_idx(rtwdev, p->ptcl_dbg_info, rtwvif_link->mac_idx); + reg_ctrl = rtw89_mac_reg_by_idx(rtwdev, p->ptcl_dbg, rtwvif_link->mac_idx);
rtw89_write32_mask(rtwdev, reg_ctrl, B_AX_PTCL_DBG_SEL_MASK, type); rtw89_write32_set(rtwdev, reg_ctrl, B_AX_PTCL_DBG_EN); @@ -4098,26 +4098,32 @@ static void rtw89_mac_check_packet_ctrl(struct rtw89_dev *rtwdev, rtw89_warn(rtwdev, "Polling beacon packet empty fail\n"); }
-static void rtw89_mac_bcn_drop(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif) +static void rtw89_mac_bcn_drop(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link) { const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def; const struct rtw89_port_reg *p = mac->port_base;
- rtw89_write32_set(rtwdev, p->bcn_drop_all, BIT(rtwvif->port)); - rtw89_write32_port_mask(rtwdev, rtwvif, p->tbtt_prohib, B_AX_TBTT_SETUP_MASK, 1); - rtw89_write32_port_mask(rtwdev, rtwvif, p->bcn_area, B_AX_BCN_MSK_AREA_MASK, 0); - rtw89_write32_port_mask(rtwdev, rtwvif, p->tbtt_prohib, B_AX_TBTT_HOLD_MASK, 0); - rtw89_write32_port_mask(rtwdev, rtwvif, p->bcn_early, B_AX_BCNERLY_MASK, 2); - rtw89_write16_port_mask(rtwdev, rtwvif, p->tbtt_early, B_AX_TBTTERLY_MASK, 1); - rtw89_write32_port_mask(rtwdev, rtwvif, p->bcn_space, B_AX_BCN_SPACE_MASK, 1); - rtw89_write32_port_set(rtwdev, rtwvif, p->port_cfg, B_AX_BCNTX_EN); - - rtw89_mac_check_packet_ctrl(rtwdev, rtwvif, AX_PTCL_DBG_BCNQ_NUM0); - if (rtwvif->port == RTW89_PORT_0) - rtw89_mac_check_packet_ctrl(rtwdev, rtwvif, AX_PTCL_DBG_BCNQ_NUM1); - - rtw89_write32_clr(rtwdev, p->bcn_drop_all, BIT(rtwvif->port)); - rtw89_write32_port_clr(rtwdev, rtwvif, p->port_cfg, B_AX_TBTT_PROHIB_EN); + rtw89_write32_set(rtwdev, p->bcn_drop_all, BIT(rtwvif_link->port)); + rtw89_write32_port_mask(rtwdev, rtwvif_link, p->tbtt_prohib, B_AX_TBTT_SETUP_MASK, + 1); + rtw89_write32_port_mask(rtwdev, rtwvif_link, p->bcn_area, B_AX_BCN_MSK_AREA_MASK, + 0); + rtw89_write32_port_mask(rtwdev, rtwvif_link, p->tbtt_prohib, B_AX_TBTT_HOLD_MASK, + 0); + rtw89_write32_port_mask(rtwdev, rtwvif_link, p->bcn_early, B_AX_BCNERLY_MASK, 2); + rtw89_write16_port_mask(rtwdev, rtwvif_link, p->tbtt_early, + B_AX_TBTTERLY_MASK, 1); + rtw89_write32_port_mask(rtwdev, rtwvif_link, p->bcn_space, + B_AX_BCN_SPACE_MASK, 1); + rtw89_write32_port_set(rtwdev, rtwvif_link, p->port_cfg, B_AX_BCNTX_EN); + + rtw89_mac_check_packet_ctrl(rtwdev, rtwvif_link, AX_PTCL_DBG_BCNQ_NUM0); + if (rtwvif_link->port == RTW89_PORT_0) + rtw89_mac_check_packet_ctrl(rtwdev, rtwvif_link, AX_PTCL_DBG_BCNQ_NUM1); + + rtw89_write32_clr(rtwdev, p->bcn_drop_all, BIT(rtwvif_link->port)); + rtw89_write32_port_clr(rtwdev, rtwvif_link, p->port_cfg, B_AX_TBTT_PROHIB_EN); fsleep(2000); }
@@ -4131,286 +4137,329 @@ static void rtw89_mac_bcn_drop(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvi #define BCN_ERLY_SET_DLY (10 * 2)
static void rtw89_mac_port_cfg_func_sw(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif) + struct rtw89_vif_link *rtwvif_link) { const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def; const struct rtw89_port_reg *p = mac->port_base; - struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif); const struct rtw89_chip_info *chip = rtwdev->chip; + struct ieee80211_bss_conf *bss_conf; bool need_backup = false; u32 backup_val; + u16 beacon_int;
- if (!rtw89_read32_port_mask(rtwdev, rtwvif, p->port_cfg, B_AX_PORT_FUNC_EN)) + if (!rtw89_read32_port_mask(rtwdev, rtwvif_link, p->port_cfg, B_AX_PORT_FUNC_EN)) return;
- if (chip->chip_id == RTL8852A && rtwvif->port != RTW89_PORT_0) { + if (chip->chip_id == RTL8852A && rtwvif_link->port != RTW89_PORT_0) { need_backup = true; - backup_val = rtw89_read32_port(rtwdev, rtwvif, p->tbtt_prohib); + backup_val = rtw89_read32_port(rtwdev, rtwvif_link, p->tbtt_prohib); }
- if (rtwvif->net_type == RTW89_NET_TYPE_AP_MODE) - rtw89_mac_bcn_drop(rtwdev, rtwvif); + if (rtwvif_link->net_type == RTW89_NET_TYPE_AP_MODE) + rtw89_mac_bcn_drop(rtwdev, rtwvif_link);
if (chip->chip_id == RTL8852A) { - rtw89_write32_port_clr(rtwdev, rtwvif, p->tbtt_prohib, B_AX_TBTT_SETUP_MASK); - rtw89_write32_port_mask(rtwdev, rtwvif, p->tbtt_prohib, B_AX_TBTT_HOLD_MASK, 1); - rtw89_write16_port_clr(rtwdev, rtwvif, p->tbtt_early, B_AX_TBTTERLY_MASK); - rtw89_write16_port_clr(rtwdev, rtwvif, p->bcn_early, B_AX_BCNERLY_MASK); + rtw89_write32_port_clr(rtwdev, rtwvif_link, p->tbtt_prohib, + B_AX_TBTT_SETUP_MASK); + rtw89_write32_port_mask(rtwdev, rtwvif_link, p->tbtt_prohib, + B_AX_TBTT_HOLD_MASK, 1); + rtw89_write16_port_clr(rtwdev, rtwvif_link, p->tbtt_early, + B_AX_TBTTERLY_MASK); + rtw89_write16_port_clr(rtwdev, rtwvif_link, p->bcn_early, + B_AX_BCNERLY_MASK); }
- msleep(vif->bss_conf.beacon_int + 1); - rtw89_write32_port_clr(rtwdev, rtwvif, p->port_cfg, B_AX_PORT_FUNC_EN | + rcu_read_lock(); + + bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true); + beacon_int = bss_conf->beacon_int; + + rcu_read_unlock(); + + msleep(beacon_int + 1); + rtw89_write32_port_clr(rtwdev, rtwvif_link, p->port_cfg, B_AX_PORT_FUNC_EN | B_AX_BRK_SETUP); - rtw89_write32_port_set(rtwdev, rtwvif, p->port_cfg, B_AX_TSFTR_RST); - rtw89_write32_port(rtwdev, rtwvif, p->bcn_cnt_tmr, 0); + rtw89_write32_port_set(rtwdev, rtwvif_link, p->port_cfg, B_AX_TSFTR_RST); + rtw89_write32_port(rtwdev, rtwvif_link, p->bcn_cnt_tmr, 0);
if (need_backup) - rtw89_write32_port(rtwdev, rtwvif, p->tbtt_prohib, backup_val); + rtw89_write32_port(rtwdev, rtwvif_link, p->tbtt_prohib, backup_val); }
static void rtw89_mac_port_cfg_tx_rpt(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, bool en) + struct rtw89_vif_link *rtwvif_link, bool en) { const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def; const struct rtw89_port_reg *p = mac->port_base;
if (en) - rtw89_write32_port_set(rtwdev, rtwvif, p->port_cfg, B_AX_TXBCN_RPT_EN); + rtw89_write32_port_set(rtwdev, rtwvif_link, p->port_cfg, + B_AX_TXBCN_RPT_EN); else - rtw89_write32_port_clr(rtwdev, rtwvif, p->port_cfg, B_AX_TXBCN_RPT_EN); + rtw89_write32_port_clr(rtwdev, rtwvif_link, p->port_cfg, + B_AX_TXBCN_RPT_EN); }
static void rtw89_mac_port_cfg_rx_rpt(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, bool en) + struct rtw89_vif_link *rtwvif_link, bool en) { const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def; const struct rtw89_port_reg *p = mac->port_base;
if (en) - rtw89_write32_port_set(rtwdev, rtwvif, p->port_cfg, B_AX_RXBCN_RPT_EN); + rtw89_write32_port_set(rtwdev, rtwvif_link, p->port_cfg, + B_AX_RXBCN_RPT_EN); else - rtw89_write32_port_clr(rtwdev, rtwvif, p->port_cfg, B_AX_RXBCN_RPT_EN); + rtw89_write32_port_clr(rtwdev, rtwvif_link, p->port_cfg, + B_AX_RXBCN_RPT_EN); }
static void rtw89_mac_port_cfg_net_type(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif) + struct rtw89_vif_link *rtwvif_link) { const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def; const struct rtw89_port_reg *p = mac->port_base;
- rtw89_write32_port_mask(rtwdev, rtwvif, p->port_cfg, B_AX_NET_TYPE_MASK, - rtwvif->net_type); + rtw89_write32_port_mask(rtwdev, rtwvif_link, p->port_cfg, B_AX_NET_TYPE_MASK, + rtwvif_link->net_type); }
static void rtw89_mac_port_cfg_bcn_prct(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif) + struct rtw89_vif_link *rtwvif_link) { const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def; const struct rtw89_port_reg *p = mac->port_base; - bool en = rtwvif->net_type != RTW89_NET_TYPE_NO_LINK; + bool en = rtwvif_link->net_type != RTW89_NET_TYPE_NO_LINK; u32 bits = B_AX_TBTT_PROHIB_EN | B_AX_BRK_SETUP;
if (en) - rtw89_write32_port_set(rtwdev, rtwvif, p->port_cfg, bits); + rtw89_write32_port_set(rtwdev, rtwvif_link, p->port_cfg, bits); else - rtw89_write32_port_clr(rtwdev, rtwvif, p->port_cfg, bits); + rtw89_write32_port_clr(rtwdev, rtwvif_link, p->port_cfg, bits); }
static void rtw89_mac_port_cfg_rx_sw(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif) + struct rtw89_vif_link *rtwvif_link) { const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def; const struct rtw89_port_reg *p = mac->port_base; - bool en = rtwvif->net_type == RTW89_NET_TYPE_INFRA || - rtwvif->net_type == RTW89_NET_TYPE_AD_HOC; + bool en = rtwvif_link->net_type == RTW89_NET_TYPE_INFRA || + rtwvif_link->net_type == RTW89_NET_TYPE_AD_HOC; u32 bit = B_AX_RX_BSSID_FIT_EN;
if (en) - rtw89_write32_port_set(rtwdev, rtwvif, p->port_cfg, bit); + rtw89_write32_port_set(rtwdev, rtwvif_link, p->port_cfg, bit); else - rtw89_write32_port_clr(rtwdev, rtwvif, p->port_cfg, bit); + rtw89_write32_port_clr(rtwdev, rtwvif_link, p->port_cfg, bit); }
void rtw89_mac_port_cfg_rx_sync(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, bool en) + struct rtw89_vif_link *rtwvif_link, bool en) { const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def; const struct rtw89_port_reg *p = mac->port_base;
if (en) - rtw89_write32_port_set(rtwdev, rtwvif, p->port_cfg, B_AX_TSF_UDT_EN); + rtw89_write32_port_set(rtwdev, rtwvif_link, p->port_cfg, B_AX_TSF_UDT_EN); else - rtw89_write32_port_clr(rtwdev, rtwvif, p->port_cfg, B_AX_TSF_UDT_EN); + rtw89_write32_port_clr(rtwdev, rtwvif_link, p->port_cfg, B_AX_TSF_UDT_EN); }
static void rtw89_mac_port_cfg_rx_sync_by_nettype(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif) + struct rtw89_vif_link *rtwvif_link) { - bool en = rtwvif->net_type == RTW89_NET_TYPE_INFRA || - rtwvif->net_type == RTW89_NET_TYPE_AD_HOC; + bool en = rtwvif_link->net_type == RTW89_NET_TYPE_INFRA || + rtwvif_link->net_type == RTW89_NET_TYPE_AD_HOC;
- rtw89_mac_port_cfg_rx_sync(rtwdev, rtwvif, en); + rtw89_mac_port_cfg_rx_sync(rtwdev, rtwvif_link, en); }
static void rtw89_mac_port_cfg_tx_sw(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, bool en) + struct rtw89_vif_link *rtwvif_link, bool en) { const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def; const struct rtw89_port_reg *p = mac->port_base;
if (en) - rtw89_write32_port_set(rtwdev, rtwvif, p->port_cfg, B_AX_BCNTX_EN); + rtw89_write32_port_set(rtwdev, rtwvif_link, p->port_cfg, B_AX_BCNTX_EN); else - rtw89_write32_port_clr(rtwdev, rtwvif, p->port_cfg, B_AX_BCNTX_EN); + rtw89_write32_port_clr(rtwdev, rtwvif_link, p->port_cfg, B_AX_BCNTX_EN); }
static void rtw89_mac_port_cfg_tx_sw_by_nettype(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif) + struct rtw89_vif_link *rtwvif_link) { - bool en = rtwvif->net_type == RTW89_NET_TYPE_AP_MODE || - rtwvif->net_type == RTW89_NET_TYPE_AD_HOC; + bool en = rtwvif_link->net_type == RTW89_NET_TYPE_AP_MODE || + rtwvif_link->net_type == RTW89_NET_TYPE_AD_HOC;
- rtw89_mac_port_cfg_tx_sw(rtwdev, rtwvif, en); + rtw89_mac_port_cfg_tx_sw(rtwdev, rtwvif_link, en); }
void rtw89_mac_enable_beacon_for_ap_vifs(struct rtw89_dev *rtwdev, bool en) { + struct rtw89_vif_link *rtwvif_link; struct rtw89_vif *rtwvif; + unsigned int link_id;
rtw89_for_each_rtwvif(rtwdev, rtwvif) - if (rtwvif->net_type == RTW89_NET_TYPE_AP_MODE) - rtw89_mac_port_cfg_tx_sw(rtwdev, rtwvif, en); + rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) + if (rtwvif_link->net_type == RTW89_NET_TYPE_AP_MODE) + rtw89_mac_port_cfg_tx_sw(rtwdev, rtwvif_link, en); }
static void rtw89_mac_port_cfg_bcn_intv(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif) + struct rtw89_vif_link *rtwvif_link) { const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def; const struct rtw89_port_reg *p = mac->port_base; - struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif); - u16 bcn_int = vif->bss_conf.beacon_int ? vif->bss_conf.beacon_int : BCN_INTERVAL; + struct ieee80211_bss_conf *bss_conf; + u16 bcn_int; + + rcu_read_lock(); + + bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true); + if (bss_conf->beacon_int) + bcn_int = bss_conf->beacon_int; + else + bcn_int = BCN_INTERVAL; + + rcu_read_unlock();
- rtw89_write32_port_mask(rtwdev, rtwvif, p->bcn_space, B_AX_BCN_SPACE_MASK, + rtw89_write32_port_mask(rtwdev, rtwvif_link, p->bcn_space, B_AX_BCN_SPACE_MASK, bcn_int); }
static void rtw89_mac_port_cfg_hiq_win(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif) + struct rtw89_vif_link *rtwvif_link) { - u8 win = rtwvif->net_type == RTW89_NET_TYPE_AP_MODE ? 16 : 0; + u8 win = rtwvif_link->net_type == RTW89_NET_TYPE_AP_MODE ? 16 : 0; const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def; const struct rtw89_port_reg *p = mac->port_base; - u8 port = rtwvif->port; + u8 port = rtwvif_link->port; u32 reg;
- reg = rtw89_mac_reg_by_idx(rtwdev, p->hiq_win[port], rtwvif->mac_idx); + reg = rtw89_mac_reg_by_idx(rtwdev, p->hiq_win[port], rtwvif_link->mac_idx); rtw89_write8(rtwdev, reg, win); }
static void rtw89_mac_port_cfg_hiq_dtim(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif) + struct rtw89_vif_link *rtwvif_link) { const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def; const struct rtw89_port_reg *p = mac->port_base; - struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif); + struct ieee80211_bss_conf *bss_conf; + u8 dtim_period; u32 addr;
- addr = rtw89_mac_reg_by_idx(rtwdev, p->md_tsft, rtwvif->mac_idx); + rcu_read_lock(); + + bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true); + dtim_period = bss_conf->dtim_period; + + rcu_read_unlock(); + + addr = rtw89_mac_reg_by_idx(rtwdev, p->md_tsft, rtwvif_link->mac_idx); rtw89_write8_set(rtwdev, addr, B_AX_UPD_HGQMD | B_AX_UPD_TIMIE);
- rtw89_write16_port_mask(rtwdev, rtwvif, p->dtim_ctrl, B_AX_DTIM_NUM_MASK, - vif->bss_conf.dtim_period); + rtw89_write16_port_mask(rtwdev, rtwvif_link, p->dtim_ctrl, B_AX_DTIM_NUM_MASK, + dtim_period); }
static void rtw89_mac_port_cfg_bcn_setup_time(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif) + struct rtw89_vif_link *rtwvif_link) { const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def; const struct rtw89_port_reg *p = mac->port_base;
- rtw89_write32_port_mask(rtwdev, rtwvif, p->tbtt_prohib, + rtw89_write32_port_mask(rtwdev, rtwvif_link, p->tbtt_prohib, B_AX_TBTT_SETUP_MASK, BCN_SETUP_DEF); }
static void rtw89_mac_port_cfg_bcn_hold_time(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif) + struct rtw89_vif_link *rtwvif_link) { const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def; const struct rtw89_port_reg *p = mac->port_base;
- rtw89_write32_port_mask(rtwdev, rtwvif, p->tbtt_prohib, + rtw89_write32_port_mask(rtwdev, rtwvif_link, p->tbtt_prohib, B_AX_TBTT_HOLD_MASK, BCN_HOLD_DEF); }
static void rtw89_mac_port_cfg_bcn_mask_area(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif) + struct rtw89_vif_link *rtwvif_link) { const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def; const struct rtw89_port_reg *p = mac->port_base;
- rtw89_write32_port_mask(rtwdev, rtwvif, p->bcn_area, + rtw89_write32_port_mask(rtwdev, rtwvif_link, p->bcn_area, B_AX_BCN_MSK_AREA_MASK, BCN_MASK_DEF); }
static void rtw89_mac_port_cfg_tbtt_early(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif) + struct rtw89_vif_link *rtwvif_link) { const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def; const struct rtw89_port_reg *p = mac->port_base;
- rtw89_write16_port_mask(rtwdev, rtwvif, p->tbtt_early, + rtw89_write16_port_mask(rtwdev, rtwvif_link, p->tbtt_early, B_AX_TBTTERLY_MASK, TBTT_ERLY_DEF); }
static void rtw89_mac_port_cfg_bss_color(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif) + struct rtw89_vif_link *rtwvif_link) { const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def; const struct rtw89_port_reg *p = mac->port_base; - struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif); static const u32 masks[RTW89_PORT_NUM] = { B_AX_BSS_COLOB_AX_PORT_0_MASK, B_AX_BSS_COLOB_AX_PORT_1_MASK, B_AX_BSS_COLOB_AX_PORT_2_MASK, B_AX_BSS_COLOB_AX_PORT_3_MASK, B_AX_BSS_COLOB_AX_PORT_4_MASK, }; - u8 port = rtwvif->port; + struct ieee80211_bss_conf *bss_conf; + u8 port = rtwvif_link->port; u32 reg_base; u32 reg; u8 bss_color;
- bss_color = vif->bss_conf.he_bss_color.color; + rcu_read_lock(); + + bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true); + bss_color = bss_conf->he_bss_color.color; + + rcu_read_unlock(); + reg_base = port >= 4 ? p->bss_color + 4 : p->bss_color; - reg = rtw89_mac_reg_by_idx(rtwdev, reg_base, rtwvif->mac_idx); + reg = rtw89_mac_reg_by_idx(rtwdev, reg_base, rtwvif_link->mac_idx); rtw89_write32_mask(rtwdev, reg, masks[port], bss_color); }
static void rtw89_mac_port_cfg_mbssid(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif) + struct rtw89_vif_link *rtwvif_link) { const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def; const struct rtw89_port_reg *p = mac->port_base; - u8 port = rtwvif->port; + u8 port = rtwvif_link->port; u32 reg;
- if (rtwvif->net_type == RTW89_NET_TYPE_AP_MODE) + if (rtwvif_link->net_type == RTW89_NET_TYPE_AP_MODE) return;
if (port == 0) { - reg = rtw89_mac_reg_by_idx(rtwdev, p->mbssid, rtwvif->mac_idx); + reg = rtw89_mac_reg_by_idx(rtwdev, p->mbssid, rtwvif_link->mac_idx); rtw89_write32_clr(rtwdev, reg, B_AX_P0MB_ALL_MASK); } }
static void rtw89_mac_port_cfg_hiq_drop(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif) + struct rtw89_vif_link *rtwvif_link) { const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def; const struct rtw89_port_reg *p = mac->port_base; - u8 port = rtwvif->port; + u8 port = rtwvif_link->port; u32 reg; u32 val;
- reg = rtw89_mac_reg_by_idx(rtwdev, p->mbssid_drop, rtwvif->mac_idx); + reg = rtw89_mac_reg_by_idx(rtwdev, p->mbssid_drop, rtwvif_link->mac_idx); val = rtw89_read32(rtwdev, reg); val &= ~FIELD_PREP(B_AX_PORT_DROP_4_0_MASK, BIT(port)); if (port == 0) @@ -4419,31 +4468,31 @@ static void rtw89_mac_port_cfg_hiq_drop(struct rtw89_dev *rtwdev, }
static void rtw89_mac_port_cfg_func_en(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, bool enable) + struct rtw89_vif_link *rtwvif_link, bool enable) { const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def; const struct rtw89_port_reg *p = mac->port_base;
if (enable) - rtw89_write32_port_set(rtwdev, rtwvif, p->port_cfg, + rtw89_write32_port_set(rtwdev, rtwvif_link, p->port_cfg, B_AX_PORT_FUNC_EN); else - rtw89_write32_port_clr(rtwdev, rtwvif, p->port_cfg, + rtw89_write32_port_clr(rtwdev, rtwvif_link, p->port_cfg, B_AX_PORT_FUNC_EN); }
static void rtw89_mac_port_cfg_bcn_early(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif) + struct rtw89_vif_link *rtwvif_link) { const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def; const struct rtw89_port_reg *p = mac->port_base;
- rtw89_write32_port_mask(rtwdev, rtwvif, p->bcn_early, B_AX_BCNERLY_MASK, + rtw89_write32_port_mask(rtwdev, rtwvif_link, p->bcn_early, B_AX_BCNERLY_MASK, BCN_ERLY_DEF); }
static void rtw89_mac_port_cfg_tbtt_shift(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif) + struct rtw89_vif_link *rtwvif_link) { const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def; const struct rtw89_port_reg *p = mac->port_base; @@ -4452,20 +4501,20 @@ static void rtw89_mac_port_cfg_tbtt_shift(struct rtw89_dev *rtwdev, if (rtwdev->chip->chip_id != RTL8852C) return;
- if (rtwvif->wifi_role != RTW89_WIFI_ROLE_P2P_CLIENT && - rtwvif->wifi_role != RTW89_WIFI_ROLE_STATION) + if (rtwvif_link->wifi_role != RTW89_WIFI_ROLE_P2P_CLIENT && + rtwvif_link->wifi_role != RTW89_WIFI_ROLE_STATION) return;
val = FIELD_PREP(B_AX_TBTT_SHIFT_OFST_MAG, 1) | B_AX_TBTT_SHIFT_OFST_SIGN;
- rtw89_write16_port_mask(rtwdev, rtwvif, p->tbtt_shift, + rtw89_write16_port_mask(rtwdev, rtwvif_link, p->tbtt_shift, B_AX_TBTT_SHIFT_OFST_MASK, val); }
void rtw89_mac_port_tsf_sync(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, - struct rtw89_vif *rtwvif_src, + struct rtw89_vif_link *rtwvif_link, + struct rtw89_vif_link *rtwvif_src, u16 offset_tu) { const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def; @@ -4473,8 +4522,8 @@ void rtw89_mac_port_tsf_sync(struct rtw89_dev *rtwdev, u32 val, reg;
val = RTW89_PORT_OFFSET_TU_TO_32US(offset_tu); - reg = rtw89_mac_reg_by_idx(rtwdev, p->tsf_sync + rtwvif->port * 4, - rtwvif->mac_idx); + reg = rtw89_mac_reg_by_idx(rtwdev, p->tsf_sync + rtwvif_link->port * 4, + rtwvif_link->mac_idx);
rtw89_write32_mask(rtwdev, reg, B_AX_SYNC_PORT_SRC, rtwvif_src->port); rtw89_write32_mask(rtwdev, reg, B_AX_SYNC_PORT_OFFSET_VAL, val); @@ -4482,16 +4531,16 @@ void rtw89_mac_port_tsf_sync(struct rtw89_dev *rtwdev, }
static void rtw89_mac_port_tsf_sync_rand(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, - struct rtw89_vif *rtwvif_src, + struct rtw89_vif_link *rtwvif_link, + struct rtw89_vif_link *rtwvif_src, u8 offset, int *n_offset) { - if (rtwvif->net_type != RTW89_NET_TYPE_AP_MODE || rtwvif == rtwvif_src) + if (rtwvif_link->net_type != RTW89_NET_TYPE_AP_MODE || rtwvif_link == rtwvif_src) return;
/* adjust offset randomly to avoid beacon conflict */ offset = offset - offset / 4 + get_random_u32() % (offset / 2); - rtw89_mac_port_tsf_sync(rtwdev, rtwvif, rtwvif_src, + rtw89_mac_port_tsf_sync(rtwdev, rtwvif_link, rtwvif_src, (*n_offset) * offset);
(*n_offset)++; @@ -4499,15 +4548,19 @@ static void rtw89_mac_port_tsf_sync_rand(struct rtw89_dev *rtwdev,
static void rtw89_mac_port_tsf_resync_all(struct rtw89_dev *rtwdev) { - struct rtw89_vif *src = NULL, *tmp; + struct rtw89_vif_link *src = NULL, *tmp; u8 offset = 100, vif_aps = 0; + struct rtw89_vif *rtwvif; + unsigned int link_id; int n_offset = 1;
- rtw89_for_each_rtwvif(rtwdev, tmp) { - if (!src || tmp->net_type == RTW89_NET_TYPE_INFRA) - src = tmp; - if (tmp->net_type == RTW89_NET_TYPE_AP_MODE) - vif_aps++; + rtw89_for_each_rtwvif(rtwdev, rtwvif) { + rtw89_vif_for_each_link(rtwvif, tmp, link_id) { + if (!src || tmp->net_type == RTW89_NET_TYPE_INFRA) + src = tmp; + if (tmp->net_type == RTW89_NET_TYPE_AP_MODE) + vif_aps++; + } }
if (vif_aps == 0) @@ -4515,104 +4568,106 @@ static void rtw89_mac_port_tsf_resync_all(struct rtw89_dev *rtwdev)
offset /= (vif_aps + 1);
- rtw89_for_each_rtwvif(rtwdev, tmp) - rtw89_mac_port_tsf_sync_rand(rtwdev, tmp, src, offset, &n_offset); + rtw89_for_each_rtwvif(rtwdev, rtwvif) + rtw89_vif_for_each_link(rtwvif, tmp, link_id) + rtw89_mac_port_tsf_sync_rand(rtwdev, tmp, src, offset, + &n_offset); }
-int rtw89_mac_vif_init(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif) +int rtw89_mac_vif_init(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link) { int ret;
- ret = rtw89_mac_port_update(rtwdev, rtwvif); + ret = rtw89_mac_port_update(rtwdev, rtwvif_link); if (ret) return ret;
- rtw89_mac_dmac_tbl_init(rtwdev, rtwvif->mac_id); - rtw89_mac_cmac_tbl_init(rtwdev, rtwvif->mac_id); + rtw89_mac_dmac_tbl_init(rtwdev, rtwvif_link->mac_id); + rtw89_mac_cmac_tbl_init(rtwdev, rtwvif_link->mac_id);
- ret = rtw89_mac_set_macid_pause(rtwdev, rtwvif->mac_id, false); + ret = rtw89_mac_set_macid_pause(rtwdev, rtwvif_link->mac_id, false); if (ret) return ret;
- ret = rtw89_fw_h2c_role_maintain(rtwdev, rtwvif, NULL, RTW89_ROLE_CREATE); + ret = rtw89_fw_h2c_role_maintain(rtwdev, rtwvif_link, NULL, RTW89_ROLE_CREATE); if (ret) return ret;
- ret = rtw89_fw_h2c_join_info(rtwdev, rtwvif, NULL, true); + ret = rtw89_fw_h2c_join_info(rtwdev, rtwvif_link, NULL, true); if (ret) return ret;
- ret = rtw89_cam_init(rtwdev, rtwvif); + ret = rtw89_cam_init(rtwdev, rtwvif_link); if (ret) return ret;
- ret = rtw89_fw_h2c_cam(rtwdev, rtwvif, NULL, NULL); + ret = rtw89_fw_h2c_cam(rtwdev, rtwvif_link, NULL, NULL); if (ret) return ret;
- ret = rtw89_chip_h2c_default_cmac_tbl(rtwdev, rtwvif, NULL); + ret = rtw89_chip_h2c_default_cmac_tbl(rtwdev, rtwvif_link, NULL); if (ret) return ret;
- ret = rtw89_chip_h2c_default_dmac_tbl(rtwdev, rtwvif, NULL); + ret = rtw89_chip_h2c_default_dmac_tbl(rtwdev, rtwvif_link, NULL); if (ret) return ret;
return 0; }
-int rtw89_mac_vif_deinit(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif) +int rtw89_mac_vif_deinit(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link) { int ret;
- ret = rtw89_fw_h2c_role_maintain(rtwdev, rtwvif, NULL, RTW89_ROLE_REMOVE); + ret = rtw89_fw_h2c_role_maintain(rtwdev, rtwvif_link, NULL, RTW89_ROLE_REMOVE); if (ret) return ret;
- rtw89_cam_deinit(rtwdev, rtwvif); + rtw89_cam_deinit(rtwdev, rtwvif_link);
- ret = rtw89_fw_h2c_cam(rtwdev, rtwvif, NULL, NULL); + ret = rtw89_fw_h2c_cam(rtwdev, rtwvif_link, NULL, NULL); if (ret) return ret;
return 0; }
-int rtw89_mac_port_update(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif) +int rtw89_mac_port_update(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link) { - u8 port = rtwvif->port; + u8 port = rtwvif_link->port;
if (port >= RTW89_PORT_NUM) return -EINVAL;
- rtw89_mac_port_cfg_func_sw(rtwdev, rtwvif); - rtw89_mac_port_cfg_tx_rpt(rtwdev, rtwvif, false); - rtw89_mac_port_cfg_rx_rpt(rtwdev, rtwvif, false); - rtw89_mac_port_cfg_net_type(rtwdev, rtwvif); - rtw89_mac_port_cfg_bcn_prct(rtwdev, rtwvif); - rtw89_mac_port_cfg_rx_sw(rtwdev, rtwvif); - rtw89_mac_port_cfg_rx_sync_by_nettype(rtwdev, rtwvif); - rtw89_mac_port_cfg_tx_sw_by_nettype(rtwdev, rtwvif); - rtw89_mac_port_cfg_bcn_intv(rtwdev, rtwvif); - rtw89_mac_port_cfg_hiq_win(rtwdev, rtwvif); - rtw89_mac_port_cfg_hiq_dtim(rtwdev, rtwvif); - rtw89_mac_port_cfg_hiq_drop(rtwdev, rtwvif); - rtw89_mac_port_cfg_bcn_setup_time(rtwdev, rtwvif); - rtw89_mac_port_cfg_bcn_hold_time(rtwdev, rtwvif); - rtw89_mac_port_cfg_bcn_mask_area(rtwdev, rtwvif); - rtw89_mac_port_cfg_tbtt_early(rtwdev, rtwvif); - rtw89_mac_port_cfg_tbtt_shift(rtwdev, rtwvif); - rtw89_mac_port_cfg_bss_color(rtwdev, rtwvif); - rtw89_mac_port_cfg_mbssid(rtwdev, rtwvif); - rtw89_mac_port_cfg_func_en(rtwdev, rtwvif, true); + rtw89_mac_port_cfg_func_sw(rtwdev, rtwvif_link); + rtw89_mac_port_cfg_tx_rpt(rtwdev, rtwvif_link, false); + rtw89_mac_port_cfg_rx_rpt(rtwdev, rtwvif_link, false); + rtw89_mac_port_cfg_net_type(rtwdev, rtwvif_link); + rtw89_mac_port_cfg_bcn_prct(rtwdev, rtwvif_link); + rtw89_mac_port_cfg_rx_sw(rtwdev, rtwvif_link); + rtw89_mac_port_cfg_rx_sync_by_nettype(rtwdev, rtwvif_link); + rtw89_mac_port_cfg_tx_sw_by_nettype(rtwdev, rtwvif_link); + rtw89_mac_port_cfg_bcn_intv(rtwdev, rtwvif_link); + rtw89_mac_port_cfg_hiq_win(rtwdev, rtwvif_link); + rtw89_mac_port_cfg_hiq_dtim(rtwdev, rtwvif_link); + rtw89_mac_port_cfg_hiq_drop(rtwdev, rtwvif_link); + rtw89_mac_port_cfg_bcn_setup_time(rtwdev, rtwvif_link); + rtw89_mac_port_cfg_bcn_hold_time(rtwdev, rtwvif_link); + rtw89_mac_port_cfg_bcn_mask_area(rtwdev, rtwvif_link); + rtw89_mac_port_cfg_tbtt_early(rtwdev, rtwvif_link); + rtw89_mac_port_cfg_tbtt_shift(rtwdev, rtwvif_link); + rtw89_mac_port_cfg_bss_color(rtwdev, rtwvif_link); + rtw89_mac_port_cfg_mbssid(rtwdev, rtwvif_link); + rtw89_mac_port_cfg_func_en(rtwdev, rtwvif_link, true); rtw89_mac_port_tsf_resync_all(rtwdev); fsleep(BCN_ERLY_SET_DLY); - rtw89_mac_port_cfg_bcn_early(rtwdev, rtwvif); + rtw89_mac_port_cfg_bcn_early(rtwdev, rtwvif_link);
return 0; }
-int rtw89_mac_port_get_tsf(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, +int rtw89_mac_port_get_tsf(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, u64 *tsf) { const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def; @@ -4620,12 +4675,12 @@ int rtw89_mac_port_get_tsf(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, u32 tsf_low, tsf_high; int ret;
- ret = rtw89_mac_check_mac_en(rtwdev, rtwvif->mac_idx, RTW89_CMAC_SEL); + ret = rtw89_mac_check_mac_en(rtwdev, rtwvif_link->mac_idx, RTW89_CMAC_SEL); if (ret) return ret;
- tsf_low = rtw89_read32_port(rtwdev, rtwvif, p->tsftr_l); - tsf_high = rtw89_read32_port(rtwdev, rtwvif, p->tsftr_h); + tsf_low = rtw89_read32_port(rtwdev, rtwvif_link, p->tsftr_l); + tsf_high = rtw89_read32_port(rtwdev, rtwvif_link, p->tsftr_h); *tsf = (u64)tsf_high << 32 | tsf_low;
return 0; @@ -4651,65 +4706,57 @@ static void rtw89_mac_check_he_obss_narrow_bw_ru_iter(struct wiphy *wiphy, }
void rtw89_mac_set_he_obss_narrow_bw_ru(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif) + struct rtw89_vif_link *rtwvif_link) { - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; + struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link); const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def; struct ieee80211_hw *hw = rtwdev->hw; + struct ieee80211_bss_conf *bss_conf; + struct cfg80211_chan_def oper; bool tolerated = true; u32 reg;
- if (!vif->bss_conf.he_support || vif->type != NL80211_IFTYPE_STATION) + rcu_read_lock(); + + bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true); + if (!bss_conf->he_support || vif->type != NL80211_IFTYPE_STATION) { + rcu_read_unlock(); return; + }
- if (!(vif->bss_conf.chanreq.oper.chan->flags & IEEE80211_CHAN_RADAR)) + oper = bss_conf->chanreq.oper; + if (!(oper.chan->flags & IEEE80211_CHAN_RADAR)) { + rcu_read_unlock(); return; + } + + rcu_read_unlock();
- cfg80211_bss_iter(hw->wiphy, &vif->bss_conf.chanreq.oper, + cfg80211_bss_iter(hw->wiphy, &oper, rtw89_mac_check_he_obss_narrow_bw_ru_iter, &tolerated);
reg = rtw89_mac_reg_by_idx(rtwdev, mac->narrow_bw_ru_dis.addr, - rtwvif->mac_idx); + rtwvif_link->mac_idx); if (tolerated) rtw89_write32_clr(rtwdev, reg, mac->narrow_bw_ru_dis.mask); else rtw89_write32_set(rtwdev, reg, mac->narrow_bw_ru_dis.mask); }
-void rtw89_mac_stop_ap(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif) +void rtw89_mac_stop_ap(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link) { - rtw89_mac_port_cfg_func_sw(rtwdev, rtwvif); + rtw89_mac_port_cfg_func_sw(rtwdev, rtwvif_link); }
-int rtw89_mac_add_vif(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif) +int rtw89_mac_add_vif(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link) { - int ret; - - rtwvif->mac_id = rtw89_acquire_mac_id(rtwdev); - if (rtwvif->mac_id == RTW89_MAX_MAC_ID_NUM) - return -ENOSPC; - - ret = rtw89_mac_vif_init(rtwdev, rtwvif); - if (ret) - goto release_mac_id; - - return 0; - -release_mac_id: - rtw89_release_mac_id(rtwdev, rtwvif->mac_id); - - return ret; + return rtw89_mac_vif_init(rtwdev, rtwvif_link); }
-int rtw89_mac_remove_vif(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif) +int rtw89_mac_remove_vif(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link) { - int ret; - - ret = rtw89_mac_vif_deinit(rtwdev, rtwvif); - rtw89_release_mac_id(rtwdev, rtwvif->mac_id); - - return ret; + return rtw89_mac_vif_deinit(rtwdev, rtwvif_link); }
static void @@ -4730,8 +4777,8 @@ rtw89_mac_c2h_scanofld_rsp(struct rtw89_dev *rtwdev, struct sk_buff *skb, { const struct rtw89_c2h_scanofld *c2h = (const struct rtw89_c2h_scanofld *)skb->data; - struct ieee80211_vif *vif = rtwdev->scan_info.scanning_vif; - struct rtw89_vif *rtwvif = vif_to_rtwvif_safe(vif); + struct rtw89_vif_link *rtwvif_link = rtwdev->scan_info.scanning_vif; + struct rtw89_vif *rtwvif; struct rtw89_chan new; u8 reason, status, tx_fail, band, actual_period, expect_period; u32 last_chan = rtwdev->scan_info.last_chan_idx, report_tsf; @@ -4739,9 +4786,11 @@ rtw89_mac_c2h_scanofld_rsp(struct rtw89_dev *rtwdev, struct sk_buff *skb, u16 chan; int ret;
- if (!rtwvif) + if (!rtwvif_link) return;
+ rtwvif = rtwvif_link->rtwvif; + tx_fail = le32_get_bits(c2h->w5, RTW89_C2H_SCANOFLD_W5_TX_FAIL); status = le32_get_bits(c2h->w2, RTW89_C2H_SCANOFLD_W2_STATUS); chan = le32_get_bits(c2h->w2, RTW89_C2H_SCANOFLD_W2_PRI_CH); @@ -4781,28 +4830,28 @@ rtw89_mac_c2h_scanofld_rsp(struct rtw89_dev *rtwdev, struct sk_buff *skb, if (rtwdev->scan_info.abort) return;
- if (rtwvif && rtwvif->scan_req && + if (rtwvif_link && rtwvif->scan_req && last_chan < rtwvif->scan_req->n_channels) { - ret = rtw89_hw_scan_offload(rtwdev, vif, true); + ret = rtw89_hw_scan_offload(rtwdev, rtwvif_link, true); if (ret) { - rtw89_hw_scan_abort(rtwdev, vif); + rtw89_hw_scan_abort(rtwdev, rtwvif_link); rtw89_warn(rtwdev, "HW scan failed: %d\n", ret); } } else { - rtw89_hw_scan_complete(rtwdev, vif, false); + rtw89_hw_scan_complete(rtwdev, rtwvif_link, false); } break; case RTW89_SCAN_ENTER_OP_NOTIFY: case RTW89_SCAN_ENTER_CH_NOTIFY: if (rtw89_is_op_chan(rtwdev, band, chan)) { - rtw89_assign_entity_chan(rtwdev, rtwvif->chanctx_idx, + rtw89_assign_entity_chan(rtwdev, rtwvif_link->chanctx_idx, &rtwdev->scan_info.op_chan); rtw89_mac_enable_beacon_for_ap_vifs(rtwdev, true); ieee80211_wake_queues(rtwdev->hw); } else { rtw89_chan_create(&new, chan, chan, band, RTW89_CHANNEL_WIDTH_20); - rtw89_assign_entity_chan(rtwdev, rtwvif->chanctx_idx, + rtw89_assign_entity_chan(rtwdev, rtwvif_link->chanctx_idx, &new); } break; @@ -4812,10 +4861,11 @@ rtw89_mac_c2h_scanofld_rsp(struct rtw89_dev *rtwdev, struct sk_buff *skb, }
static void -rtw89_mac_bcn_fltr_rpt(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, +rtw89_mac_bcn_fltr_rpt(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, struct sk_buff *skb) { - struct ieee80211_vif *vif = rtwvif_to_vif_safe(rtwvif); + struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link); + struct rtw89_vif *rtwvif = rtwvif_link->rtwvif; enum nl80211_cqm_rssi_threshold_event nl_event; const struct rtw89_c2h_mac_bcnfltr_rpt *c2h = (const struct rtw89_c2h_mac_bcnfltr_rpt *)skb->data; @@ -4827,7 +4877,7 @@ rtw89_mac_bcn_fltr_rpt(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, event = le32_get_bits(c2h->w2, RTW89_C2H_MAC_BCNFLTR_RPT_W2_EVENT); mac_id = le32_get_bits(c2h->w2, RTW89_C2H_MAC_BCNFLTR_RPT_W2_MACID);
- if (mac_id != rtwvif->mac_id) + if (mac_id != rtwvif_link->mac_id) return;
rtw89_debug(rtwdev, RTW89_DBG_FW, @@ -4839,7 +4889,7 @@ rtw89_mac_bcn_fltr_rpt(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, if (!rtwdev->scanning && !rtwvif->offchan) ieee80211_connection_loss(vif); else - rtw89_fw_h2c_set_bcn_fltr_cfg(rtwdev, vif, true); + rtw89_fw_h2c_set_bcn_fltr_cfg(rtwdev, rtwvif_link, true); return; case RTW89_BCN_FLTR_NOTIFY: nl_event = NL80211_CQM_RSSI_THRESHOLD_EVENT_HIGH; @@ -4863,10 +4913,13 @@ static void rtw89_mac_c2h_bcn_fltr_rpt(struct rtw89_dev *rtwdev, struct sk_buff *c2h, u32 len) { + struct rtw89_vif_link *rtwvif_link; struct rtw89_vif *rtwvif; + unsigned int link_id;
rtw89_for_each_rtwvif(rtwdev, rtwvif) - rtw89_mac_bcn_fltr_rpt(rtwdev, rtwvif, c2h); + rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) + rtw89_mac_bcn_fltr_rpt(rtwdev, rtwvif_link, c2h); }
static void @@ -5931,15 +5984,15 @@ static int rtw89_mac_init_bfee_ax(struct rtw89_dev *rtwdev, u8 mac_idx) }
static int rtw89_mac_set_csi_para_reg_ax(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif, - struct ieee80211_sta *sta) + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link) { - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; - u8 mac_idx = rtwvif->mac_idx; u8 nc = 1, nr = 3, ng = 0, cb = 1, cs = 1, ldpc_en = 1, stbc_en = 1; - u8 port_sel = rtwvif->port; + struct ieee80211_link_sta *link_sta; + u8 mac_idx = rtwvif_link->mac_idx; + u8 port_sel = rtwvif_link->port; u8 sound_dim = 3, t; - u8 *phy_cap = sta->deflink.he_cap.he_cap_elem.phy_cap_info; + u8 *phy_cap; u32 reg; u16 val; int ret; @@ -5948,6 +6001,11 @@ static int rtw89_mac_set_csi_para_reg_ax(struct rtw89_dev *rtwdev, if (ret) return ret;
+ rcu_read_lock(); + + link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true); + phy_cap = link_sta->he_cap.he_cap_elem.phy_cap_info; + if ((phy_cap[3] & IEEE80211_HE_PHY_CAP3_SU_BEAMFORMER) || (phy_cap[4] & IEEE80211_HE_PHY_CAP4_MU_BEAMFORMER)) { ldpc_en &= !!(phy_cap[1] & IEEE80211_HE_PHY_CAP1_LDPC_CODING_IN_PAYLOAD); @@ -5956,17 +6014,19 @@ static int rtw89_mac_set_csi_para_reg_ax(struct rtw89_dev *rtwdev, phy_cap[5]); sound_dim = min(sound_dim, t); } - if ((sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_MU_BEAMFORMER_CAPABLE) || - (sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE)) { - ldpc_en &= !!(sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_RXLDPC); - stbc_en &= !!(sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_RXSTBC_MASK); + if ((link_sta->vht_cap.cap & IEEE80211_VHT_CAP_MU_BEAMFORMER_CAPABLE) || + (link_sta->vht_cap.cap & IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE)) { + ldpc_en &= !!(link_sta->vht_cap.cap & IEEE80211_VHT_CAP_RXLDPC); + stbc_en &= !!(link_sta->vht_cap.cap & IEEE80211_VHT_CAP_RXSTBC_MASK); t = FIELD_GET(IEEE80211_VHT_CAP_SOUNDING_DIMENSIONS_MASK, - sta->deflink.vht_cap.cap); + link_sta->vht_cap.cap); sound_dim = min(sound_dim, t); } nc = min(nc, sound_dim); nr = min(nr, sound_dim);
+ rcu_read_unlock(); + reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_TRXPTCL_RESP_CSI_CTRL_0, mac_idx); rtw89_write32_set(rtwdev, reg, B_AX_BFMEE_BFPARAM_SEL);
@@ -5989,34 +6049,41 @@ static int rtw89_mac_set_csi_para_reg_ax(struct rtw89_dev *rtwdev, }
static int rtw89_mac_csi_rrsc_ax(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif, - struct ieee80211_sta *sta) + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link) { - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; u32 rrsc = BIT(RTW89_MAC_BF_RRSC_6M) | BIT(RTW89_MAC_BF_RRSC_24M); + struct ieee80211_link_sta *link_sta; + u8 mac_idx = rtwvif_link->mac_idx; u32 reg; - u8 mac_idx = rtwvif->mac_idx; int ret;
ret = rtw89_mac_check_mac_en(rtwdev, mac_idx, RTW89_CMAC_SEL); if (ret) return ret;
- if (sta->deflink.he_cap.has_he) { + rcu_read_lock(); + + link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true); + + if (link_sta->he_cap.has_he) { rrsc |= (BIT(RTW89_MAC_BF_RRSC_HE_MSC0) | BIT(RTW89_MAC_BF_RRSC_HE_MSC3) | BIT(RTW89_MAC_BF_RRSC_HE_MSC5)); } - if (sta->deflink.vht_cap.vht_supported) { + if (link_sta->vht_cap.vht_supported) { rrsc |= (BIT(RTW89_MAC_BF_RRSC_VHT_MSC0) | BIT(RTW89_MAC_BF_RRSC_VHT_MSC3) | BIT(RTW89_MAC_BF_RRSC_VHT_MSC5)); } - if (sta->deflink.ht_cap.ht_supported) { + if (link_sta->ht_cap.ht_supported) { rrsc |= (BIT(RTW89_MAC_BF_RRSC_HT_MSC0) | BIT(RTW89_MAC_BF_RRSC_HT_MSC3) | BIT(RTW89_MAC_BF_RRSC_HT_MSC5)); } + + rcu_read_unlock(); + reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_TRXPTCL_RESP_CSI_CTRL_0, mac_idx); rtw89_write32_set(rtwdev, reg, B_AX_BFMEE_BFPARAM_SEL); rtw89_write32_clr(rtwdev, reg, B_AX_BFMEE_CSI_FORCE_RETE_EN); @@ -6028,35 +6095,53 @@ static int rtw89_mac_csi_rrsc_ax(struct rtw89_dev *rtwdev, }
static void rtw89_mac_bf_assoc_ax(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif, - struct ieee80211_sta *sta) + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link) { - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; + struct ieee80211_link_sta *link_sta; + bool has_beamformer_cap; + + rcu_read_lock(); + + link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true); + has_beamformer_cap = rtw89_sta_has_beamformer_cap(link_sta); + + rcu_read_unlock();
- if (rtw89_sta_has_beamformer_cap(sta)) { + if (has_beamformer_cap) { rtw89_debug(rtwdev, RTW89_DBG_BF, "initialize bfee for new association\n"); - rtw89_mac_init_bfee_ax(rtwdev, rtwvif->mac_idx); - rtw89_mac_set_csi_para_reg_ax(rtwdev, vif, sta); - rtw89_mac_csi_rrsc_ax(rtwdev, vif, sta); + rtw89_mac_init_bfee_ax(rtwdev, rtwvif_link->mac_idx); + rtw89_mac_set_csi_para_reg_ax(rtwdev, rtwvif_link, rtwsta_link); + rtw89_mac_csi_rrsc_ax(rtwdev, rtwvif_link, rtwsta_link); } }
-void rtw89_mac_bf_disassoc(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif, - struct ieee80211_sta *sta) +void rtw89_mac_bf_disassoc(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link) { - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; - - rtw89_mac_bfee_ctrl(rtwdev, rtwvif->mac_idx, false); + rtw89_mac_bfee_ctrl(rtwdev, rtwvif_link->mac_idx, false); }
void rtw89_mac_bf_set_gid_table(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif, struct ieee80211_bss_conf *conf) { - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; - u8 mac_idx = rtwvif->mac_idx; + struct rtw89_vif *rtwvif = vif_to_rtwvif(vif); + struct rtw89_vif_link *rtwvif_link; + u8 mac_idx; __le32 *p;
+ rtwvif_link = rtwvif->links[conf->link_id]; + if (unlikely(!rtwvif_link)) { + rtw89_err(rtwdev, + "%s: rtwvif link (link_id %u) is not active\n", + __func__, conf->link_id); + return; + } + + mac_idx = rtwvif_link->mac_idx; + rtw89_debug(rtwdev, RTW89_DBG_BF, "update bf GID table\n");
p = (__le32 *)conf->mu_group.membership; @@ -6080,7 +6165,7 @@ void rtw89_mac_bf_set_gid_table(struct rtw89_dev *rtwdev, struct ieee80211_vif *
struct rtw89_mac_bf_monitor_iter_data { struct rtw89_dev *rtwdev; - struct ieee80211_sta *down_sta; + struct rtw89_sta_link *down_rtwsta_link; int count; };
@@ -6089,23 +6174,41 @@ void rtw89_mac_bf_monitor_calc_iter(void *data, struct ieee80211_sta *sta) { struct rtw89_mac_bf_monitor_iter_data *iter_data = (struct rtw89_mac_bf_monitor_iter_data *)data; - struct ieee80211_sta *down_sta = iter_data->down_sta; + struct rtw89_sta_link *down_rtwsta_link = iter_data->down_rtwsta_link; + struct rtw89_sta *rtwsta = sta_to_rtwsta(sta); + struct ieee80211_link_sta *link_sta; + struct rtw89_sta_link *rtwsta_link; + bool has_beamformer_cap = false; int *count = &iter_data->count; + unsigned int link_id;
- if (down_sta == sta) - return; + rcu_read_lock(); + + rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) { + if (rtwsta_link == down_rtwsta_link) + continue;
- if (rtw89_sta_has_beamformer_cap(sta)) + link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, false); + if (rtw89_sta_has_beamformer_cap(link_sta)) { + has_beamformer_cap = true; + break; + } + } + + if (has_beamformer_cap) (*count)++; + + rcu_read_unlock(); }
void rtw89_mac_bf_monitor_calc(struct rtw89_dev *rtwdev, - struct ieee80211_sta *sta, bool disconnect) + struct rtw89_sta_link *rtwsta_link, + bool disconnect) { struct rtw89_mac_bf_monitor_iter_data data;
data.rtwdev = rtwdev; - data.down_sta = disconnect ? sta : NULL; + data.down_rtwsta_link = disconnect ? rtwsta_link : NULL; data.count = 0; ieee80211_iterate_stations_atomic(rtwdev->hw, rtw89_mac_bf_monitor_calc_iter, @@ -6121,10 +6224,12 @@ void rtw89_mac_bf_monitor_calc(struct rtw89_dev *rtwdev, void _rtw89_mac_bf_monitor_track(struct rtw89_dev *rtwdev) { struct rtw89_traffic_stats *stats = &rtwdev->stats; - struct rtw89_vif *rtwvif; + struct rtw89_vif_link *rtwvif_link; bool en = stats->tx_tfc_lv <= stats->rx_tfc_lv; bool old = test_bit(RTW89_FLAG_BFEE_EN, rtwdev->flags); + struct rtw89_vif *rtwvif; bool keep_timer = true; + unsigned int link_id; bool old_keep_timer;
old_keep_timer = test_bit(RTW89_FLAG_BFEE_TIMER_KEEP, rtwdev->flags); @@ -6134,30 +6239,32 @@ void _rtw89_mac_bf_monitor_track(struct rtw89_dev *rtwdev)
if (keep_timer != old_keep_timer) { rtw89_for_each_rtwvif(rtwdev, rtwvif) - rtw89_mac_bfee_standby_timer(rtwdev, rtwvif->mac_idx, - keep_timer); + rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) + rtw89_mac_bfee_standby_timer(rtwdev, rtwvif_link->mac_idx, + keep_timer); }
if (en == old) return;
rtw89_for_each_rtwvif(rtwdev, rtwvif) - rtw89_mac_bfee_ctrl(rtwdev, rtwvif->mac_idx, en); + rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) + rtw89_mac_bfee_ctrl(rtwdev, rtwvif_link->mac_idx, en); }
static int -__rtw89_mac_set_tx_time(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta, +__rtw89_mac_set_tx_time(struct rtw89_dev *rtwdev, struct rtw89_sta_link *rtwsta_link, u32 tx_time) { #define MAC_AX_DFLT_TX_TIME 5280 - u8 mac_idx = rtwsta->rtwvif->mac_idx; + u8 mac_idx = rtwsta_link->rtwvif_link->mac_idx; u32 max_tx_time = tx_time == 0 ? MAC_AX_DFLT_TX_TIME : tx_time; u32 reg; int ret = 0;
- if (rtwsta->cctl_tx_time) { - rtwsta->ampdu_max_time = (max_tx_time - 512) >> 9; - ret = rtw89_fw_h2c_txtime_cmac_tbl(rtwdev, rtwsta); + if (rtwsta_link->cctl_tx_time) { + rtwsta_link->ampdu_max_time = (max_tx_time - 512) >> 9; + ret = rtw89_fw_h2c_txtime_cmac_tbl(rtwdev, rtwsta_link); } else { ret = rtw89_mac_check_mac_en(rtwdev, mac_idx, RTW89_CMAC_SEL); if (ret) { @@ -6173,31 +6280,31 @@ __rtw89_mac_set_tx_time(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta, return ret; }
-int rtw89_mac_set_tx_time(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta, +int rtw89_mac_set_tx_time(struct rtw89_dev *rtwdev, struct rtw89_sta_link *rtwsta_link, bool resume, u32 tx_time) { int ret = 0;
if (!resume) { - rtwsta->cctl_tx_time = true; - ret = __rtw89_mac_set_tx_time(rtwdev, rtwsta, tx_time); + rtwsta_link->cctl_tx_time = true; + ret = __rtw89_mac_set_tx_time(rtwdev, rtwsta_link, tx_time); } else { - ret = __rtw89_mac_set_tx_time(rtwdev, rtwsta, tx_time); - rtwsta->cctl_tx_time = false; + ret = __rtw89_mac_set_tx_time(rtwdev, rtwsta_link, tx_time); + rtwsta_link->cctl_tx_time = false; }
return ret; }
-int rtw89_mac_get_tx_time(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta, +int rtw89_mac_get_tx_time(struct rtw89_dev *rtwdev, struct rtw89_sta_link *rtwsta_link, u32 *tx_time) { - u8 mac_idx = rtwsta->rtwvif->mac_idx; + u8 mac_idx = rtwsta_link->rtwvif_link->mac_idx; u32 reg; int ret = 0;
- if (rtwsta->cctl_tx_time) { - *tx_time = (rtwsta->ampdu_max_time + 1) << 9; + if (rtwsta_link->cctl_tx_time) { + *tx_time = (rtwsta_link->ampdu_max_time + 1) << 9; } else { ret = rtw89_mac_check_mac_en(rtwdev, mac_idx, RTW89_CMAC_SEL); if (ret) { @@ -6213,33 +6320,33 @@ int rtw89_mac_get_tx_time(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta, }
int rtw89_mac_set_tx_retry_limit(struct rtw89_dev *rtwdev, - struct rtw89_sta *rtwsta, + struct rtw89_sta_link *rtwsta_link, bool resume, u8 tx_retry) { int ret = 0;
- rtwsta->data_tx_cnt_lmt = tx_retry; + rtwsta_link->data_tx_cnt_lmt = tx_retry;
if (!resume) { - rtwsta->cctl_tx_retry_limit = true; - ret = rtw89_fw_h2c_txtime_cmac_tbl(rtwdev, rtwsta); + rtwsta_link->cctl_tx_retry_limit = true; + ret = rtw89_fw_h2c_txtime_cmac_tbl(rtwdev, rtwsta_link); } else { - ret = rtw89_fw_h2c_txtime_cmac_tbl(rtwdev, rtwsta); - rtwsta->cctl_tx_retry_limit = false; + ret = rtw89_fw_h2c_txtime_cmac_tbl(rtwdev, rtwsta_link); + rtwsta_link->cctl_tx_retry_limit = false; }
return ret; }
int rtw89_mac_get_tx_retry_limit(struct rtw89_dev *rtwdev, - struct rtw89_sta *rtwsta, u8 *tx_retry) + struct rtw89_sta_link *rtwsta_link, u8 *tx_retry) { - u8 mac_idx = rtwsta->rtwvif->mac_idx; + u8 mac_idx = rtwsta_link->rtwvif_link->mac_idx; u32 reg; int ret = 0;
- if (rtwsta->cctl_tx_retry_limit) { - *tx_retry = rtwsta->data_tx_cnt_lmt; + if (rtwsta_link->cctl_tx_retry_limit) { + *tx_retry = rtwsta_link->data_tx_cnt_lmt; } else { ret = rtw89_mac_check_mac_en(rtwdev, mac_idx, RTW89_CMAC_SEL); if (ret) { @@ -6255,10 +6362,10 @@ int rtw89_mac_get_tx_retry_limit(struct rtw89_dev *rtwdev, }
int rtw89_mac_set_hw_muedca_ctrl(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, bool en) + struct rtw89_vif_link *rtwvif_link, bool en) { const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def; - u8 mac_idx = rtwvif->mac_idx; + u8 mac_idx = rtwvif_link->mac_idx; u16 set = mac->muedca_ctrl.mask; u32 reg; u32 ret; @@ -6326,7 +6433,9 @@ int rtw89_mac_read_xtal_si_ax(struct rtw89_dev *rtwdev, u8 offset, u8 *val) }
static -void rtw89_mac_pkt_drop_sta(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta) +void rtw89_mac_pkt_drop_sta(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link) { static const enum rtw89_pkt_drop_sel sels[] = { RTW89_PKT_DROP_SEL_MACID_BE_ONCE, @@ -6334,15 +6443,14 @@ void rtw89_mac_pkt_drop_sta(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta) RTW89_PKT_DROP_SEL_MACID_VI_ONCE, RTW89_PKT_DROP_SEL_MACID_VO_ONCE, }; - struct rtw89_vif *rtwvif = rtwsta->rtwvif; struct rtw89_pkt_drop_params params = {0}; int i;
params.mac_band = RTW89_MAC_0; - params.macid = rtwsta->mac_id; - params.port = rtwvif->port; + params.macid = rtwsta_link->mac_id; + params.port = rtwvif_link->port; params.mbssid = 0; - params.tf_trs = rtwvif->trigger; + params.tf_trs = rtwvif_link->trigger;
for (i = 0; i < ARRAY_SIZE(sels); i++) { params.sel = sels[i]; @@ -6352,15 +6460,21 @@ void rtw89_mac_pkt_drop_sta(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta)
static void rtw89_mac_pkt_drop_vif_iter(void *data, struct ieee80211_sta *sta) { - struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv; + struct rtw89_sta *rtwsta = sta_to_rtwsta(sta); struct rtw89_vif *rtwvif = rtwsta->rtwvif; - struct rtw89_dev *rtwdev = rtwvif->rtwdev; + struct rtw89_dev *rtwdev = rtwsta->rtwdev; + struct rtw89_vif_link *rtwvif_link; + struct rtw89_sta_link *rtwsta_link; struct rtw89_vif *target = data; + unsigned int link_id;
if (rtwvif != target) return;
- rtw89_mac_pkt_drop_sta(rtwdev, rtwsta); + rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) { + rtwvif_link = rtwsta_link->rtwvif_link; + rtw89_mac_pkt_drop_sta(rtwdev, rtwvif_link, rtwsta_link); + } }
void rtw89_mac_pkt_drop_vif(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif) diff --git a/drivers/net/wireless/realtek/rtw89/mac.h b/drivers/net/wireless/realtek/rtw89/mac.h index 67c2a4507124..0c269961a573 100644 --- a/drivers/net/wireless/realtek/rtw89/mac.h +++ b/drivers/net/wireless/realtek/rtw89/mac.h @@ -951,8 +951,9 @@ struct rtw89_mac_gen_def { void (*dmac_func_pre_en)(struct rtw89_dev *rtwdev); void (*dle_func_en)(struct rtw89_dev *rtwdev, bool enable); void (*dle_clk_en)(struct rtw89_dev *rtwdev, bool enable); - void (*bf_assoc)(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif, - struct ieee80211_sta *sta); + void (*bf_assoc)(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link);
int (*typ_fltr_opt)(struct rtw89_dev *rtwdev, enum rtw89_machdr_frame_type type, @@ -1004,12 +1005,12 @@ struct rtw89_mac_gen_def { bool (*is_txq_empty)(struct rtw89_dev *rtwdev);
int (*add_chan_list)(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, bool connected); + struct rtw89_vif_link *rtwvif_link, bool connected); int (*add_chan_list_pno)(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif); + struct rtw89_vif_link *rtwvif_link); int (*scan_offload)(struct rtw89_dev *rtwdev, struct rtw89_scan_option *option, - struct rtw89_vif *rtwvif, + struct rtw89_vif_link *rtwvif_link, bool wowlan);
int (*wow_config_mac)(struct rtw89_dev *rtwdev, bool enable_wow); @@ -1033,81 +1034,89 @@ u32 rtw89_mac_reg_by_port(struct rtw89_dev *rtwdev, u32 base, u8 port, u8 mac_id }
static inline u32 -rtw89_read32_port(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, u32 base) +rtw89_read32_port(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, u32 base) { u32 reg;
- reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif->port, rtwvif->mac_idx); + reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif_link->port, + rtwvif_link->mac_idx); return rtw89_read32(rtwdev, reg); }
static inline u32 -rtw89_read32_port_mask(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, +rtw89_read32_port_mask(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, u32 base, u32 mask) { u32 reg;
- reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif->port, rtwvif->mac_idx); + reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif_link->port, + rtwvif_link->mac_idx); return rtw89_read32_mask(rtwdev, reg, mask); }
static inline void -rtw89_write32_port(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, u32 base, +rtw89_write32_port(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, u32 base, u32 data) { u32 reg;
- reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif->port, rtwvif->mac_idx); + reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif_link->port, + rtwvif_link->mac_idx); rtw89_write32(rtwdev, reg, data); }
static inline void -rtw89_write32_port_mask(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, +rtw89_write32_port_mask(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, u32 base, u32 mask, u32 data) { u32 reg;
- reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif->port, rtwvif->mac_idx); + reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif_link->port, + rtwvif_link->mac_idx); rtw89_write32_mask(rtwdev, reg, mask, data); }
static inline void -rtw89_write16_port_mask(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, +rtw89_write16_port_mask(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, u32 base, u32 mask, u16 data) { u32 reg;
- reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif->port, rtwvif->mac_idx); + reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif_link->port, + rtwvif_link->mac_idx); rtw89_write16_mask(rtwdev, reg, mask, data); }
static inline void -rtw89_write32_port_clr(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, +rtw89_write32_port_clr(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, u32 base, u32 bit) { u32 reg;
- reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif->port, rtwvif->mac_idx); + reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif_link->port, + rtwvif_link->mac_idx); rtw89_write32_clr(rtwdev, reg, bit); }
static inline void -rtw89_write16_port_clr(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, +rtw89_write16_port_clr(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, u32 base, u16 bit) { u32 reg;
- reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif->port, rtwvif->mac_idx); + reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif_link->port, + rtwvif_link->mac_idx); rtw89_write16_clr(rtwdev, reg, bit); }
static inline void -rtw89_write32_port_set(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, +rtw89_write32_port_set(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, u32 base, u32 bit) { u32 reg;
- reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif->port, rtwvif->mac_idx); + reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif_link->port, + rtwvif_link->mac_idx); rtw89_write32_set(rtwdev, reg, bit); }
@@ -1139,21 +1148,21 @@ int rtw89_mac_dle_dfi_qempty_cfg(struct rtw89_dev *rtwdev, struct rtw89_mac_dle_dfi_qempty *qempty); void rtw89_mac_dump_l0_to_l1(struct rtw89_dev *rtwdev, enum mac_ax_err_info err); -int rtw89_mac_add_vif(struct rtw89_dev *rtwdev, struct rtw89_vif *vif); -int rtw89_mac_port_update(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif); +int rtw89_mac_add_vif(struct rtw89_dev *rtwdev, struct rtw89_vif_link *vif); +int rtw89_mac_port_update(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link); void rtw89_mac_port_tsf_sync(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, - struct rtw89_vif *rtwvif_src, + struct rtw89_vif_link *rtwvif_link, + struct rtw89_vif_link *rtwvif_src, u16 offset_tu); -int rtw89_mac_port_get_tsf(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, +int rtw89_mac_port_get_tsf(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, u64 *tsf); void rtw89_mac_port_cfg_rx_sync(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, bool en); + struct rtw89_vif_link *rtwvif_link, bool en); void rtw89_mac_set_he_obss_narrow_bw_ru(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif); -void rtw89_mac_stop_ap(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif); + struct rtw89_vif_link *rtwvif_link); +void rtw89_mac_stop_ap(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link); void rtw89_mac_enable_beacon_for_ap_vifs(struct rtw89_dev *rtwdev, bool en); -int rtw89_mac_remove_vif(struct rtw89_dev *rtwdev, struct rtw89_vif *vif); +int rtw89_mac_remove_vif(struct rtw89_dev *rtwdev, struct rtw89_vif_link *vif); int rtw89_mac_enable_bb_rf(struct rtw89_dev *rtwdev); int rtw89_mac_disable_bb_rf(struct rtw89_dev *rtwdev);
@@ -1251,27 +1260,30 @@ void rtw89_mac_power_mode_change(struct rtw89_dev *rtwdev, bool enter); void rtw89_mac_notify_wake(struct rtw89_dev *rtwdev);
static inline -void rtw89_mac_bf_assoc(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif, - struct ieee80211_sta *sta) +void rtw89_mac_bf_assoc(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link) { const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
if (mac->bf_assoc) - mac->bf_assoc(rtwdev, vif, sta); + mac->bf_assoc(rtwdev, rtwvif_link, rtwsta_link); }
-void rtw89_mac_bf_disassoc(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif, - struct ieee80211_sta *sta); +void rtw89_mac_bf_disassoc(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link); void rtw89_mac_bf_set_gid_table(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif, struct ieee80211_bss_conf *conf); void rtw89_mac_bf_monitor_calc(struct rtw89_dev *rtwdev, - struct ieee80211_sta *sta, bool disconnect); + struct rtw89_sta_link *rtwsta_link, + bool disconnect); void _rtw89_mac_bf_monitor_track(struct rtw89_dev *rtwdev); void rtw89_mac_bfee_ctrl(struct rtw89_dev *rtwdev, u8 mac_idx, bool en); -int rtw89_mac_vif_init(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif); -int rtw89_mac_vif_deinit(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif); +int rtw89_mac_vif_init(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link); +int rtw89_mac_vif_deinit(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link); int rtw89_mac_set_hw_muedca_ctrl(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, bool en); + struct rtw89_vif_link *rtwvif_link, bool en); int rtw89_mac_set_macid_pause(struct rtw89_dev *rtwdev, u8 macid, bool pause);
static inline void rtw89_mac_bf_monitor_track(struct rtw89_dev *rtwdev) @@ -1376,15 +1388,15 @@ static inline bool rtw89_mac_get_power_state(struct rtw89_dev *rtwdev) return !!val; }
-int rtw89_mac_set_tx_time(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta, +int rtw89_mac_set_tx_time(struct rtw89_dev *rtwdev, struct rtw89_sta_link *rtwsta_link, bool resume, u32 tx_time); -int rtw89_mac_get_tx_time(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta, +int rtw89_mac_get_tx_time(struct rtw89_dev *rtwdev, struct rtw89_sta_link *rtwsta_link, u32 *tx_time); int rtw89_mac_set_tx_retry_limit(struct rtw89_dev *rtwdev, - struct rtw89_sta *rtwsta, + struct rtw89_sta_link *rtwsta_link, bool resume, u8 tx_retry); int rtw89_mac_get_tx_retry_limit(struct rtw89_dev *rtwdev, - struct rtw89_sta *rtwsta, u8 *tx_retry); + struct rtw89_sta_link *rtwsta_link, u8 *tx_retry);
enum rtw89_mac_xtal_si_offset { XTAL0 = 0x0, diff --git a/drivers/net/wireless/realtek/rtw89/mac80211.c b/drivers/net/wireless/realtek/rtw89/mac80211.c index 48ad0d0f76bf..13fb3cac2701 100644 --- a/drivers/net/wireless/realtek/rtw89/mac80211.c +++ b/drivers/net/wireless/realtek/rtw89/mac80211.c @@ -23,13 +23,13 @@ static void rtw89_ops_tx(struct ieee80211_hw *hw, struct rtw89_dev *rtwdev = hw->priv; struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb); struct ieee80211_vif *vif = info->control.vif; - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; + struct rtw89_vif *rtwvif = vif_to_rtwvif(vif); struct ieee80211_sta *sta = control->sta; u32 flags = IEEE80211_SKB_CB(skb)->flags; int ret, qsel;
if (rtwvif->offchan && !(flags & IEEE80211_TX_CTL_TX_OFFCHAN) && sta) { - struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv; + struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
rtw89_debug(rtwdev, RTW89_DBG_TXRX, "ops_tx during offchan\n"); skb_queue_tail(&rtwsta->roc_queue, skb); @@ -105,11 +105,61 @@ static int rtw89_ops_config(struct ieee80211_hw *hw, u32 changed) return 0; }
+static int __rtw89_ops_add_iface_link(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link) +{ + struct ieee80211_bss_conf *bss_conf; + int ret; + + rtw89_leave_ps_mode(rtwdev); + + rtw89_vif_type_mapping(rtwvif_link, false); + + INIT_WORK(&rtwvif_link->update_beacon_work, rtw89_core_update_beacon_work); + INIT_LIST_HEAD(&rtwvif_link->general_pkt_list); + + rtwvif_link->hit_rule = 0; + rtwvif_link->bcn_hit_cond = 0; + rtwvif_link->chanctx_assigned = false; + rtwvif_link->chanctx_idx = RTW89_CHANCTX_0; + rtwvif_link->reg_6ghz_power = RTW89_REG_6GHZ_POWER_DFLT; + + rcu_read_lock(); + + bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true); + ether_addr_copy(rtwvif_link->mac_addr, bss_conf->addr); + + rcu_read_unlock(); + + ret = rtw89_mac_add_vif(rtwdev, rtwvif_link); + if (ret) + return ret; + + rtw89_btc_ntfy_role_info(rtwdev, rtwvif_link, NULL, BTC_ROLE_START); + return 0; +} + +static void __rtw89_ops_remove_iface_link(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link) +{ + mutex_unlock(&rtwdev->mutex); + cancel_work_sync(&rtwvif_link->update_beacon_work); + mutex_lock(&rtwdev->mutex); + + rtw89_leave_ps_mode(rtwdev); + + rtw89_btc_ntfy_role_info(rtwdev, rtwvif_link, NULL, BTC_ROLE_STOP); + + rtw89_mac_remove_vif(rtwdev, rtwvif_link); +} + static int rtw89_ops_add_interface(struct ieee80211_hw *hw, struct ieee80211_vif *vif) { struct rtw89_dev *rtwdev = hw->priv; - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; + struct rtw89_vif *rtwvif = vif_to_rtwvif(vif); + struct rtw89_vif_link *rtwvif_link; + u8 mac_id, port; int ret = 0;
rtw89_debug(rtwdev, RTW89_DBG_STATE, "add vif %pM type %d, p2p %d\n", @@ -123,49 +173,56 @@ static int rtw89_ops_add_interface(struct ieee80211_hw *hw, vif->driver_flags |= IEEE80211_VIF_BEACON_FILTER | IEEE80211_VIF_SUPPORTS_CQM_RSSI;
- rtwvif->rtwdev = rtwdev; - rtwvif->roc.state = RTW89_ROC_IDLE; - rtwvif->offchan = false; + mac_id = rtw89_acquire_mac_id(rtwdev); + if (mac_id == RTW89_MAX_MAC_ID_NUM) { + ret = -ENOSPC; + goto err; + } + + port = rtw89_core_acquire_bit_map(rtwdev->hw_port, RTW89_PORT_NUM); + if (port == RTW89_PORT_NUM) { + ret = -ENOSPC; + goto release_macid; + } + + rtw89_init_vif(rtwdev, rtwvif, mac_id, port); + + rtw89_core_txq_init(rtwdev, vif->txq); + if (!rtw89_rtwvif_in_list(rtwdev, rtwvif)) list_add_tail(&rtwvif->list, &rtwdev->rtwvifs_list);
- INIT_WORK(&rtwvif->update_beacon_work, rtw89_core_update_beacon_work); + ether_addr_copy(rtwvif->mac_addr, vif->addr); + + rtwvif->offchan = false; + rtwvif->roc.state = RTW89_ROC_IDLE; INIT_DELAYED_WORK(&rtwvif->roc.roc_work, rtw89_roc_work); - rtw89_leave_ps_mode(rtwdev);
rtw89_traffic_stats_init(rtwdev, &rtwvif->stats); - rtw89_vif_type_mapping(vif, false); - rtwvif->port = rtw89_core_acquire_bit_map(rtwdev->hw_port, - RTW89_PORT_NUM); - if (rtwvif->port == RTW89_PORT_NUM) { - ret = -ENOSPC; - list_del_init(&rtwvif->list); - goto out; - } - - rtwvif->bcn_hit_cond = 0; - rtwvif->mac_idx = RTW89_MAC_0; - rtwvif->phy_idx = RTW89_PHY_0; - rtwvif->chanctx_idx = RTW89_CHANCTX_0; - rtwvif->chanctx_assigned = false; - rtwvif->hit_rule = 0; - rtwvif->reg_6ghz_power = RTW89_REG_6GHZ_POWER_DFLT; - ether_addr_copy(rtwvif->mac_addr, vif->addr); - INIT_LIST_HEAD(&rtwvif->general_pkt_list);
- ret = rtw89_mac_add_vif(rtwdev, rtwvif); - if (ret) { - rtw89_core_release_bit_map(rtwdev->hw_port, rtwvif->port); - list_del_init(&rtwvif->list); - goto out; + rtwvif_link = rtw89_vif_set_link(rtwvif, 0); + if (!rtwvif_link) { + ret = -EINVAL; + goto release_port; }
- rtw89_core_txq_init(rtwdev, vif->txq); - - rtw89_btc_ntfy_role_info(rtwdev, rtwvif, NULL, BTC_ROLE_START); + ret = __rtw89_ops_add_iface_link(rtwdev, rtwvif_link); + if (ret) + goto unset_link;
rtw89_recalc_lps(rtwdev); -out: + + mutex_unlock(&rtwdev->mutex); + return 0; + +unset_link: + rtw89_vif_unset_link(rtwvif, 0); +release_port: + list_del_init(&rtwvif->list); + rtw89_core_release_bit_map(rtwdev->hw_port, port); +release_macid: + rtw89_release_mac_id(rtwdev, mac_id); +err: mutex_unlock(&rtwdev->mutex);
return ret; @@ -175,20 +232,35 @@ static void rtw89_ops_remove_interface(struct ieee80211_hw *hw, struct ieee80211_vif *vif) { struct rtw89_dev *rtwdev = hw->priv; - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; + struct rtw89_vif *rtwvif = vif_to_rtwvif(vif); + u8 macid = rtw89_vif_get_main_macid(rtwvif); + u8 port = rtw89_vif_get_main_port(rtwvif); + struct rtw89_vif_link *rtwvif_link;
rtw89_debug(rtwdev, RTW89_DBG_STATE, "remove vif %pM type %d p2p %d\n", vif->addr, vif->type, vif->p2p);
- cancel_work_sync(&rtwvif->update_beacon_work); cancel_delayed_work_sync(&rtwvif->roc.roc_work);
mutex_lock(&rtwdev->mutex); - rtw89_leave_ps_mode(rtwdev); - rtw89_btc_ntfy_role_info(rtwdev, rtwvif, NULL, BTC_ROLE_STOP); - rtw89_mac_remove_vif(rtwdev, rtwvif); - rtw89_core_release_bit_map(rtwdev->hw_port, rtwvif->port); + + rtwvif_link = rtwvif->links[0]; + if (unlikely(!rtwvif_link)) { + rtw89_err(rtwdev, + "%s: rtwvif link (link_id %u) is not active\n", + __func__, 0); + goto bottom; + } + + __rtw89_ops_remove_iface_link(rtwdev, rtwvif_link); + + rtw89_vif_unset_link(rtwvif, 0); + +bottom: list_del_init(&rtwvif->list); + rtw89_core_release_bit_map(rtwdev->hw_port, port); + rtw89_release_mac_id(rtwdev, macid); + rtw89_recalc_lps(rtwdev); rtw89_enter_ips_by_hwflags(rtwdev);
@@ -311,24 +383,30 @@ static const u8 ac_to_fw_idx[IEEE80211_NUM_ACS] = { };
static u8 rtw89_aifsn_to_aifs(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, u8 aifsn) + struct rtw89_vif_link *rtwvif_link, u8 aifsn) { - struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif); const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, - rtwvif->chanctx_idx); + rtwvif_link->chanctx_idx); + struct ieee80211_bss_conf *bss_conf; u8 slot_time; u8 sifs;
- slot_time = vif->bss_conf.use_short_slot ? 9 : 20; + rcu_read_lock(); + + bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true); + slot_time = bss_conf->use_short_slot ? 9 : 20; + + rcu_read_unlock(); + sifs = chan->band_type == RTW89_BAND_2G ? 10 : 16;
return aifsn * slot_time + sifs; }
static void ____rtw89_conf_tx_edca(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, u16 ac) + struct rtw89_vif_link *rtwvif_link, u16 ac) { - struct ieee80211_tx_queue_params *params = &rtwvif->tx_params[ac]; + struct ieee80211_tx_queue_params *params = &rtwvif_link->tx_params[ac]; u32 val; u8 ecw_max, ecw_min; u8 aifs; @@ -336,12 +414,12 @@ static void ____rtw89_conf_tx_edca(struct rtw89_dev *rtwdev, /* 2^ecw - 1 = cw; ecw = log2(cw + 1) */ ecw_max = ilog2(params->cw_max + 1); ecw_min = ilog2(params->cw_min + 1); - aifs = rtw89_aifsn_to_aifs(rtwdev, rtwvif, params->aifs); + aifs = rtw89_aifsn_to_aifs(rtwdev, rtwvif_link, params->aifs); val = FIELD_PREP(FW_EDCA_PARAM_TXOPLMT_MSK, params->txop) | FIELD_PREP(FW_EDCA_PARAM_CWMAX_MSK, ecw_max) | FIELD_PREP(FW_EDCA_PARAM_CWMIN_MSK, ecw_min) | FIELD_PREP(FW_EDCA_PARAM_AIFS_MSK, aifs); - rtw89_fw_h2c_set_edca(rtwdev, rtwvif, ac_to_fw_idx[ac], val); + rtw89_fw_h2c_set_edca(rtwdev, rtwvif_link, ac_to_fw_idx[ac], val); }
#define R_MUEDCA_ACS_PARAM(acs) {R_AX_MUEDCA_ ## acs ## _PARAM_0, \ @@ -355,9 +433,9 @@ static const u32 ac_to_mu_edca_param[IEEE80211_NUM_ACS][RTW89_CHIP_GEN_NUM] = { };
static void ____rtw89_conf_tx_mu_edca(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, u16 ac) + struct rtw89_vif_link *rtwvif_link, u16 ac) { - struct ieee80211_tx_queue_params *params = &rtwvif->tx_params[ac]; + struct ieee80211_tx_queue_params *params = &rtwvif_link->tx_params[ac]; struct ieee80211_he_mu_edca_param_ac_rec *mu_edca; int gen = rtwdev->chip->chip_gen; u8 aifs, aifsn; @@ -370,32 +448,199 @@ static void ____rtw89_conf_tx_mu_edca(struct rtw89_dev *rtwdev,
mu_edca = &params->mu_edca_param_rec; aifsn = FIELD_GET(GENMASK(3, 0), mu_edca->aifsn); - aifs = aifsn ? rtw89_aifsn_to_aifs(rtwdev, rtwvif, aifsn) : 0; + aifs = aifsn ? rtw89_aifsn_to_aifs(rtwdev, rtwvif_link, aifsn) : 0; timer_32us = mu_edca->mu_edca_timer << 8;
val = FIELD_PREP(B_AX_MUEDCA_BE_PARAM_0_TIMER_MASK, timer_32us) | FIELD_PREP(B_AX_MUEDCA_BE_PARAM_0_CW_MASK, mu_edca->ecw_min_max) | FIELD_PREP(B_AX_MUEDCA_BE_PARAM_0_AIFS_MASK, aifs); - reg = rtw89_mac_reg_by_idx(rtwdev, ac_to_mu_edca_param[ac][gen], rtwvif->mac_idx); + reg = rtw89_mac_reg_by_idx(rtwdev, ac_to_mu_edca_param[ac][gen], + rtwvif_link->mac_idx); rtw89_write32(rtwdev, reg, val);
- rtw89_mac_set_hw_muedca_ctrl(rtwdev, rtwvif, true); + rtw89_mac_set_hw_muedca_ctrl(rtwdev, rtwvif_link, true); }
static void __rtw89_conf_tx(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, u16 ac) + struct rtw89_vif_link *rtwvif_link, u16 ac) { - ____rtw89_conf_tx_edca(rtwdev, rtwvif, ac); - ____rtw89_conf_tx_mu_edca(rtwdev, rtwvif, ac); + ____rtw89_conf_tx_edca(rtwdev, rtwvif_link, ac); + ____rtw89_conf_tx_mu_edca(rtwdev, rtwvif_link, ac); }
static void rtw89_conf_tx(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif) + struct rtw89_vif_link *rtwvif_link) { u16 ac;
for (ac = 0; ac < IEEE80211_NUM_ACS; ac++) - __rtw89_conf_tx(rtwdev, rtwvif, ac); + __rtw89_conf_tx(rtwdev, rtwvif_link, ac); +} + +static int __rtw89_ops_sta_add(struct rtw89_dev *rtwdev, + struct ieee80211_vif *vif, + struct ieee80211_sta *sta) +{ + struct rtw89_vif *rtwvif = vif_to_rtwvif(vif); + struct rtw89_sta *rtwsta = sta_to_rtwsta(sta); + struct rtw89_vif_link *rtwvif_link; + struct rtw89_sta_link *rtwsta_link; + bool acquire_macid = false; + u8 macid; + int ret; + int i; + + if (vif->type == NL80211_IFTYPE_STATION && !sta->tdls) { + /* for station mode, assign the mac_id from itself */ + macid = rtw89_vif_get_main_macid(rtwvif); + } else { + macid = rtw89_acquire_mac_id(rtwdev); + if (macid == RTW89_MAX_MAC_ID_NUM) + return -ENOSPC; + + acquire_macid = true; + } + + rtw89_init_sta(rtwdev, rtwvif, rtwsta, macid); + + for (i = 0; i < ARRAY_SIZE(sta->txq); i++) + rtw89_core_txq_init(rtwdev, sta->txq[i]); + + skb_queue_head_init(&rtwsta->roc_queue); + + rtwsta_link = rtw89_sta_set_link(rtwsta, sta->deflink.link_id); + if (!rtwsta_link) { + ret = -EINVAL; + goto err; + } + + rtwvif_link = rtwsta_link->rtwvif_link; + + ret = rtw89_core_sta_link_add(rtwdev, rtwvif_link, rtwsta_link); + if (ret) + goto unset_link; + + if (vif->type == NL80211_IFTYPE_AP || sta->tdls) + rtw89_queue_chanctx_change(rtwdev, RTW89_CHANCTX_REMOTE_STA_CHANGE); + + return 0; + +unset_link: + rtw89_sta_unset_link(rtwsta, sta->deflink.link_id); +err: + if (acquire_macid) + rtw89_release_mac_id(rtwdev, macid); + + return ret; +} + +static int __rtw89_ops_sta_assoc(struct rtw89_dev *rtwdev, + struct ieee80211_vif *vif, + struct ieee80211_sta *sta, + bool station_mode) +{ + struct rtw89_vif *rtwvif = vif_to_rtwvif(vif); + struct rtw89_sta *rtwsta = sta_to_rtwsta(sta); + struct rtw89_vif_link *rtwvif_link; + struct rtw89_sta_link *rtwsta_link; + unsigned int link_id; + int ret; + + rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) { + rtwvif_link = rtwsta_link->rtwvif_link; + + if (station_mode) + rtw89_vif_type_mapping(rtwvif_link, true); + + ret = rtw89_core_sta_link_assoc(rtwdev, rtwvif_link, rtwsta_link); + if (ret) + return ret; + } + + rtwdev->total_sta_assoc++; + if (sta->tdls) + rtwvif->tdls_peer++; + + return 0; +} + +static int __rtw89_ops_sta_disassoc(struct rtw89_dev *rtwdev, + struct ieee80211_vif *vif, + struct ieee80211_sta *sta) +{ + struct rtw89_vif *rtwvif = vif_to_rtwvif(vif); + struct rtw89_sta *rtwsta = sta_to_rtwsta(sta); + struct rtw89_vif_link *rtwvif_link; + struct rtw89_sta_link *rtwsta_link; + unsigned int link_id; + int ret; + + rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) { + rtwvif_link = rtwsta_link->rtwvif_link; + ret = rtw89_core_sta_link_disassoc(rtwdev, rtwvif_link, rtwsta_link); + if (ret) + return ret; + } + + rtwsta->disassoc = true; + + rtwdev->total_sta_assoc--; + if (sta->tdls) + rtwvif->tdls_peer--; + + return 0; +} + +static int __rtw89_ops_sta_disconnect(struct rtw89_dev *rtwdev, + struct ieee80211_vif *vif, + struct ieee80211_sta *sta) +{ + struct rtw89_sta *rtwsta = sta_to_rtwsta(sta); + struct rtw89_vif_link *rtwvif_link; + struct rtw89_sta_link *rtwsta_link; + unsigned int link_id; + int ret; + + rtw89_core_free_sta_pending_ba(rtwdev, sta); + rtw89_core_free_sta_pending_forbid_ba(rtwdev, sta); + rtw89_core_free_sta_pending_roc_tx(rtwdev, sta); + + rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) { + rtwvif_link = rtwsta_link->rtwvif_link; + ret = rtw89_core_sta_link_disconnect(rtwdev, rtwvif_link, rtwsta_link); + if (ret) + return ret; + } + + return 0; 
+} + +static int __rtw89_ops_sta_remove(struct rtw89_dev *rtwdev, + struct ieee80211_vif *vif, + struct ieee80211_sta *sta) +{ + struct rtw89_sta *rtwsta = sta_to_rtwsta(sta); + u8 macid = rtw89_sta_get_main_macid(rtwsta); + struct rtw89_vif_link *rtwvif_link; + struct rtw89_sta_link *rtwsta_link; + unsigned int link_id; + int ret; + + rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) { + rtwvif_link = rtwsta_link->rtwvif_link; + ret = rtw89_core_sta_link_remove(rtwdev, rtwvif_link, rtwsta_link); + if (ret) + return ret; + + rtw89_sta_unset_link(rtwsta, link_id); + } + + if (vif->type == NL80211_IFTYPE_AP || sta->tdls) { + rtw89_release_mac_id(rtwdev, macid); + rtw89_queue_chanctx_change(rtwdev, RTW89_CHANCTX_REMOTE_STA_CHANGE); + } + + return 0; }
static void rtw89_station_mode_sta_assoc(struct rtw89_dev *rtwdev, @@ -412,16 +657,34 @@ static void rtw89_station_mode_sta_assoc(struct rtw89_dev *rtwdev, return; }
- rtw89_vif_type_mapping(vif, true); + __rtw89_ops_sta_assoc(rtwdev, vif, sta, true); +} + +static void __rtw89_ops_bss_link_assoc(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link) +{ + rtw89_phy_set_bss_color(rtwdev, rtwvif_link); + rtw89_chip_cfg_txpwr_ul_tb_offset(rtwdev, rtwvif_link); + rtw89_mac_port_update(rtwdev, rtwvif_link); + rtw89_mac_set_he_obss_narrow_bw_ru(rtwdev, rtwvif_link); +} + +static void __rtw89_ops_bss_assoc(struct rtw89_dev *rtwdev, + struct ieee80211_vif *vif) +{ + struct rtw89_vif *rtwvif = vif_to_rtwvif(vif); + struct rtw89_vif_link *rtwvif_link; + unsigned int link_id;
- rtw89_core_sta_assoc(rtwdev, vif, sta); + rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) + __rtw89_ops_bss_link_assoc(rtwdev, rtwvif_link); }
static void rtw89_ops_vif_cfg_changed(struct ieee80211_hw *hw, struct ieee80211_vif *vif, u64 changed) { struct rtw89_dev *rtwdev = hw->priv; - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; + struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
mutex_lock(&rtwdev->mutex); rtw89_leave_ps_mode(rtwdev); @@ -429,10 +692,7 @@ static void rtw89_ops_vif_cfg_changed(struct ieee80211_hw *hw, if (changed & BSS_CHANGED_ASSOC) { if (vif->cfg.assoc) { rtw89_station_mode_sta_assoc(rtwdev, vif); - rtw89_phy_set_bss_color(rtwdev, vif); - rtw89_chip_cfg_txpwr_ul_tb_offset(rtwdev, vif); - rtw89_mac_port_update(rtwdev, rtwvif); - rtw89_mac_set_he_obss_narrow_bw_ru(rtwdev, vif); + __rtw89_ops_bss_assoc(rtwdev, vif);
rtw89_queue_chanctx_work(rtwdev); } else { @@ -459,39 +719,49 @@ static void rtw89_ops_link_info_changed(struct ieee80211_hw *hw, u64 changed) { struct rtw89_dev *rtwdev = hw->priv; - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; + struct rtw89_vif *rtwvif = vif_to_rtwvif(vif); + struct rtw89_vif_link *rtwvif_link;
mutex_lock(&rtwdev->mutex); rtw89_leave_ps_mode(rtwdev);
+ rtwvif_link = rtwvif->links[conf->link_id]; + if (unlikely(!rtwvif_link)) { + rtw89_err(rtwdev, + "%s: rtwvif link (link_id %u) is not active\n", + __func__, conf->link_id); + goto out; + } + if (changed & BSS_CHANGED_BSSID) { - ether_addr_copy(rtwvif->bssid, conf->bssid); - rtw89_cam_bssid_changed(rtwdev, rtwvif); - rtw89_fw_h2c_cam(rtwdev, rtwvif, NULL, NULL); - WRITE_ONCE(rtwvif->sync_bcn_tsf, 0); + ether_addr_copy(rtwvif_link->bssid, conf->bssid); + rtw89_cam_bssid_changed(rtwdev, rtwvif_link); + rtw89_fw_h2c_cam(rtwdev, rtwvif_link, NULL, NULL); + WRITE_ONCE(rtwvif_link->sync_bcn_tsf, 0); }
if (changed & BSS_CHANGED_BEACON) - rtw89_chip_h2c_update_beacon(rtwdev, rtwvif); + rtw89_chip_h2c_update_beacon(rtwdev, rtwvif_link);
if (changed & BSS_CHANGED_ERP_SLOT) - rtw89_conf_tx(rtwdev, rtwvif); + rtw89_conf_tx(rtwdev, rtwvif_link);
if (changed & BSS_CHANGED_HE_BSS_COLOR) - rtw89_phy_set_bss_color(rtwdev, vif); + rtw89_phy_set_bss_color(rtwdev, rtwvif_link);
if (changed & BSS_CHANGED_MU_GROUPS) rtw89_mac_bf_set_gid_table(rtwdev, vif, conf);
if (changed & BSS_CHANGED_P2P_PS) - rtw89_core_update_p2p_ps(rtwdev, vif); + rtw89_core_update_p2p_ps(rtwdev, rtwvif_link, conf);
if (changed & BSS_CHANGED_CQM) - rtw89_fw_h2c_set_bcn_fltr_cfg(rtwdev, vif, true); + rtw89_fw_h2c_set_bcn_fltr_cfg(rtwdev, rtwvif_link, true);
if (changed & BSS_CHANGED_TPE) - rtw89_reg_6ghz_recalc(rtwdev, rtwvif, true); + rtw89_reg_6ghz_recalc(rtwdev, rtwvif_link, true);
+out: mutex_unlock(&rtwdev->mutex); }
@@ -500,12 +770,21 @@ static int rtw89_ops_start_ap(struct ieee80211_hw *hw, struct ieee80211_bss_conf *link_conf) { struct rtw89_dev *rtwdev = hw->priv; - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; + struct rtw89_vif *rtwvif = vif_to_rtwvif(vif); + struct rtw89_vif_link *rtwvif_link; const struct rtw89_chan *chan;
mutex_lock(&rtwdev->mutex);
- chan = rtw89_chan_get(rtwdev, rtwvif->chanctx_idx); + rtwvif_link = rtwvif->links[link_conf->link_id]; + if (unlikely(!rtwvif_link)) { + rtw89_err(rtwdev, + "%s: rtwvif link (link_id %u) is not active\n", + __func__, link_conf->link_id); + goto out; + } + + chan = rtw89_chan_get(rtwdev, rtwvif_link->chanctx_idx); if (chan->band_type == RTW89_BAND_6G) { mutex_unlock(&rtwdev->mutex); return -EOPNOTSUPP; @@ -514,16 +793,18 @@ static int rtw89_ops_start_ap(struct ieee80211_hw *hw, if (rtwdev->scanning) rtw89_hw_scan_abort(rtwdev, rtwdev->scan_info.scanning_vif);
- ether_addr_copy(rtwvif->bssid, vif->bss_conf.bssid); - rtw89_cam_bssid_changed(rtwdev, rtwvif); - rtw89_mac_port_update(rtwdev, rtwvif); - rtw89_chip_h2c_assoc_cmac_tbl(rtwdev, vif, NULL); - rtw89_fw_h2c_role_maintain(rtwdev, rtwvif, NULL, RTW89_ROLE_TYPE_CHANGE); - rtw89_fw_h2c_join_info(rtwdev, rtwvif, NULL, true); - rtw89_fw_h2c_cam(rtwdev, rtwvif, NULL, NULL); - rtw89_chip_rfk_channel(rtwdev, rtwvif); + ether_addr_copy(rtwvif_link->bssid, link_conf->bssid); + rtw89_cam_bssid_changed(rtwdev, rtwvif_link); + rtw89_mac_port_update(rtwdev, rtwvif_link); + rtw89_chip_h2c_assoc_cmac_tbl(rtwdev, rtwvif_link, NULL); + rtw89_fw_h2c_role_maintain(rtwdev, rtwvif_link, NULL, RTW89_ROLE_TYPE_CHANGE); + rtw89_fw_h2c_join_info(rtwdev, rtwvif_link, NULL, true); + rtw89_fw_h2c_cam(rtwdev, rtwvif_link, NULL, NULL); + rtw89_chip_rfk_channel(rtwdev, rtwvif_link);
rtw89_queue_chanctx_work(rtwdev); + +out: mutex_unlock(&rtwdev->mutex);
return 0; @@ -534,12 +815,24 @@ void rtw89_ops_stop_ap(struct ieee80211_hw *hw, struct ieee80211_vif *vif, struct ieee80211_bss_conf *link_conf) { struct rtw89_dev *rtwdev = hw->priv; - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; + struct rtw89_vif *rtwvif = vif_to_rtwvif(vif); + struct rtw89_vif_link *rtwvif_link;
mutex_lock(&rtwdev->mutex); - rtw89_mac_stop_ap(rtwdev, rtwvif); - rtw89_chip_h2c_assoc_cmac_tbl(rtwdev, vif, NULL); - rtw89_fw_h2c_join_info(rtwdev, rtwvif, NULL, true); + + rtwvif_link = rtwvif->links[link_conf->link_id]; + if (unlikely(!rtwvif_link)) { + rtw89_err(rtwdev, + "%s: rtwvif link (link_id %u) is not active\n", + __func__, link_conf->link_id); + goto out; + } + + rtw89_mac_stop_ap(rtwdev, rtwvif_link); + rtw89_chip_h2c_assoc_cmac_tbl(rtwdev, rtwvif_link, NULL); + rtw89_fw_h2c_join_info(rtwdev, rtwvif_link, NULL, true); + +out: mutex_unlock(&rtwdev->mutex); }
@@ -547,10 +840,13 @@ static int rtw89_ops_set_tim(struct ieee80211_hw *hw, struct ieee80211_sta *sta, bool set) { struct rtw89_dev *rtwdev = hw->priv; - struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv; + struct rtw89_sta *rtwsta = sta_to_rtwsta(sta); struct rtw89_vif *rtwvif = rtwsta->rtwvif; + struct rtw89_vif_link *rtwvif_link; + unsigned int link_id;
- ieee80211_queue_work(rtwdev->hw, &rtwvif->update_beacon_work); + rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) + ieee80211_queue_work(rtwdev->hw, &rtwvif_link->update_beacon_work);
return 0; } @@ -561,15 +857,29 @@ static int rtw89_ops_conf_tx(struct ieee80211_hw *hw, const struct ieee80211_tx_queue_params *params) { struct rtw89_dev *rtwdev = hw->priv; - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; + struct rtw89_vif *rtwvif = vif_to_rtwvif(vif); + struct rtw89_vif_link *rtwvif_link; + int ret = 0;
mutex_lock(&rtwdev->mutex); rtw89_leave_ps_mode(rtwdev); - rtwvif->tx_params[ac] = *params; - __rtw89_conf_tx(rtwdev, rtwvif, ac); + + rtwvif_link = rtwvif->links[link_id]; + if (unlikely(!rtwvif_link)) { + rtw89_err(rtwdev, + "%s: rtwvif link (link_id %u) is not active\n", + __func__, link_id); + ret = -ENOLINK; + goto out; + } + + rtwvif_link->tx_params[ac] = *params; + __rtw89_conf_tx(rtwdev, rtwvif_link, ac); + +out: mutex_unlock(&rtwdev->mutex);
- return 0; + return ret; }
static int __rtw89_ops_sta_state(struct ieee80211_hw *hw, @@ -582,26 +892,26 @@ static int __rtw89_ops_sta_state(struct ieee80211_hw *hw,
if (old_state == IEEE80211_STA_NOTEXIST && new_state == IEEE80211_STA_NONE) - return rtw89_core_sta_add(rtwdev, vif, sta); + return __rtw89_ops_sta_add(rtwdev, vif, sta);
if (old_state == IEEE80211_STA_AUTH && new_state == IEEE80211_STA_ASSOC) { if (vif->type == NL80211_IFTYPE_STATION && !sta->tdls) return 0; /* defer to bss_info_changed to have vif info */ - return rtw89_core_sta_assoc(rtwdev, vif, sta); + return __rtw89_ops_sta_assoc(rtwdev, vif, sta, false); }
if (old_state == IEEE80211_STA_ASSOC && new_state == IEEE80211_STA_AUTH) - return rtw89_core_sta_disassoc(rtwdev, vif, sta); + return __rtw89_ops_sta_disassoc(rtwdev, vif, sta);
if (old_state == IEEE80211_STA_AUTH && new_state == IEEE80211_STA_NONE) - return rtw89_core_sta_disconnect(rtwdev, vif, sta); + return __rtw89_ops_sta_disconnect(rtwdev, vif, sta);
if (old_state == IEEE80211_STA_NONE && new_state == IEEE80211_STA_NOTEXIST) - return rtw89_core_sta_remove(rtwdev, vif, sta); + return __rtw89_ops_sta_remove(rtwdev, vif, sta);
return 0; } @@ -667,7 +977,8 @@ static int rtw89_ops_ampdu_action(struct ieee80211_hw *hw, { struct rtw89_dev *rtwdev = hw->priv; struct ieee80211_sta *sta = params->sta; - struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv; + struct rtw89_sta *rtwsta = sta_to_rtwsta(sta); + struct rtw89_vif *rtwvif = vif_to_rtwvif(vif); u16 tid = params->tid; struct ieee80211_txq *txq = sta->txq[tid]; struct rtw89_txq *rtwtxq = (struct rtw89_txq *)txq->drv_priv; @@ -681,7 +992,7 @@ static int rtw89_ops_ampdu_action(struct ieee80211_hw *hw, mutex_lock(&rtwdev->mutex); clear_bit(RTW89_TXQ_F_AMPDU, &rtwtxq->flags); clear_bit(tid, rtwsta->ampdu_map); - rtw89_chip_h2c_ampdu_cmac_tbl(rtwdev, vif, sta); + rtw89_chip_h2c_ampdu_cmac_tbl(rtwdev, rtwvif, rtwsta); mutex_unlock(&rtwdev->mutex); ieee80211_stop_tx_ba_cb_irqsafe(vif, sta->addr, tid); break; @@ -692,7 +1003,7 @@ static int rtw89_ops_ampdu_action(struct ieee80211_hw *hw, rtwsta->ampdu_params[tid].amsdu = params->amsdu; set_bit(tid, rtwsta->ampdu_map); rtw89_leave_ps_mode(rtwdev); - rtw89_chip_h2c_ampdu_cmac_tbl(rtwdev, vif, sta); + rtw89_chip_h2c_ampdu_cmac_tbl(rtwdev, rtwvif, rtwsta); mutex_unlock(&rtwdev->mutex); break; case IEEE80211_AMPDU_RX_START: @@ -731,9 +1042,14 @@ static void rtw89_ops_sta_statistics(struct ieee80211_hw *hw, struct ieee80211_sta *sta, struct station_info *sinfo) { - struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv; + struct rtw89_sta *rtwsta = sta_to_rtwsta(sta); + struct rtw89_sta_link *rtwsta_link;
- sinfo->txrate = rtwsta->ra_report.txrate; + rtwsta_link = rtw89_sta_get_link_inst(rtwsta, 0); + if (unlikely(!rtwsta_link)) + return; + + sinfo->txrate = rtwsta_link->ra_report.txrate; sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_BITRATE); }
@@ -743,7 +1059,7 @@ void __rtw89_drop_packets(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif) struct rtw89_vif *rtwvif;
if (vif) { - rtwvif = (struct rtw89_vif *)vif->drv_priv; + rtwvif = vif_to_rtwvif(vif); rtw89_mac_pkt_drop_vif(rtwdev, rtwvif); } else { rtw89_for_each_rtwvif(rtwdev, rtwvif) @@ -777,14 +1093,20 @@ struct rtw89_iter_bitrate_mask_data { static void rtw89_ra_mask_info_update_iter(void *data, struct ieee80211_sta *sta) { struct rtw89_iter_bitrate_mask_data *br_data = data; - struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv; - struct ieee80211_vif *vif = rtwvif_to_vif(rtwsta->rtwvif); + struct rtw89_sta *rtwsta = sta_to_rtwsta(sta); + struct rtw89_vif *rtwvif = rtwsta->rtwvif; + struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif); + struct rtw89_sta_link *rtwsta_link; + unsigned int link_id;
if (vif != br_data->vif || vif->p2p) return;
- rtwsta->use_cfg_mask = true; - rtwsta->mask = *br_data->mask; + rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) { + rtwsta_link->use_cfg_mask = true; + rtwsta_link->mask = *br_data->mask; + } + rtw89_phy_ra_update_sta(br_data->rtwdev, sta, IEEE80211_RC_SUPP_RATES_CHANGED); }
@@ -854,10 +1176,20 @@ static void rtw89_ops_sw_scan_start(struct ieee80211_hw *hw, const u8 *mac_addr) { struct rtw89_dev *rtwdev = hw->priv; - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; + struct rtw89_vif *rtwvif = vif_to_rtwvif(vif); + struct rtw89_vif_link *rtwvif_link;
mutex_lock(&rtwdev->mutex); - rtw89_core_scan_start(rtwdev, rtwvif, mac_addr, false); + + rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0); + if (unlikely(!rtwvif_link)) { + rtw89_err(rtwdev, "sw scan start: find no link on HW-0\n"); + goto out; + } + + rtw89_core_scan_start(rtwdev, rtwvif_link, mac_addr, false); + +out: mutex_unlock(&rtwdev->mutex); }
@@ -865,9 +1197,20 @@ static void rtw89_ops_sw_scan_complete(struct ieee80211_hw *hw, struct ieee80211_vif *vif) { struct rtw89_dev *rtwdev = hw->priv; + struct rtw89_vif *rtwvif = vif_to_rtwvif(vif); + struct rtw89_vif_link *rtwvif_link;
mutex_lock(&rtwdev->mutex); - rtw89_core_scan_complete(rtwdev, vif, false); + + rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0); + if (unlikely(!rtwvif_link)) { + rtw89_err(rtwdev, "sw scan complete: find no link on HW-0\n"); + goto out; + } + + rtw89_core_scan_complete(rtwdev, rtwvif_link, false); + +out: mutex_unlock(&rtwdev->mutex); }
@@ -884,22 +1227,35 @@ static int rtw89_ops_hw_scan(struct ieee80211_hw *hw, struct ieee80211_vif *vif, struct ieee80211_scan_request *req) { struct rtw89_dev *rtwdev = hw->priv; - struct rtw89_vif *rtwvif = vif_to_rtwvif_safe(vif); - int ret = 0; + struct rtw89_vif *rtwvif = vif_to_rtwvif(vif); + struct rtw89_vif_link *rtwvif_link; + int ret;
if (!RTW89_CHK_FW_FEATURE(SCAN_OFFLOAD, &rtwdev->fw)) return 1;
- if (rtwdev->scanning || rtwvif->offchan) - return -EBUSY; - mutex_lock(&rtwdev->mutex); - rtw89_hw_scan_start(rtwdev, vif, req); - ret = rtw89_hw_scan_offload(rtwdev, vif, true); + + if (rtwdev->scanning || rtwvif->offchan) { + ret = -EBUSY; + goto out; + } + + rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0); + if (unlikely(!rtwvif_link)) { + rtw89_err(rtwdev, "hw scan: find no link on HW-0\n"); + ret = -ENOLINK; + goto out; + } + + rtw89_hw_scan_start(rtwdev, rtwvif_link, req); + ret = rtw89_hw_scan_offload(rtwdev, rtwvif_link, true); if (ret) { - rtw89_hw_scan_abort(rtwdev, vif); + rtw89_hw_scan_abort(rtwdev, rtwvif_link); rtw89_err(rtwdev, "HW scan failed with status: %d\n", ret); } + +out: mutex_unlock(&rtwdev->mutex);
return ret; @@ -909,6 +1265,8 @@ static void rtw89_ops_cancel_hw_scan(struct ieee80211_hw *hw, struct ieee80211_vif *vif) { struct rtw89_dev *rtwdev = hw->priv; + struct rtw89_vif *rtwvif = vif_to_rtwvif(vif); + struct rtw89_vif_link *rtwvif_link;
if (!RTW89_CHK_FW_FEATURE(SCAN_OFFLOAD, &rtwdev->fw)) return; @@ -917,7 +1275,16 @@ static void rtw89_ops_cancel_hw_scan(struct ieee80211_hw *hw, return;
mutex_lock(&rtwdev->mutex); - rtw89_hw_scan_abort(rtwdev, vif); + + rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0); + if (unlikely(!rtwvif_link)) { + rtw89_err(rtwdev, "cancel hw scan: find no link on HW-0\n"); + goto out; + } + + rtw89_hw_scan_abort(rtwdev, rtwvif_link); + +out: mutex_unlock(&rtwdev->mutex); }
@@ -970,11 +1337,24 @@ static int rtw89_ops_assign_vif_chanctx(struct ieee80211_hw *hw, struct ieee80211_chanctx_conf *ctx) { struct rtw89_dev *rtwdev = hw->priv; - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; + struct rtw89_vif *rtwvif = vif_to_rtwvif(vif); + struct rtw89_vif_link *rtwvif_link; int ret;
mutex_lock(&rtwdev->mutex); - ret = rtw89_chanctx_ops_assign_vif(rtwdev, rtwvif, ctx); + + rtwvif_link = rtwvif->links[link_conf->link_id]; + if (unlikely(!rtwvif_link)) { + rtw89_err(rtwdev, + "%s: rtwvif link (link_id %u) is not active\n", + __func__, link_conf->link_id); + ret = -ENOLINK; + goto out; + } + + ret = rtw89_chanctx_ops_assign_vif(rtwdev, rtwvif_link, ctx); + +out: mutex_unlock(&rtwdev->mutex);
return ret; @@ -986,10 +1366,21 @@ static void rtw89_ops_unassign_vif_chanctx(struct ieee80211_hw *hw, struct ieee80211_chanctx_conf *ctx) { struct rtw89_dev *rtwdev = hw->priv; - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; + struct rtw89_vif *rtwvif = vif_to_rtwvif(vif); + struct rtw89_vif_link *rtwvif_link;
mutex_lock(&rtwdev->mutex); - rtw89_chanctx_ops_unassign_vif(rtwdev, rtwvif, ctx); + + rtwvif_link = rtwvif->links[link_conf->link_id]; + if (unlikely(!rtwvif_link)) { + mutex_unlock(&rtwdev->mutex); + rtw89_err(rtwdev, + "%s: rtwvif link (link_id %u) is not active\n", + __func__, link_conf->link_id); + return; + } + + rtw89_chanctx_ops_unassign_vif(rtwdev, rtwvif_link, ctx); mutex_unlock(&rtwdev->mutex); }
@@ -1003,7 +1394,7 @@ static int rtw89_ops_remain_on_channel(struct ieee80211_hw *hw, struct rtw89_vif *rtwvif = vif_to_rtwvif_safe(vif); struct rtw89_roc *roc = &rtwvif->roc;
- if (!vif) + if (!rtwvif) return -EINVAL;
mutex_lock(&rtwdev->mutex); @@ -1053,8 +1444,8 @@ static int rtw89_ops_cancel_remain_on_channel(struct ieee80211_hw *hw, static void rtw89_set_tid_config_iter(void *data, struct ieee80211_sta *sta) { struct cfg80211_tid_config *tid_config = data; - struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv; - struct rtw89_dev *rtwdev = rtwsta->rtwvif->rtwdev; + struct rtw89_sta *rtwsta = sta_to_rtwsta(sta); + struct rtw89_dev *rtwdev = rtwsta->rtwdev;
rtw89_core_set_tid_config(rtwdev, sta, tid_config); } diff --git a/drivers/net/wireless/realtek/rtw89/mac_be.c b/drivers/net/wireless/realtek/rtw89/mac_be.c index 31f0a5225b11..f22eaa83297f 100644 --- a/drivers/net/wireless/realtek/rtw89/mac_be.c +++ b/drivers/net/wireless/realtek/rtw89/mac_be.c @@ -2091,13 +2091,13 @@ static int rtw89_mac_init_bfee_be(struct rtw89_dev *rtwdev, u8 mac_idx) }
static int rtw89_mac_set_csi_para_reg_be(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif, - struct ieee80211_sta *sta) + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link) { - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; u8 nc = 1, nr = 3, ng = 0, cb = 1, cs = 1, ldpc_en = 1, stbc_en = 1; - u8 mac_idx = rtwvif->mac_idx; - u8 port_sel = rtwvif->port; + struct ieee80211_link_sta *link_sta; + u8 mac_idx = rtwvif_link->mac_idx; + u8 port_sel = rtwvif_link->port; u8 sound_dim = 3, t; u8 *phy_cap; u32 reg; @@ -2108,7 +2108,10 @@ static int rtw89_mac_set_csi_para_reg_be(struct rtw89_dev *rtwdev, if (ret) return ret;
- phy_cap = sta->deflink.he_cap.he_cap_elem.phy_cap_info; + rcu_read_lock(); + + link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true); + phy_cap = link_sta->he_cap.he_cap_elem.phy_cap_info;
if ((phy_cap[3] & IEEE80211_HE_PHY_CAP3_SU_BEAMFORMER) || (phy_cap[4] & IEEE80211_HE_PHY_CAP4_MU_BEAMFORMER)) { @@ -2119,11 +2122,11 @@ static int rtw89_mac_set_csi_para_reg_be(struct rtw89_dev *rtwdev, sound_dim = min(sound_dim, t); }
- if ((sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_MU_BEAMFORMER_CAPABLE) || - (sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE)) { - ldpc_en &= !!(sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_RXLDPC); - stbc_en &= !!(sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_RXSTBC_MASK); - t = u32_get_bits(sta->deflink.vht_cap.cap, + if ((link_sta->vht_cap.cap & IEEE80211_VHT_CAP_MU_BEAMFORMER_CAPABLE) || + (link_sta->vht_cap.cap & IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE)) { + ldpc_en &= !!(link_sta->vht_cap.cap & IEEE80211_VHT_CAP_RXLDPC); + stbc_en &= !!(link_sta->vht_cap.cap & IEEE80211_VHT_CAP_RXSTBC_MASK); + t = u32_get_bits(link_sta->vht_cap.cap, IEEE80211_VHT_CAP_SOUNDING_DIMENSIONS_MASK); sound_dim = min(sound_dim, t); } @@ -2131,6 +2134,8 @@ static int rtw89_mac_set_csi_para_reg_be(struct rtw89_dev *rtwdev, nc = min(nc, sound_dim); nr = min(nr, sound_dim);
+ rcu_read_unlock(); + reg = rtw89_mac_reg_by_idx(rtwdev, R_BE_TRXPTCL_RESP_CSI_CTRL_0, mac_idx); rtw89_write32_set(rtwdev, reg, B_BE_BFMEE_BFPARAM_SEL);
@@ -2155,12 +2160,12 @@ static int rtw89_mac_set_csi_para_reg_be(struct rtw89_dev *rtwdev, }
static int rtw89_mac_csi_rrsc_be(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif, - struct ieee80211_sta *sta) + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link) { - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; u32 rrsc = BIT(RTW89_MAC_BF_RRSC_6M) | BIT(RTW89_MAC_BF_RRSC_24M); - u8 mac_idx = rtwvif->mac_idx; + struct ieee80211_link_sta *link_sta; + u8 mac_idx = rtwvif_link->mac_idx; int ret; u32 reg;
@@ -2168,22 +2173,28 @@ static int rtw89_mac_csi_rrsc_be(struct rtw89_dev *rtwdev, if (ret) return ret;
- if (sta->deflink.he_cap.has_he) { + rcu_read_lock(); + + link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true); + + if (link_sta->he_cap.has_he) { rrsc |= (BIT(RTW89_MAC_BF_RRSC_HE_MSC0) | BIT(RTW89_MAC_BF_RRSC_HE_MSC3) | BIT(RTW89_MAC_BF_RRSC_HE_MSC5)); } - if (sta->deflink.vht_cap.vht_supported) { + if (link_sta->vht_cap.vht_supported) { rrsc |= (BIT(RTW89_MAC_BF_RRSC_VHT_MSC0) | BIT(RTW89_MAC_BF_RRSC_VHT_MSC3) | BIT(RTW89_MAC_BF_RRSC_VHT_MSC5)); } - if (sta->deflink.ht_cap.ht_supported) { + if (link_sta->ht_cap.ht_supported) { rrsc |= (BIT(RTW89_MAC_BF_RRSC_HT_MSC0) | BIT(RTW89_MAC_BF_RRSC_HT_MSC3) | BIT(RTW89_MAC_BF_RRSC_HT_MSC5)); }
+ rcu_read_unlock(); + reg = rtw89_mac_reg_by_idx(rtwdev, R_BE_TRXPTCL_RESP_CSI_CTRL_0, mac_idx); rtw89_write32_set(rtwdev, reg, B_BE_BFMEE_BFPARAM_SEL); rtw89_write32_clr(rtwdev, reg, B_BE_BFMEE_CSI_FORCE_RETE_EN); @@ -2195,17 +2206,25 @@ static int rtw89_mac_csi_rrsc_be(struct rtw89_dev *rtwdev, }
static void rtw89_mac_bf_assoc_be(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif, - struct ieee80211_sta *sta) + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link) { - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; + struct ieee80211_link_sta *link_sta; + bool has_beamformer_cap; + + rcu_read_lock(); + + link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true); + has_beamformer_cap = rtw89_sta_has_beamformer_cap(link_sta); + + rcu_read_unlock();
- if (rtw89_sta_has_beamformer_cap(sta)) { + if (has_beamformer_cap) { rtw89_debug(rtwdev, RTW89_DBG_BF, "initialize bfee for new association\n"); - rtw89_mac_init_bfee_be(rtwdev, rtwvif->mac_idx); - rtw89_mac_set_csi_para_reg_be(rtwdev, vif, sta); - rtw89_mac_csi_rrsc_be(rtwdev, vif, sta); + rtw89_mac_init_bfee_be(rtwdev, rtwvif_link->mac_idx); + rtw89_mac_set_csi_para_reg_be(rtwdev, rtwvif_link, rtwsta_link); + rtw89_mac_csi_rrsc_be(rtwdev, rtwvif_link, rtwsta_link); } }
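The three BE beamformee helpers above stop reading sta->deflink directly and instead snapshot capabilities from an RCU-protected ieee80211_link_sta. The access pattern, sketched (the boolean passed to rtw89_sta_rcu_dereference_link() is copied verbatim from the hunks; its exact meaning is not spelled out here):

	struct ieee80211_link_sta *link_sta;
	bool has_bfer;

	/* Copy what is needed inside the RCU read-side section ... */
	rcu_read_lock();
	link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true);
	has_bfer = rtw89_sta_has_beamformer_cap(link_sta);
	rcu_read_unlock();

	/* ... then act on the snapshot outside of it. */
	if (has_bfer)
		rtw89_mac_init_bfee_be(rtwdev, rtwvif_link->mac_idx);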
diff --git a/drivers/net/wireless/realtek/rtw89/phy.c b/drivers/net/wireless/realtek/rtw89/phy.c index c7165e757842..4b47b45f897c 100644 --- a/drivers/net/wireless/realtek/rtw89/phy.c +++ b/drivers/net/wireless/realtek/rtw89/phy.c @@ -75,12 +75,12 @@ static u64 get_mcs_ra_mask(u16 mcs_map, u8 highest_mcs, u8 gap) return ra_mask; }
-static u64 get_he_ra_mask(struct ieee80211_sta *sta) +static u64 get_he_ra_mask(struct ieee80211_link_sta *link_sta) { - struct ieee80211_sta_he_cap cap = sta->deflink.he_cap; + struct ieee80211_sta_he_cap cap = link_sta->he_cap; u16 mcs_map;
- switch (sta->deflink.bandwidth) { + switch (link_sta->bandwidth) { case IEEE80211_STA_RX_BW_160: if (cap.he_cap_elem.phy_cap_info[0] & IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_80PLUS80_MHZ_IN_5G) @@ -118,14 +118,14 @@ static u64 get_eht_mcs_ra_mask(u8 *max_nss, u8 start_mcs, u8 n_nss) return mask; }
-static u64 get_eht_ra_mask(struct ieee80211_sta *sta) +static u64 get_eht_ra_mask(struct ieee80211_link_sta *link_sta) { - struct ieee80211_sta_eht_cap *eht_cap = &sta->deflink.eht_cap; + struct ieee80211_sta_eht_cap *eht_cap = &link_sta->eht_cap; struct ieee80211_eht_mcs_nss_supp_20mhz_only *mcs_nss_20mhz; struct ieee80211_eht_mcs_nss_supp_bw *mcs_nss; - u8 *he_phy_cap = sta->deflink.he_cap.he_cap_elem.phy_cap_info; + u8 *he_phy_cap = link_sta->he_cap.he_cap_elem.phy_cap_info;
- switch (sta->deflink.bandwidth) { + switch (link_sta->bandwidth) { case IEEE80211_STA_RX_BW_320: mcs_nss = &eht_cap->eht_mcs_nss_supp.bw._320; /* MCS 9, 11, 13 */ @@ -195,15 +195,16 @@ static u64 rtw89_phy_ra_mask_recover(u64 ra_mask, u64 ra_mask_bak) return ra_mask; }
-static u64 rtw89_phy_ra_mask_cfg(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta, +static u64 rtw89_phy_ra_mask_cfg(struct rtw89_dev *rtwdev, + struct rtw89_sta_link *rtwsta_link, + struct ieee80211_link_sta *link_sta, const struct rtw89_chan *chan) { - struct ieee80211_sta *sta = rtwsta_to_sta(rtwsta); - struct cfg80211_bitrate_mask *mask = &rtwsta->mask; + struct cfg80211_bitrate_mask *mask = &rtwsta_link->mask; enum nl80211_band band; u64 cfg_mask;
- if (!rtwsta->use_cfg_mask) + if (!rtwsta_link->use_cfg_mask) return -1;
switch (chan->band_type) { @@ -227,17 +228,17 @@ static u64 rtw89_phy_ra_mask_cfg(struct rtw89_dev *rtwdev, struct rtw89_sta *rtw return -1; }
- if (sta->deflink.he_cap.has_he) { + if (link_sta->he_cap.has_he) { cfg_mask |= u64_encode_bits(mask->control[band].he_mcs[0], RA_MASK_HE_1SS_RATES); cfg_mask |= u64_encode_bits(mask->control[band].he_mcs[1], RA_MASK_HE_2SS_RATES); - } else if (sta->deflink.vht_cap.vht_supported) { + } else if (link_sta->vht_cap.vht_supported) { cfg_mask |= u64_encode_bits(mask->control[band].vht_mcs[0], RA_MASK_VHT_1SS_RATES); cfg_mask |= u64_encode_bits(mask->control[band].vht_mcs[1], RA_MASK_VHT_2SS_RATES); - } else if (sta->deflink.ht_cap.ht_supported) { + } else if (link_sta->ht_cap.ht_supported) { cfg_mask |= u64_encode_bits(mask->control[band].ht_mcs[0], RA_MASK_HT_1SS_RATES); cfg_mask |= u64_encode_bits(mask->control[band].ht_mcs[1], @@ -261,17 +262,17 @@ rtw89_ra_mask_eht_rates[4] = {RA_MASK_EHT_1SS_RATES, RA_MASK_EHT_2SS_RATES, RA_MASK_EHT_3SS_RATES, RA_MASK_EHT_4SS_RATES};
static void rtw89_phy_ra_gi_ltf(struct rtw89_dev *rtwdev, - struct rtw89_sta *rtwsta, + struct rtw89_sta_link *rtwsta_link, const struct rtw89_chan *chan, bool *fix_giltf_en, u8 *fix_giltf) { - struct cfg80211_bitrate_mask *mask = &rtwsta->mask; + struct cfg80211_bitrate_mask *mask = &rtwsta_link->mask; u8 band = chan->band_type; enum nl80211_band nl_band = rtw89_hw_to_nl80211_band(band); u8 he_gi = mask->control[nl_band].he_gi; u8 he_ltf = mask->control[nl_band].he_ltf;
- if (!rtwsta->use_cfg_mask) + if (!rtwsta_link->use_cfg_mask) return;
if (he_ltf == 2 && he_gi == 2) { @@ -295,17 +296,17 @@ static void rtw89_phy_ra_gi_ltf(struct rtw89_dev *rtwdev, }
static void rtw89_phy_ra_sta_update(struct rtw89_dev *rtwdev, - struct ieee80211_sta *sta, bool csi) + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link, + struct ieee80211_link_sta *link_sta, + bool p2p, bool csi) { - struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv; - struct rtw89_vif *rtwvif = rtwsta->rtwvif; - struct rtw89_phy_rate_pattern *rate_pattern = &rtwvif->rate_pattern; - struct rtw89_ra_info *ra = &rtwsta->ra; + struct rtw89_phy_rate_pattern *rate_pattern = &rtwvif_link->rate_pattern; + struct rtw89_ra_info *ra = &rtwsta_link->ra; const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, - rtwvif->chanctx_idx); - struct ieee80211_vif *vif = rtwvif_to_vif(rtwsta->rtwvif); + rtwvif_link->chanctx_idx); const u64 *high_rate_masks = rtw89_ra_mask_ht_rates; - u8 rssi = ewma_rssi_read(&rtwsta->avg_rssi); + u8 rssi = ewma_rssi_read(&rtwsta_link->avg_rssi); u64 ra_mask = 0; u64 ra_mask_bak; u8 mode = 0; @@ -320,65 +321,65 @@ static void rtw89_phy_ra_sta_update(struct rtw89_dev *rtwdev,
memset(ra, 0, sizeof(*ra)); /* Set the ra mask from sta's capability */ - if (sta->deflink.eht_cap.has_eht) { + if (link_sta->eht_cap.has_eht) { mode |= RTW89_RA_MODE_EHT; - ra_mask |= get_eht_ra_mask(sta); + ra_mask |= get_eht_ra_mask(link_sta); high_rate_masks = rtw89_ra_mask_eht_rates; - } else if (sta->deflink.he_cap.has_he) { + } else if (link_sta->he_cap.has_he) { mode |= RTW89_RA_MODE_HE; csi_mode = RTW89_RA_RPT_MODE_HE; - ra_mask |= get_he_ra_mask(sta); + ra_mask |= get_he_ra_mask(link_sta); high_rate_masks = rtw89_ra_mask_he_rates; - if (sta->deflink.he_cap.he_cap_elem.phy_cap_info[2] & + if (link_sta->he_cap.he_cap_elem.phy_cap_info[2] & IEEE80211_HE_PHY_CAP2_STBC_RX_UNDER_80MHZ) stbc_en = 1; - if (sta->deflink.he_cap.he_cap_elem.phy_cap_info[1] & + if (link_sta->he_cap.he_cap_elem.phy_cap_info[1] & IEEE80211_HE_PHY_CAP1_LDPC_CODING_IN_PAYLOAD) ldpc_en = 1; - rtw89_phy_ra_gi_ltf(rtwdev, rtwsta, chan, &fix_giltf_en, &fix_giltf); - } else if (sta->deflink.vht_cap.vht_supported) { - u16 mcs_map = le16_to_cpu(sta->deflink.vht_cap.vht_mcs.rx_mcs_map); + rtw89_phy_ra_gi_ltf(rtwdev, rtwsta_link, chan, &fix_giltf_en, &fix_giltf); + } else if (link_sta->vht_cap.vht_supported) { + u16 mcs_map = le16_to_cpu(link_sta->vht_cap.vht_mcs.rx_mcs_map);
mode |= RTW89_RA_MODE_VHT; csi_mode = RTW89_RA_RPT_MODE_VHT; /* MCS9 (non-20MHz), MCS8, MCS7 */ - if (sta->deflink.bandwidth == IEEE80211_STA_RX_BW_20) + if (link_sta->bandwidth == IEEE80211_STA_RX_BW_20) ra_mask |= get_mcs_ra_mask(mcs_map, 8, 1); else ra_mask |= get_mcs_ra_mask(mcs_map, 9, 1); high_rate_masks = rtw89_ra_mask_vht_rates; - if (sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_RXSTBC_MASK) + if (link_sta->vht_cap.cap & IEEE80211_VHT_CAP_RXSTBC_MASK) stbc_en = 1; - if (sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_RXLDPC) + if (link_sta->vht_cap.cap & IEEE80211_VHT_CAP_RXLDPC) ldpc_en = 1; - } else if (sta->deflink.ht_cap.ht_supported) { + } else if (link_sta->ht_cap.ht_supported) { mode |= RTW89_RA_MODE_HT; csi_mode = RTW89_RA_RPT_MODE_HT; - ra_mask |= ((u64)sta->deflink.ht_cap.mcs.rx_mask[3] << 48) | - ((u64)sta->deflink.ht_cap.mcs.rx_mask[2] << 36) | - ((u64)sta->deflink.ht_cap.mcs.rx_mask[1] << 24) | - ((u64)sta->deflink.ht_cap.mcs.rx_mask[0] << 12); + ra_mask |= ((u64)link_sta->ht_cap.mcs.rx_mask[3] << 48) | + ((u64)link_sta->ht_cap.mcs.rx_mask[2] << 36) | + ((u64)link_sta->ht_cap.mcs.rx_mask[1] << 24) | + ((u64)link_sta->ht_cap.mcs.rx_mask[0] << 12); high_rate_masks = rtw89_ra_mask_ht_rates; - if (sta->deflink.ht_cap.cap & IEEE80211_HT_CAP_RX_STBC) + if (link_sta->ht_cap.cap & IEEE80211_HT_CAP_RX_STBC) stbc_en = 1; - if (sta->deflink.ht_cap.cap & IEEE80211_HT_CAP_LDPC_CODING) + if (link_sta->ht_cap.cap & IEEE80211_HT_CAP_LDPC_CODING) ldpc_en = 1; }
switch (chan->band_type) { case RTW89_BAND_2G: - ra_mask |= sta->deflink.supp_rates[NL80211_BAND_2GHZ]; - if (sta->deflink.supp_rates[NL80211_BAND_2GHZ] & 0xf) + ra_mask |= link_sta->supp_rates[NL80211_BAND_2GHZ]; + if (link_sta->supp_rates[NL80211_BAND_2GHZ] & 0xf) mode |= RTW89_RA_MODE_CCK; - if (sta->deflink.supp_rates[NL80211_BAND_2GHZ] & 0xff0) + if (link_sta->supp_rates[NL80211_BAND_2GHZ] & 0xff0) mode |= RTW89_RA_MODE_OFDM; break; case RTW89_BAND_5G: - ra_mask |= (u64)sta->deflink.supp_rates[NL80211_BAND_5GHZ] << 4; + ra_mask |= (u64)link_sta->supp_rates[NL80211_BAND_5GHZ] << 4; mode |= RTW89_RA_MODE_OFDM; break; case RTW89_BAND_6G: - ra_mask |= (u64)sta->deflink.supp_rates[NL80211_BAND_6GHZ] << 4; + ra_mask |= (u64)link_sta->supp_rates[NL80211_BAND_6GHZ] << 4; mode |= RTW89_RA_MODE_OFDM; break; default: @@ -405,48 +406,48 @@ static void rtw89_phy_ra_sta_update(struct rtw89_dev *rtwdev, ra_mask &= rtw89_phy_ra_mask_rssi(rtwdev, rssi, 0);
ra_mask = rtw89_phy_ra_mask_recover(ra_mask, ra_mask_bak); - ra_mask &= rtw89_phy_ra_mask_cfg(rtwdev, rtwsta, chan); + ra_mask &= rtw89_phy_ra_mask_cfg(rtwdev, rtwsta_link, link_sta, chan);
- switch (sta->deflink.bandwidth) { + switch (link_sta->bandwidth) { case IEEE80211_STA_RX_BW_160: bw_mode = RTW89_CHANNEL_WIDTH_160; - sgi = sta->deflink.vht_cap.vht_supported && - (sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_SHORT_GI_160); + sgi = link_sta->vht_cap.vht_supported && + (link_sta->vht_cap.cap & IEEE80211_VHT_CAP_SHORT_GI_160); break; case IEEE80211_STA_RX_BW_80: bw_mode = RTW89_CHANNEL_WIDTH_80; - sgi = sta->deflink.vht_cap.vht_supported && - (sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_SHORT_GI_80); + sgi = link_sta->vht_cap.vht_supported && + (link_sta->vht_cap.cap & IEEE80211_VHT_CAP_SHORT_GI_80); break; case IEEE80211_STA_RX_BW_40: bw_mode = RTW89_CHANNEL_WIDTH_40; - sgi = sta->deflink.ht_cap.ht_supported && - (sta->deflink.ht_cap.cap & IEEE80211_HT_CAP_SGI_40); + sgi = link_sta->ht_cap.ht_supported && + (link_sta->ht_cap.cap & IEEE80211_HT_CAP_SGI_40); break; default: bw_mode = RTW89_CHANNEL_WIDTH_20; - sgi = sta->deflink.ht_cap.ht_supported && - (sta->deflink.ht_cap.cap & IEEE80211_HT_CAP_SGI_20); + sgi = link_sta->ht_cap.ht_supported && + (link_sta->ht_cap.cap & IEEE80211_HT_CAP_SGI_20); break; }
- if (sta->deflink.he_cap.he_cap_elem.phy_cap_info[3] & + if (link_sta->he_cap.he_cap_elem.phy_cap_info[3] & IEEE80211_HE_PHY_CAP3_DCM_MAX_CONST_RX_16_QAM) ra->dcm_cap = 1;
- if (rate_pattern->enable && !vif->p2p) { - ra_mask = rtw89_phy_ra_mask_cfg(rtwdev, rtwsta, chan); + if (rate_pattern->enable && !p2p) { + ra_mask = rtw89_phy_ra_mask_cfg(rtwdev, rtwsta_link, link_sta, chan); ra_mask &= rate_pattern->ra_mask; mode = rate_pattern->ra_mode; }
ra->bw_cap = bw_mode; - ra->er_cap = rtwsta->er_cap; + ra->er_cap = rtwsta_link->er_cap; ra->mode_ctrl = mode; - ra->macid = rtwsta->mac_id; + ra->macid = rtwsta_link->mac_id; ra->stbc_cap = stbc_en; ra->ldpc_cap = ldpc_en; - ra->ss_num = min(sta->deflink.rx_nss, rtwdev->hal.tx_nss) - 1; + ra->ss_num = min(link_sta->rx_nss, rtwdev->hal.tx_nss) - 1; ra->en_sgi = sgi; ra->ra_mask = ra_mask; ra->fix_giltf_en = fix_giltf_en; @@ -458,20 +459,29 @@ static void rtw89_phy_ra_sta_update(struct rtw89_dev *rtwdev, ra->fixed_csi_rate_en = false; ra->ra_csi_rate_en = true; ra->cr_tbl_sel = false; - ra->band_num = rtwvif->phy_idx; + ra->band_num = rtwvif_link->phy_idx; ra->csi_bw = bw_mode; ra->csi_gi_ltf = RTW89_GILTF_LGI_4XHE32; ra->csi_mcs_ss_idx = 5; ra->csi_mode = csi_mode; }
-void rtw89_phy_ra_update_sta(struct rtw89_dev *rtwdev, struct ieee80211_sta *sta, - u32 changed) +static void __rtw89_phy_ra_update_sta(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, + struct rtw89_sta_link *rtwsta_link, + u32 changed) { - struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv; - struct rtw89_ra_info *ra = &rtwsta->ra; + struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link); + struct rtw89_ra_info *ra = &rtwsta_link->ra; + struct ieee80211_link_sta *link_sta;
- rtw89_phy_ra_sta_update(rtwdev, sta, false); + rcu_read_lock(); + + link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, false); + rtw89_phy_ra_sta_update(rtwdev, rtwvif_link, rtwsta_link, + link_sta, vif->p2p, false); + + rcu_read_unlock();
if (changed & IEEE80211_RC_SUPP_RATES_CHANGED) ra->upd_mask = 1; @@ -489,6 +499,20 @@ void rtw89_phy_ra_update_sta(struct rtw89_dev *rtwdev, struct ieee80211_sta *sta rtw89_fw_h2c_ra(rtwdev, ra, false); }
+void rtw89_phy_ra_update_sta(struct rtw89_dev *rtwdev, struct ieee80211_sta *sta, + u32 changed) +{ + struct rtw89_sta *rtwsta = sta_to_rtwsta(sta); + struct rtw89_vif_link *rtwvif_link; + struct rtw89_sta_link *rtwsta_link; + unsigned int link_id; + + rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) { + rtwvif_link = rtwsta_link->rtwvif_link; + __rtw89_phy_ra_update_sta(rtwdev, rtwvif_link, rtwsta_link, changed); + } +} + static bool __check_rate_pattern(struct rtw89_phy_rate_pattern *next, u16 rate_base, u64 ra_mask, u8 ra_mode, u32 rate_ctrl, u32 ctrl_skip, bool force) @@ -523,15 +547,15 @@ static bool __check_rate_pattern(struct rtw89_phy_rate_pattern *next, [RTW89_CHIP_BE] = RTW89_HW_RATE_V1_ ## rate, \ }
-void rtw89_phy_rate_pattern_vif(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif, - const struct cfg80211_bitrate_mask *mask) +static +void __rtw89_phy_rate_pattern_vif(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, + const struct cfg80211_bitrate_mask *mask) { struct ieee80211_supported_band *sband; - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; struct rtw89_phy_rate_pattern next_pattern = {0}; const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, - rtwvif->chanctx_idx); + rtwvif_link->chanctx_idx); static const u16 hw_rate_he[][RTW89_CHIP_GEN_NUM] = { RTW89_HW_RATE_BY_CHIP_GEN(HE_NSS1_MCS0), RTW89_HW_RATE_BY_CHIP_GEN(HE_NSS2_MCS0), @@ -600,7 +624,7 @@ void rtw89_phy_rate_pattern_vif(struct rtw89_dev *rtwdev, if (!next_pattern.enable) goto out;
- rtwvif->rate_pattern = next_pattern; + rtwvif_link->rate_pattern = next_pattern; rtw89_debug(rtwdev, RTW89_DBG_RA, "configure pattern: rate 0x%x, mask 0x%llx, mode 0x%x\n", next_pattern.rate, @@ -609,10 +633,22 @@ void rtw89_phy_rate_pattern_vif(struct rtw89_dev *rtwdev, return;
out: - rtwvif->rate_pattern.enable = false; + rtwvif_link->rate_pattern.enable = false; rtw89_debug(rtwdev, RTW89_DBG_RA, "unset rate pattern\n"); }
+void rtw89_phy_rate_pattern_vif(struct rtw89_dev *rtwdev, + struct ieee80211_vif *vif, + const struct cfg80211_bitrate_mask *mask) +{ + struct rtw89_vif *rtwvif = vif_to_rtwvif(vif); + struct rtw89_vif_link *rtwvif_link; + unsigned int link_id; + + rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) + __rtw89_phy_rate_pattern_vif(rtwdev, rtwvif_link, mask); +} + static void rtw89_phy_ra_update_sta_iter(void *data, struct ieee80211_sta *sta) { struct rtw89_dev *rtwdev = (struct rtw89_dev *)data; @@ -627,14 +663,24 @@ void rtw89_phy_ra_update(struct rtw89_dev *rtwdev) rtwdev); }
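rtw89_phy_ra_update_sta() and rtw89_phy_rate_pattern_vif() above both follow the wrapper idiom this series uses throughout: the mac80211-facing entry point keeps its old signature and fans out to a per-link double-underscore helper. A generic sketch (rtw89_foo/__rtw89_foo are hypothetical names):

static void __rtw89_foo(struct rtw89_dev *rtwdev,
			struct rtw89_vif_link *rtwvif_link,
			struct rtw89_sta_link *rtwsta_link)
{
	/* per-link body */
}

void rtw89_foo(struct rtw89_dev *rtwdev, struct ieee80211_sta *sta)
{
	struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
	struct rtw89_sta_link *rtwsta_link;
	unsigned int link_id;

	rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id)
		__rtw89_foo(rtwdev, rtwsta_link->rtwvif_link, rtwsta_link);
}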
-void rtw89_phy_ra_assoc(struct rtw89_dev *rtwdev, struct ieee80211_sta *sta) +void rtw89_phy_ra_assoc(struct rtw89_dev *rtwdev, struct rtw89_sta_link *rtwsta_link) { - struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv; - struct rtw89_ra_info *ra = &rtwsta->ra; - u8 rssi = ewma_rssi_read(&rtwsta->avg_rssi) >> RSSI_FACTOR; - bool csi = rtw89_sta_has_beamformer_cap(sta); + struct rtw89_vif_link *rtwvif_link = rtwsta_link->rtwvif_link; + struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link); + struct rtw89_ra_info *ra = &rtwsta_link->ra; + u8 rssi = ewma_rssi_read(&rtwsta_link->avg_rssi) >> RSSI_FACTOR; + struct ieee80211_link_sta *link_sta; + bool csi; + + rcu_read_lock();
- rtw89_phy_ra_sta_update(rtwdev, sta, csi); + link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true); + csi = rtw89_sta_has_beamformer_cap(link_sta); + + rtw89_phy_ra_sta_update(rtwdev, rtwvif_link, rtwsta_link, + link_sta, vif->p2p, csi); + + rcu_read_unlock();
if (rssi > 40) ra->init_rate_lv = 1; @@ -2553,14 +2599,14 @@ struct rtw89_phy_iter_ra_data { struct sk_buff *c2h; };
-static void rtw89_phy_c2h_ra_rpt_iter(void *data, struct ieee80211_sta *sta) +static void __rtw89_phy_c2h_ra_rpt_iter(struct rtw89_sta_link *rtwsta_link, + struct ieee80211_link_sta *link_sta, + struct rtw89_phy_iter_ra_data *ra_data) { - struct rtw89_phy_iter_ra_data *ra_data = (struct rtw89_phy_iter_ra_data *)data; struct rtw89_dev *rtwdev = ra_data->rtwdev; - struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv; const struct rtw89_c2h_ra_rpt *c2h = (const struct rtw89_c2h_ra_rpt *)ra_data->c2h->data; - struct rtw89_ra_report *ra_report = &rtwsta->ra_report; + struct rtw89_ra_report *ra_report = &rtwsta_link->ra_report; const struct rtw89_chip_info *chip = rtwdev->chip; bool format_v1 = chip->chip_gen == RTW89_CHIP_BE; u8 mode, rate, bw, giltf, mac_id; @@ -2570,7 +2616,7 @@ static void rtw89_phy_c2h_ra_rpt_iter(void *data, struct ieee80211_sta *sta) u8 t;
mac_id = le32_get_bits(c2h->w2, RTW89_C2H_RA_RPT_W2_MACID); - if (mac_id != rtwsta->mac_id) + if (mac_id != rtwsta_link->mac_id) return;
rate = le32_get_bits(c2h->w3, RTW89_C2H_RA_RPT_W3_MCSNSS); @@ -2661,8 +2707,26 @@ static void rtw89_phy_c2h_ra_rpt_iter(void *data, struct ieee80211_sta *sta) u16_encode_bits(mode, RTW89_HW_RATE_MASK_MOD) | u16_encode_bits(rate, RTW89_HW_RATE_MASK_VAL); ra_report->might_fallback_legacy = mcs <= 2; - sta->deflink.agg.max_rc_amsdu_len = get_max_amsdu_len(rtwdev, ra_report); - rtwsta->max_agg_wait = sta->deflink.agg.max_rc_amsdu_len / 1500 - 1; + link_sta->agg.max_rc_amsdu_len = get_max_amsdu_len(rtwdev, ra_report); + rtwsta_link->max_agg_wait = link_sta->agg.max_rc_amsdu_len / 1500 - 1; +} + +static void rtw89_phy_c2h_ra_rpt_iter(void *data, struct ieee80211_sta *sta) +{ + struct rtw89_phy_iter_ra_data *ra_data = (struct rtw89_phy_iter_ra_data *)data; + struct rtw89_sta *rtwsta = sta_to_rtwsta(sta); + struct rtw89_sta_link *rtwsta_link; + struct ieee80211_link_sta *link_sta; + unsigned int link_id; + + rcu_read_lock(); + + rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) { + link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, false); + __rtw89_phy_c2h_ra_rpt_iter(rtwsta_link, link_sta, ra_data); + } + + rcu_read_unlock(); }
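When the per-link body itself needs the ieee80211_link_sta, as in the RA-report iterator above, the whole link loop runs under a single RCU read-side section. Sketch:

	struct rtw89_sta_link *rtwsta_link;
	struct ieee80211_link_sta *link_sta;
	unsigned int link_id;

	rcu_read_lock();

	rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) {
		link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, false);
		/* consume link_sta / rtwsta_link for this link */
	}

	rcu_read_unlock();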
static void @@ -4290,33 +4354,33 @@ void rtw89_phy_cfo_parse(struct rtw89_dev *rtwdev, s16 cfo_val, cfo->packet_count++; }
-void rtw89_phy_ul_tb_assoc(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif) +void rtw89_phy_ul_tb_assoc(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link) { const struct rtw89_chip_info *chip = rtwdev->chip; const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, - rtwvif->chanctx_idx); + rtwvif_link->chanctx_idx); struct rtw89_phy_ul_tb_info *ul_tb_info = &rtwdev->ul_tb_info;
if (!chip->ul_tb_waveform_ctrl) return;
- rtwvif->def_tri_idx = + rtwvif_link->def_tri_idx = rtw89_phy_read32_mask(rtwdev, R_DCFO_OPT, B_TXSHAPE_TRIANGULAR_CFG);
if (chip->chip_id == RTL8852B && rtwdev->hal.cv > CHIP_CBV) - rtwvif->dyn_tb_bedge_en = false; + rtwvif_link->dyn_tb_bedge_en = false; else if (chan->band_type >= RTW89_BAND_5G && chan->band_width >= RTW89_CHANNEL_WIDTH_40) - rtwvif->dyn_tb_bedge_en = true; + rtwvif_link->dyn_tb_bedge_en = true; else - rtwvif->dyn_tb_bedge_en = false; + rtwvif_link->dyn_tb_bedge_en = false;
rtw89_debug(rtwdev, RTW89_DBG_UL_TB, "[ULTB] def_if_bandedge=%d, def_tri_idx=%d\n", - ul_tb_info->def_if_bandedge, rtwvif->def_tri_idx); + ul_tb_info->def_if_bandedge, rtwvif_link->def_tri_idx); rtw89_debug(rtwdev, RTW89_DBG_UL_TB, "[ULTB] dyn_tb_begde_en=%d, dyn_tb_tri_en=%d\n", - rtwvif->dyn_tb_bedge_en, ul_tb_info->dyn_tb_tri_en); + rtwvif_link->dyn_tb_bedge_en, ul_tb_info->dyn_tb_tri_en); }
struct rtw89_phy_ul_tb_check_data { @@ -4338,7 +4402,7 @@ struct rtw89_phy_power_diff { };
static void rtw89_phy_ofdma_power_diff(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif) + struct rtw89_vif_link *rtwvif_link) { static const struct rtw89_phy_power_diff table[2] = { {0x0, 0x0, 0x0, 0x0, 0xf4, 0x3, 0x3}, @@ -4350,13 +4414,13 @@ static void rtw89_phy_ofdma_power_diff(struct rtw89_dev *rtwdev, if (!rtwdev->chip->ul_tb_pwr_diff) return;
- if (rtwvif->pwr_diff_en == rtwvif->pre_pwr_diff_en) { - rtwvif->pwr_diff_en = false; + if (rtwvif_link->pwr_diff_en == rtwvif_link->pre_pwr_diff_en) { + rtwvif_link->pwr_diff_en = false; return; }
- rtwvif->pre_pwr_diff_en = rtwvif->pwr_diff_en; - param = &table[rtwvif->pwr_diff_en]; + rtwvif_link->pre_pwr_diff_en = rtwvif_link->pwr_diff_en; + param = &table[rtwvif_link->pwr_diff_en];
rtw89_phy_write32_mask(rtwdev, R_Q_MATRIX_00, B_Q_MATRIX_00_REAL, param->q_00); @@ -4365,32 +4429,32 @@ static void rtw89_phy_ofdma_power_diff(struct rtw89_dev *rtwdev, rtw89_phy_write32_mask(rtwdev, R_CUSTOMIZE_Q_MATRIX, B_CUSTOMIZE_Q_MATRIX_EN, param->q_matrix_en);
- reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_PWR_UL_TB_1T, rtwvif->mac_idx); + reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_PWR_UL_TB_1T, rtwvif_link->mac_idx); rtw89_write32_mask(rtwdev, reg, B_AX_PWR_UL_TB_1T_NORM_BW160, param->ultb_1t_norm_160);
- reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_PWR_UL_TB_2T, rtwvif->mac_idx); + reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_PWR_UL_TB_2T, rtwvif_link->mac_idx); rtw89_write32_mask(rtwdev, reg, B_AX_PWR_UL_TB_2T_NORM_BW160, param->ultb_2t_norm_160);
- reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_PATH_COM1, rtwvif->mac_idx); + reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_PATH_COM1, rtwvif_link->mac_idx); rtw89_write32_mask(rtwdev, reg, B_AX_PATH_COM1_NORM_1STS, param->com1_norm_1sts);
- reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_PATH_COM2, rtwvif->mac_idx); + reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_PATH_COM2, rtwvif_link->mac_idx); rtw89_write32_mask(rtwdev, reg, B_AX_PATH_COM2_RESP_1STS_PATH, param->com2_resp_1sts_path); }
static void rtw89_phy_ul_tb_ctrl_check(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, + struct rtw89_vif_link *rtwvif_link, struct rtw89_phy_ul_tb_check_data *ul_tb_data) { struct rtw89_traffic_stats *stats = &rtwdev->stats; - struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif); + struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
- if (rtwvif->wifi_role != RTW89_WIFI_ROLE_STATION) + if (rtwvif_link->wifi_role != RTW89_WIFI_ROLE_STATION) return;
if (!vif->cfg.assoc) @@ -4403,11 +4467,11 @@ void rtw89_phy_ul_tb_ctrl_check(struct rtw89_dev *rtwdev, ul_tb_data->low_tf_client = true;
ul_tb_data->valid = true; - ul_tb_data->def_tri_idx = rtwvif->def_tri_idx; - ul_tb_data->dyn_tb_bedge_en = rtwvif->dyn_tb_bedge_en; + ul_tb_data->def_tri_idx = rtwvif_link->def_tri_idx; + ul_tb_data->dyn_tb_bedge_en = rtwvif_link->dyn_tb_bedge_en; }
- rtw89_phy_ofdma_power_diff(rtwdev, rtwvif); + rtw89_phy_ofdma_power_diff(rtwdev, rtwvif_link); }
static void rtw89_phy_ul_tb_waveform_ctrl(struct rtw89_dev *rtwdev, @@ -4453,7 +4517,9 @@ void rtw89_phy_ul_tb_ctrl_track(struct rtw89_dev *rtwdev) { const struct rtw89_chip_info *chip = rtwdev->chip; struct rtw89_phy_ul_tb_check_data ul_tb_data = {}; + struct rtw89_vif_link *rtwvif_link; struct rtw89_vif *rtwvif; + unsigned int link_id;
if (!chip->ul_tb_waveform_ctrl && !chip->ul_tb_pwr_diff) return; @@ -4462,7 +4528,8 @@ void rtw89_phy_ul_tb_ctrl_track(struct rtw89_dev *rtwdev) return;
rtw89_for_each_rtwvif(rtwdev, rtwvif) - rtw89_phy_ul_tb_ctrl_check(rtwdev, rtwvif, &ul_tb_data); + rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) + rtw89_phy_ul_tb_ctrl_check(rtwdev, rtwvif_link, &ul_tb_data);
if (!ul_tb_data.valid) return; @@ -4626,30 +4693,42 @@ struct rtw89_phy_iter_rssi_data { bool rssi_changed; };
-static void rtw89_phy_stat_rssi_update_iter(void *data, - struct ieee80211_sta *sta) +static +void __rtw89_phy_stat_rssi_update_iter(struct rtw89_sta_link *rtwsta_link, + struct rtw89_phy_iter_rssi_data *rssi_data) { - struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv; - struct rtw89_phy_iter_rssi_data *rssi_data = - (struct rtw89_phy_iter_rssi_data *)data; struct rtw89_phy_ch_info *ch_info = rssi_data->ch_info; unsigned long rssi_curr;
- rssi_curr = ewma_rssi_read(&rtwsta->avg_rssi); + rssi_curr = ewma_rssi_read(&rtwsta_link->avg_rssi);
if (rssi_curr < ch_info->rssi_min) { ch_info->rssi_min = rssi_curr; - ch_info->rssi_min_macid = rtwsta->mac_id; + ch_info->rssi_min_macid = rtwsta_link->mac_id; }
- if (rtwsta->prev_rssi == 0) { - rtwsta->prev_rssi = rssi_curr; - } else if (abs((int)rtwsta->prev_rssi - (int)rssi_curr) > (3 << RSSI_FACTOR)) { - rtwsta->prev_rssi = rssi_curr; + if (rtwsta_link->prev_rssi == 0) { + rtwsta_link->prev_rssi = rssi_curr; + } else if (abs((int)rtwsta_link->prev_rssi - (int)rssi_curr) > + (3 << RSSI_FACTOR)) { + rtwsta_link->prev_rssi = rssi_curr; rssi_data->rssi_changed = true; } }
+static void rtw89_phy_stat_rssi_update_iter(void *data, + struct ieee80211_sta *sta) +{ + struct rtw89_phy_iter_rssi_data *rssi_data = + (struct rtw89_phy_iter_rssi_data *)data; + struct rtw89_sta *rtwsta = sta_to_rtwsta(sta); + struct rtw89_sta_link *rtwsta_link; + unsigned int link_id; + + rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) + __rtw89_phy_stat_rssi_update_iter(rtwsta_link, rssi_data); +} + static void rtw89_phy_stat_rssi_update(struct rtw89_dev *rtwdev) { struct rtw89_phy_iter_rssi_data rssi_data = {0}; @@ -5753,26 +5832,15 @@ void rtw89_phy_dig(struct rtw89_dev *rtwdev) rtw89_phy_dig_sdagc_follow_pagc_config(rtwdev, false); }
-static void rtw89_phy_tx_path_div_sta_iter(void *data, struct ieee80211_sta *sta) +static void __rtw89_phy_tx_path_div_sta_iter(struct rtw89_dev *rtwdev, + struct rtw89_sta_link *rtwsta_link) { - struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv; - struct rtw89_dev *rtwdev = rtwsta->rtwdev; - struct rtw89_vif *rtwvif = rtwsta->rtwvif; struct rtw89_hal *hal = &rtwdev->hal; - bool *done = data; u8 rssi_a, rssi_b; u32 candidate;
- if (rtwvif->wifi_role != RTW89_WIFI_ROLE_STATION || sta->tdls) - return; - - if (*done) - return; - - *done = true; - - rssi_a = ewma_rssi_read(&rtwsta->rssi[RF_PATH_A]); - rssi_b = ewma_rssi_read(&rtwsta->rssi[RF_PATH_B]); + rssi_a = ewma_rssi_read(&rtwsta_link->rssi[RF_PATH_A]); + rssi_b = ewma_rssi_read(&rtwsta_link->rssi[RF_PATH_B]);
if (rssi_a > rssi_b + RTW89_TX_DIV_RSSI_RAW_TH) candidate = RF_A; @@ -5785,7 +5853,7 @@ static void rtw89_phy_tx_path_div_sta_iter(void *data, struct ieee80211_sta *sta return;
hal->antenna_tx = candidate; - rtw89_fw_h2c_txpath_cmac_tbl(rtwdev, rtwsta); + rtw89_fw_h2c_txpath_cmac_tbl(rtwdev, rtwsta_link);
if (hal->antenna_tx == RF_A) { rtw89_phy_write32_mask(rtwdev, R_P0_RFMODE, B_P0_RFMODE_MUX, 0x12); @@ -5796,6 +5864,37 @@ static void rtw89_phy_tx_path_div_sta_iter(void *data, struct ieee80211_sta *sta } }
+static void rtw89_phy_tx_path_div_sta_iter(void *data, struct ieee80211_sta *sta) +{ + struct rtw89_sta *rtwsta = sta_to_rtwsta(sta); + struct rtw89_dev *rtwdev = rtwsta->rtwdev; + struct rtw89_vif *rtwvif = rtwsta->rtwvif; + struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif); + struct rtw89_vif_link *rtwvif_link; + struct rtw89_sta_link *rtwsta_link; + unsigned int link_id; + bool *done = data; + + if (WARN(ieee80211_vif_is_mld(vif), "MLD mix path_div\n")) + return; + + if (sta->tdls) + return; + + if (*done) + return; + + rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) { + rtwvif_link = rtwsta_link->rtwvif_link; + if (rtwvif_link->wifi_role != RTW89_WIFI_ROLE_STATION) + continue; + + *done = true; + __rtw89_phy_tx_path_div_sta_iter(rtwdev, rtwsta_link); + return; + } +} + void rtw89_phy_tx_path_div_track(struct rtw89_dev *rtwdev) { struct rtw89_hal *hal = &rtwdev->hal; @@ -6002,17 +6101,27 @@ void rtw89_phy_dm_init(struct rtw89_dev *rtwdev) rtw89_chip_cfg_txrx_path(rtwdev); }
-void rtw89_phy_set_bss_color(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif) +void rtw89_phy_set_bss_color(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link) { + struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link); const struct rtw89_chip_info *chip = rtwdev->chip; const struct rtw89_reg_def *bss_clr_vld = &chip->bss_clr_vld; enum rtw89_phy_idx phy_idx = RTW89_PHY_0; + struct ieee80211_bss_conf *bss_conf; u8 bss_color;
- if (!vif->bss_conf.he_support || !vif->cfg.assoc) + rcu_read_lock(); + + bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true); + if (!bss_conf->he_support || !vif->cfg.assoc) { + rcu_read_unlock(); return; + } + + bss_color = bss_conf->he_bss_color.color;
- bss_color = vif->bss_conf.he_bss_color.color; + rcu_read_unlock();
rtw89_phy_write32_idx(rtwdev, bss_clr_vld->addr, bss_clr_vld->mask, 0x1, phy_idx); diff --git a/drivers/net/wireless/realtek/rtw89/phy.h b/drivers/net/wireless/realtek/rtw89/phy.h index 6dd8ec46939a..7e335c02ee6f 100644 --- a/drivers/net/wireless/realtek/rtw89/phy.h +++ b/drivers/net/wireless/realtek/rtw89/phy.h @@ -892,7 +892,7 @@ void rtw89_phy_set_txpwr_limit_ru(struct rtw89_dev *rtwdev, phy->set_txpwr_limit_ru(rtwdev, chan, phy_idx); }
-void rtw89_phy_ra_assoc(struct rtw89_dev *rtwdev, struct ieee80211_sta *sta); +void rtw89_phy_ra_assoc(struct rtw89_dev *rtwdev, struct rtw89_sta_link *rtwsta_link); void rtw89_phy_ra_update(struct rtw89_dev *rtwdev); void rtw89_phy_ra_update_sta(struct rtw89_dev *rtwdev, struct ieee80211_sta *sta, u32 changed); @@ -953,11 +953,12 @@ void rtw89_phy_antdiv_parse(struct rtw89_dev *rtwdev, struct rtw89_rx_phy_ppdu *phy_ppdu); void rtw89_phy_antdiv_track(struct rtw89_dev *rtwdev); void rtw89_phy_antdiv_work(struct work_struct *work); -void rtw89_phy_set_bss_color(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif); +void rtw89_phy_set_bss_color(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link); void rtw89_phy_tssi_ctrl_set_bandedge_cfg(struct rtw89_dev *rtwdev, enum rtw89_mac_idx mac_idx, enum rtw89_tssi_bandedge_cfg bandedge_cfg); -void rtw89_phy_ul_tb_assoc(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif); +void rtw89_phy_ul_tb_assoc(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link); void rtw89_phy_ul_tb_ctrl_track(struct rtw89_dev *rtwdev); u8 rtw89_encode_chan_idx(struct rtw89_dev *rtwdev, u8 central_ch, u8 band); void rtw89_decode_chan_idx(struct rtw89_dev *rtwdev, u8 chan_idx, diff --git a/drivers/net/wireless/realtek/rtw89/ps.c b/drivers/net/wireless/realtek/rtw89/ps.c index aebd6404f802..c1c12abc2ea9 100644 --- a/drivers/net/wireless/realtek/rtw89/ps.c +++ b/drivers/net/wireless/realtek/rtw89/ps.c @@ -62,9 +62,9 @@ static void rtw89_ps_power_mode_change(struct rtw89_dev *rtwdev, bool enter) rtw89_mac_power_mode_change(rtwdev, enter); }
-void __rtw89_enter_ps_mode(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif) +void __rtw89_enter_ps_mode(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link) { - if (rtwvif->wifi_role == RTW89_WIFI_ROLE_P2P_CLIENT) + if (rtwvif_link->wifi_role == RTW89_WIFI_ROLE_P2P_CLIENT) return;
if (!rtwdev->ps_mode) @@ -85,23 +85,25 @@ void __rtw89_leave_ps_mode(struct rtw89_dev *rtwdev) rtw89_ps_power_mode_change(rtwdev, false); }
-static void __rtw89_enter_lps(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif) +static void __rtw89_enter_lps(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link) { struct rtw89_lps_parm lps_param = { - .macid = rtwvif->mac_id, + .macid = rtwvif_link->mac_id, .psmode = RTW89_MAC_AX_PS_MODE_LEGACY, .lastrpwm = RTW89_LAST_RPWM_PS, };
rtw89_btc_ntfy_radio_state(rtwdev, BTC_RFCTRL_FW_CTRL); rtw89_fw_h2c_lps_parm(rtwdev, &lps_param); - rtw89_fw_h2c_lps_ch_info(rtwdev, rtwvif); + rtw89_fw_h2c_lps_ch_info(rtwdev, rtwvif_link); }
-static void __rtw89_leave_lps(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif) +static void __rtw89_leave_lps(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link) { struct rtw89_lps_parm lps_param = { - .macid = rtwvif->mac_id, + .macid = rtwvif_link->mac_id, .psmode = RTW89_MAC_AX_PS_MODE_ACTIVE, .lastrpwm = RTW89_LAST_RPWM_ACTIVE, }; @@ -109,7 +111,7 @@ static void __rtw89_leave_lps(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif rtw89_fw_h2c_lps_parm(rtwdev, &lps_param); rtw89_fw_leave_lps_check(rtwdev, 0); rtw89_btc_ntfy_radio_state(rtwdev, BTC_RFCTRL_WL_ON); - rtw89_chip_digital_pwr_comp(rtwdev, rtwvif->phy_idx); + rtw89_chip_digital_pwr_comp(rtwdev, rtwvif_link->phy_idx); }
void rtw89_leave_ps_mode(struct rtw89_dev *rtwdev) @@ -119,7 +121,7 @@ void rtw89_leave_ps_mode(struct rtw89_dev *rtwdev) __rtw89_leave_ps_mode(rtwdev); }
-void rtw89_enter_lps(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, +void rtw89_enter_lps(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, bool ps_mode) { lockdep_assert_held(&rtwdev->mutex); @@ -127,23 +129,26 @@ void rtw89_enter_lps(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, if (test_and_set_bit(RTW89_FLAG_LEISURE_PS, rtwdev->flags)) return;
- __rtw89_enter_lps(rtwdev, rtwvif); + __rtw89_enter_lps(rtwdev, rtwvif_link); if (ps_mode) - __rtw89_enter_ps_mode(rtwdev, rtwvif); + __rtw89_enter_ps_mode(rtwdev, rtwvif_link); }
-static void rtw89_leave_lps_vif(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif) +static void rtw89_leave_lps_vif(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link) { - if (rtwvif->wifi_role != RTW89_WIFI_ROLE_STATION && - rtwvif->wifi_role != RTW89_WIFI_ROLE_P2P_CLIENT) + if (rtwvif_link->wifi_role != RTW89_WIFI_ROLE_STATION && + rtwvif_link->wifi_role != RTW89_WIFI_ROLE_P2P_CLIENT) return;
- __rtw89_leave_lps(rtwdev, rtwvif); + __rtw89_leave_lps(rtwdev, rtwvif_link); }
void rtw89_leave_lps(struct rtw89_dev *rtwdev) { + struct rtw89_vif_link *rtwvif_link; struct rtw89_vif *rtwvif; + unsigned int link_id;
lockdep_assert_held(&rtwdev->mutex);
@@ -153,12 +158,15 @@ void rtw89_leave_lps(struct rtw89_dev *rtwdev) __rtw89_leave_ps_mode(rtwdev);
rtw89_for_each_rtwvif(rtwdev, rtwvif) - rtw89_leave_lps_vif(rtwdev, rtwvif); + rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) + rtw89_leave_lps_vif(rtwdev, rtwvif_link); }
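rtw89_leave_lps() above, rtw89_enter_ips()/rtw89_leave_ips() below, and the regd.c recalc loops all switch from walking vifs to walking every link of every vif. The nested traversal, as used in rtw89_leave_lps():

	struct rtw89_vif_link *rtwvif_link;
	struct rtw89_vif *rtwvif;
	unsigned int link_id;

	rtw89_for_each_rtwvif(rtwdev, rtwvif)
		rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
			rtw89_leave_lps_vif(rtwdev, rtwvif_link);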
void rtw89_enter_ips(struct rtw89_dev *rtwdev) { + struct rtw89_vif_link *rtwvif_link; struct rtw89_vif *rtwvif; + unsigned int link_id;
set_bit(RTW89_FLAG_INACTIVE_PS, rtwdev->flags);
@@ -166,14 +174,17 @@ void rtw89_enter_ips(struct rtw89_dev *rtwdev) return;
rtw89_for_each_rtwvif(rtwdev, rtwvif) - rtw89_mac_vif_deinit(rtwdev, rtwvif); + rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) + rtw89_mac_vif_deinit(rtwdev, rtwvif_link);
rtw89_core_stop(rtwdev); }
void rtw89_leave_ips(struct rtw89_dev *rtwdev) { + struct rtw89_vif_link *rtwvif_link; struct rtw89_vif *rtwvif; + unsigned int link_id; int ret;
if (test_bit(RTW89_FLAG_POWERON, rtwdev->flags)) @@ -186,7 +197,8 @@ void rtw89_leave_ips(struct rtw89_dev *rtwdev) rtw89_set_channel(rtwdev);
rtw89_for_each_rtwvif(rtwdev, rtwvif) - rtw89_mac_vif_init(rtwdev, rtwvif); + rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) + rtw89_mac_vif_init(rtwdev, rtwvif_link);
clear_bit(RTW89_FLAG_INACTIVE_PS, rtwdev->flags); } @@ -197,48 +209,50 @@ void rtw89_set_coex_ctrl_lps(struct rtw89_dev *rtwdev, bool btc_ctrl) rtw89_leave_lps(rtwdev); }
-static void rtw89_tsf32_toggle(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, +static void rtw89_tsf32_toggle(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, enum rtw89_p2pps_action act) { if (act == RTW89_P2P_ACT_UPDATE || act == RTW89_P2P_ACT_REMOVE) return;
if (act == RTW89_P2P_ACT_INIT) - rtw89_fw_h2c_tsf32_toggle(rtwdev, rtwvif, true); + rtw89_fw_h2c_tsf32_toggle(rtwdev, rtwvif_link, true); else if (act == RTW89_P2P_ACT_TERMINATE) - rtw89_fw_h2c_tsf32_toggle(rtwdev, rtwvif, false); + rtw89_fw_h2c_tsf32_toggle(rtwdev, rtwvif_link, false); }
static void rtw89_p2p_disable_all_noa(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif) + struct rtw89_vif_link *rtwvif_link, + struct ieee80211_bss_conf *bss_conf) { - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; enum rtw89_p2pps_action act; u8 noa_id;
- if (rtwvif->last_noa_nr == 0) + if (rtwvif_link->last_noa_nr == 0) return;
- for (noa_id = 0; noa_id < rtwvif->last_noa_nr; noa_id++) { - if (noa_id == rtwvif->last_noa_nr - 1) + for (noa_id = 0; noa_id < rtwvif_link->last_noa_nr; noa_id++) { + if (noa_id == rtwvif_link->last_noa_nr - 1) act = RTW89_P2P_ACT_TERMINATE; else act = RTW89_P2P_ACT_REMOVE; - rtw89_tsf32_toggle(rtwdev, rtwvif, act); - rtw89_fw_h2c_p2p_act(rtwdev, vif, NULL, act, noa_id); + rtw89_tsf32_toggle(rtwdev, rtwvif_link, act); + rtw89_fw_h2c_p2p_act(rtwdev, rtwvif_link, bss_conf, + NULL, act, noa_id); } }
static void rtw89_p2p_update_noa(struct rtw89_dev *rtwdev, - struct ieee80211_vif *vif) + struct rtw89_vif_link *rtwvif_link, + struct ieee80211_bss_conf *bss_conf) { - struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv; struct ieee80211_p2p_noa_desc *desc; enum rtw89_p2pps_action act; u8 noa_id;
for (noa_id = 0; noa_id < RTW89_P2P_MAX_NOA_NUM; noa_id++) { - desc = &vif->bss_conf.p2p_noa_attr.desc[noa_id]; + desc = &bss_conf->p2p_noa_attr.desc[noa_id]; if (!desc->count || !desc->duration) break;
@@ -246,16 +260,19 @@ static void rtw89_p2p_update_noa(struct rtw89_dev *rtwdev, act = RTW89_P2P_ACT_INIT; else act = RTW89_P2P_ACT_UPDATE; - rtw89_tsf32_toggle(rtwdev, rtwvif, act); - rtw89_fw_h2c_p2p_act(rtwdev, vif, desc, act, noa_id); + rtw89_tsf32_toggle(rtwdev, rtwvif_link, act); + rtw89_fw_h2c_p2p_act(rtwdev, rtwvif_link, bss_conf, + desc, act, noa_id); } - rtwvif->last_noa_nr = noa_id; + rtwvif_link->last_noa_nr = noa_id; }
-void rtw89_process_p2p_ps(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif) +void rtw89_process_p2p_ps(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, + struct ieee80211_bss_conf *bss_conf) { - rtw89_p2p_disable_all_noa(rtwdev, vif); - rtw89_p2p_update_noa(rtwdev, vif); + rtw89_p2p_disable_all_noa(rtwdev, rtwvif_link, bss_conf); + rtw89_p2p_update_noa(rtwdev, rtwvif_link, bss_conf); }
void rtw89_recalc_lps(struct rtw89_dev *rtwdev) @@ -265,6 +282,12 @@ void rtw89_recalc_lps(struct rtw89_dev *rtwdev) enum rtw89_entity_mode mode; int count = 0;
+ /* FIXME: Fix rtw89_enter_lps() and __rtw89_enter_ps_mode() + * to take MLO cases into account before doing the following. + */ + if (rtwdev->support_mlo) + goto disable_lps; + mode = rtw89_get_entity_mode(rtwdev); if (mode == RTW89_ENTITY_MODE_MCC) goto disable_lps; @@ -291,9 +314,9 @@ void rtw89_recalc_lps(struct rtw89_dev *rtwdev) rtwdev->lps_enabled = false; }
-void rtw89_p2p_noa_renew(struct rtw89_vif *rtwvif) +void rtw89_p2p_noa_renew(struct rtw89_vif_link *rtwvif_link) { - struct rtw89_p2p_noa_setter *setter = &rtwvif->p2p_noa; + struct rtw89_p2p_noa_setter *setter = &rtwvif_link->p2p_noa; struct rtw89_p2p_noa_ie *ie = &setter->ie; struct rtw89_p2p_ie_head *p2p_head = &ie->p2p_head; struct rtw89_noa_attr_head *noa_head = &ie->noa_head; @@ -318,10 +341,10 @@ void rtw89_p2p_noa_renew(struct rtw89_vif *rtwvif) noa_head->oppps_ctwindow = 0; }
-void rtw89_p2p_noa_append(struct rtw89_vif *rtwvif, +void rtw89_p2p_noa_append(struct rtw89_vif_link *rtwvif_link, const struct ieee80211_p2p_noa_desc *desc) { - struct rtw89_p2p_noa_setter *setter = &rtwvif->p2p_noa; + struct rtw89_p2p_noa_setter *setter = &rtwvif_link->p2p_noa; struct rtw89_p2p_noa_ie *ie = &setter->ie; struct rtw89_p2p_ie_head *p2p_head = &ie->p2p_head; struct rtw89_noa_attr_head *noa_head = &ie->noa_head; @@ -338,9 +361,9 @@ void rtw89_p2p_noa_append(struct rtw89_vif *rtwvif, ie->noa_desc[setter->noa_count++] = *desc; }
-u8 rtw89_p2p_noa_fetch(struct rtw89_vif *rtwvif, void **data) +u8 rtw89_p2p_noa_fetch(struct rtw89_vif_link *rtwvif_link, void **data) { - struct rtw89_p2p_noa_setter *setter = &rtwvif->p2p_noa; + struct rtw89_p2p_noa_setter *setter = &rtwvif_link->p2p_noa; struct rtw89_p2p_noa_ie *ie = &setter->ie; void *tail;
diff --git a/drivers/net/wireless/realtek/rtw89/ps.h b/drivers/net/wireless/realtek/rtw89/ps.h index 54486e4550b6..cdd712966b09 100644 --- a/drivers/net/wireless/realtek/rtw89/ps.h +++ b/drivers/net/wireless/realtek/rtw89/ps.h @@ -5,21 +5,23 @@ #ifndef __RTW89_PS_H_ #define __RTW89_PS_H_
-void rtw89_enter_lps(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, +void rtw89_enter_lps(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, bool ps_mode); void rtw89_leave_lps(struct rtw89_dev *rtwdev); void __rtw89_leave_ps_mode(struct rtw89_dev *rtwdev); -void __rtw89_enter_ps_mode(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif); +void __rtw89_enter_ps_mode(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link); void rtw89_leave_ps_mode(struct rtw89_dev *rtwdev); void rtw89_enter_ips(struct rtw89_dev *rtwdev); void rtw89_leave_ips(struct rtw89_dev *rtwdev); void rtw89_set_coex_ctrl_lps(struct rtw89_dev *rtwdev, bool btc_ctrl); -void rtw89_process_p2p_ps(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif); +void rtw89_process_p2p_ps(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, + struct ieee80211_bss_conf *bss_conf); void rtw89_recalc_lps(struct rtw89_dev *rtwdev); -void rtw89_p2p_noa_renew(struct rtw89_vif *rtwvif); -void rtw89_p2p_noa_append(struct rtw89_vif *rtwvif, +void rtw89_p2p_noa_renew(struct rtw89_vif_link *rtwvif_link); +void rtw89_p2p_noa_append(struct rtw89_vif_link *rtwvif_link, const struct ieee80211_p2p_noa_desc *desc); -u8 rtw89_p2p_noa_fetch(struct rtw89_vif *rtwvif, void **data); +u8 rtw89_p2p_noa_fetch(struct rtw89_vif_link *rtwvif_link, void **data);
static inline void rtw89_leave_ips_by_hwflags(struct rtw89_dev *rtwdev) { diff --git a/drivers/net/wireless/realtek/rtw89/regd.c b/drivers/net/wireless/realtek/rtw89/regd.c index a7720a1f17a7..bb064a086970 100644 --- a/drivers/net/wireless/realtek/rtw89/regd.c +++ b/drivers/net/wireless/realtek/rtw89/regd.c @@ -793,22 +793,26 @@ static bool __rtw89_reg_6ghz_tpe_recalc(struct rtw89_dev *rtwdev) { struct rtw89_regulatory_info *regulatory = &rtwdev->regulatory; struct rtw89_reg_6ghz_tpe new = {}; + struct rtw89_vif_link *rtwvif_link; struct rtw89_vif *rtwvif; + unsigned int link_id; bool changed = false;
rtw89_for_each_rtwvif(rtwdev, rtwvif) { const struct rtw89_reg_6ghz_tpe *tmp; const struct rtw89_chan *chan;
- chan = rtw89_chan_get(rtwdev, rtwvif->chanctx_idx); - if (chan->band_type != RTW89_BAND_6G) - continue; + rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) { + chan = rtw89_chan_get(rtwdev, rtwvif_link->chanctx_idx); + if (chan->band_type != RTW89_BAND_6G) + continue;
- tmp = &rtwvif->reg_6ghz_tpe; - if (!tmp->valid) - continue; + tmp = &rtwvif_link->reg_6ghz_tpe; + if (!tmp->valid) + continue;
- tpe_intersect_constraint(&new, tmp->constraint); + tpe_intersect_constraint(&new, tmp->constraint); + } }
if (memcmp(®ulatory->reg_6ghz_tpe, &new, @@ -831,19 +835,24 @@ static bool __rtw89_reg_6ghz_tpe_recalc(struct rtw89_dev *rtwdev) }
static int rtw89_reg_6ghz_tpe_recalc(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, bool active, + struct rtw89_vif_link *rtwvif_link, bool active, unsigned int *changed) { - struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif); - struct ieee80211_bss_conf *bss_conf = &vif->bss_conf; - struct rtw89_reg_6ghz_tpe *tpe = &rtwvif->reg_6ghz_tpe; + struct rtw89_reg_6ghz_tpe *tpe = &rtwvif_link->reg_6ghz_tpe; + struct ieee80211_bss_conf *bss_conf;
memset(tpe, 0, sizeof(*tpe));
- if (!active || rtwvif->reg_6ghz_power != RTW89_REG_6GHZ_POWER_STD) + if (!active || rtwvif_link->reg_6ghz_power != RTW89_REG_6GHZ_POWER_STD) goto bottom;
+ rcu_read_lock(); + + bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true); rtw89_calculate_tpe(rtwdev, tpe, &bss_conf->tpe); + + rcu_read_unlock(); + if (!tpe->valid) goto bottom;
@@ -867,20 +876,24 @@ static bool __rtw89_reg_6ghz_power_recalc(struct rtw89_dev *rtwdev) const struct rtw89_regd *regd = regulatory->regd; enum rtw89_reg_6ghz_power sel; const struct rtw89_chan *chan; + struct rtw89_vif_link *rtwvif_link; struct rtw89_vif *rtwvif; + unsigned int link_id; int count = 0; u8 index;
rtw89_for_each_rtwvif(rtwdev, rtwvif) { - chan = rtw89_chan_get(rtwdev, rtwvif->chanctx_idx); - if (chan->band_type != RTW89_BAND_6G) - continue; + rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) { + chan = rtw89_chan_get(rtwdev, rtwvif_link->chanctx_idx); + if (chan->band_type != RTW89_BAND_6G) + continue;
- if (count != 0 && rtwvif->reg_6ghz_power == sel) - continue; + if (count != 0 && rtwvif_link->reg_6ghz_power == sel) + continue;
- sel = rtwvif->reg_6ghz_power; - count++; + sel = rtwvif_link->reg_6ghz_power; + count++; + } }
if (count != 1) @@ -908,35 +921,41 @@ static bool __rtw89_reg_6ghz_power_recalc(struct rtw89_dev *rtwdev) }
static int rtw89_reg_6ghz_power_recalc(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, bool active, + struct rtw89_vif_link *rtwvif_link, bool active, unsigned int *changed) { - struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif); + struct ieee80211_bss_conf *bss_conf; + + rcu_read_lock(); + + bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
if (active) { - switch (vif->bss_conf.power_type) { + switch (bss_conf->power_type) { case IEEE80211_REG_VLP_AP: - rtwvif->reg_6ghz_power = RTW89_REG_6GHZ_POWER_VLP; + rtwvif_link->reg_6ghz_power = RTW89_REG_6GHZ_POWER_VLP; break; case IEEE80211_REG_LPI_AP: - rtwvif->reg_6ghz_power = RTW89_REG_6GHZ_POWER_LPI; + rtwvif_link->reg_6ghz_power = RTW89_REG_6GHZ_POWER_LPI; break; case IEEE80211_REG_SP_AP: - rtwvif->reg_6ghz_power = RTW89_REG_6GHZ_POWER_STD; + rtwvif_link->reg_6ghz_power = RTW89_REG_6GHZ_POWER_STD; break; default: - rtwvif->reg_6ghz_power = RTW89_REG_6GHZ_POWER_DFLT; + rtwvif_link->reg_6ghz_power = RTW89_REG_6GHZ_POWER_DFLT; break; } } else { - rtwvif->reg_6ghz_power = RTW89_REG_6GHZ_POWER_DFLT; + rtwvif_link->reg_6ghz_power = RTW89_REG_6GHZ_POWER_DFLT; }
+ rcu_read_unlock(); + *changed += __rtw89_reg_6ghz_power_recalc(rtwdev); return 0; }
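On the vif side, bss_conf is now read through rtw89_vif_rcu_dereference_link() rather than vif->bss_conf, mirroring the station-side pattern. A sketch of the read in rtw89_reg_6ghz_power_recalc() above (the power_type enum type is an assumption based on the IEEE80211_REG_*_AP constants):

	struct ieee80211_bss_conf *bss_conf;
	enum ieee80211_ap_reg_power power_type;

	rcu_read_lock();
	bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
	power_type = bss_conf->power_type;
	rcu_read_unlock();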
-int rtw89_reg_6ghz_recalc(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, +int rtw89_reg_6ghz_recalc(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, bool active) { unsigned int changed = 0; @@ -948,11 +967,11 @@ int rtw89_reg_6ghz_recalc(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, * so must do reg_6ghz_tpe_recalc() after reg_6ghz_power_recalc(). */
- ret = rtw89_reg_6ghz_power_recalc(rtwdev, rtwvif, active, &changed); + ret = rtw89_reg_6ghz_power_recalc(rtwdev, rtwvif_link, active, &changed); if (ret) return ret;
- ret = rtw89_reg_6ghz_tpe_recalc(rtwdev, rtwvif, active, &changed); + ret = rtw89_reg_6ghz_tpe_recalc(rtwdev, rtwvif_link, active, &changed); if (ret) return ret;
diff --git a/drivers/net/wireless/realtek/rtw89/rtw8851b.c b/drivers/net/wireless/realtek/rtw89/rtw8851b.c index 1679bd408ef3..f9766bf30e71 100644 --- a/drivers/net/wireless/realtek/rtw89/rtw8851b.c +++ b/drivers/net/wireless/realtek/rtw89/rtw8851b.c @@ -1590,10 +1590,11 @@ static void rtw8851b_rfk_init(struct rtw89_dev *rtwdev) rtw8851b_rx_dck(rtwdev, RTW89_PHY_0, RTW89_CHANCTX_0); }
-static void rtw8851b_rfk_channel(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif) +static void rtw8851b_rfk_channel(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link) { - enum rtw89_chanctx_idx chanctx_idx = rtwvif->chanctx_idx; - enum rtw89_phy_idx phy_idx = rtwvif->phy_idx; + enum rtw89_chanctx_idx chanctx_idx = rtwvif_link->chanctx_idx; + enum rtw89_phy_idx phy_idx = rtwvif_link->phy_idx;
rtw8851b_rx_dck(rtwdev, phy_idx, chanctx_idx); rtw8851b_iqk(rtwdev, phy_idx, chanctx_idx); @@ -1608,10 +1609,12 @@ static void rtw8851b_rfk_band_changed(struct rtw89_dev *rtwdev, rtw8851b_tssi_scan(rtwdev, phy_idx, chan); }
-static void rtw8851b_rfk_scan(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, +static void rtw8851b_rfk_scan(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, bool start) { - rtw8851b_wifi_scan_notify(rtwdev, start, rtwvif->phy_idx, rtwvif->chanctx_idx); + rtw8851b_wifi_scan_notify(rtwdev, start, rtwvif_link->phy_idx, + rtwvif_link->chanctx_idx); }
static void rtw8851b_rfk_track(struct rtw89_dev *rtwdev) diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852a.c b/drivers/net/wireless/realtek/rtw89/rtw8852a.c index dde96bd63021..42d369d2e916 100644 --- a/drivers/net/wireless/realtek/rtw89/rtw8852a.c +++ b/drivers/net/wireless/realtek/rtw89/rtw8852a.c @@ -1350,10 +1350,11 @@ static void rtw8852a_rfk_init(struct rtw89_dev *rtwdev) rtw8852a_rx_dck(rtwdev, RTW89_PHY_0, true, RTW89_CHANCTX_0); }
-static void rtw8852a_rfk_channel(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif) +static void rtw8852a_rfk_channel(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link) { - enum rtw89_chanctx_idx chanctx_idx = rtwvif->chanctx_idx; - enum rtw89_phy_idx phy_idx = rtwvif->phy_idx; + enum rtw89_chanctx_idx chanctx_idx = rtwvif_link->chanctx_idx; + enum rtw89_phy_idx phy_idx = rtwvif_link->phy_idx;
rtw8852a_rx_dck(rtwdev, phy_idx, true, chanctx_idx); rtw8852a_iqk(rtwdev, phy_idx, chanctx_idx); @@ -1368,10 +1369,11 @@ static void rtw8852a_rfk_band_changed(struct rtw89_dev *rtwdev, rtw8852a_tssi_scan(rtwdev, phy_idx, chan); }
-static void rtw8852a_rfk_scan(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, +static void rtw8852a_rfk_scan(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, bool start) { - rtw8852a_wifi_scan_notify(rtwdev, start, rtwvif->phy_idx); + rtw8852a_wifi_scan_notify(rtwdev, start, rtwvif_link->phy_idx); }
static void rtw8852a_rfk_track(struct rtw89_dev *rtwdev) diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852b.c b/drivers/net/wireless/realtek/rtw89/rtw8852b.c index 12be52f76427..364aa21cbd44 100644 --- a/drivers/net/wireless/realtek/rtw89/rtw8852b.c +++ b/drivers/net/wireless/realtek/rtw89/rtw8852b.c @@ -562,10 +562,11 @@ static void rtw8852b_rfk_init(struct rtw89_dev *rtwdev) rtw8852b_rx_dck(rtwdev, RTW89_PHY_0, RTW89_CHANCTX_0); }
-static void rtw8852b_rfk_channel(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif) +static void rtw8852b_rfk_channel(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link) { - enum rtw89_chanctx_idx chanctx_idx = rtwvif->chanctx_idx; - enum rtw89_phy_idx phy_idx = rtwvif->phy_idx; + enum rtw89_chanctx_idx chanctx_idx = rtwvif_link->chanctx_idx; + enum rtw89_phy_idx phy_idx = rtwvif_link->phy_idx;
rtw8852b_rx_dck(rtwdev, phy_idx, chanctx_idx); rtw8852b_iqk(rtwdev, phy_idx, chanctx_idx); @@ -580,10 +581,12 @@ static void rtw8852b_rfk_band_changed(struct rtw89_dev *rtwdev, rtw8852b_tssi_scan(rtwdev, phy_idx, chan); }
-static void rtw8852b_rfk_scan(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, +static void rtw8852b_rfk_scan(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, bool start) { - rtw8852b_wifi_scan_notify(rtwdev, start, rtwvif->phy_idx, rtwvif->chanctx_idx); + rtw8852b_wifi_scan_notify(rtwdev, start, rtwvif_link->phy_idx, + rtwvif_link->chanctx_idx); }
static void rtw8852b_rfk_track(struct rtw89_dev *rtwdev) diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852bt.c b/drivers/net/wireless/realtek/rtw89/rtw8852bt.c index 7dfdcb5964e1..dab7e71ec6a1 100644 --- a/drivers/net/wireless/realtek/rtw89/rtw8852bt.c +++ b/drivers/net/wireless/realtek/rtw89/rtw8852bt.c @@ -535,10 +535,11 @@ static void rtw8852bt_rfk_init(struct rtw89_dev *rtwdev) rtw8852bt_rx_dck(rtwdev, RTW89_PHY_0, RTW89_CHANCTX_0); }
-static void rtw8852bt_rfk_channel(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif) +static void rtw8852bt_rfk_channel(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link) { - enum rtw89_chanctx_idx chanctx_idx = rtwvif->chanctx_idx; - enum rtw89_phy_idx phy_idx = rtwvif->phy_idx; + enum rtw89_chanctx_idx chanctx_idx = rtwvif_link->chanctx_idx; + enum rtw89_phy_idx phy_idx = rtwvif_link->phy_idx;
rtw8852bt_rx_dck(rtwdev, phy_idx, chanctx_idx); rtw8852bt_iqk(rtwdev, phy_idx, chanctx_idx); @@ -553,10 +554,12 @@ static void rtw8852bt_rfk_band_changed(struct rtw89_dev *rtwdev, rtw8852bt_tssi_scan(rtwdev, phy_idx, chan); }
-static void rtw8852bt_rfk_scan(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, +static void rtw8852bt_rfk_scan(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, bool start) { - rtw8852bt_wifi_scan_notify(rtwdev, start, rtwvif->phy_idx, rtwvif->chanctx_idx); + rtw8852bt_wifi_scan_notify(rtwdev, start, rtwvif_link->phy_idx, + rtwvif_link->chanctx_idx); }
static void rtw8852bt_rfk_track(struct rtw89_dev *rtwdev) diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852c.c b/drivers/net/wireless/realtek/rtw89/rtw8852c.c index 1c6e89ab0f4b..dbe77abb2c48 100644 --- a/drivers/net/wireless/realtek/rtw89/rtw8852c.c +++ b/drivers/net/wireless/realtek/rtw89/rtw8852c.c @@ -1846,10 +1846,11 @@ static void rtw8852c_rfk_init(struct rtw89_dev *rtwdev) rtw8852c_rx_dck(rtwdev, RTW89_PHY_0, false); }
-static void rtw8852c_rfk_channel(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif) +static void rtw8852c_rfk_channel(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link) { - enum rtw89_chanctx_idx chanctx_idx = rtwvif->chanctx_idx; - enum rtw89_phy_idx phy_idx = rtwvif->phy_idx; + enum rtw89_chanctx_idx chanctx_idx = rtwvif_link->chanctx_idx; + enum rtw89_phy_idx phy_idx = rtwvif_link->phy_idx;
rtw8852c_mcc_get_ch_info(rtwdev, phy_idx); rtw8852c_rx_dck(rtwdev, phy_idx, false); @@ -1866,10 +1867,11 @@ static void rtw8852c_rfk_band_changed(struct rtw89_dev *rtwdev, rtw8852c_tssi_scan(rtwdev, phy_idx, chan); }
-static void rtw8852c_rfk_scan(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, +static void rtw8852c_rfk_scan(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, bool start) { - rtw8852c_wifi_scan_notify(rtwdev, start, rtwvif->phy_idx); + rtw8852c_wifi_scan_notify(rtwdev, start, rtwvif_link->phy_idx); }
static void rtw8852c_rfk_track(struct rtw89_dev *rtwdev) diff --git a/drivers/net/wireless/realtek/rtw89/rtw8922a.c b/drivers/net/wireless/realtek/rtw89/rtw8922a.c index 63b1ff2f98ed..ef7747adbcc2 100644 --- a/drivers/net/wireless/realtek/rtw89/rtw8922a.c +++ b/drivers/net/wireless/realtek/rtw89/rtw8922a.c @@ -2020,11 +2020,12 @@ static void _wait_rx_mode(struct rtw89_dev *rtwdev, u8 kpath) } }
-static void rtw8922a_rfk_channel(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif) +static void rtw8922a_rfk_channel(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link) { - enum rtw89_chanctx_idx chanctx_idx = rtwvif->chanctx_idx; + enum rtw89_chanctx_idx chanctx_idx = rtwvif_link->chanctx_idx; const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, chanctx_idx); - enum rtw89_phy_idx phy_idx = rtwvif->phy_idx; + enum rtw89_phy_idx phy_idx = rtwvif_link->phy_idx; u8 phy_map = rtw89_btc_phymap(rtwdev, phy_idx, RF_AB, chanctx_idx); u32 tx_en;
@@ -2050,7 +2051,8 @@ static void rtw8922a_rfk_band_changed(struct rtw89_dev *rtwdev, rtw89_phy_rfk_tssi_and_wait(rtwdev, phy_idx, chan, RTW89_TSSI_SCAN, 6); }
-static void rtw8922a_rfk_scan(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, +static void rtw8922a_rfk_scan(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link, bool start) { } diff --git a/drivers/net/wireless/realtek/rtw89/ser.c b/drivers/net/wireless/realtek/rtw89/ser.c index 5fc2faa9ba5a..7b203bb7f151 100644 --- a/drivers/net/wireless/realtek/rtw89/ser.c +++ b/drivers/net/wireless/realtek/rtw89/ser.c @@ -300,37 +300,54 @@ static void drv_resume_rx(struct rtw89_ser *ser)
static void ser_reset_vif(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif) { - rtw89_core_release_bit_map(rtwdev->hw_port, rtwvif->port); - rtwvif->net_type = RTW89_NET_TYPE_NO_LINK; - rtwvif->trigger = false; + struct rtw89_vif_link *rtwvif_link; + unsigned int link_id; + rtwvif->tdls_peer = 0; + + rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) { + rtw89_core_release_bit_map(rtwdev->hw_port, rtwvif_link->port); + rtwvif_link->net_type = RTW89_NET_TYPE_NO_LINK; + rtwvif_link->trigger = false; + } }
static void ser_sta_deinit_cam_iter(void *data, struct ieee80211_sta *sta) { struct rtw89_vif *target_rtwvif = (struct rtw89_vif *)data; - struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv; + struct rtw89_sta *rtwsta = sta_to_rtwsta(sta); struct rtw89_vif *rtwvif = rtwsta->rtwvif; struct rtw89_dev *rtwdev = rtwvif->rtwdev; + struct rtw89_vif_link *rtwvif_link; + struct rtw89_sta_link *rtwsta_link; + unsigned int link_id;
if (rtwvif != target_rtwvif) return;
- if (rtwvif->net_type == RTW89_NET_TYPE_AP_MODE || sta->tdls) - rtw89_cam_deinit_addr_cam(rtwdev, &rtwsta->addr_cam); - if (sta->tdls) - rtw89_cam_deinit_bssid_cam(rtwdev, &rtwsta->bssid_cam); + rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) { + rtwvif_link = rtwsta_link->rtwvif_link;
- INIT_LIST_HEAD(&rtwsta->ba_cam_list); + if (rtwvif_link->net_type == RTW89_NET_TYPE_AP_MODE || sta->tdls) + rtw89_cam_deinit_addr_cam(rtwdev, &rtwsta_link->addr_cam); + if (sta->tdls) + rtw89_cam_deinit_bssid_cam(rtwdev, &rtwsta_link->bssid_cam); + + INIT_LIST_HEAD(&rtwsta_link->ba_cam_list); + } }
static void ser_deinit_cam(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif) { + struct rtw89_vif_link *rtwvif_link; + unsigned int link_id; + ieee80211_iterate_stations_atomic(rtwdev->hw, ser_sta_deinit_cam_iter, rtwvif);
- rtw89_cam_deinit(rtwdev, rtwvif); + rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) + rtw89_cam_deinit(rtwdev, rtwvif_link);
bitmap_zero(rtwdev->cam_info.ba_cam_map, RTW89_MAX_BA_CAM_NUM); } diff --git a/drivers/net/wireless/realtek/rtw89/wow.c b/drivers/net/wireless/realtek/rtw89/wow.c index 86e24e07780d..3e81fd974ec1 100644 --- a/drivers/net/wireless/realtek/rtw89/wow.c +++ b/drivers/net/wireless/realtek/rtw89/wow.c @@ -421,7 +421,8 @@ static void rtw89_wow_construct_key_info(struct rtw89_dev *rtwdev) { struct rtw89_wow_param *rtw_wow = &rtwdev->wow; struct rtw89_wow_key_info *key_info = &rtw_wow->key_info; - struct ieee80211_vif *wow_vif = rtwdev->wow.wow_vif; + struct rtw89_vif_link *rtwvif_link = rtwdev->wow.rtwvif_link; + struct ieee80211_vif *wow_vif = rtwvif_link_to_vif(rtwvif_link); bool err = false;
rcu_read_lock(); @@ -596,7 +597,8 @@ static int rtw89_wow_get_aoac_rpt(struct rtw89_dev *rtwdev, bool rx_ready) static struct ieee80211_key_conf *rtw89_wow_gtk_rekey(struct rtw89_dev *rtwdev, u32 cipher, u8 keyidx, u8 *gtk) { - struct ieee80211_vif *wow_vif = rtwdev->wow.wow_vif; + struct rtw89_vif_link *rtwvif_link = rtwdev->wow.rtwvif_link; + struct ieee80211_vif *wow_vif = rtwvif_link_to_vif(rtwvif_link); const struct rtw89_cipher_info *cipher_info; struct ieee80211_key_conf *rekey_conf; struct ieee80211_key_conf *key; @@ -632,11 +634,13 @@ static struct ieee80211_key_conf *rtw89_wow_gtk_rekey(struct rtw89_dev *rtwdev,
static void rtw89_wow_update_key_info(struct rtw89_dev *rtwdev, bool rx_ready) { - struct ieee80211_vif *wow_vif = rtwdev->wow.wow_vif; + struct rtw89_vif_link *rtwvif_link = rtwdev->wow.rtwvif_link; + struct ieee80211_vif *wow_vif = rtwvif_link_to_vif(rtwvif_link); struct rtw89_wow_param *rtw_wow = &rtwdev->wow; struct rtw89_wow_aoac_report *aoac_rpt = &rtw_wow->aoac_rpt; struct rtw89_set_key_info_iter_data data = {.error = false, .rx_ready = rx_ready}; + struct ieee80211_bss_conf *bss_conf; struct ieee80211_key_conf *key;
rcu_read_lock(); @@ -669,9 +673,15 @@ static void rtw89_wow_update_key_info(struct rtw89_dev *rtwdev, bool rx_ready) return;
rtw89_rx_pn_set_pmf(rtwdev, key, aoac_rpt->igtk_ipn); - ieee80211_gtk_rekey_notify(wow_vif, wow_vif->bss_conf.bssid, + + rcu_read_lock(); + + bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true); + ieee80211_gtk_rekey_notify(wow_vif, bss_conf->bssid, aoac_rpt->eapol_key_replay_count, - GFP_KERNEL); + GFP_ATOMIC); + + rcu_read_unlock(); }
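A note on the GFP_KERNEL to GFP_ATOMIC switch above (editor's sketch, not part of the patch): the rekey notification is now issued inside an RCU read-side critical section, where sleeping is not allowed, so the allocation done on its behalf must be atomic. The shape of the rule, using only names from the hunk:

	rcu_read_lock();
	bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
	/* no sleeping between rcu_read_lock() and rcu_read_unlock(),
	 * so GFP_KERNEL (which may sleep) has to become GFP_ATOMIC */
	ieee80211_gtk_rekey_notify(wow_vif, bss_conf->bssid,
				   aoac_rpt->eapol_key_replay_count, GFP_ATOMIC);
	rcu_read_unlock();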
static void rtw89_wow_leave_deep_ps(struct rtw89_dev *rtwdev) @@ -681,27 +691,24 @@ static void rtw89_wow_leave_deep_ps(struct rtw89_dev *rtwdev)
static void rtw89_wow_enter_deep_ps(struct rtw89_dev *rtwdev) { - struct ieee80211_vif *wow_vif = rtwdev->wow.wow_vif; - struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv; + struct rtw89_vif_link *rtwvif_link = rtwdev->wow.rtwvif_link;
- __rtw89_enter_ps_mode(rtwdev, rtwvif); + __rtw89_enter_ps_mode(rtwdev, rtwvif_link); }
static void rtw89_wow_enter_ps(struct rtw89_dev *rtwdev) { - struct ieee80211_vif *wow_vif = rtwdev->wow.wow_vif; - struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv; + struct rtw89_vif_link *rtwvif_link = rtwdev->wow.rtwvif_link;
if (rtw89_wow_mgd_linked(rtwdev)) - rtw89_enter_lps(rtwdev, rtwvif, false); + rtw89_enter_lps(rtwdev, rtwvif_link, false); else if (rtw89_wow_no_link(rtwdev)) - rtw89_fw_h2c_fwips(rtwdev, rtwvif, true); + rtw89_fw_h2c_fwips(rtwdev, rtwvif_link, true); }
static void rtw89_wow_leave_ps(struct rtw89_dev *rtwdev, bool enable_wow) { - struct ieee80211_vif *wow_vif = rtwdev->wow.wow_vif; - struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv; + struct rtw89_vif_link *rtwvif_link = rtwdev->wow.rtwvif_link;
if (rtw89_wow_mgd_linked(rtwdev)) { rtw89_leave_lps(rtwdev); @@ -709,7 +716,7 @@ static void rtw89_wow_leave_ps(struct rtw89_dev *rtwdev, bool enable_wow) if (enable_wow) rtw89_leave_ips(rtwdev); else - rtw89_fw_h2c_fwips(rtwdev, rtwvif, false); + rtw89_fw_h2c_fwips(rtwdev, rtwvif_link, false); } }
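The same substitution recurs through the rest of wow.c below: rather than caching the ieee80211_vif and recovering the driver context by casting drv_priv, the suspend path now caches the rtw89_vif_link chosen for HW-0 and derives the vif from it when mac80211 needs one. A condensed before/after sketch (editor's illustration, using only names that appear in these hunks):

	/* before */
	struct ieee80211_vif *wow_vif = rtwdev->wow.wow_vif;
	struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv;

	/* after */
	struct rtw89_vif_link *rtwvif_link = rtwdev->wow.rtwvif_link;
	struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);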
@@ -734,6 +741,8 @@ static void rtw89_wow_set_rx_filter(struct rtw89_dev *rtwdev, bool enable)
static void rtw89_wow_show_wakeup_reason(struct rtw89_dev *rtwdev) { + struct rtw89_vif_link *rtwvif_link = rtwdev->wow.rtwvif_link; + struct ieee80211_vif *wow_vif = rtwvif_link_to_vif(rtwvif_link); struct rtw89_wow_param *rtw_wow = &rtwdev->wow; struct rtw89_wow_aoac_report *aoac_rpt = &rtw_wow->aoac_rpt; struct cfg80211_wowlan_nd_info nd_info; @@ -780,35 +789,34 @@ static void rtw89_wow_show_wakeup_reason(struct rtw89_dev *rtwdev) break; default: rtw89_warn(rtwdev, "Unknown wakeup reason %x\n", reason); - ieee80211_report_wowlan_wakeup(rtwdev->wow.wow_vif, NULL, - GFP_KERNEL); + ieee80211_report_wowlan_wakeup(wow_vif, NULL, GFP_KERNEL); return; }
- ieee80211_report_wowlan_wakeup(rtwdev->wow.wow_vif, &wakeup, - GFP_KERNEL); + ieee80211_report_wowlan_wakeup(wow_vif, &wakeup, GFP_KERNEL); }
-static void rtw89_wow_vif_iter(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif) +static void rtw89_wow_vif_iter(struct rtw89_dev *rtwdev, + struct rtw89_vif_link *rtwvif_link) { struct rtw89_wow_param *rtw_wow = &rtwdev->wow; - struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif); + struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
/* Current WoWLAN function support setting of only vif in * infra mode or no link mode. When one suitable vif is found, * stop the iteration. */ - if (rtw_wow->wow_vif || vif->type != NL80211_IFTYPE_STATION) + if (rtw_wow->rtwvif_link || vif->type != NL80211_IFTYPE_STATION) return;
- switch (rtwvif->net_type) { + switch (rtwvif_link->net_type) { case RTW89_NET_TYPE_INFRA: if (rtw_wow_has_mgd_features(rtwdev)) - rtw_wow->wow_vif = vif; + rtw_wow->rtwvif_link = rtwvif_link; break; case RTW89_NET_TYPE_NO_LINK: if (rtw_wow->pno_inited) - rtw_wow->wow_vif = vif; + rtw_wow->rtwvif_link = rtwvif_link; break; default: break; @@ -865,7 +873,7 @@ static u16 rtw89_calc_crc(u8 *pdata, int length) return ~crc; }
-static int rtw89_wow_pattern_get_type(struct rtw89_vif *rtwvif, +static int rtw89_wow_pattern_get_type(struct rtw89_vif_link *rtwvif_link, struct rtw89_wow_cam_info *rtw_pattern, const u8 *pattern, u8 da_mask) { @@ -885,7 +893,7 @@ static int rtw89_wow_pattern_get_type(struct rtw89_vif *rtwvif, rtw_pattern->bc = true; else if (is_multicast_ether_addr(da)) rtw_pattern->mc = true; - else if (ether_addr_equal(da, rtwvif->mac_addr) && + else if (ether_addr_equal(da, rtwvif_link->mac_addr) && da_mask == GENMASK(5, 0)) rtw_pattern->uc = true; else if (!da_mask) /*da_mask == 0 mean wildcard*/ @@ -897,7 +905,7 @@ static int rtw89_wow_pattern_get_type(struct rtw89_vif *rtwvif, }
static int rtw89_wow_pattern_generate(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, + struct rtw89_vif_link *rtwvif_link, const struct cfg80211_pkt_pattern *pkt_pattern, struct rtw89_wow_cam_info *rtw_pattern) { @@ -916,7 +924,7 @@ static int rtw89_wow_pattern_generate(struct rtw89_dev *rtwdev, mask_len = DIV_ROUND_UP(len, 8); memset(rtw_pattern, 0, sizeof(*rtw_pattern));
- ret = rtw89_wow_pattern_get_type(rtwvif, rtw_pattern, pattern, + ret = rtw89_wow_pattern_get_type(rtwvif_link, rtw_pattern, pattern, mask[0] & GENMASK(5, 0)); if (ret) return ret; @@ -970,7 +978,7 @@ static int rtw89_wow_pattern_generate(struct rtw89_dev *rtwdev, }
static int rtw89_wow_parse_patterns(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif, + struct rtw89_vif_link *rtwvif_link, struct cfg80211_wowlan *wowlan) { struct rtw89_wow_param *rtw_wow = &rtwdev->wow; @@ -983,7 +991,7 @@ static int rtw89_wow_parse_patterns(struct rtw89_dev *rtwdev,
for (i = 0; i < wowlan->n_patterns; i++) { rtw_pattern = &rtw_wow->patterns[i]; - ret = rtw89_wow_pattern_generate(rtwdev, rtwvif, + ret = rtw89_wow_pattern_generate(rtwdev, rtwvif_link, &wowlan->patterns[i], rtw_pattern); if (ret) { @@ -1040,7 +1048,7 @@ static void rtw89_wow_clear_wakeups(struct rtw89_dev *rtwdev) { struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
- rtw_wow->wow_vif = NULL; + rtw_wow->rtwvif_link = NULL; rtw89_core_release_all_bits_map(rtw_wow->flags, RTW89_WOW_FLAG_NUM); rtw_wow->pattern_cnt = 0; rtw_wow->pno_inited = false; @@ -1066,6 +1074,7 @@ static int rtw89_wow_set_wakeups(struct rtw89_dev *rtwdev, struct cfg80211_wowlan *wowlan) { struct rtw89_wow_param *rtw_wow = &rtwdev->wow; + struct rtw89_vif_link *rtwvif_link; struct rtw89_vif *rtwvif;
if (wowlan->disconnect) @@ -1078,36 +1087,40 @@ static int rtw89_wow_set_wakeups(struct rtw89_dev *rtwdev, if (wowlan->nd_config) rtw89_wow_init_pno(rtwdev, wowlan->nd_config);
- rtw89_for_each_rtwvif(rtwdev, rtwvif) - rtw89_wow_vif_iter(rtwdev, rtwvif); + rtw89_for_each_rtwvif(rtwdev, rtwvif) { + /* use the link on HW-0 to do wow flow */ + rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0); + if (!rtwvif_link) + continue; + + rtw89_wow_vif_iter(rtwdev, rtwvif_link); + }
- if (!rtw_wow->wow_vif) + rtwvif_link = rtw_wow->rtwvif_link; + if (!rtwvif_link) return -EPERM;
- rtwvif = (struct rtw89_vif *)rtw_wow->wow_vif->drv_priv; - return rtw89_wow_parse_patterns(rtwdev, rtwvif, wowlan); + return rtw89_wow_parse_patterns(rtwdev, rtwvif_link, wowlan); }
static int rtw89_wow_cfg_wake_pno(struct rtw89_dev *rtwdev, bool wow) { - struct rtw89_wow_param *rtw_wow = &rtwdev->wow; - struct ieee80211_vif *wow_vif = rtw_wow->wow_vif; - struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv; + struct rtw89_vif_link *rtwvif_link = rtwdev->wow.rtwvif_link; int ret;
- ret = rtw89_fw_h2c_cfg_pno(rtwdev, rtwvif, true); + ret = rtw89_fw_h2c_cfg_pno(rtwdev, rtwvif_link, true); if (ret) { rtw89_err(rtwdev, "failed to config pno\n"); return ret; }
- ret = rtw89_fw_h2c_wow_wakeup_ctrl(rtwdev, rtwvif, wow); + ret = rtw89_fw_h2c_wow_wakeup_ctrl(rtwdev, rtwvif_link, wow); if (ret) { rtw89_err(rtwdev, "failed to fw wow wakeup ctrl\n"); return ret; }
- ret = rtw89_fw_h2c_wow_global(rtwdev, rtwvif, wow); + ret = rtw89_fw_h2c_wow_global(rtwdev, rtwvif_link, wow); if (ret) { rtw89_err(rtwdev, "failed to fw wow global\n"); return ret; @@ -1119,34 +1132,39 @@ static int rtw89_wow_cfg_wake_pno(struct rtw89_dev *rtwdev, bool wow) static int rtw89_wow_cfg_wake(struct rtw89_dev *rtwdev, bool wow) { struct rtw89_wow_param *rtw_wow = &rtwdev->wow; - struct ieee80211_vif *wow_vif = rtw_wow->wow_vif; - struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv; + struct rtw89_vif_link *rtwvif_link = rtw_wow->rtwvif_link; + struct ieee80211_vif *wow_vif = rtwvif_link_to_vif(rtwvif_link); struct ieee80211_sta *wow_sta; - struct rtw89_sta *rtwsta = NULL; + struct rtw89_sta_link *rtwsta_link = NULL; + struct rtw89_sta *rtwsta; int ret;
- wow_sta = ieee80211_find_sta(wow_vif, rtwvif->bssid); - if (wow_sta) - rtwsta = (struct rtw89_sta *)wow_sta->drv_priv; + wow_sta = ieee80211_find_sta(wow_vif, wow_vif->cfg.ap_addr); + if (wow_sta) { + rtwsta = sta_to_rtwsta(wow_sta); + rtwsta_link = rtwsta->links[rtwvif_link->link_id]; + if (!rtwsta_link) + return -ENOLINK; + }
if (wow) { if (rtw_wow->pattern_cnt) - rtwvif->wowlan_pattern = true; + rtwvif_link->wowlan_pattern = true; if (test_bit(RTW89_WOW_FLAG_EN_MAGIC_PKT, rtw_wow->flags)) - rtwvif->wowlan_magic = true; + rtwvif_link->wowlan_magic = true; } else { - rtwvif->wowlan_pattern = false; - rtwvif->wowlan_magic = false; + rtwvif_link->wowlan_pattern = false; + rtwvif_link->wowlan_magic = false; }
- ret = rtw89_fw_h2c_wow_wakeup_ctrl(rtwdev, rtwvif, wow); + ret = rtw89_fw_h2c_wow_wakeup_ctrl(rtwdev, rtwvif_link, wow); if (ret) { rtw89_err(rtwdev, "failed to fw wow wakeup ctrl\n"); return ret; }
if (wow) { - ret = rtw89_chip_h2c_dctl_sec_cam(rtwdev, rtwvif, rtwsta); + ret = rtw89_chip_h2c_dctl_sec_cam(rtwdev, rtwvif_link, rtwsta_link); if (ret) { rtw89_err(rtwdev, "failed to update dctl cam sec entry: %d\n", ret); @@ -1154,13 +1172,13 @@ static int rtw89_wow_cfg_wake(struct rtw89_dev *rtwdev, bool wow) } }
- ret = rtw89_fw_h2c_cam(rtwdev, rtwvif, rtwsta, NULL); + ret = rtw89_fw_h2c_cam(rtwdev, rtwvif_link, rtwsta_link, NULL); if (ret) { rtw89_warn(rtwdev, "failed to send h2c cam\n"); return ret; }
- ret = rtw89_fw_h2c_wow_global(rtwdev, rtwvif, wow); + ret = rtw89_fw_h2c_wow_global(rtwdev, rtwvif_link, wow); if (ret) { rtw89_err(rtwdev, "failed to fw wow global\n"); return ret; @@ -1190,25 +1208,30 @@ static int rtw89_wow_swap_fw(struct rtw89_dev *rtwdev, bool wow) enum rtw89_fw_type fw_type = wow ? RTW89_FW_WOWLAN : RTW89_FW_NORMAL; enum rtw89_chip_gen chip_gen = rtwdev->chip->chip_gen; struct rtw89_wow_param *rtw_wow = &rtwdev->wow; - struct ieee80211_vif *wow_vif = rtw_wow->wow_vif; - struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv; + struct rtw89_vif_link *rtwvif_link = rtw_wow->rtwvif_link; + struct ieee80211_vif *wow_vif = rtwvif_link_to_vif(rtwvif_link); enum rtw89_core_chip_id chip_id = rtwdev->chip->chip_id; const struct rtw89_chip_info *chip = rtwdev->chip; bool include_bb = !!chip->bbmcu_nr; bool disable_intr_for_dlfw = false; struct ieee80211_sta *wow_sta; - struct rtw89_sta *rtwsta = NULL; + struct rtw89_sta_link *rtwsta_link = NULL; + struct rtw89_sta *rtwsta; bool is_conn = true; int ret;
if (chip_id == RTL8852C || chip_id == RTL8922A) disable_intr_for_dlfw = true;
- wow_sta = ieee80211_find_sta(wow_vif, rtwvif->bssid); - if (wow_sta) - rtwsta = (struct rtw89_sta *)wow_sta->drv_priv; - else + wow_sta = ieee80211_find_sta(wow_vif, wow_vif->cfg.ap_addr); + if (wow_sta) { + rtwsta = sta_to_rtwsta(wow_sta); + rtwsta_link = rtwsta->links[rtwvif_link->link_id]; + if (!rtwsta_link) + return -ENOLINK; + } else { is_conn = false; + }
if (disable_intr_for_dlfw) rtw89_hci_disable_intr(rtwdev); @@ -1224,14 +1247,14 @@ static int rtw89_wow_swap_fw(struct rtw89_dev *rtwdev, bool wow)
rtw89_phy_init_rf_reg(rtwdev, true);
- ret = rtw89_fw_h2c_role_maintain(rtwdev, rtwvif, rtwsta, + ret = rtw89_fw_h2c_role_maintain(rtwdev, rtwvif_link, rtwsta_link, RTW89_ROLE_FW_RESTORE); if (ret) { rtw89_warn(rtwdev, "failed to send h2c role maintain\n"); return ret; }
- ret = rtw89_chip_h2c_assoc_cmac_tbl(rtwdev, wow_vif, wow_sta); + ret = rtw89_chip_h2c_assoc_cmac_tbl(rtwdev, rtwvif_link, rtwsta_link); if (ret) { rtw89_warn(rtwdev, "failed to send h2c assoc cmac tbl\n"); return ret; @@ -1240,27 +1263,27 @@ static int rtw89_wow_swap_fw(struct rtw89_dev *rtwdev, bool wow) if (!is_conn) rtw89_cam_reset_keys(rtwdev);
- ret = rtw89_fw_h2c_join_info(rtwdev, rtwvif, rtwsta, !is_conn); + ret = rtw89_fw_h2c_join_info(rtwdev, rtwvif_link, rtwsta_link, !is_conn); if (ret) { rtw89_warn(rtwdev, "failed to send h2c join info\n"); return ret; }
- ret = rtw89_fw_h2c_cam(rtwdev, rtwvif, rtwsta, NULL); + ret = rtw89_fw_h2c_cam(rtwdev, rtwvif_link, rtwsta_link, NULL); if (ret) { rtw89_warn(rtwdev, "failed to send h2c cam\n"); return ret; }
if (is_conn) { - ret = rtw89_fw_h2c_general_pkt(rtwdev, rtwvif, rtwsta->mac_id); + ret = rtw89_fw_h2c_general_pkt(rtwdev, rtwvif_link, rtwsta_link->mac_id); if (ret) { rtw89_warn(rtwdev, "failed to send h2c general packet\n"); return ret; } - rtw89_phy_ra_assoc(rtwdev, wow_sta); - rtw89_phy_set_bss_color(rtwdev, wow_vif); - rtw89_chip_cfg_txpwr_ul_tb_offset(rtwdev, wow_vif); + rtw89_phy_ra_assoc(rtwdev, rtwsta_link); + rtw89_phy_set_bss_color(rtwdev, rtwvif_link); + rtw89_chip_cfg_txpwr_ul_tb_offset(rtwdev, rtwvif_link); }
if (chip_gen == RTW89_CHIP_BE) @@ -1363,21 +1386,20 @@ static int rtw89_wow_disable_trx_pre(struct rtw89_dev *rtwdev)
static int rtw89_wow_disable_trx_post(struct rtw89_dev *rtwdev) { - struct rtw89_wow_param *rtw_wow = &rtwdev->wow; - struct ieee80211_vif *vif = rtw_wow->wow_vif; + struct rtw89_vif_link *rtwvif_link = rtwdev->wow.rtwvif_link; int ret;
ret = rtw89_mac_cfg_ppdu_status(rtwdev, RTW89_MAC_0, true); if (ret) rtw89_err(rtwdev, "cfg ppdu status\n");
- rtw89_fw_h2c_set_bcn_fltr_cfg(rtwdev, vif, true); + rtw89_fw_h2c_set_bcn_fltr_cfg(rtwdev, rtwvif_link, true);
return ret; }
static void rtw89_fw_release_pno_pkt_list(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif) + struct rtw89_vif_link *rtwvif_link) { struct rtw89_wow_param *rtw_wow = &rtwdev->wow; struct list_head *pkt_list = &rtw_wow->pno_pkt_list; @@ -1391,7 +1413,7 @@ static void rtw89_fw_release_pno_pkt_list(struct rtw89_dev *rtwdev, }
static int rtw89_pno_scan_update_probe_req(struct rtw89_dev *rtwdev, - struct rtw89_vif *rtwvif) + struct rtw89_vif_link *rtwvif_link) { struct rtw89_wow_param *rtw_wow = &rtwdev->wow; struct cfg80211_sched_scan_request *nd_config = rtw_wow->nd_config; @@ -1401,7 +1423,7 @@ static int rtw89_pno_scan_update_probe_req(struct rtw89_dev *rtwdev, int ret;
for (i = 0; i < num; i++) { - skb = ieee80211_probereq_get(rtwdev->hw, rtwvif->mac_addr, + skb = ieee80211_probereq_get(rtwdev->hw, rtwvif_link->mac_addr, nd_config->match_sets[i].ssid.ssid, nd_config->match_sets[i].ssid.ssid_len, nd_config->ie_len); @@ -1413,7 +1435,7 @@ static int rtw89_pno_scan_update_probe_req(struct rtw89_dev *rtwdev, info = kzalloc(sizeof(*info), GFP_KERNEL); if (!info) { kfree_skb(skb); - rtw89_fw_release_pno_pkt_list(rtwdev, rtwvif); + rtw89_fw_release_pno_pkt_list(rtwdev, rtwvif_link); return -ENOMEM; }
@@ -1421,7 +1443,7 @@ static int rtw89_pno_scan_update_probe_req(struct rtw89_dev *rtwdev, if (ret) { kfree_skb(skb); kfree(info); - rtw89_fw_release_pno_pkt_list(rtwdev, rtwvif); + rtw89_fw_release_pno_pkt_list(rtwdev, rtwvif_link); return ret; }
@@ -1436,20 +1458,19 @@ static int rtw89_pno_scan_offload(struct rtw89_dev *rtwdev, bool enable) { const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def; struct rtw89_wow_param *rtw_wow = &rtwdev->wow; - struct ieee80211_vif *wow_vif = rtw_wow->wow_vif; - struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv; + struct rtw89_vif_link *rtwvif_link = rtwdev->wow.rtwvif_link; int interval = rtw_wow->nd_config->scan_plans[0].interval; struct rtw89_scan_option opt = {}; int ret;
if (enable) { - ret = rtw89_pno_scan_update_probe_req(rtwdev, rtwvif); + ret = rtw89_pno_scan_update_probe_req(rtwdev, rtwvif_link); if (ret) { rtw89_err(rtwdev, "Update probe request failed\n"); return ret; }
- ret = mac->add_chan_list_pno(rtwdev, rtwvif); + ret = mac->add_chan_list_pno(rtwdev, rtwvif_link); if (ret) { rtw89_err(rtwdev, "Update channel list failed\n"); return ret; @@ -1471,7 +1492,7 @@ static int rtw89_pno_scan_offload(struct rtw89_dev *rtwdev, bool enable) opt.opch_end = RTW89_CHAN_INVALID; }
- mac->scan_offload(rtwdev, &opt, rtwvif, true); + mac->scan_offload(rtwdev, &opt, rtwvif_link, true);
return 0; } @@ -1479,8 +1500,7 @@ static int rtw89_pno_scan_offload(struct rtw89_dev *rtwdev, bool enable) static int rtw89_wow_fw_start(struct rtw89_dev *rtwdev) { struct rtw89_wow_param *rtw_wow = &rtwdev->wow; - struct ieee80211_vif *wow_vif = rtw_wow->wow_vif; - struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv; + struct rtw89_vif_link *rtwvif_link = rtw_wow->rtwvif_link; int ret;
if (rtw89_wow_no_link(rtwdev)) { @@ -1499,25 +1519,25 @@ static int rtw89_wow_fw_start(struct rtw89_dev *rtwdev) rtw89_wow_pattern_write(rtwdev); rtw89_wow_construct_key_info(rtwdev);
- ret = rtw89_fw_h2c_keep_alive(rtwdev, rtwvif, true); + ret = rtw89_fw_h2c_keep_alive(rtwdev, rtwvif_link, true); if (ret) { rtw89_err(rtwdev, "wow: failed to enable keep alive\n"); return ret; }
- ret = rtw89_fw_h2c_disconnect_detect(rtwdev, rtwvif, true); + ret = rtw89_fw_h2c_disconnect_detect(rtwdev, rtwvif_link, true); if (ret) { rtw89_err(rtwdev, "wow: failed to enable disconnect detect\n"); return ret; }
- ret = rtw89_fw_h2c_wow_gtk_ofld(rtwdev, rtwvif, true); + ret = rtw89_fw_h2c_wow_gtk_ofld(rtwdev, rtwvif_link, true); if (ret) { rtw89_err(rtwdev, "wow: failed to enable GTK offload\n"); return ret; }
- ret = rtw89_fw_h2c_arp_offload(rtwdev, rtwvif, true); + ret = rtw89_fw_h2c_arp_offload(rtwdev, rtwvif_link, true); if (ret) rtw89_warn(rtwdev, "wow: failed to enable arp offload\n"); } @@ -1548,8 +1568,7 @@ static int rtw89_wow_fw_start(struct rtw89_dev *rtwdev) static int rtw89_wow_fw_stop(struct rtw89_dev *rtwdev) { struct rtw89_wow_param *rtw_wow = &rtwdev->wow; - struct ieee80211_vif *wow_vif = rtw_wow->wow_vif; - struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv; + struct rtw89_vif_link *rtwvif_link = rtw_wow->rtwvif_link; int ret;
if (rtw89_wow_no_link(rtwdev)) { @@ -1559,35 +1578,35 @@ static int rtw89_wow_fw_stop(struct rtw89_dev *rtwdev) return ret; }
- ret = rtw89_fw_h2c_cfg_pno(rtwdev, rtwvif, false); + ret = rtw89_fw_h2c_cfg_pno(rtwdev, rtwvif_link, false); if (ret) { rtw89_err(rtwdev, "wow: failed to disable pno\n"); return ret; }
- rtw89_fw_release_pno_pkt_list(rtwdev, rtwvif); + rtw89_fw_release_pno_pkt_list(rtwdev, rtwvif_link); } else { rtw89_wow_pattern_clear(rtwdev);
- ret = rtw89_fw_h2c_keep_alive(rtwdev, rtwvif, false); + ret = rtw89_fw_h2c_keep_alive(rtwdev, rtwvif_link, false); if (ret) { rtw89_err(rtwdev, "wow: failed to disable keep alive\n"); return ret; }
- ret = rtw89_fw_h2c_disconnect_detect(rtwdev, rtwvif, false); + ret = rtw89_fw_h2c_disconnect_detect(rtwdev, rtwvif_link, false); if (ret) { rtw89_err(rtwdev, "wow: failed to disable disconnect detect\n"); return ret; }
- ret = rtw89_fw_h2c_wow_gtk_ofld(rtwdev, rtwvif, false); + ret = rtw89_fw_h2c_wow_gtk_ofld(rtwdev, rtwvif_link, false); if (ret) { rtw89_err(rtwdev, "wow: failed to disable GTK offload\n"); return ret; }
- ret = rtw89_fw_h2c_arp_offload(rtwdev, rtwvif, false); + ret = rtw89_fw_h2c_arp_offload(rtwdev, rtwvif_link, false); if (ret) rtw89_warn(rtwdev, "wow: failed to disable arp offload\n");
diff --git a/drivers/net/wireless/realtek/rtw89/wow.h b/drivers/net/wireless/realtek/rtw89/wow.h index 3fbc2b87c058..f91991e8f2e3 100644 --- a/drivers/net/wireless/realtek/rtw89/wow.h +++ b/drivers/net/wireless/realtek/rtw89/wow.h @@ -97,18 +97,16 @@ static inline int rtw89_wow_get_sec_hdr_len(struct rtw89_dev *rtwdev) #ifdef CONFIG_PM static inline bool rtw89_wow_mgd_linked(struct rtw89_dev *rtwdev) { - struct ieee80211_vif *wow_vif = rtwdev->wow.wow_vif; - struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv; + struct rtw89_vif_link *rtwvif_link = rtwdev->wow.rtwvif_link;
- return rtwvif->net_type == RTW89_NET_TYPE_INFRA; + return rtwvif_link->net_type == RTW89_NET_TYPE_INFRA; }
static inline bool rtw89_wow_no_link(struct rtw89_dev *rtwdev) { - struct ieee80211_vif *wow_vif = rtwdev->wow.wow_vif; - struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv; + struct rtw89_vif_link *rtwvif_link = rtwdev->wow.rtwvif_link;
- return rtwvif->net_type == RTW89_NET_TYPE_NO_LINK; + return rtwvif_link->net_type == RTW89_NET_TYPE_NO_LINK; }
static inline bool rtw_wow_has_mgd_features(struct rtw89_dev *rtwdev) diff --git a/drivers/net/wireless/silabs/wfx/main.c b/drivers/net/wireless/silabs/wfx/main.c index e7198520bdff..64441c8bc460 100644 --- a/drivers/net/wireless/silabs/wfx/main.c +++ b/drivers/net/wireless/silabs/wfx/main.c @@ -480,10 +480,23 @@ static int __init wfx_core_init(void) { int ret = 0;
- if (IS_ENABLED(CONFIG_SPI)) + if (IS_ENABLED(CONFIG_SPI)) { ret = spi_register_driver(&wfx_spi_driver); - if (IS_ENABLED(CONFIG_MMC) && !ret) + if (ret) + goto out; + } + if (IS_ENABLED(CONFIG_MMC)) { ret = sdio_register_driver(&wfx_sdio_driver); + if (ret) + goto unregister_spi; + } + + return 0; + +unregister_spi: + if (IS_ENABLED(CONFIG_SPI)) + spi_unregister_driver(&wfx_spi_driver); +out: return ret; } module_init(wfx_core_init); diff --git a/drivers/net/wireless/st/cw1200/cw1200_spi.c b/drivers/net/wireless/st/cw1200/cw1200_spi.c index 4f346fb977a9..862964a8cc87 100644 --- a/drivers/net/wireless/st/cw1200/cw1200_spi.c +++ b/drivers/net/wireless/st/cw1200/cw1200_spi.c @@ -450,7 +450,7 @@ static int __maybe_unused cw1200_spi_suspend(struct device *dev) { struct hwbus_priv *self = spi_get_drvdata(to_spi_device(dev));
- if (!cw1200_can_suspend(self->core)) + if (self && !cw1200_can_suspend(self->core)) return -EAGAIN;
/* XXX notify host that we have to keep CW1200 powered on? */ diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c index 855b42c92284..f0d4c6f3cb05 100644 --- a/drivers/nvme/host/core.c +++ b/drivers/nvme/host/core.c @@ -4591,6 +4591,11 @@ EXPORT_SYMBOL_GPL(nvme_alloc_admin_tag_set);
void nvme_remove_admin_tag_set(struct nvme_ctrl *ctrl) { + /* + * As we're about to destroy the queue and free tagset + * we can not have keep-alive work running. + */ + nvme_stop_keep_alive(ctrl); blk_mq_destroy_queue(ctrl->admin_q); blk_put_queue(ctrl->admin_q); if (ctrl->ops->flags & NVME_F_FABRICS) { diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c index 6a15873055b9..f25582e4d88b 100644 --- a/drivers/nvme/host/multipath.c +++ b/drivers/nvme/host/multipath.c @@ -165,7 +165,8 @@ void nvme_kick_requeue_lists(struct nvme_ctrl *ctrl) int srcu_idx;
srcu_idx = srcu_read_lock(&ctrl->srcu); - list_for_each_entry_rcu(ns, &ctrl->namespaces, list) { + list_for_each_entry_srcu(ns, &ctrl->namespaces, list, + srcu_read_lock_held(&ctrl->srcu)) { if (!ns->head->disk) continue; kblockd_schedule_work(&ns->head->requeue_work); @@ -209,7 +210,8 @@ void nvme_mpath_clear_ctrl_paths(struct nvme_ctrl *ctrl) int srcu_idx;
srcu_idx = srcu_read_lock(&ctrl->srcu); - list_for_each_entry_rcu(ns, &ctrl->namespaces, list) { + list_for_each_entry_srcu(ns, &ctrl->namespaces, list, + srcu_read_lock_held(&ctrl->srcu)) { nvme_mpath_clear_current_path(ns); kblockd_schedule_work(&ns->head->requeue_work); } @@ -224,7 +226,8 @@ void nvme_mpath_revalidate_paths(struct nvme_ns *ns) int srcu_idx;
srcu_idx = srcu_read_lock(&head->srcu); - list_for_each_entry_rcu(ns, &head->list, siblings) { + list_for_each_entry_srcu(ns, &head->list, siblings, + srcu_read_lock_held(&head->srcu)) { if (capacity != get_capacity(ns->disk)) clear_bit(NVME_NS_READY, &ns->flags); } @@ -257,7 +260,8 @@ static struct nvme_ns *__nvme_find_path(struct nvme_ns_head *head, int node) int found_distance = INT_MAX, fallback_distance = INT_MAX, distance; struct nvme_ns *found = NULL, *fallback = NULL, *ns;
- list_for_each_entry_rcu(ns, &head->list, siblings) { + list_for_each_entry_srcu(ns, &head->list, siblings, + srcu_read_lock_held(&head->srcu)) { if (nvme_path_is_disabled(ns)) continue;
@@ -356,7 +360,8 @@ static struct nvme_ns *nvme_queue_depth_path(struct nvme_ns_head *head) unsigned int min_depth_opt = UINT_MAX, min_depth_nonopt = UINT_MAX; unsigned int depth;
- list_for_each_entry_rcu(ns, &head->list, siblings) { + list_for_each_entry_srcu(ns, &head->list, siblings, + srcu_read_lock_held(&head->srcu)) { if (nvme_path_is_disabled(ns)) continue;
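Every multipath.c loop touched by this patch moves from list_for_each_entry_rcu() to list_for_each_entry_srcu(), whose extra argument names the lockdep condition under which the traversal is legal. A minimal sketch of the resulting pattern (editor's illustration, mirroring the kick-requeue loop above):

	int srcu_idx;

	srcu_idx = srcu_read_lock(&ctrl->srcu);
	list_for_each_entry_srcu(ns, &ctrl->namespaces, list,
				 srcu_read_lock_held(&ctrl->srcu)) {
		/* the fourth argument tells lockdep why this walk is safe */
		kblockd_schedule_work(&ns->head->requeue_work);
	}
	srcu_read_unlock(&ctrl->srcu, srcu_idx);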
@@ -424,7 +429,8 @@ static bool nvme_available_path(struct nvme_ns_head *head) if (!test_bit(NVME_NSHEAD_DISK_LIVE, &head->flags)) return NULL;
- list_for_each_entry_rcu(ns, &head->list, siblings) { + list_for_each_entry_srcu(ns, &head->list, siblings, + srcu_read_lock_held(&head->srcu)) { if (test_bit(NVME_CTRL_FAILFAST_EXPIRED, &ns->ctrl->flags)) continue; switch (nvme_ctrl_state(ns->ctrl)) { @@ -785,7 +791,8 @@ static int nvme_update_ana_state(struct nvme_ctrl *ctrl, return 0;
srcu_idx = srcu_read_lock(&ctrl->srcu); - list_for_each_entry_rcu(ns, &ctrl->namespaces, list) { + list_for_each_entry_srcu(ns, &ctrl->namespaces, list, + srcu_read_lock_held(&ctrl->srcu)) { unsigned nsid; again: nsid = le32_to_cpu(desc->nsids[n]); diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c index 4b9fda0b1d9a..55af3dfbc260 100644 --- a/drivers/nvme/host/pci.c +++ b/drivers/nvme/host/pci.c @@ -153,6 +153,7 @@ struct nvme_dev { /* host memory buffer support: */ u64 host_mem_size; u32 nr_host_mem_descs; + u32 host_mem_descs_size; dma_addr_t host_mem_descs_dma; struct nvme_host_mem_buf_desc *host_mem_descs; void **host_mem_desc_bufs; @@ -904,9 +905,10 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
static void nvme_submit_cmds(struct nvme_queue *nvmeq, struct request **rqlist) { + struct request *req; + spin_lock(&nvmeq->sq_lock); - while (!rq_list_empty(*rqlist)) { - struct request *req = rq_list_pop(rqlist); + while ((req = rq_list_pop(rqlist))) { struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
nvme_sq_copy_cmd(nvmeq, &iod->cmd); @@ -931,31 +933,25 @@ static bool nvme_prep_rq_batch(struct nvme_queue *nvmeq, struct request *req)
static void nvme_queue_rqs(struct request **rqlist) { - struct request *req, *next, *prev = NULL; + struct request *submit_list = NULL; struct request *requeue_list = NULL; + struct request **requeue_lastp = &requeue_list; + struct nvme_queue *nvmeq = NULL; + struct request *req;
- rq_list_for_each_safe(rqlist, req, next) { - struct nvme_queue *nvmeq = req->mq_hctx->driver_data; - - if (!nvme_prep_rq_batch(nvmeq, req)) { - /* detach 'req' and add to remainder list */ - rq_list_move(rqlist, &requeue_list, req, prev); - - req = prev; - if (!req) - continue; - } + while ((req = rq_list_pop(rqlist))) { + if (nvmeq && nvmeq != req->mq_hctx->driver_data) + nvme_submit_cmds(nvmeq, &submit_list); + nvmeq = req->mq_hctx->driver_data;
- if (!next || req->mq_hctx != next->mq_hctx) { - /* detach rest of list, and submit */ - req->rq_next = NULL; - nvme_submit_cmds(nvmeq, rqlist); - *rqlist = next; - prev = NULL; - } else - prev = req; + if (nvme_prep_rq_batch(nvmeq, req)) + rq_list_add(&submit_list, req); /* reverse order */ + else + rq_list_add_tail(&requeue_lastp, req); }
+ if (nvmeq) + nvme_submit_cmds(nvmeq, &submit_list); *rqlist = requeue_list; }
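The rewritten nvme_queue_rqs() above drops the prev/next pointer juggling in favor of a simpler shape: pop each request, flush the accumulated batch whenever the submission queue changes, and flush once more at the end (the requeue path for requests that fail preparation is omitted here). A standalone model of that control flow (editor's illustration in plain C, not kernel code):

#include <stdio.h>

struct item { int queue; struct item *next; };

/* stands in for nvme_submit_cmds(): submit everything batched so far */
static void flush(int queue, int count)
{
	if (count)
		printf("submit %d request(s) to queue %d\n", count, queue);
}

static void queue_rqs(struct item *list)
{
	int cur_queue = -1, batched = 0;
	struct item *it;

	for (it = list; it; it = it->next) {
		if (batched && it->queue != cur_queue) {
			flush(cur_queue, batched);	/* queue changed: drain the batch */
			batched = 0;
		}
		cur_queue = it->queue;
		batched++;				/* corresponds to rq_list_add() */
	}
	flush(cur_queue, batched);			/* final drain, like the trailing
							 * nvme_submit_cmds() call */
}

int main(void)
{
	struct item c = { 1, NULL }, b = { 0, &c }, a = { 0, &b };

	queue_rqs(&a);		/* prints: 2 request(s) to queue 0, then 1 to queue 1 */
	return 0;
}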
@@ -1966,10 +1962,10 @@ static void nvme_free_host_mem(struct nvme_dev *dev)
kfree(dev->host_mem_desc_bufs); dev->host_mem_desc_bufs = NULL; - dma_free_coherent(dev->dev, - dev->nr_host_mem_descs * sizeof(*dev->host_mem_descs), + dma_free_coherent(dev->dev, dev->host_mem_descs_size, dev->host_mem_descs, dev->host_mem_descs_dma); dev->host_mem_descs = NULL; + dev->host_mem_descs_size = 0; dev->nr_host_mem_descs = 0; }
@@ -1977,7 +1973,7 @@ static int __nvme_alloc_host_mem(struct nvme_dev *dev, u64 preferred, u32 chunk_size) { struct nvme_host_mem_buf_desc *descs; - u32 max_entries, len; + u32 max_entries, len, descs_size; dma_addr_t descs_dma; int i = 0; void **bufs; @@ -1990,8 +1986,9 @@ static int __nvme_alloc_host_mem(struct nvme_dev *dev, u64 preferred, if (dev->ctrl.hmmaxd && dev->ctrl.hmmaxd < max_entries) max_entries = dev->ctrl.hmmaxd;
- descs = dma_alloc_coherent(dev->dev, max_entries * sizeof(*descs), - &descs_dma, GFP_KERNEL); + descs_size = max_entries * sizeof(*descs); + descs = dma_alloc_coherent(dev->dev, descs_size, &descs_dma, + GFP_KERNEL); if (!descs) goto out;
@@ -2020,6 +2017,7 @@ static int __nvme_alloc_host_mem(struct nvme_dev *dev, u64 preferred, dev->host_mem_size = size; dev->host_mem_descs = descs; dev->host_mem_descs_dma = descs_dma; + dev->host_mem_descs_size = descs_size; dev->host_mem_desc_bufs = bufs; return 0;
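The new host_mem_descs_size field records the exact size handed to dma_alloc_coherent(), so both teardown paths (nvme_free_host_mem() above and the out_free_descs error path below) free precisely what was allocated instead of recomputing the size from nr_host_mem_descs, which is later trimmed to the number of descriptors actually populated and so may no longer match max_entries. Condensed (editor's sketch using the names from these hunks):

	descs_size = max_entries * sizeof(*descs);
	descs = dma_alloc_coherent(dev->dev, descs_size, &descs_dma, GFP_KERNEL);
	/* ... on success, remember the size ... */
	dev->host_mem_descs_size = descs_size;
	/* ... and use it on every teardown path ... */
	dma_free_coherent(dev->dev, dev->host_mem_descs_size,
			  dev->host_mem_descs, dev->host_mem_descs_dma);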
@@ -2034,8 +2032,7 @@ static int __nvme_alloc_host_mem(struct nvme_dev *dev, u64 preferred,
kfree(bufs); out_free_descs: - dma_free_coherent(dev->dev, max_entries * sizeof(*descs), descs, - descs_dma); + dma_free_coherent(dev->dev, descs_size, descs, descs_dma); out: dev->host_mem_descs = NULL; return -ENOMEM; diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c index 4d528c10df3a..546e76ac407c 100644 --- a/drivers/of/fdt.c +++ b/drivers/of/fdt.c @@ -457,6 +457,7 @@ int __initdata dt_root_addr_cells; int __initdata dt_root_size_cells;
void *initial_boot_params __ro_after_init; +phys_addr_t initial_boot_params_pa __ro_after_init;
#ifdef CONFIG_OF_EARLY_FLATTREE
@@ -1136,17 +1137,18 @@ static void * __init early_init_dt_alloc_memory_arch(u64 size, u64 align) return ptr; }
-bool __init early_init_dt_verify(void *params) +bool __init early_init_dt_verify(void *dt_virt, phys_addr_t dt_phys) { - if (!params) + if (!dt_virt) return false;
/* check device tree validity */ - if (fdt_check_header(params)) + if (fdt_check_header(dt_virt)) return false;
/* Setup flat device-tree pointer */ - initial_boot_params = params; + initial_boot_params = dt_virt; + initial_boot_params_pa = dt_phys; of_fdt_crc32 = crc32_be(~0, initial_boot_params, fdt_totalsize(initial_boot_params));
@@ -1173,11 +1175,11 @@ void __init early_init_dt_scan_nodes(void) early_init_dt_check_for_usable_mem_range(); }
-bool __init early_init_dt_scan(void *params) +bool __init early_init_dt_scan(void *dt_virt, phys_addr_t dt_phys) { bool status;
- status = early_init_dt_verify(params); + status = early_init_dt_verify(dt_virt, dt_phys); if (!status) return false;
diff --git a/drivers/of/kexec.c b/drivers/of/kexec.c index 9ccde2fd77cb..5b924597a4de 100644 --- a/drivers/of/kexec.c +++ b/drivers/of/kexec.c @@ -301,7 +301,7 @@ void *of_kexec_alloc_and_setup_fdt(const struct kimage *image, }
/* Remove memory reservation for the current device tree. */ - ret = fdt_find_and_del_mem_rsv(fdt, __pa(initial_boot_params), + ret = fdt_find_and_del_mem_rsv(fdt, initial_boot_params_pa, fdt_totalsize(initial_boot_params)); if (ret == -EINVAL) { pr_err("Error removing memory reservation.\n"); diff --git a/drivers/pci/controller/cadence/pci-j721e.c b/drivers/pci/controller/cadence/pci-j721e.c index 284f2e0e4d26..e091c3e55b5c 100644 --- a/drivers/pci/controller/cadence/pci-j721e.c +++ b/drivers/pci/controller/cadence/pci-j721e.c @@ -572,15 +572,14 @@ static int j721e_pcie_probe(struct platform_device *pdev) pcie->refclk = clk;
/* - * The "Power Sequencing and Reset Signal Timings" table of the - * PCI Express Card Electromechanical Specification, Revision - * 5.1, Section 2.9.2, Symbol "T_PERST-CLK", indicates PERST# - * should be deasserted after minimum of 100us once REFCLK is - * stable. The REFCLK to the connector in RC mode is selected - * while enabling the PHY. So deassert PERST# after 100 us. + * Section 2.2 of the PCI Express Card Electromechanical + * Specification (Revision 5.1) mandates that the deassertion + * of the PERST# signal should be delayed by 100 ms (TPVPERL). + * This shall ensure that the power and the reference clock + * are stable. */ if (gpiod) { - fsleep(PCIE_T_PERST_CLK_US); + msleep(PCIE_T_PVPERL_MS); gpiod_set_value_cansleep(gpiod, 1); }
@@ -671,15 +670,14 @@ static int j721e_pcie_resume_noirq(struct device *dev) return ret;
/* - * The "Power Sequencing and Reset Signal Timings" table of the - * PCI Express Card Electromechanical Specification, Revision - * 5.1, Section 2.9.2, Symbol "T_PERST-CLK", indicates PERST# - * should be deasserted after minimum of 100us once REFCLK is - * stable. The REFCLK to the connector in RC mode is selected - * while enabling the PHY. So deassert PERST# after 100 us. + * Section 2.2 of the PCI Express Card Electromechanical + * Specification (Revision 5.1) mandates that the deassertion + * of the PERST# signal should be delayed by 100 ms (TPVPERL). + * This shall ensure that the power and the reference clock + * are stable. */ if (pcie->reset_gpio) { - fsleep(PCIE_T_PERST_CLK_US); + msleep(PCIE_T_PVPERL_MS); gpiod_set_value_cansleep(pcie->reset_gpio, 1); }
diff --git a/drivers/pci/controller/dwc/pcie-qcom-ep.c b/drivers/pci/controller/dwc/pcie-qcom-ep.c index e588fcc54589..b5ca5260f904 100644 --- a/drivers/pci/controller/dwc/pcie-qcom-ep.c +++ b/drivers/pci/controller/dwc/pcie-qcom-ep.c @@ -396,6 +396,10 @@ static int qcom_pcie_perst_deassert(struct dw_pcie *pci) return ret; }
+ /* Perform cleanup that requires refclk */ + pci_epc_deinit_notify(pci->ep.epc); + dw_pcie_ep_cleanup(&pci->ep); + /* Assert WAKE# to RC to indicate device is ready */ gpiod_set_value_cansleep(pcie_ep->wake, 1); usleep_range(WAKE_DELAY_US, WAKE_DELAY_US + 500); @@ -540,8 +544,6 @@ static void qcom_pcie_perst_assert(struct dw_pcie *pci) { struct qcom_pcie_ep *pcie_ep = to_pcie_ep(pci);
- pci_epc_deinit_notify(pci->ep.epc); - dw_pcie_ep_cleanup(&pci->ep); qcom_pcie_disable_resources(pcie_ep); pcie_ep->link_status = QCOM_PCIE_EP_LINK_DISABLED; } diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c index ef44a82be058..2b33d03ed054 100644 --- a/drivers/pci/controller/dwc/pcie-qcom.c +++ b/drivers/pci/controller/dwc/pcie-qcom.c @@ -133,6 +133,7 @@
/* PARF_INT_ALL_{STATUS/CLEAR/MASK} register fields */ #define PARF_INT_ALL_LINK_UP BIT(13) +#define PARF_INT_MSI_DEV_0_7 GENMASK(30, 23)
/* PARF_NO_SNOOP_OVERIDE register fields */ #define WR_NO_SNOOP_OVERIDE_EN BIT(1) @@ -1716,7 +1717,8 @@ static int qcom_pcie_probe(struct platform_device *pdev) goto err_host_deinit; }
- writel_relaxed(PARF_INT_ALL_LINK_UP, pcie->parf + PARF_INT_ALL_MASK); + writel_relaxed(PARF_INT_ALL_LINK_UP | PARF_INT_MSI_DEV_0_7, + pcie->parf + PARF_INT_ALL_MASK); }
qcom_pcie_icc_opp_update(pcie); diff --git a/drivers/pci/controller/dwc/pcie-tegra194.c b/drivers/pci/controller/dwc/pcie-tegra194.c index c1394f2ab63f..ced3b7e7bdad 100644 --- a/drivers/pci/controller/dwc/pcie-tegra194.c +++ b/drivers/pci/controller/dwc/pcie-tegra194.c @@ -1704,9 +1704,6 @@ static void pex_ep_event_pex_rst_assert(struct tegra_pcie_dw *pcie) if (ret) dev_err(pcie->dev, "Failed to go Detect state: %d\n", ret);
- pci_epc_deinit_notify(pcie->pci.ep.epc); - dw_pcie_ep_cleanup(&pcie->pci.ep); - reset_control_assert(pcie->core_rst);
tegra_pcie_disable_phy(pcie); @@ -1785,6 +1782,10 @@ static void pex_ep_event_pex_rst_deassert(struct tegra_pcie_dw *pcie) goto fail_phy; }
+ /* Perform cleanup that requires refclk */ + pci_epc_deinit_notify(pcie->pci.ep.epc); + dw_pcie_ep_cleanup(&pcie->pci.ep); + /* Clear any stale interrupt statuses */ appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L0); appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_0_0); diff --git a/drivers/pci/endpoint/functions/pci-epf-mhi.c b/drivers/pci/endpoint/functions/pci-epf-mhi.c index 7d070b1def11..54286a40bdfb 100644 --- a/drivers/pci/endpoint/functions/pci-epf-mhi.c +++ b/drivers/pci/endpoint/functions/pci-epf-mhi.c @@ -867,12 +867,18 @@ static int pci_epf_mhi_bind(struct pci_epf *epf) { struct pci_epf_mhi *epf_mhi = epf_get_drvdata(epf); struct pci_epc *epc = epf->epc; + struct device *dev = &epf->dev; struct platform_device *pdev = to_platform_device(epc->dev.parent); struct resource *res; int ret;
/* Get MMIO base address from Endpoint controller */ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "mmio"); + if (!res) { + dev_err(dev, "Failed to get \"mmio\" resource\n"); + return -ENODEV; + } + epf_mhi->mmio_phys = res->start; epf_mhi->mmio_size = resource_size(res);
diff --git a/drivers/pci/hotplug/cpqphp_pci.c b/drivers/pci/hotplug/cpqphp_pci.c index 718bc6cf12cb..974c7db3265b 100644 --- a/drivers/pci/hotplug/cpqphp_pci.c +++ b/drivers/pci/hotplug/cpqphp_pci.c @@ -135,11 +135,13 @@ int cpqhp_unconfigure_device(struct pci_func *func) static int PCI_RefinedAccessConfig(struct pci_bus *bus, unsigned int devfn, u8 offset, u32 *value) { u32 vendID = 0; + int ret;
- if (pci_bus_read_config_dword(bus, devfn, PCI_VENDOR_ID, &vendID) == -1) - return -1; + ret = pci_bus_read_config_dword(bus, devfn, PCI_VENDOR_ID, &vendID); + if (ret != PCIBIOS_SUCCESSFUL) + return PCIBIOS_DEVICE_NOT_FOUND; if (PCI_POSSIBLE_ERROR(vendID)) - return -1; + return PCIBIOS_DEVICE_NOT_FOUND; return pci_bus_read_config_dword(bus, devfn, offset, value); }
@@ -202,13 +204,15 @@ static int PCI_ScanBusForNonBridge(struct controller *ctrl, u8 bus_num, u8 *dev_ { u16 tdevice; u32 work; + int ret; u8 tbus;
ctrl->pci_bus->number = bus_num;
for (tdevice = 0; tdevice < 0xFF; tdevice++) { /* Scan for access first */ - if (PCI_RefinedAccessConfig(ctrl->pci_bus, tdevice, 0x08, &work) == -1) + ret = PCI_RefinedAccessConfig(ctrl->pci_bus, tdevice, 0x08, &work); + if (ret) continue; dbg("Looking for nonbridge bus_num %d dev_num %d\n", bus_num, tdevice); /* Yep we got one. Not a bridge ? */ @@ -220,7 +224,8 @@ static int PCI_ScanBusForNonBridge(struct controller *ctrl, u8 bus_num, u8 *dev_ } for (tdevice = 0; tdevice < 0xFF; tdevice++) { /* Scan for access first */ - if (PCI_RefinedAccessConfig(ctrl->pci_bus, tdevice, 0x08, &work) == -1) + ret = PCI_RefinedAccessConfig(ctrl->pci_bus, tdevice, 0x08, &work); + if (ret) continue; dbg("Looking for bridge bus_num %d dev_num %d\n", bus_num, tdevice); /* Yep we got one. bridge ? */ diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c index 225a6cd2e9ca..08f170fd3efb 100644 --- a/drivers/pci/pci.c +++ b/drivers/pci/pci.c @@ -5248,7 +5248,7 @@ static ssize_t reset_method_store(struct device *dev, const char *buf, size_t count) { struct pci_dev *pdev = to_pci_dev(dev); - char *options, *name; + char *options, *tmp_options, *name; int m, n; u8 reset_methods[PCI_NUM_RESET_METHODS] = { 0 };
@@ -5268,7 +5268,8 @@ static ssize_t reset_method_store(struct device *dev, return -ENOMEM;
n = 0; - while ((name = strsep(&options, " ")) != NULL) { + tmp_options = options; + while ((name = strsep(&tmp_options, " ")) != NULL) { if (sysfs_streq(name, "")) continue;
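The reset_method_store() change keeps the pointer returned by the allocation in options and lets strsep() advance a separate cursor, tmp_options, because strsep() modifies the pointer it is handed; freeing the advanced pointer afterwards would not be the address the allocator returned. The same rule in standalone userspace form (editor's illustration):

#define _DEFAULT_SOURCE		/* for strsep() on glibc */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	char *options = strdup("flr bus pm");	/* what must eventually be freed */
	char *cursor = options;			/* what strsep() is allowed to move */
	char *name;

	while ((name = strsep(&cursor, " ")) != NULL)
		printf("method: %s\n", name);

	free(options);				/* still the pointer strdup() returned */
	return 0;
}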
diff --git a/drivers/pci/slot.c b/drivers/pci/slot.c index 0f87cade10f7..ed645c7a4e4b 100644 --- a/drivers/pci/slot.c +++ b/drivers/pci/slot.c @@ -79,6 +79,7 @@ static void pci_slot_release(struct kobject *kobj) up_read(&pci_bus_sem);
list_del(&slot->list); + pci_bus_put(slot->bus);
kfree(slot); } @@ -261,7 +262,7 @@ struct pci_slot *pci_create_slot(struct pci_bus *parent, int slot_nr, goto err; }
- slot->bus = parent; + slot->bus = pci_bus_get(parent); slot->number = slot_nr;
slot->kobj.kset = pci_slots_kset; @@ -269,6 +270,7 @@ struct pci_slot *pci_create_slot(struct pci_bus *parent, int slot_nr, slot_name = make_slot_name(name); if (!slot_name) { err = -ENOMEM; + pci_bus_put(slot->bus); kfree(slot); goto err; } diff --git a/drivers/perf/arm-cmn.c b/drivers/perf/arm-cmn.c index 397a46410f7c..30506c43776f 100644 --- a/drivers/perf/arm-cmn.c +++ b/drivers/perf/arm-cmn.c @@ -2178,8 +2178,6 @@ static int arm_cmn_init_dtcs(struct arm_cmn *cmn) continue;
xp = arm_cmn_node_to_xp(cmn, dn); - dn->portid_bits = xp->portid_bits; - dn->deviceid_bits = xp->deviceid_bits; dn->dtc = xp->dtc; dn->dtm = xp->dtm; if (cmn->multi_dtm) @@ -2420,6 +2418,8 @@ static int arm_cmn_discover(struct arm_cmn *cmn, unsigned int rgn_offset) }
arm_cmn_init_node_info(cmn, reg & CMN_CHILD_NODE_ADDR, dn); + dn->portid_bits = xp->portid_bits; + dn->deviceid_bits = xp->deviceid_bits;
switch (dn->type) { case CMN_TYPE_DTC: diff --git a/drivers/perf/arm_smmuv3_pmu.c b/drivers/perf/arm_smmuv3_pmu.c index d5fa92ba8373..dabdb9f7bb82 100644 --- a/drivers/perf/arm_smmuv3_pmu.c +++ b/drivers/perf/arm_smmuv3_pmu.c @@ -431,6 +431,17 @@ static int smmu_pmu_event_init(struct perf_event *event) return -EINVAL; }
+ /* + * Ensure all events are on the same cpu so all events are in the + * same cpu context, to avoid races on pmu_enable etc. + */ + event->cpu = smmu_pmu->on_cpu; + + hwc->idx = -1; + + if (event->group_leader == event) + return 0; + for_each_sibling_event(sibling, event->group_leader) { if (is_software_event(sibling)) continue; @@ -442,14 +453,6 @@ static int smmu_pmu_event_init(struct perf_event *event) return -EINVAL; }
- hwc->idx = -1; - - /* - * Ensure all events are on the same cpu so all events are in the - * same cpu context, to avoid races on pmu_enable etc. - */ - event->cpu = smmu_pmu->on_cpu; - return 0; }
diff --git a/drivers/phy/phy-airoha-pcie-regs.h b/drivers/phy/phy-airoha-pcie-regs.h index bb1f679ca1df..b938a7b468fe 100644 --- a/drivers/phy/phy-airoha-pcie-regs.h +++ b/drivers/phy/phy-airoha-pcie-regs.h @@ -197,9 +197,9 @@ #define CSR_2L_PXP_TX1_MULTLANE_EN BIT(0)
#define REG_CSR_2L_RX0_REV0 0x00fc -#define CSR_2L_PXP_VOS_PNINV GENMASK(3, 2) -#define CSR_2L_PXP_FE_GAIN_NORMAL_MODE GENMASK(6, 4) -#define CSR_2L_PXP_FE_GAIN_TRAIN_MODE GENMASK(10, 8) +#define CSR_2L_PXP_VOS_PNINV GENMASK(19, 18) +#define CSR_2L_PXP_FE_GAIN_NORMAL_MODE GENMASK(22, 20) +#define CSR_2L_PXP_FE_GAIN_TRAIN_MODE GENMASK(26, 24)
#define REG_CSR_2L_RX0_PHYCK_DIV 0x0100 #define CSR_2L_PXP_RX0_PHYCK_SEL GENMASK(9, 8) diff --git a/drivers/phy/phy-airoha-pcie.c b/drivers/phy/phy-airoha-pcie.c index 1e410eb41058..56e9ade8a9fd 100644 --- a/drivers/phy/phy-airoha-pcie.c +++ b/drivers/phy/phy-airoha-pcie.c @@ -459,7 +459,7 @@ static void airoha_pcie_phy_init_clk_out(struct airoha_pcie_phy *pcie_phy) airoha_phy_csr_2l_clear_bits(pcie_phy, REG_CSR_2L_CLKTX1_OFFSET, CSR_2L_PXP_CLKTX1_SR); airoha_phy_csr_2l_update_field(pcie_phy, REG_CSR_2L_PLL_CMN_RESERVE0, - CSR_2L_PXP_PLL_RESERVE_MASK, 0xdd); + CSR_2L_PXP_PLL_RESERVE_MASK, 0xd0d); }
static void airoha_pcie_phy_init_csr_2l(struct airoha_pcie_phy *pcie_phy) @@ -471,9 +471,9 @@ static void airoha_pcie_phy_init_csr_2l(struct airoha_pcie_phy *pcie_phy) PCIE_SW_XFI_RXPCS_RST | PCIE_SW_REF_RST | PCIE_SW_RX_RST); airoha_phy_pma0_set_bits(pcie_phy, REG_PCIE_PMA_TX_RESET, - PCIE_TX_TOP_RST | REG_PCIE_PMA_TX_RESET); + PCIE_TX_TOP_RST | PCIE_TX_CAL_RST); airoha_phy_pma1_set_bits(pcie_phy, REG_PCIE_PMA_TX_RESET, - PCIE_TX_TOP_RST | REG_PCIE_PMA_TX_RESET); + PCIE_TX_TOP_RST | PCIE_TX_CAL_RST); }
static void airoha_pcie_phy_init_rx(struct airoha_pcie_phy *pcie_phy) @@ -802,7 +802,7 @@ static void airoha_pcie_phy_init_ssc_jcpll(struct airoha_pcie_phy *pcie_phy) airoha_phy_csr_2l_set_bits(pcie_phy, REG_CSR_2L_JCPLL_SDM_IFM, CSR_2L_PXP_JCPLL_SDM_IFM); airoha_phy_csr_2l_set_bits(pcie_phy, REG_CSR_2L_JCPLL_SDM_HREN, - REG_CSR_2L_JCPLL_SDM_HREN); + CSR_2L_PXP_JCPLL_SDM_HREN); airoha_phy_csr_2l_clear_bits(pcie_phy, REG_CSR_2L_JCPLL_RST_DLY, CSR_2L_PXP_JCPLL_SDM_DI_EN); airoha_phy_csr_2l_set_bits(pcie_phy, REG_CSR_2L_JCPLL_SSC, diff --git a/drivers/phy/realtek/phy-rtk-usb2.c b/drivers/phy/realtek/phy-rtk-usb2.c index e3ad7cea5109..e8ca2ec5998f 100644 --- a/drivers/phy/realtek/phy-rtk-usb2.c +++ b/drivers/phy/realtek/phy-rtk-usb2.c @@ -1023,6 +1023,8 @@ static int rtk_usb2phy_probe(struct platform_device *pdev)
rtk_phy->dev = &pdev->dev; rtk_phy->phy_cfg = devm_kzalloc(dev, sizeof(*phy_cfg), GFP_KERNEL); + if (!rtk_phy->phy_cfg) + return -ENOMEM;
memcpy(rtk_phy->phy_cfg, phy_cfg, sizeof(*phy_cfg));
diff --git a/drivers/phy/realtek/phy-rtk-usb3.c b/drivers/phy/realtek/phy-rtk-usb3.c index dfcf4b921bba..96af483e5444 100644 --- a/drivers/phy/realtek/phy-rtk-usb3.c +++ b/drivers/phy/realtek/phy-rtk-usb3.c @@ -577,6 +577,8 @@ static int rtk_usb3phy_probe(struct platform_device *pdev)
rtk_phy->dev = &pdev->dev; rtk_phy->phy_cfg = devm_kzalloc(dev, sizeof(*phy_cfg), GFP_KERNEL); + if (!rtk_phy->phy_cfg) + return -ENOMEM;
memcpy(rtk_phy->phy_cfg, phy_cfg, sizeof(*phy_cfg));
diff --git a/drivers/pinctrl/pinctrl-k210.c b/drivers/pinctrl/pinctrl-k210.c index 0f6b55fec31d..a71805997b02 100644 --- a/drivers/pinctrl/pinctrl-k210.c +++ b/drivers/pinctrl/pinctrl-k210.c @@ -183,7 +183,7 @@ static const u32 k210_pinconf_mode_id_to_mode[] = { [K210_PC_DEFAULT_INT13] = K210_PC_MODE_IN | K210_PC_PU, };
-#undef DEFAULT +#undef K210_PC_DEFAULT
/* * Pin functions configuration information. diff --git a/drivers/pinctrl/pinctrl-zynqmp.c b/drivers/pinctrl/pinctrl-zynqmp.c index 3c6d56fdb8c9..93454d2a26bc 100644 --- a/drivers/pinctrl/pinctrl-zynqmp.c +++ b/drivers/pinctrl/pinctrl-zynqmp.c @@ -49,7 +49,6 @@ * @name: Name of the pin mux function * @groups: List of pin groups for this function * @ngroups: Number of entries in @groups - * @node: Firmware node matching with the function * * This structure holds information about pin control function * and function group names supporting that function. diff --git a/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c b/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c index d2dd66769aa8..a0eb4e01b3a7 100644 --- a/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c +++ b/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c @@ -667,7 +667,7 @@ static void pmic_gpio_config_dbg_show(struct pinctrl_dev *pctldev, "push-pull", "open-drain", "open-source" }; static const char *const strengths[] = { - "no", "high", "medium", "low" + "no", "low", "medium", "high" };
pad = pctldev->desc->pins[pin].drv_data; diff --git a/drivers/pinctrl/renesas/Kconfig b/drivers/pinctrl/renesas/Kconfig index 14bd55d64731..7f3f41c7fe54 100644 --- a/drivers/pinctrl/renesas/Kconfig +++ b/drivers/pinctrl/renesas/Kconfig @@ -41,6 +41,7 @@ config PINCTRL_RENESAS select PINCTRL_PFC_R8A779H0 if ARCH_R8A779H0 select PINCTRL_RZG2L if ARCH_RZG2L select PINCTRL_RZV2M if ARCH_R9A09G011 + select PINCTRL_RZG2L if ARCH_R9A09G057 select PINCTRL_PFC_SH7203 if CPU_SUBTYPE_SH7203 select PINCTRL_PFC_SH7264 if CPU_SUBTYPE_SH7264 select PINCTRL_PFC_SH7269 if CPU_SUBTYPE_SH7269 diff --git a/drivers/pinctrl/renesas/pinctrl-rzg2l.c b/drivers/pinctrl/renesas/pinctrl-rzg2l.c index 5a403915fed2..3a81837b5e62 100644 --- a/drivers/pinctrl/renesas/pinctrl-rzg2l.c +++ b/drivers/pinctrl/renesas/pinctrl-rzg2l.c @@ -2710,7 +2710,7 @@ static int rzg2l_pinctrl_register(struct rzg2l_pinctrl *pctrl)
ret = pinctrl_enable(pctrl->pctl); if (ret) - dev_err_probe(pctrl->dev, ret, "pinctrl enable failed\n"); + return dev_err_probe(pctrl->dev, ret, "pinctrl enable failed\n");
ret = rzg2l_gpio_register(pctrl); if (ret) diff --git a/drivers/platform/chrome/cros_ec_typec.c b/drivers/platform/chrome/cros_ec_typec.c index c7781aea0b88..f1324466efac 100644 --- a/drivers/platform/chrome/cros_ec_typec.c +++ b/drivers/platform/chrome/cros_ec_typec.c @@ -409,6 +409,7 @@ static int cros_typec_init_ports(struct cros_typec_data *typec) return 0;
unregister_ports: + fwnode_handle_put(fwnode); cros_unregister_ports(typec); return ret; } diff --git a/drivers/platform/x86/asus-wmi.c b/drivers/platform/x86/asus-wmi.c index abdca3f05c5c..89f5f44857d5 100644 --- a/drivers/platform/x86/asus-wmi.c +++ b/drivers/platform/x86/asus-wmi.c @@ -3696,10 +3696,28 @@ static int asus_wmi_custom_fan_curve_init(struct asus_wmi *asus) /* Throttle thermal policy ****************************************************/ static int throttle_thermal_policy_write(struct asus_wmi *asus) { - u8 value = asus->throttle_thermal_policy_mode; u32 retval; + u8 value; int err;
+ if (asus->throttle_thermal_policy_dev == ASUS_WMI_DEVID_THROTTLE_THERMAL_POLICY_VIVO) { + switch (asus->throttle_thermal_policy_mode) { + case ASUS_THROTTLE_THERMAL_POLICY_DEFAULT: + value = ASUS_THROTTLE_THERMAL_POLICY_DEFAULT_VIVO; + break; + case ASUS_THROTTLE_THERMAL_POLICY_OVERBOOST: + value = ASUS_THROTTLE_THERMAL_POLICY_OVERBOOST_VIVO; + break; + case ASUS_THROTTLE_THERMAL_POLICY_SILENT: + value = ASUS_THROTTLE_THERMAL_POLICY_SILENT_VIVO; + break; + default: + return -EINVAL; + } + } else { + value = asus->throttle_thermal_policy_mode; + } + err = asus_wmi_set_devstate(asus->throttle_thermal_policy_dev, value, &retval);
@@ -3804,46 +3822,6 @@ static ssize_t throttle_thermal_policy_store(struct device *dev, static DEVICE_ATTR_RW(throttle_thermal_policy);
/* Platform profile ***********************************************************/ -static int asus_wmi_platform_profile_to_vivo(struct asus_wmi *asus, int mode) -{ - bool vivo; - - vivo = asus->throttle_thermal_policy_dev == ASUS_WMI_DEVID_THROTTLE_THERMAL_POLICY_VIVO; - - if (vivo) { - switch (mode) { - case ASUS_THROTTLE_THERMAL_POLICY_DEFAULT: - return ASUS_THROTTLE_THERMAL_POLICY_DEFAULT_VIVO; - case ASUS_THROTTLE_THERMAL_POLICY_OVERBOOST: - return ASUS_THROTTLE_THERMAL_POLICY_OVERBOOST_VIVO; - case ASUS_THROTTLE_THERMAL_POLICY_SILENT: - return ASUS_THROTTLE_THERMAL_POLICY_SILENT_VIVO; - } - } - - return mode; -} - -static int asus_wmi_platform_profile_mode_from_vivo(struct asus_wmi *asus, int mode) -{ - bool vivo; - - vivo = asus->throttle_thermal_policy_dev == ASUS_WMI_DEVID_THROTTLE_THERMAL_POLICY_VIVO; - - if (vivo) { - switch (mode) { - case ASUS_THROTTLE_THERMAL_POLICY_DEFAULT_VIVO: - return ASUS_THROTTLE_THERMAL_POLICY_DEFAULT; - case ASUS_THROTTLE_THERMAL_POLICY_OVERBOOST_VIVO: - return ASUS_THROTTLE_THERMAL_POLICY_OVERBOOST; - case ASUS_THROTTLE_THERMAL_POLICY_SILENT_VIVO: - return ASUS_THROTTLE_THERMAL_POLICY_SILENT; - } - } - - return mode; -} - static int asus_wmi_platform_profile_get(struct platform_profile_handler *pprof, enum platform_profile_option *profile) { @@ -3853,7 +3831,7 @@ static int asus_wmi_platform_profile_get(struct platform_profile_handler *pprof, asus = container_of(pprof, struct asus_wmi, platform_profile_handler); tp = asus->throttle_thermal_policy_mode;
- switch (asus_wmi_platform_profile_mode_from_vivo(asus, tp)) { + switch (tp) { case ASUS_THROTTLE_THERMAL_POLICY_DEFAULT: *profile = PLATFORM_PROFILE_BALANCED; break; @@ -3892,7 +3870,7 @@ static int asus_wmi_platform_profile_set(struct platform_profile_handler *pprof, return -EOPNOTSUPP; }
- asus->throttle_thermal_policy_mode = asus_wmi_platform_profile_to_vivo(asus, tp); + asus->throttle_thermal_policy_mode = tp; return throttle_thermal_policy_write(asus); }
diff --git a/drivers/platform/x86/intel/bxtwc_tmu.c b/drivers/platform/x86/intel/bxtwc_tmu.c index d0e2a3c293b0..9ac801b929b9 100644 --- a/drivers/platform/x86/intel/bxtwc_tmu.c +++ b/drivers/platform/x86/intel/bxtwc_tmu.c @@ -48,9 +48,8 @@ static irqreturn_t bxt_wcove_tmu_irq_handler(int irq, void *data) static int bxt_wcove_tmu_probe(struct platform_device *pdev) { struct intel_soc_pmic *pmic = dev_get_drvdata(pdev->dev.parent); - struct regmap_irq_chip_data *regmap_irq_chip; struct wcove_tmu *wctmu; - int ret, virq, irq; + int ret;
wctmu = devm_kzalloc(&pdev->dev, sizeof(*wctmu), GFP_KERNEL); if (!wctmu) @@ -59,27 +58,18 @@ static int bxt_wcove_tmu_probe(struct platform_device *pdev) wctmu->dev = &pdev->dev; wctmu->regmap = pmic->regmap;
- irq = platform_get_irq(pdev, 0); - if (irq < 0) - return irq; + wctmu->irq = platform_get_irq(pdev, 0); + if (wctmu->irq < 0) + return wctmu->irq;
- regmap_irq_chip = pmic->irq_chip_data_tmu; - virq = regmap_irq_get_virq(regmap_irq_chip, irq); - if (virq < 0) { - dev_err(&pdev->dev, - "failed to get virtual interrupt=%d\n", irq); - return virq; - } - - ret = devm_request_threaded_irq(&pdev->dev, virq, + ret = devm_request_threaded_irq(&pdev->dev, wctmu->irq, NULL, bxt_wcove_tmu_irq_handler, IRQF_ONESHOT, "bxt_wcove_tmu", wctmu); if (ret) { dev_err(&pdev->dev, "request irq failed: %d,virq: %d\n", - ret, virq); + ret, wctmu->irq); return ret; } - wctmu->irq = virq;
     /* Unmask TMU second level Wake & System alarm */
     regmap_update_bits(wctmu->regmap, BXTWC_MTMUIRQ_REG,
diff --git a/drivers/platform/x86/intel/pmt/class.c b/drivers/platform/x86/intel/pmt/class.c
index c04bb7f97a4d..c3ca2ac91b05 100644
--- a/drivers/platform/x86/intel/pmt/class.c
+++ b/drivers/platform/x86/intel/pmt/class.c
@@ -59,10 +59,12 @@ pmt_memcpy64_fromio(void *to, const u64 __iomem *from, size_t count)
 }

 int pmt_telem_read_mmio(struct pci_dev *pdev, struct pmt_callbacks *cb, u32 guid, void *buf,
-            void __iomem *addr, u32 count)
+            void __iomem *addr, loff_t off, u32 count)
 {
     if (cb && cb->read_telem)
-        return cb->read_telem(pdev, guid, buf, count);
+        return cb->read_telem(pdev, guid, buf, off, count);
+
+    addr += off;

     if (guid == GUID_SPR_PUNIT)
         /* PUNIT on SPR only supports aligned 64-bit read */
@@ -96,7 +98,7 @@ intel_pmt_read(struct file *filp, struct kobject *kobj,
         count = entry->size - off;

     count = pmt_telem_read_mmio(entry->ep->pcidev, entry->cb, entry->header.guid, buf,
-                    entry->base + off, count);
+                    entry->base, off, count);

     return count;
 }
diff --git a/drivers/platform/x86/intel/pmt/class.h b/drivers/platform/x86/intel/pmt/class.h
index a267ac964423..b2006d57779d 100644
--- a/drivers/platform/x86/intel/pmt/class.h
+++ b/drivers/platform/x86/intel/pmt/class.h
@@ -62,7 +62,7 @@ struct intel_pmt_namespace {
 };

 int pmt_telem_read_mmio(struct pci_dev *pdev, struct pmt_callbacks *cb, u32 guid, void *buf,
-            void __iomem *addr, u32 count);
+            void __iomem *addr, loff_t off, u32 count);
 bool intel_pmt_is_early_client_hw(struct device *dev);
 int intel_pmt_dev_create(struct intel_pmt_entry *entry,
              struct intel_pmt_namespace *ns,
diff --git a/drivers/platform/x86/intel/pmt/telemetry.c b/drivers/platform/x86/intel/pmt/telemetry.c
index c9feac859e57..0cea617c6c2e 100644
--- a/drivers/platform/x86/intel/pmt/telemetry.c
+++ b/drivers/platform/x86/intel/pmt/telemetry.c
@@ -219,7 +219,7 @@ int pmt_telem_read(struct telem_endpoint *ep, u32 id, u64 *data, u32 count)
     if (offset + NUM_BYTES_QWORD(count) > size)
         return -EINVAL;

-    pmt_telem_read_mmio(ep->pcidev, ep->cb, ep->header.guid, data, ep->base + offset,
+    pmt_telem_read_mmio(ep->pcidev, ep->cb, ep->header.guid, data, ep->base, offset,
                 NUM_BYTES_QWORD(count));
     return ep->present ? 0 : -EPIPE;
diff --git a/drivers/platform/x86/panasonic-laptop.c b/drivers/platform/x86/panasonic-laptop.c
index 2bf94d0ab324..22ca70eb8227 100644
--- a/drivers/platform/x86/panasonic-laptop.c
+++ b/drivers/platform/x86/panasonic-laptop.c
@@ -614,8 +614,7 @@ static ssize_t eco_mode_show(struct device *dev, struct device_attribute *attr,
         result = 1;
         break;
     default:
-        result = -EIO;
-        break;
+        return -EIO;
     }
     return sysfs_emit(buf, "%u\n", result);
 }
@@ -761,7 +760,12 @@ static ssize_t current_brightness_store(struct device *dev, struct device_attrib
 static ssize_t cdpower_show(struct device *dev, struct device_attribute *attr,
                 char *buf)
 {
-    return sysfs_emit(buf, "%d\n", get_optd_power_state());
+    int state = get_optd_power_state();
+
+    if (state < 0)
+        return state;
+
+    return sysfs_emit(buf, "%d\n", state);
 }
 static ssize_t cdpower_store(struct device *dev, struct device_attribute *attr,
diff --git a/drivers/pmdomain/ti/ti_sci_pm_domains.c b/drivers/pmdomain/ti/ti_sci_pm_domains.c
index 1510d5ddae3d..0df3eb7ff09a 100644
--- a/drivers/pmdomain/ti/ti_sci_pm_domains.c
+++ b/drivers/pmdomain/ti/ti_sci_pm_domains.c
@@ -161,6 +161,7 @@ static int ti_sci_pm_domain_probe(struct platform_device *pdev)
             break;

         if (args.args_count >= 1 && args.np == dev->of_node) {
+            of_node_put(args.np);
             if (args.args[0] > max_id) {
                 max_id = args.args[0];
             } else {
@@ -192,7 +193,10 @@ static int ti_sci_pm_domain_probe(struct platform_device *pdev)
             pm_genpd_init(&pd->pd, NULL, true);

             list_add(&pd->node, &pd_provider->pd_list);
+        } else {
+            of_node_put(args.np);
         }
+
         index++;
     }
 }
diff --git a/drivers/power/reset/Kconfig b/drivers/power/reset/Kconfig
index 389d5a193e5d..f5fc33a8bf44 100644
--- a/drivers/power/reset/Kconfig
+++ b/drivers/power/reset/Kconfig
@@ -79,6 +79,7 @@ config POWER_RESET_EP93XX
 	bool "Cirrus EP93XX reset driver" if COMPILE_TEST
 	depends on MFD_SYSCON
 	default ARCH_EP93XX
+	select AUXILIARY_BUS
 	help
 	  This driver provides restart support for Cirrus EP93XX SoC.
diff --git a/drivers/power/sequencing/Kconfig b/drivers/power/sequencing/Kconfig
index c9f1cdb66524..ddcc42a98492 100644
--- a/drivers/power/sequencing/Kconfig
+++ b/drivers/power/sequencing/Kconfig
@@ -16,6 +16,7 @@ if POWER_SEQUENCING
 config POWER_SEQUENCING_QCOM_WCN
 	tristate "Qualcomm WCN family PMU driver"
 	default m if ARCH_QCOM
+	depends on OF
 	help
 	  Say Y here to enable the power sequencing driver for Qualcomm
 	  WCN Bluetooth/WLAN chipsets.
diff --git a/drivers/power/supply/bq27xxx_battery.c b/drivers/power/supply/bq27xxx_battery.c
index 750fda543308..51fb88aca0f9 100644
--- a/drivers/power/supply/bq27xxx_battery.c
+++ b/drivers/power/supply/bq27xxx_battery.c
@@ -449,9 +449,29 @@ static u8
         [BQ27XXX_REG_AP] = 0x18,
         BQ27XXX_DM_REG_ROWS,
     },
+    bq27426_regs[BQ27XXX_REG_MAX] = {
+        [BQ27XXX_REG_CTRL] = 0x00,
+        [BQ27XXX_REG_TEMP] = 0x02,
+        [BQ27XXX_REG_INT_TEMP] = 0x1e,
+        [BQ27XXX_REG_VOLT] = 0x04,
+        [BQ27XXX_REG_AI] = 0x10,
+        [BQ27XXX_REG_FLAGS] = 0x06,
+        [BQ27XXX_REG_TTE] = INVALID_REG_ADDR,
+        [BQ27XXX_REG_TTF] = INVALID_REG_ADDR,
+        [BQ27XXX_REG_TTES] = INVALID_REG_ADDR,
+        [BQ27XXX_REG_TTECP] = INVALID_REG_ADDR,
+        [BQ27XXX_REG_NAC] = 0x08,
+        [BQ27XXX_REG_RC] = 0x0c,
+        [BQ27XXX_REG_FCC] = 0x0e,
+        [BQ27XXX_REG_CYCT] = INVALID_REG_ADDR,
+        [BQ27XXX_REG_AE] = INVALID_REG_ADDR,
+        [BQ27XXX_REG_SOC] = 0x1c,
+        [BQ27XXX_REG_DCAP] = INVALID_REG_ADDR,
+        [BQ27XXX_REG_AP] = 0x18,
+        BQ27XXX_DM_REG_ROWS,
+    },
 #define bq27411_regs bq27421_regs
 #define bq27425_regs bq27421_regs
-#define bq27426_regs bq27421_regs
 #define bq27441_regs bq27421_regs
 #define bq27621_regs bq27421_regs
     bq27z561_regs[BQ27XXX_REG_MAX] = {
@@ -769,10 +789,23 @@ static enum power_supply_property bq27421_props[] = {
 };
 #define bq27411_props bq27421_props
 #define bq27425_props bq27421_props
-#define bq27426_props bq27421_props
 #define bq27441_props bq27421_props
 #define bq27621_props bq27421_props

+static enum power_supply_property bq27426_props[] = {
+    POWER_SUPPLY_PROP_STATUS,
+    POWER_SUPPLY_PROP_PRESENT,
+    POWER_SUPPLY_PROP_VOLTAGE_NOW,
+    POWER_SUPPLY_PROP_CURRENT_NOW,
+    POWER_SUPPLY_PROP_CAPACITY,
+    POWER_SUPPLY_PROP_CAPACITY_LEVEL,
+    POWER_SUPPLY_PROP_TEMP,
+    POWER_SUPPLY_PROP_TECHNOLOGY,
+    POWER_SUPPLY_PROP_CHARGE_FULL,
+    POWER_SUPPLY_PROP_CHARGE_NOW,
+    POWER_SUPPLY_PROP_MANUFACTURER,
+};
+
 static enum power_supply_property bq27z561_props[] = {
     POWER_SUPPLY_PROP_STATUS,
     POWER_SUPPLY_PROP_PRESENT,
diff --git a/drivers/power/supply/power_supply_core.c b/drivers/power/supply/power_supply_core.c
index 49534458a9f7..73cc9c236e83 100644
--- a/drivers/power/supply/power_supply_core.c
+++ b/drivers/power/supply/power_supply_core.c
@@ -484,8 +484,6 @@ EXPORT_SYMBOL_GPL(power_supply_get_by_name);
  */
 void power_supply_put(struct power_supply *psy)
 {
-    might_sleep();
-
     atomic_dec(&psy->use_cnt);
     put_device(&psy->dev);
 }
diff --git a/drivers/power/supply/rt9471.c b/drivers/power/supply/rt9471.c
index c04af1ee89c6..67b86ac91a21 100644
--- a/drivers/power/supply/rt9471.c
+++ b/drivers/power/supply/rt9471.c
@@ -139,6 +139,19 @@ enum {
     RT9471_PORTSTAT_DCP,
 };
+enum {
+    RT9471_ICSTAT_SLEEP = 0,
+    RT9471_ICSTAT_VBUSRDY,
+    RT9471_ICSTAT_TRICKLECHG,
+    RT9471_ICSTAT_PRECHG,
+    RT9471_ICSTAT_FASTCHG,
+    RT9471_ICSTAT_IEOC,
+    RT9471_ICSTAT_BGCHG,
+    RT9471_ICSTAT_CHGDONE,
+    RT9471_ICSTAT_CHGFAULT,
+    RT9471_ICSTAT_OTG = 15,
+};
+
 struct rt9471_chip {
     struct device *dev;
     struct regmap *regmap;
@@ -153,8 +166,8 @@ struct rt9471_chip {
 };

 static const struct reg_field rt9471_reg_fields[F_MAX_FIELDS] = {
-    [F_WDT]        = REG_FIELD(RT9471_REG_TOP, 0, 0),
-    [F_WDT_RST]    = REG_FIELD(RT9471_REG_TOP, 1, 1),
+    [F_WDT]        = REG_FIELD(RT9471_REG_TOP, 0, 1),
+    [F_WDT_RST]    = REG_FIELD(RT9471_REG_TOP, 2, 2),
     [F_CHG_EN]     = REG_FIELD(RT9471_REG_FUNC, 0, 0),
     [F_HZ]         = REG_FIELD(RT9471_REG_FUNC, 5, 5),
     [F_BATFET_DIS] = REG_FIELD(RT9471_REG_FUNC, 7, 7),
@@ -255,31 +268,32 @@ static int rt9471_get_ieoc(struct rt9471_chip *chip, int *microamp)

 static int rt9471_get_status(struct rt9471_chip *chip, int *status)
 {
-    unsigned int chg_ready, chg_done, fault_stat;
+    unsigned int ic_stat;
     int ret;

-    ret = regmap_field_read(chip->rm_fields[F_ST_CHG_RDY], &chg_ready);
-    if (ret)
-        return ret;
-
-    ret = regmap_field_read(chip->rm_fields[F_ST_CHG_DONE], &chg_done);
+    ret = regmap_field_read(chip->rm_fields[F_IC_STAT], &ic_stat);
     if (ret)
         return ret;

-    ret = regmap_read(chip->regmap, RT9471_REG_STAT1, &fault_stat);
-    if (ret)
-        return ret;
-
-    fault_stat &= RT9471_CHGFAULT_MASK;
-
-    if (chg_ready && chg_done)
-        *status = POWER_SUPPLY_STATUS_FULL;
-    else if (chg_ready && fault_stat)
+    switch (ic_stat) {
+    case RT9471_ICSTAT_VBUSRDY:
+    case RT9471_ICSTAT_CHGFAULT:
         *status = POWER_SUPPLY_STATUS_NOT_CHARGING;
-    else if (chg_ready && !fault_stat)
+        break;
+    case RT9471_ICSTAT_TRICKLECHG ... RT9471_ICSTAT_BGCHG:
         *status = POWER_SUPPLY_STATUS_CHARGING;
-    else
+        break;
+    case RT9471_ICSTAT_CHGDONE:
+        *status = POWER_SUPPLY_STATUS_FULL;
+        break;
+    case RT9471_ICSTAT_SLEEP:
+    case RT9471_ICSTAT_OTG:
         *status = POWER_SUPPLY_STATUS_DISCHARGING;
+        break;
+    default:
+        *status = POWER_SUPPLY_STATUS_UNKNOWN;
+        break;
+    }
     return 0;
 }
diff --git a/drivers/pwm/core.c b/drivers/pwm/core.c
index 6e752e148b98..210368099a06 100644
--- a/drivers/pwm/core.c
+++ b/drivers/pwm/core.c
@@ -75,7 +75,7 @@ static void pwm_apply_debug(struct pwm_device *pwm,
         state->duty_cycle < state->period)
         dev_warn(pwmchip_parent(chip), ".apply ignored .polarity\n");

-    if (state->enabled &&
+    if (state->enabled && s2.enabled &&
         last->polarity == state->polarity &&
         last->period > s2.period &&
         last->period <= state->period)
@@ -83,7 +83,11 @@ static void pwm_apply_debug(struct pwm_device *pwm,
              ".apply didn't pick the best available period (requested: %llu, applied: %llu, possible: %llu)\n",
              state->period, s2.period, last->period);

-    if (state->enabled && state->period < s2.period)
+    /*
+     * Rounding period up is fine only if duty_cycle is 0 then, because a
+     * flat line doesn't have a characteristic period.
+     */
+    if (state->enabled && s2.enabled && state->period < s2.period && s2.duty_cycle)
         dev_warn(pwmchip_parent(chip),
              ".apply is supposed to round down period (requested: %llu, applied: %llu)\n",
              state->period, s2.period);
@@ -99,7 +103,7 @@ static void pwm_apply_debug(struct pwm_device *pwm,
              s2.duty_cycle, s2.period,
              last->duty_cycle, last->period);

-    if (state->enabled && state->duty_cycle < s2.duty_cycle)
+    if (state->enabled && s2.enabled && state->duty_cycle < s2.duty_cycle)
         dev_warn(pwmchip_parent(chip),
              ".apply is supposed to round down duty_cycle (requested: %llu/%llu, applied: %llu/%llu)\n",
              state->duty_cycle, state->period,
diff --git a/drivers/pwm/pwm-imx27.c b/drivers/pwm/pwm-imx27.c
index 9e2bbf5b4a8c..037598719431 100644
--- a/drivers/pwm/pwm-imx27.c
+++ b/drivers/pwm/pwm-imx27.c
@@ -26,6 +26,7 @@
 #define MX3_PWMSR    0x04    /* PWM Status Register */
 #define MX3_PWMSAR   0x0C    /* PWM Sample Register */
 #define MX3_PWMPR    0x10    /* PWM Period Register */
+#define MX3_PWMCNR   0x14    /* PWM Counter Register */

 #define MX3_PWMCR_FWM      GENMASK(27, 26)
 #define MX3_PWMCR_STOPEN   BIT(25)
@@ -219,10 +220,12 @@ static void pwm_imx27_wait_fifo_slot(struct pwm_chip *chip,
 static int pwm_imx27_apply(struct pwm_chip *chip, struct pwm_device *pwm,
                const struct pwm_state *state)
 {
-    unsigned long period_cycles, duty_cycles, prescale;
+    unsigned long period_cycles, duty_cycles, prescale, period_us, tmp;
     struct pwm_imx27_chip *imx = to_pwm_imx27_chip(chip);
     unsigned long long c;
     unsigned long long clkrate;
+    unsigned long flags;
+    int val;
     int ret;
     u32 cr;

@@ -263,7 +266,98 @@ static int pwm_imx27_apply(struct pwm_chip *chip, struct pwm_device *pwm,
         pwm_imx27_sw_reset(chip);
     }
- writel(duty_cycles, imx->mmio_base + MX3_PWMSAR); + val = readl(imx->mmio_base + MX3_PWMPR); + val = val >= MX3_PWMPR_MAX ? MX3_PWMPR_MAX : val; + cr = readl(imx->mmio_base + MX3_PWMCR); + tmp = NSEC_PER_SEC * (u64)(val + 2) * MX3_PWMCR_PRESCALER_GET(cr); + tmp = DIV_ROUND_UP_ULL(tmp, clkrate); + period_us = DIV_ROUND_UP_ULL(tmp, 1000); + + /* + * ERR051198: + * PWM: PWM output may not function correctly if the FIFO is empty when + * a new SAR value is programmed + * + * Description: + * When the PWM FIFO is empty, a new value programmed to the PWM Sample + * register (PWM_PWMSAR) will be directly applied even if the current + * timer period has not expired. + * + * If the new SAMPLE value programmed in the PWM_PWMSAR register is + * less than the previous value, and the PWM counter register + * (PWM_PWMCNR) that contains the current COUNT value is greater than + * the new programmed SAMPLE value, the current period will not flip + * the level. This may result in an output pulse with a duty cycle of + * 100%. + * + * Consider a change from + * ________ + * / ______/ + * ^ * ^ + * to + * ____ + * / __________/ + * ^ ^ + * At the time marked by *, the new write value will be directly applied + * to SAR even the current period is not over if FIFO is empty. + * + * ________ ____________________ + * / ______/ __________/ + * ^ ^ * ^ ^ + * |<-- old SAR -->| |<-- new SAR -->| + * + * That is the output is active for a whole period. + * + * Workaround: + * Check new SAR less than old SAR and current counter is in errata + * windows, write extra old SAR into FIFO and new SAR will effect at + * next period. + * + * Sometime period is quite long, such as over 1 second. If add old SAR + * into FIFO unconditional, new SAR have to wait for next period. It + * may be too long. + * + * Turn off the interrupt to ensure that not IRQ and schedule happen + * during above operations. If any irq and schedule happen, counter + * in PWM will be out of data and take wrong action. + * + * Add a safety margin 1.5us because it needs some time to complete + * IO write. + * + * Use writel_relaxed() to minimize the interval between two writes to + * the SAR register to increase the fastest PWM frequency supported. + * + * When the PWM period is longer than 2us(or <500kHz), this workaround + * can solve this problem. No software workaround is available if PWM + * period is shorter than IO write. Just try best to fill old data + * into FIFO. + */ + c = clkrate * 1500; + do_div(c, NSEC_PER_SEC); + + local_irq_save(flags); + val = FIELD_GET(MX3_PWMSR_FIFOAV, readl_relaxed(imx->mmio_base + MX3_PWMSR)); + + if (duty_cycles < imx->duty_cycle && (cr & MX3_PWMCR_EN)) { + if (period_us < 2) { /* 2us = 500 kHz */ + /* Best effort attempt to fix up >500 kHz case */ + udelay(3 * period_us); + writel_relaxed(imx->duty_cycle, imx->mmio_base + MX3_PWMSAR); + writel_relaxed(imx->duty_cycle, imx->mmio_base + MX3_PWMSAR); + } else if (val < MX3_PWMSR_FIFOAV_2WORDS) { + val = readl_relaxed(imx->mmio_base + MX3_PWMCNR); + /* + * If counter is close to period, controller may roll over when + * next IO write. + */ + if ((val + c >= duty_cycles && val < imx->duty_cycle) || + val + c >= period_cycles) + writel_relaxed(imx->duty_cycle, imx->mmio_base + MX3_PWMSAR); + } + } + writel_relaxed(duty_cycles, imx->mmio_base + MX3_PWMSAR); + local_irq_restore(flags); + writel(period_cycles, imx->mmio_base + MX3_PWMPR);
     /*
diff --git a/drivers/regulator/qcom_smd-regulator.c b/drivers/regulator/qcom_smd-regulator.c
index 28e7ce60cb61..25ed9f713974 100644
--- a/drivers/regulator/qcom_smd-regulator.c
+++ b/drivers/regulator/qcom_smd-regulator.c
@@ -11,7 +11,7 @@
 #include <linux/regulator/of_regulator.h>
 #include <linux/soc/qcom/smd-rpm.h>

-struct qcom_smd_rpm *smd_vreg_rpm;
+static struct qcom_smd_rpm *smd_vreg_rpm;

 struct qcom_rpm_reg {
     struct device *dev;
diff --git a/drivers/regulator/rk808-regulator.c b/drivers/regulator/rk808-regulator.c
index 01a8d0487918..37476d2558fd 100644
--- a/drivers/regulator/rk808-regulator.c
+++ b/drivers/regulator/rk808-regulator.c
@@ -1853,7 +1853,7 @@ static int rk808_regulator_dt_parse_pdata(struct device *dev,
         }

         if (!pdata->dvs_gpio[i]) {
-            dev_info(dev, "there is no dvs%d gpio\n", i);
+            dev_dbg(dev, "there is no dvs%d gpio\n", i);
             continue;
         }
@@ -1889,12 +1889,6 @@ static int rk808_regulator_probe(struct platform_device *pdev)
     if (!pdata)
         return -ENOMEM;

-    ret = rk808_regulator_dt_parse_pdata(&pdev->dev, regmap, pdata);
-    if (ret < 0)
-        return ret;
-
-    platform_set_drvdata(pdev, pdata);
-
     switch (rk808->variant) {
     case RK805_ID:
         regulators = rk805_reg;
@@ -1905,6 +1899,11 @@ static int rk808_regulator_probe(struct platform_device *pdev)
         nregulators = ARRAY_SIZE(rk806_reg);
         break;
     case RK808_ID:
+        /* DVS0/1 GPIOs are supported on the RK808 only */
+        ret = rk808_regulator_dt_parse_pdata(&pdev->dev, regmap, pdata);
+        if (ret < 0)
+            return ret;
+
         regulators = rk808_reg;
         nregulators = RK808_NUM_REGULATORS;
         break;
@@ -1930,6 +1929,8 @@ static int rk808_regulator_probe(struct platform_device *pdev)
         return -EINVAL;
     }

+    platform_set_drvdata(pdev, pdata);
+
     config.dev = &pdev->dev;
     config.driver_data = pdata;
     config.regmap = regmap;
diff --git a/drivers/remoteproc/Kconfig b/drivers/remoteproc/Kconfig
index 955e4e38477e..62f8548fb46a 100644
--- a/drivers/remoteproc/Kconfig
+++ b/drivers/remoteproc/Kconfig
@@ -341,9 +341,9 @@ config TI_K3_DSP_REMOTEPROC
 config TI_K3_M4_REMOTEPROC
 	tristate "TI K3 M4 remoteproc support"
-	depends on ARCH_OMAP2PLUS || ARCH_K3
-	select MAILBOX
-	select OMAP2PLUS_MBOX
+	depends on ARCH_K3 || COMPILE_TEST
+	depends on TI_SCI_PROTOCOL || (COMPILE_TEST && TI_SCI_PROTOCOL=n)
+	depends on OMAP2PLUS_MBOX
 	help
 	  Say m here to support TI's M4 remote processor subsystems
 	  on various TI K3 family of SoCs through the remote processor
diff --git a/drivers/remoteproc/qcom_q6v5_adsp.c b/drivers/remoteproc/qcom_q6v5_adsp.c
index 572dcb0f055b..223f6ca0745d 100644
--- a/drivers/remoteproc/qcom_q6v5_adsp.c
+++ b/drivers/remoteproc/qcom_q6v5_adsp.c
@@ -734,15 +734,22 @@ static int adsp_probe(struct platform_device *pdev)
                           desc->ssctl_id);
     if (IS_ERR(adsp->sysmon)) {
         ret = PTR_ERR(adsp->sysmon);
-        goto disable_pm;
+        goto deinit_remove_glink_pdm_ssr;
     }

     ret = rproc_add(rproc);
     if (ret)
-        goto disable_pm;
+        goto remove_sysmon;

     return 0;

+remove_sysmon:
+    qcom_remove_sysmon_subdev(adsp->sysmon);
+deinit_remove_glink_pdm_ssr:
+    qcom_q6v5_deinit(&adsp->q6v5);
+    qcom_remove_glink_subdev(rproc, &adsp->glink_subdev);
+    qcom_remove_pdm_subdev(rproc, &adsp->pdm_subdev);
+    qcom_remove_ssr_subdev(rproc, &adsp->ssr_subdev);
 disable_pm:
     qcom_rproc_pds_detach(adsp);
diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c
index 2a42215ce8e0..32c3531b20c7 100644
--- a/drivers/remoteproc/qcom_q6v5_mss.c
+++ b/drivers/remoteproc/qcom_q6v5_mss.c
@@ -1162,6 +1162,9 @@ static int q6v5_mba_load(struct q6v5 *qproc)
         goto disable_active_clks;
     }

+    if (qproc->has_mba_logs)
+        qcom_pil_info_store("mba", qproc->mba_phys, MBA_LOG_SIZE);
+
     writel(qproc->mba_phys, qproc->rmb_base + RMB_MBA_IMAGE_REG);
     if (qproc->dp_size) {
         writel(qproc->mba_phys + SZ_1M, qproc->rmb_base + RMB_PMI_CODE_START_REG);
@@ -1172,9 +1175,6 @@ static int q6v5_mba_load(struct q6v5 *qproc)
     if (ret)
         goto reclaim_mba;

-    if (qproc->has_mba_logs)
-        qcom_pil_info_store("mba", qproc->mba_phys, MBA_LOG_SIZE);
-
     ret = q6v5_rmb_mba_wait(qproc, 0, 5000);
     if (ret == -ETIMEDOUT) {
         dev_err(qproc->dev, "MBA boot timed out\n");
diff --git a/drivers/remoteproc/qcom_q6v5_pas.c b/drivers/remoteproc/qcom_q6v5_pas.c
index ef82835e98a4..f4f4b3df3884 100644
--- a/drivers/remoteproc/qcom_q6v5_pas.c
+++ b/drivers/remoteproc/qcom_q6v5_pas.c
@@ -759,16 +759,16 @@ static int adsp_probe(struct platform_device *pdev)
ret = adsp_init_clock(adsp); if (ret) - goto free_rproc; + goto unassign_mem;
ret = adsp_init_regulator(adsp); if (ret) - goto free_rproc; + goto unassign_mem;
ret = adsp_pds_attach(&pdev->dev, adsp->proxy_pds, desc->proxy_pd_names); if (ret < 0) - goto free_rproc; + goto unassign_mem; adsp->proxy_pd_count = ret;
ret = qcom_q6v5_init(&adsp->q6v5, pdev, rproc, desc->crash_reason_smem, desc->load_state, @@ -784,18 +784,28 @@ static int adsp_probe(struct platform_device *pdev) desc->ssctl_id); if (IS_ERR(adsp->sysmon)) { ret = PTR_ERR(adsp->sysmon); - goto detach_proxy_pds; + goto deinit_remove_pdm_smd_glink; }
qcom_add_ssr_subdev(rproc, &adsp->ssr_subdev, desc->ssr_name); ret = rproc_add(rproc); if (ret) - goto detach_proxy_pds; + goto remove_ssr_sysmon;
return 0;
+remove_ssr_sysmon: + qcom_remove_ssr_subdev(rproc, &adsp->ssr_subdev); + qcom_remove_sysmon_subdev(adsp->sysmon); +deinit_remove_pdm_smd_glink: + qcom_remove_pdm_subdev(rproc, &adsp->pdm_subdev); + qcom_remove_smd_subdev(rproc, &adsp->smd_subdev); + qcom_remove_glink_subdev(rproc, &adsp->glink_subdev); + qcom_q6v5_deinit(&adsp->q6v5); detach_proxy_pds: adsp_pds_detach(adsp, adsp->proxy_pds, adsp->proxy_pd_count); +unassign_mem: + adsp_unassign_memory_region(adsp); free_rproc: device_init_wakeup(adsp->dev, false);
@@ -907,6 +917,7 @@ static const struct adsp_data sm8250_adsp_resource = { .crash_reason_smem = 423, .firmware_name = "adsp.mdt", .pas_id = 1, + .minidump_id = 5, .auto_boot = true, .proxy_pd_names = (char*[]){ "lcx", @@ -1124,6 +1135,7 @@ static const struct adsp_data sm8350_cdsp_resource = { .crash_reason_smem = 601, .firmware_name = "cdsp.mdt", .pas_id = 18, + .minidump_id = 7, .auto_boot = true, .proxy_pd_names = (char*[]){ "cx", diff --git a/drivers/rpmsg/qcom_glink_native.c b/drivers/rpmsg/qcom_glink_native.c index d3af1dfa3c7d..a2f9d85c7156 100644 --- a/drivers/rpmsg/qcom_glink_native.c +++ b/drivers/rpmsg/qcom_glink_native.c @@ -1204,7 +1204,8 @@ void qcom_glink_native_rx(struct qcom_glink *glink) ret = qcom_glink_rx_open_ack(glink, param1); break; case GLINK_CMD_OPEN: - ret = qcom_glink_rx_defer(glink, param2); + /* upper 16 bits of param2 are the "prio" field */ + ret = qcom_glink_rx_defer(glink, param2 & 0xffff); break; case GLINK_CMD_TX_DATA: case GLINK_CMD_TX_DATA_CONT: diff --git a/drivers/rtc/interface.c b/drivers/rtc/interface.c index cca650b2e0b9..aaf76406cd7d 100644 --- a/drivers/rtc/interface.c +++ b/drivers/rtc/interface.c @@ -904,13 +904,18 @@ void rtc_timer_do_work(struct work_struct *work) struct timerqueue_node *next; ktime_t now; struct rtc_time tm; + int err;
struct rtc_device *rtc = container_of(work, struct rtc_device, irqwork);
mutex_lock(&rtc->ops_lock); again: - __rtc_read_time(rtc, &tm); + err = __rtc_read_time(rtc, &tm); + if (err) { + mutex_unlock(&rtc->ops_lock); + return; + } now = rtc_tm_to_ktime(tm); while ((next = timerqueue_getnext(&rtc->timerqueue))) { if (next->expires > now) diff --git a/drivers/rtc/rtc-ab-eoz9.c b/drivers/rtc/rtc-ab-eoz9.c index 02f7d0711287..e17bce9a2746 100644 --- a/drivers/rtc/rtc-ab-eoz9.c +++ b/drivers/rtc/rtc-ab-eoz9.c @@ -396,13 +396,6 @@ static int abeoz9z3_temp_read(struct device *dev, if (ret < 0) return ret;
- if ((val & ABEOZ9_REG_CTRL_STATUS_V1F) || - (val & ABEOZ9_REG_CTRL_STATUS_V2F)) { - dev_err(dev, - "thermometer might be disabled due to low voltage\n"); - return -EINVAL; - } - switch (attr) { case hwmon_temp_input: ret = regmap_read(regmap, ABEOZ9_REG_REG_TEMP, &val); diff --git a/drivers/rtc/rtc-abx80x.c b/drivers/rtc/rtc-abx80x.c index 1298962402ff..3fee27914ba8 100644 --- a/drivers/rtc/rtc-abx80x.c +++ b/drivers/rtc/rtc-abx80x.c @@ -39,7 +39,7 @@ #define ABX8XX_REG_STATUS 0x0f #define ABX8XX_STATUS_AF BIT(2) #define ABX8XX_STATUS_BLF BIT(4) -#define ABX8XX_STATUS_WDT BIT(6) +#define ABX8XX_STATUS_WDT BIT(5)
#define ABX8XX_REG_CTRL1 0x10 #define ABX8XX_CTRL_WRITE BIT(0) diff --git a/drivers/rtc/rtc-rzn1.c b/drivers/rtc/rtc-rzn1.c index 56ebbd4d0481..8570c8e63d70 100644 --- a/drivers/rtc/rtc-rzn1.c +++ b/drivers/rtc/rtc-rzn1.c @@ -111,8 +111,8 @@ static int rzn1_rtc_read_time(struct device *dev, struct rtc_time *tm) tm->tm_hour = bcd2bin(tm->tm_hour); tm->tm_wday = bcd2bin(tm->tm_wday); tm->tm_mday = bcd2bin(tm->tm_mday); - tm->tm_mon = bcd2bin(tm->tm_mon); - tm->tm_year = bcd2bin(tm->tm_year); + tm->tm_mon = bcd2bin(tm->tm_mon) - 1; + tm->tm_year = bcd2bin(tm->tm_year) + 100;
return 0; } @@ -128,8 +128,8 @@ static int rzn1_rtc_set_time(struct device *dev, struct rtc_time *tm) tm->tm_hour = bin2bcd(tm->tm_hour); tm->tm_wday = bin2bcd(rzn1_rtc_tm_to_wday(tm)); tm->tm_mday = bin2bcd(tm->tm_mday); - tm->tm_mon = bin2bcd(tm->tm_mon); - tm->tm_year = bin2bcd(tm->tm_year); + tm->tm_mon = bin2bcd(tm->tm_mon + 1); + tm->tm_year = bin2bcd(tm->tm_year - 100);
val = readl(rtc->base + RZN1_RTC_CTL2); if (!(val & RZN1_RTC_CTL2_STOPPED)) { diff --git a/drivers/rtc/rtc-st-lpc.c b/drivers/rtc/rtc-st-lpc.c index d492a2d26600..c6d4522411b3 100644 --- a/drivers/rtc/rtc-st-lpc.c +++ b/drivers/rtc/rtc-st-lpc.c @@ -218,15 +218,14 @@ static int st_rtc_probe(struct platform_device *pdev) return -EINVAL; }
-    ret = devm_request_irq(&pdev->dev, rtc->irq, st_rtc_handler, 0,
-                   pdev->name, rtc);
+    ret = devm_request_irq(&pdev->dev, rtc->irq, st_rtc_handler,
+                   IRQF_NO_AUTOEN, pdev->name, rtc);
     if (ret) {
         dev_err(&pdev->dev, "Failed to request irq %i\n", rtc->irq);
         return ret;
     }

     enable_irq_wake(rtc->irq);
-    disable_irq(rtc->irq);
rtc->clk = devm_clk_get_enabled(&pdev->dev, NULL); if (IS_ERR(rtc->clk)) diff --git a/drivers/s390/cio/cio.c b/drivers/s390/cio/cio.c index c32e818f06db..ad17ab0a9314 100644 --- a/drivers/s390/cio/cio.c +++ b/drivers/s390/cio/cio.c @@ -459,10 +459,14 @@ int cio_update_schib(struct subchannel *sch) { struct schib schib;
- if (stsch(sch->schid, &schib) || !css_sch_is_valid(&schib)) + if (stsch(sch->schid, &schib)) return -ENODEV;
memcpy(&sch->schib, &schib, sizeof(schib)); + + if (!css_sch_is_valid(&schib)) + return -EACCES; + return 0; } EXPORT_SYMBOL_GPL(cio_update_schib); diff --git a/drivers/s390/cio/device.c b/drivers/s390/cio/device.c index b0f23242e171..9498825d9c7a 100644 --- a/drivers/s390/cio/device.c +++ b/drivers/s390/cio/device.c @@ -1387,14 +1387,18 @@ enum io_sch_action { IO_SCH_VERIFY, IO_SCH_DISC, IO_SCH_NOP, + IO_SCH_ORPH_CDEV, };
static enum io_sch_action sch_get_action(struct subchannel *sch) { struct ccw_device *cdev; + int rc;
cdev = sch_get_cdev(sch); - if (cio_update_schib(sch)) { + rc = cio_update_schib(sch); + + if (rc == -ENODEV) { /* Not operational. */ if (!cdev) return IO_SCH_UNREG; @@ -1402,6 +1406,16 @@ static enum io_sch_action sch_get_action(struct subchannel *sch) return IO_SCH_UNREG; return IO_SCH_ORPH_UNREG; } + + /* Avoid unregistering subchannels without working device. */ + if (rc == -EACCES) { + if (!cdev) + return IO_SCH_NOP; + if (ccw_device_notify(cdev, CIO_GONE) != NOTIFY_OK) + return IO_SCH_UNREG_CDEV; + return IO_SCH_ORPH_CDEV; + } + /* Operational. */ if (!cdev) return IO_SCH_ATTACH; @@ -1471,6 +1485,7 @@ static int io_subchannel_sch_event(struct subchannel *sch, int process) rc = 0; goto out_unlock; case IO_SCH_ORPH_UNREG: + case IO_SCH_ORPH_CDEV: case IO_SCH_ORPH_ATTACH: ccw_device_set_disconnected(cdev); break; @@ -1502,6 +1517,7 @@ static int io_subchannel_sch_event(struct subchannel *sch, int process) /* Handle attached ccw device. */ switch (action) { case IO_SCH_ORPH_UNREG: + case IO_SCH_ORPH_CDEV: case IO_SCH_ORPH_ATTACH: /* Move ccw device to orphanage. */ rc = ccw_device_move_to_orph(cdev); diff --git a/drivers/s390/virtio/virtio_ccw.c b/drivers/s390/virtio/virtio_ccw.c index 62eca9419ad7..21fa7ac849e5 100644 --- a/drivers/s390/virtio/virtio_ccw.c +++ b/drivers/s390/virtio/virtio_ccw.c @@ -58,6 +58,8 @@ struct virtio_ccw_device { struct virtio_device vdev; __u8 config[VIRTIO_CCW_CONFIG_SIZE]; struct ccw_device *cdev; + /* we make cdev->dev.dma_parms point to this */ + struct device_dma_parameters dma_parms; __u32 curr_io; int err; unsigned int revision; /* Transport revision */ @@ -1303,6 +1305,7 @@ static int virtio_ccw_offline(struct ccw_device *cdev) unregister_virtio_device(&vcdev->vdev); spin_lock_irqsave(get_ccwdev_lock(cdev), flags); dev_set_drvdata(&cdev->dev, NULL); + cdev->dev.dma_parms = NULL; spin_unlock_irqrestore(get_ccwdev_lock(cdev), flags); return 0; } @@ -1366,6 +1369,7 @@ static int virtio_ccw_online(struct ccw_device *cdev) } vcdev->vdev.dev.parent = &cdev->dev; vcdev->cdev = cdev; + cdev->dev.dma_parms = &vcdev->dma_parms; vcdev->dma_area = ccw_device_dma_zalloc(vcdev->cdev, sizeof(*vcdev->dma_area), &vcdev->dma_area_addr); diff --git a/drivers/scsi/bfa/bfad.c b/drivers/scsi/bfa/bfad.c index 62cb7a864fd5..70c7515a822f 100644 --- a/drivers/scsi/bfa/bfad.c +++ b/drivers/scsi/bfa/bfad.c @@ -1693,9 +1693,8 @@ bfad_init(void)
error = bfad_im_module_init(); if (error) { - error = -ENOMEM; printk(KERN_WARNING "bfad_im_module_init failure\n"); - goto ext; + return -ENOMEM; }
if (strcmp(FCPI_NAME, " fcpim") == 0) diff --git a/drivers/scsi/hisi_sas/hisi_sas_main.c b/drivers/scsi/hisi_sas/hisi_sas_main.c index 6219807ce3b9..ffd15fa4f9e5 100644 --- a/drivers/scsi/hisi_sas/hisi_sas_main.c +++ b/drivers/scsi/hisi_sas/hisi_sas_main.c @@ -1545,10 +1545,16 @@ void hisi_sas_controller_reset_done(struct hisi_hba *hisi_hba) /* Init and wait for PHYs to come up and all libsas event finished. */ for (phy_no = 0; phy_no < hisi_hba->n_phy; phy_no++) { struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no]; + struct asd_sas_phy *sas_phy = &phy->sas_phy;
- if (!(hisi_hba->phy_state & BIT(phy_no))) + if (!sas_phy->phy->enabled) continue;
+ if (!(hisi_hba->phy_state & BIT(phy_no))) { + hisi_sas_phy_enable(hisi_hba, phy_no, 1); + continue; + } + async_schedule_domain(hisi_sas_async_init_wait_phyup, phy, &async); } diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c index cf13148ba281..e979ec1478c1 100644 --- a/drivers/scsi/qedf/qedf_main.c +++ b/drivers/scsi/qedf/qedf_main.c @@ -2738,6 +2738,7 @@ static int qedf_alloc_and_init_sb(struct qedf_ctx *qedf, sb_id, QED_SB_TYPE_STORAGE);
if (ret) { + dma_free_coherent(&qedf->pdev->dev, sizeof(*sb_virt), sb_virt, sb_phys); QEDF_ERR(&qedf->dbg_ctx, "Status block initialization failed (0x%x) for id = %d.\n", ret, sb_id); diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c index c5aec26019d6..628d59dda20c 100644 --- a/drivers/scsi/qedi/qedi_main.c +++ b/drivers/scsi/qedi/qedi_main.c @@ -369,6 +369,7 @@ static int qedi_alloc_and_init_sb(struct qedi_ctx *qedi, ret = qedi_ops->common->sb_init(qedi->cdev, sb_info, sb_virt, sb_phys, sb_id, QED_SB_TYPE_STORAGE); if (ret) { + dma_free_coherent(&qedi->pdev->dev, sizeof(*sb_virt), sb_virt, sb_phys); QEDI_ERR(&qedi->dbg_ctx, "Status block initialization failed for id = %d.\n", sb_id); diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c index f86be197fedd..84334ab39c81 100644 --- a/drivers/scsi/sg.c +++ b/drivers/scsi/sg.c @@ -307,10 +307,6 @@ sg_open(struct inode *inode, struct file *filp) if (retval) goto sg_put;
- retval = scsi_autopm_get_device(device); - if (retval) - goto sdp_put; - /* scsi_block_when_processing_errors() may block so bypass * check if O_NONBLOCK. Permits SCSI commands to be issued * during error recovery. Tread carefully. */ @@ -318,7 +314,7 @@ sg_open(struct inode *inode, struct file *filp) scsi_block_when_processing_errors(device))) { retval = -ENXIO; /* we are in error recovery for this device */ - goto error_out; + goto sdp_put; }
mutex_lock(&sdp->open_rel_lock); @@ -371,8 +367,6 @@ sg_open(struct inode *inode, struct file *filp) } error_mutex_locked: mutex_unlock(&sdp->open_rel_lock); -error_out: - scsi_autopm_put_device(device); sdp_put: kref_put(&sdp->d_ref, sg_device_destroy); scsi_device_put(device); @@ -392,7 +386,6 @@ sg_release(struct inode *inode, struct file *filp) SCSI_LOG_TIMEOUT(3, sg_printk(KERN_INFO, sdp, "sg_release\n"));
mutex_lock(&sdp->open_rel_lock); - scsi_autopm_put_device(sdp->device); kref_put(&sfp->f_ref, sg_remove_sfp); sdp->open_cnt--;
diff --git a/drivers/sh/intc/core.c b/drivers/sh/intc/core.c
index 74350b5871dc..ea571eeb3078 100644
--- a/drivers/sh/intc/core.c
+++ b/drivers/sh/intc/core.c
@@ -209,7 +209,6 @@ int __init register_intc_controller(struct intc_desc *desc)
         goto err0;

     INIT_LIST_HEAD(&d->list);
-    list_add_tail(&d->list, &intc_list);

     raw_spin_lock_init(&d->lock);
     INIT_RADIX_TREE(&d->tree, GFP_ATOMIC);
@@ -369,6 +368,7 @@ int __init register_intc_controller(struct intc_desc *desc)

     d->skip_suspend = desc->skip_syscore_suspend;

+    list_add_tail(&d->list, &intc_list);
     nr_intc_controllers++;

     return 0;
diff --git a/drivers/soc/fsl/qe/qmc.c b/drivers/soc/fsl/qe/qmc.c
index 19cc581b06d0..b3f773e135fd 100644
--- a/drivers/soc/fsl/qe/qmc.c
+++ b/drivers/soc/fsl/qe/qmc.c
@@ -2004,8 +2004,10 @@ static int qmc_probe(struct platform_device *pdev)

     /* Set the irq handler */
     irq = platform_get_irq(pdev, 0);
-    if (irq < 0)
+    if (irq < 0) {
+        ret = irq;
         goto err_exit_xcc;
+    }
     ret = devm_request_irq(qmc->dev, irq, qmc_irq_handler, 0, "qmc", qmc);
     if (ret < 0)
         goto err_exit_xcc;
diff --git a/drivers/soc/fsl/rcpm.c b/drivers/soc/fsl/rcpm.c
index 3d0cae30c769..06bd94b29fb3 100644
--- a/drivers/soc/fsl/rcpm.c
+++ b/drivers/soc/fsl/rcpm.c
@@ -36,6 +36,7 @@ static void copy_ippdexpcr1_setting(u32 val)
         return;

     regs = of_iomap(np, 0);
+    of_node_put(np);
     if (!regs)
         return;
diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c index 2e8f24d5da80..4cb959106efa 100644 --- a/drivers/soc/qcom/qcom-geni-se.c +++ b/drivers/soc/qcom/qcom-geni-se.c @@ -585,7 +585,8 @@ int geni_se_clk_tbl_get(struct geni_se *se, unsigned long **tbl)
for (i = 0; i < MAX_CLK_PERF_LEVEL; i++) { freq = clk_round_rate(se->clk, freq + 1); - if (freq <= 0 || freq == se->clk_perf_tbl[i - 1]) + if (freq <= 0 || + (i > 0 && freq == se->clk_perf_tbl[i - 1])) break; se->clk_perf_tbl[i] = freq; } diff --git a/drivers/soc/ti/smartreflex.c b/drivers/soc/ti/smartreflex.c index d6219060b616..38add2ab5613 100644 --- a/drivers/soc/ti/smartreflex.c +++ b/drivers/soc/ti/smartreflex.c @@ -202,10 +202,10 @@ static int sr_late_init(struct omap_sr *sr_info)
if (sr_class->notify && sr_class->notify_flags && sr_info->irq) { ret = devm_request_irq(&sr_info->pdev->dev, sr_info->irq, - sr_interrupt, 0, sr_info->name, sr_info); + sr_interrupt, IRQF_NO_AUTOEN, + sr_info->name, sr_info); if (ret) goto error; - disable_irq(sr_info->irq); }
return ret; diff --git a/drivers/soc/xilinx/xlnx_event_manager.c b/drivers/soc/xilinx/xlnx_event_manager.c index f529e1346247..85df6b9c04ee 100644 --- a/drivers/soc/xilinx/xlnx_event_manager.c +++ b/drivers/soc/xilinx/xlnx_event_manager.c @@ -188,8 +188,10 @@ static int xlnx_add_cb_for_suspend(event_cb_func_t cb_fun, void *data) INIT_LIST_HEAD(&eve_data->cb_list_head);
cb_data = kmalloc(sizeof(*cb_data), GFP_KERNEL); - if (!cb_data) + if (!cb_data) { + kfree(eve_data); return -ENOMEM; + } cb_data->eve_cb = cb_fun; cb_data->agent_data = data;
diff --git a/drivers/spi/atmel-quadspi.c b/drivers/spi/atmel-quadspi.c index 95cdfc28361e..caecb2ad2a15 100644 --- a/drivers/spi/atmel-quadspi.c +++ b/drivers/spi/atmel-quadspi.c @@ -183,7 +183,7 @@ static const char *atmel_qspi_reg_name(u32 offset, char *tmp, size_t sz) case QSPI_MR: return "MR"; case QSPI_RD: - return "MR"; + return "RD"; case QSPI_TD: return "TD"; case QSPI_SR: diff --git a/drivers/spi/spi-fsl-lpspi.c b/drivers/spi/spi-fsl-lpspi.c index 977e8b55c82b..9573b8fa4fbf 100644 --- a/drivers/spi/spi-fsl-lpspi.c +++ b/drivers/spi/spi-fsl-lpspi.c @@ -891,7 +891,7 @@ static int fsl_lpspi_probe(struct platform_device *pdev) return ret; }
- ret = devm_request_irq(&pdev->dev, irq, fsl_lpspi_isr, 0, + ret = devm_request_irq(&pdev->dev, irq, fsl_lpspi_isr, IRQF_NO_AUTOEN, dev_name(&pdev->dev), fsl_lpspi); if (ret) { dev_err(&pdev->dev, "can't get irq%d: %d\n", irq, ret); @@ -948,14 +948,10 @@ static int fsl_lpspi_probe(struct platform_device *pdev) ret = fsl_lpspi_dma_init(&pdev->dev, fsl_lpspi, controller); if (ret == -EPROBE_DEFER) goto out_pm_get; - if (ret < 0) + if (ret < 0) { dev_warn(&pdev->dev, "dma setup error %d, use pio\n", ret); - else - /* - * disable LPSPI module IRQ when enable DMA mode successfully, - * to prevent the unexpected LPSPI module IRQ events. - */ - disable_irq(irq); + enable_irq(irq); + }
ret = devm_spi_register_controller(&pdev->dev, controller); if (ret < 0) { diff --git a/drivers/spi/spi-tegra210-quad.c b/drivers/spi/spi-tegra210-quad.c index afbd64a217eb..43f11b0e9e76 100644 --- a/drivers/spi/spi-tegra210-quad.c +++ b/drivers/spi/spi-tegra210-quad.c @@ -341,7 +341,7 @@ tegra_qspi_fill_tx_fifo_from_client_txbuf(struct tegra_qspi *tqspi, struct spi_t for (count = 0; count < max_n_32bit; count++) { u32 x = 0;
-        for (i = 0; len && (i < bytes_per_word); i++, len--)
+        for (i = 0; len && (i < min(4, bytes_per_word)); i++, len--)
             x |= (u32)(*tx_buf++) << (i * 8);
         tegra_qspi_writel(tqspi, x, QSPI_TX_FIFO);
     }
diff --git a/drivers/spi/spi-zynqmp-gqspi.c b/drivers/spi/spi-zynqmp-gqspi.c
index fcd0ca996684..b9df39e06e7c 100644
--- a/drivers/spi/spi-zynqmp-gqspi.c
+++ b/drivers/spi/spi-zynqmp-gqspi.c
@@ -1351,6 +1351,7 @@ static int zynqmp_qspi_probe(struct platform_device *pdev)

 clk_dis_all:
     pm_runtime_disable(&pdev->dev);
+    pm_runtime_dont_use_autosuspend(&pdev->dev);
     pm_runtime_put_noidle(&pdev->dev);
     pm_runtime_set_suspended(&pdev->dev);
     clk_disable_unprepare(xqspi->refclk);
@@ -1379,6 +1380,7 @@ static void zynqmp_qspi_remove(struct platform_device *pdev)
     zynqmp_gqspi_write(xqspi, GQSPI_EN_OFST, 0x0);

     pm_runtime_disable(&pdev->dev);
+    pm_runtime_dont_use_autosuspend(&pdev->dev);
     pm_runtime_put_noidle(&pdev->dev);
     pm_runtime_set_suspended(&pdev->dev);
     clk_disable_unprepare(xqspi->refclk);
diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
index c1dad30a4528..0f3e6e2c2474 100644
--- a/drivers/spi/spi.c
+++ b/drivers/spi/spi.c
@@ -424,6 +424,16 @@ static int spi_probe(struct device *dev)
             spi->irq = 0;
     }
+ if (has_acpi_companion(dev) && spi->irq < 0) { + struct acpi_device *adev = to_acpi_device_node(dev->fwnode); + + spi->irq = acpi_dev_gpio_irq_get(adev, 0); + if (spi->irq == -EPROBE_DEFER) + return -EPROBE_DEFER; + if (spi->irq < 0) + spi->irq = 0; + } + ret = dev_pm_domain_attach(dev, true); if (ret) return ret; @@ -2869,9 +2879,6 @@ static acpi_status acpi_register_spi_device(struct spi_controller *ctlr, acpi_set_modalias(adev, acpi_device_hid(adev), spi->modalias, sizeof(spi->modalias));
- if (spi->irq < 0) - spi->irq = acpi_dev_gpio_irq_get(adev, 0); - acpi_device_set_enumerated(adev);
adev->power.flags.ignore_parent = true; diff --git a/drivers/staging/media/atomisp/pci/sh_css_params.c b/drivers/staging/media/atomisp/pci/sh_css_params.c index 232744973ab8..b1feb6f6ebe8 100644 --- a/drivers/staging/media/atomisp/pci/sh_css_params.c +++ b/drivers/staging/media/atomisp/pci/sh_css_params.c @@ -4181,6 +4181,8 @@ ia_css_3a_statistics_allocate(const struct ia_css_3a_grid_info *grid) goto err; /* No weighted histogram, no structure, treat the histogram data as a byte dump in a byte array */ me->rgby_data = kvmalloc(sizeof_hmem(HMEM0_ID), GFP_KERNEL); + if (!me->rgby_data) + goto err;
IA_CSS_LEAVE("return=%p", me); return me; diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c index 6c488b1e2624..5fab33adf58e 100644 --- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c +++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c @@ -1715,7 +1715,6 @@ MODULE_DEVICE_TABLE(of, vchiq_of_match);
static int vchiq_probe(struct platform_device *pdev) { - struct device_node *fw_node; const struct vchiq_platform_info *info; struct vchiq_drv_mgmt *mgmt; int ret; @@ -1724,8 +1723,8 @@ static int vchiq_probe(struct platform_device *pdev) if (!info) return -EINVAL;
- fw_node = of_find_compatible_node(NULL, NULL, - "raspberrypi,bcm2835-firmware"); + struct device_node *fw_node __free(device_node) = + of_find_compatible_node(NULL, NULL, "raspberrypi,bcm2835-firmware"); if (!fw_node) { dev_err(&pdev->dev, "Missing firmware node\n"); return -ENOENT; @@ -1736,7 +1735,6 @@ static int vchiq_probe(struct platform_device *pdev) return -ENOMEM;
mgmt->fw = devm_rpi_firmware_get(&pdev->dev, fw_node); - of_node_put(fw_node); if (!mgmt->fw) return -EPROBE_DEFER;
diff --git a/drivers/target/target_core_pscsi.c b/drivers/target/target_core_pscsi.c index 440e07b1d5cd..287ac5b0495f 100644 --- a/drivers/target/target_core_pscsi.c +++ b/drivers/target/target_core_pscsi.c @@ -369,7 +369,7 @@ static int pscsi_create_type_disk(struct se_device *dev, struct scsi_device *sd) bdev_file = bdev_file_open_by_path(dev->udev_path, BLK_OPEN_WRITE | BLK_OPEN_READ, pdv, NULL); if (IS_ERR(bdev_file)) { - pr_err("pSCSI: bdev_open_by_path() failed\n"); + pr_err("pSCSI: bdev_file_open_by_path() failed\n"); scsi_device_put(sd); return PTR_ERR(bdev_file); } diff --git a/drivers/thermal/testing/zone.c b/drivers/thermal/testing/zone.c index c6d8c66f40f9..1f01f4952703 100644 --- a/drivers/thermal/testing/zone.c +++ b/drivers/thermal/testing/zone.c @@ -185,7 +185,7 @@ static void tt_add_tz_work_fn(struct work_struct *work) int tt_add_tz(void) { struct tt_thermal_zone *tt_zone __free(kfree); - struct tt_work *tt_work __free(kfree); + struct tt_work *tt_work __free(kfree) = NULL; int ret;
tt_zone = kzalloc(sizeof(*tt_zone), GFP_KERNEL); @@ -237,7 +237,7 @@ static void tt_zone_unregister_tz(struct tt_thermal_zone *tt_zone)
int tt_del_tz(const char *arg) { - struct tt_work *tt_work __free(kfree); + struct tt_work *tt_work __free(kfree) = NULL; struct tt_thermal_zone *tt_zone, *aux; int ret; int id; @@ -310,6 +310,9 @@ static void tt_put_tt_zone(struct tt_thermal_zone *tt_zone) tt_zone->refcount--; }
+DEFINE_FREE(put_tt_zone, struct tt_thermal_zone *, + if (!IS_ERR_OR_NULL(_T)) tt_put_tt_zone(_T)) + static void tt_zone_add_trip_work_fn(struct work_struct *work) { struct tt_work *tt_work = tt_work_of_work(work); @@ -332,9 +335,9 @@ static void tt_zone_add_trip_work_fn(struct work_struct *work)
int tt_zone_add_trip(const char *arg) { + struct tt_thermal_zone *tt_zone __free(put_tt_zone) = NULL; + struct tt_trip *tt_trip __free(kfree) = NULL; struct tt_work *tt_work __free(kfree); - struct tt_trip *tt_trip __free(kfree); - struct tt_thermal_zone *tt_zone; int id;
tt_work = kzalloc(sizeof(*tt_work), GFP_KERNEL); @@ -350,10 +353,8 @@ int tt_zone_add_trip(const char *arg) return PTR_ERR(tt_zone);
id = ida_alloc(&tt_zone->ida, GFP_KERNEL); - if (id < 0) { - tt_put_tt_zone(tt_zone); + if (id < 0) return id; - }
tt_trip->trip.type = THERMAL_TRIP_ACTIVE; tt_trip->trip.temperature = THERMAL_TEMP_INVALID; @@ -366,7 +367,7 @@ int tt_zone_add_trip(const char *arg) tt_zone->num_trips++;
INIT_WORK(&tt_work->work, tt_zone_add_trip_work_fn); - tt_work->tt_zone = tt_zone; + tt_work->tt_zone = no_free_ptr(tt_zone); tt_work->tt_trip = no_free_ptr(tt_trip); schedule_work(&(no_free_ptr(tt_work)->work));
@@ -391,7 +392,7 @@ static struct thermal_zone_device_ops tt_zone_ops = {
static int tt_zone_register_tz(struct tt_thermal_zone *tt_zone) { - struct thermal_trip *trips __free(kfree); + struct thermal_trip *trips __free(kfree) = NULL; struct thermal_zone_device *tz; struct tt_trip *tt_trip; int i; @@ -425,23 +426,18 @@ static int tt_zone_register_tz(struct tt_thermal_zone *tt_zone)
int tt_zone_reg(const char *arg) { - struct tt_thermal_zone *tt_zone; - int ret; + struct tt_thermal_zone *tt_zone __free(put_tt_zone);
tt_zone = tt_get_tt_zone(arg); if (IS_ERR(tt_zone)) return PTR_ERR(tt_zone);
- ret = tt_zone_register_tz(tt_zone); - - tt_put_tt_zone(tt_zone); - - return ret; + return tt_zone_register_tz(tt_zone); }
int tt_zone_unreg(const char *arg) { - struct tt_thermal_zone *tt_zone; + struct tt_thermal_zone *tt_zone __free(put_tt_zone);
tt_zone = tt_get_tt_zone(arg); if (IS_ERR(tt_zone)) @@ -449,8 +445,6 @@ int tt_zone_unreg(const char *arg)
tt_zone_unregister_tz(tt_zone);
- tt_put_tt_zone(tt_zone); - return 0; }
diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c index 8f03985f971c..1d2f2b307bac 100644 --- a/drivers/thermal/thermal_core.c +++ b/drivers/thermal/thermal_core.c @@ -40,6 +40,8 @@ static DEFINE_MUTEX(thermal_governor_lock);
static struct thermal_governor *def_governor;
+static bool thermal_pm_suspended; + /* * Governor section: set of functions to handle thermal governors * @@ -547,7 +549,7 @@ void __thermal_zone_device_update(struct thermal_zone_device *tz, int low = -INT_MAX, high = INT_MAX; int temp, ret;
- if (tz->suspended || tz->mode != THERMAL_DEVICE_ENABLED) + if (tz->state != TZ_STATE_READY || tz->mode != THERMAL_DEVICE_ENABLED) return;
ret = __thermal_zone_get_temp(tz, &temp); @@ -1332,6 +1334,24 @@ int thermal_zone_get_crit_temp(struct thermal_zone_device *tz, int *temp) } EXPORT_SYMBOL_GPL(thermal_zone_get_crit_temp);
+static void thermal_zone_init_complete(struct thermal_zone_device *tz) +{ + mutex_lock(&tz->lock); + + tz->state &= ~TZ_STATE_FLAG_INIT; + /* + * If system suspend or resume is in progress at this point, the + * new thermal zone needs to be marked as suspended because + * thermal_pm_notify() has run already. + */ + if (thermal_pm_suspended) + tz->state |= TZ_STATE_FLAG_SUSPENDED; + + __thermal_zone_device_update(tz, THERMAL_EVENT_UNSPECIFIED); + + mutex_unlock(&tz->lock); +} + /** * thermal_zone_device_register_with_trips() - register a new thermal zone device * @type: the thermal zone device type @@ -1451,6 +1471,8 @@ thermal_zone_device_register_with_trips(const char *type, tz->passive_delay_jiffies = msecs_to_jiffies(passive_delay); tz->recheck_delay_jiffies = THERMAL_RECHECK_DELAY;
+ tz->state = TZ_STATE_FLAG_INIT; + /* sys I/F */ /* Add nodes that are always present via .groups */ result = thermal_zone_create_device_groups(tz); @@ -1465,6 +1487,7 @@ thermal_zone_device_register_with_trips(const char *type, thermal_zone_destroy_device_groups(tz); goto remove_id; } + thermal_zone_device_init(tz); result = device_register(&tz->device); if (result) goto release_device; @@ -1501,12 +1524,9 @@ thermal_zone_device_register_with_trips(const char *type, list_for_each_entry(cdev, &thermal_cdev_list, node) thermal_zone_cdev_bind(tz, cdev);
- mutex_unlock(&thermal_list_lock); + thermal_zone_init_complete(tz);
- thermal_zone_device_init(tz); - /* Update the new thermal zone and mark it as already updated. */ - if (atomic_cmpxchg(&tz->need_update, 1, 0)) - thermal_zone_device_update(tz, THERMAL_EVENT_UNSPECIFIED); + mutex_unlock(&thermal_list_lock);
thermal_notify_tz_create(tz);
@@ -1662,7 +1682,7 @@ static void thermal_zone_device_resume(struct work_struct *work)
mutex_lock(&tz->lock);
- tz->suspended = false; + tz->state &= ~(TZ_STATE_FLAG_SUSPENDED | TZ_STATE_FLAG_RESUMING);
thermal_debug_tz_resume(tz); thermal_zone_device_init(tz); @@ -1670,7 +1690,48 @@ static void thermal_zone_device_resume(struct work_struct *work) __thermal_zone_device_update(tz, THERMAL_TZ_RESUME);
complete(&tz->resume); - tz->resuming = false; + + mutex_unlock(&tz->lock); +} + +static void thermal_zone_pm_prepare(struct thermal_zone_device *tz) +{ + mutex_lock(&tz->lock); + + if (tz->state & TZ_STATE_FLAG_RESUMING) { + /* + * thermal_zone_device_resume() queued up for this zone has not + * acquired the lock yet, so release it to let the function run + * and wait util it has done the work. + */ + mutex_unlock(&tz->lock); + + wait_for_completion(&tz->resume); + + mutex_lock(&tz->lock); + } + + tz->state |= TZ_STATE_FLAG_SUSPENDED; + + mutex_unlock(&tz->lock); +} + +static void thermal_zone_pm_complete(struct thermal_zone_device *tz) +{ + mutex_lock(&tz->lock); + + cancel_delayed_work(&tz->poll_queue); + + reinit_completion(&tz->resume); + tz->state |= TZ_STATE_FLAG_RESUMING; + + /* + * Replace the work function with the resume one, which will restore the + * original work function and schedule the polling work if needed. + */ + INIT_DELAYED_WORK(&tz->poll_queue, thermal_zone_device_resume); + /* Queue up the work without a delay. */ + mod_delayed_work(system_freezable_power_efficient_wq, &tz->poll_queue, 0);
mutex_unlock(&tz->lock); } @@ -1686,27 +1747,10 @@ static int thermal_pm_notify(struct notifier_block *nb, case PM_SUSPEND_PREPARE: mutex_lock(&thermal_list_lock);
- list_for_each_entry(tz, &thermal_tz_list, node) { - mutex_lock(&tz->lock); - - if (tz->resuming) { - /* - * thermal_zone_device_resume() queued up for - * this zone has not acquired the lock yet, so - * release it to let the function run and wait - * util it has done the work. - */ - mutex_unlock(&tz->lock); - - wait_for_completion(&tz->resume); - - mutex_lock(&tz->lock); - } + thermal_pm_suspended = true;
- tz->suspended = true; - - mutex_unlock(&tz->lock); - } + list_for_each_entry(tz, &thermal_tz_list, node) + thermal_zone_pm_prepare(tz);
mutex_unlock(&thermal_list_lock); break; @@ -1715,27 +1759,10 @@ static int thermal_pm_notify(struct notifier_block *nb, case PM_POST_SUSPEND: mutex_lock(&thermal_list_lock);
- list_for_each_entry(tz, &thermal_tz_list, node) { - mutex_lock(&tz->lock); - - cancel_delayed_work(&tz->poll_queue); + thermal_pm_suspended = false;
- reinit_completion(&tz->resume); - tz->resuming = true; - - /* - * Replace the work function with the resume one, which - * will restore the original work function and schedule - * the polling work if needed. - */ - INIT_DELAYED_WORK(&tz->poll_queue, - thermal_zone_device_resume); - /* Queue up the work without a delay. */ - mod_delayed_work(system_freezable_power_efficient_wq, - &tz->poll_queue, 0); - - mutex_unlock(&tz->lock); - } + list_for_each_entry(tz, &thermal_tz_list, node) + thermal_zone_pm_complete(tz);
mutex_unlock(&thermal_list_lock); break; diff --git a/drivers/thermal/thermal_core.h b/drivers/thermal/thermal_core.h index a64d39b1c86b..421522a2bb9d 100644 --- a/drivers/thermal/thermal_core.h +++ b/drivers/thermal/thermal_core.h @@ -61,6 +61,12 @@ struct thermal_governor { struct list_head governor_list; };
+#define TZ_STATE_FLAG_SUSPENDED BIT(0) +#define TZ_STATE_FLAG_RESUMING BIT(1) +#define TZ_STATE_FLAG_INIT BIT(2) + +#define TZ_STATE_READY 0 + /** * struct thermal_zone_device - structure for a thermal zone * @id: unique id number for each thermal zone @@ -100,8 +106,7 @@ struct thermal_governor { * @node: node in thermal_tz_list (in thermal_core.c) * @poll_queue: delayed work for polling * @notify_event: Last notification event - * @suspended: thermal zone suspend indicator - * @resuming: indicates whether or not thermal zone resume is in progress + * @state: current state of the thermal zone * @trips: array of struct thermal_trip objects */ struct thermal_zone_device { @@ -134,8 +139,7 @@ struct thermal_zone_device { struct list_head node; struct delayed_work poll_queue; enum thermal_notify_event notify_event; - bool suspended; - bool resuming; + u8 state; #ifdef CONFIG_THERMAL_DEBUGFS struct thermal_debugfs *debugfs; #endif diff --git a/drivers/tty/serial/8250/8250_fintek.c b/drivers/tty/serial/8250/8250_fintek.c index e2aa2a1a02dd..ecbce226b874 100644 --- a/drivers/tty/serial/8250/8250_fintek.c +++ b/drivers/tty/serial/8250/8250_fintek.c @@ -21,6 +21,7 @@ #define CHIP_ID_F81866 0x1010 #define CHIP_ID_F81966 0x0215 #define CHIP_ID_F81216AD 0x1602 +#define CHIP_ID_F81216E 0x1617 #define CHIP_ID_F81216H 0x0501 #define CHIP_ID_F81216 0x0802 #define VENDOR_ID1 0x23 @@ -158,6 +159,7 @@ static int fintek_8250_check_id(struct fintek_8250 *pdata) case CHIP_ID_F81866: case CHIP_ID_F81966: case CHIP_ID_F81216AD: + case CHIP_ID_F81216E: case CHIP_ID_F81216H: case CHIP_ID_F81216: break; @@ -181,6 +183,7 @@ static int fintek_8250_get_ldn_range(struct fintek_8250 *pdata, int *min, return 0;
case CHIP_ID_F81216AD: + case CHIP_ID_F81216E: case CHIP_ID_F81216H: case CHIP_ID_F81216: *min = F81216_LDN_LOW; @@ -250,6 +253,7 @@ static void fintek_8250_set_irq_mode(struct fintek_8250 *pdata, bool is_level) break;
case CHIP_ID_F81216AD: + case CHIP_ID_F81216E: case CHIP_ID_F81216H: case CHIP_ID_F81216: sio_write_mask_reg(pdata, FINTEK_IRQ_MODE, IRQ_SHARE, @@ -263,7 +267,8 @@ static void fintek_8250_set_irq_mode(struct fintek_8250 *pdata, bool is_level) static void fintek_8250_set_max_fifo(struct fintek_8250 *pdata) { switch (pdata->pid) { - case CHIP_ID_F81216H: /* 128Bytes FIFO */ + case CHIP_ID_F81216E: /* 128Bytes FIFO */ + case CHIP_ID_F81216H: case CHIP_ID_F81966: case CHIP_ID_F81866: sio_write_mask_reg(pdata, FIFO_CTRL, @@ -297,6 +302,7 @@ static void fintek_8250_set_termios(struct uart_port *port, goto exit;
switch (pdata->pid) { + case CHIP_ID_F81216E: case CHIP_ID_F81216H: reg = RS485; break; @@ -346,6 +352,7 @@ static void fintek_8250_set_termios_handler(struct uart_8250_port *uart) struct fintek_8250 *pdata = uart->port.private_data;
switch (pdata->pid) { + case CHIP_ID_F81216E: case CHIP_ID_F81216H: case CHIP_ID_F81966: case CHIP_ID_F81866: @@ -438,6 +445,11 @@ static void fintek_8250_set_rs485_handler(struct uart_8250_port *uart) uart->port.rs485_supported = fintek_8250_rs485_supported; break;
+ case CHIP_ID_F81216E: /* F81216E does not support RS485 delays */ + uart->port.rs485_config = fintek_8250_rs485_config; + uart->port.rs485_supported = fintek_8250_rs485_supported; + break; + default: /* No RS485 Auto direction functional */ break; } diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c index 88b58f44e4e9..0dd68bdbfbcf 100644 --- a/drivers/tty/serial/8250/8250_omap.c +++ b/drivers/tty/serial/8250/8250_omap.c @@ -776,12 +776,12 @@ static void omap_8250_shutdown(struct uart_port *port) struct uart_8250_port *up = up_to_u8250p(port); struct omap8250_priv *priv = port->private_data;
+ pm_runtime_get_sync(port->dev); + flush_work(&priv->qos_work); if (up->dma) omap_8250_rx_dma_flush(up);
- pm_runtime_get_sync(port->dev); - serial_out(up, UART_OMAP_WER, 0); if (priv->habit & UART_HAS_EFR2) serial_out(up, UART_OMAP_EFR2, 0x0); diff --git a/drivers/tty/serial/amba-pl011.c b/drivers/tty/serial/amba-pl011.c index 7d0134ecd82f..9529a512cbd4 100644 --- a/drivers/tty/serial/amba-pl011.c +++ b/drivers/tty/serial/amba-pl011.c @@ -1819,6 +1819,13 @@ static void pl011_unthrottle_rx(struct uart_port *port)
pl011_write(uap->im, uap, REG_IMSC);
+#ifdef CONFIG_DMA_ENGINE + if (uap->using_rx_dma) { + uap->dmacr |= UART011_RXDMAE; + pl011_write(uap->dmacr, uap, REG_DMACR); + } +#endif + uart_port_unlock_irqrestore(&uap->port, flags); }
diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c index 9771072da177..dcb1769c3625 100644 --- a/drivers/tty/tty_io.c +++ b/drivers/tty/tty_io.c @@ -3631,7 +3631,7 @@ static struct ctl_table tty_table[] = { .data = &tty_ldisc_autoload, .maxlen = sizeof(tty_ldisc_autoload), .mode = 0644, - .proc_handler = proc_dointvec, + .proc_handler = proc_dointvec_minmax, .extra1 = SYSCTL_ZERO, .extra2 = SYSCTL_ONE, }, diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h index eab81dfdcc35..0b9ba338b265 100644 --- a/drivers/usb/dwc3/core.h +++ b/drivers/usb/dwc3/core.h @@ -915,6 +915,7 @@ struct dwc3_hwparams { #define DWC3_MODE(n) ((n) & 0x7)
/* HWPARAMS1 */ +#define DWC3_SPRAM_TYPE(n) (((n) >> 23) & 1) #define DWC3_NUM_INT(n) (((n) & (0x3f << 15)) >> 15)
/* HWPARAMS3 */ @@ -925,6 +926,9 @@ struct dwc3_hwparams { #define DWC3_NUM_IN_EPS(p) (((p)->hwparams3 & \ (DWC3_NUM_IN_EPS_MASK)) >> 18)
+/* HWPARAMS6 */ +#define DWC3_RAM0_DEPTH(n) (((n) & (0xffff0000)) >> 16) + /* HWPARAMS7 */ #define DWC3_RAM1_DEPTH(n) ((n) & 0xffff)
diff --git a/drivers/usb/dwc3/ep0.c b/drivers/usb/dwc3/ep0.c index c9533a99e47c..874497f86499 100644 --- a/drivers/usb/dwc3/ep0.c +++ b/drivers/usb/dwc3/ep0.c @@ -232,7 +232,7 @@ void dwc3_ep0_stall_and_restart(struct dwc3 *dwc) /* stall is always issued on EP0 */ dep = dwc->eps[0]; __dwc3_gadget_ep_set_halt(dep, 1, false); - dep->flags &= DWC3_EP_RESOURCE_ALLOCATED; + dep->flags &= DWC3_EP_RESOURCE_ALLOCATED | DWC3_EP_TRANSFER_STARTED; dep->flags |= DWC3_EP_ENABLED; dwc->delayed_status = false;
diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c index 4959c26d3b71..56744b11e67c 100644 --- a/drivers/usb/dwc3/gadget.c +++ b/drivers/usb/dwc3/gadget.c @@ -687,6 +687,44 @@ static int dwc3_gadget_calc_tx_fifo_size(struct dwc3 *dwc, int mult) return fifo_size; }
+/** + * dwc3_gadget_calc_ram_depth - calculates the ram depth for txfifo + * @dwc: pointer to the DWC3 context + */ +static int dwc3_gadget_calc_ram_depth(struct dwc3 *dwc) +{ + int ram_depth; + int fifo_0_start; + bool is_single_port_ram; + + /* Check supporting RAM type by HW */ + is_single_port_ram = DWC3_SPRAM_TYPE(dwc->hwparams.hwparams1); + + /* + * If a single port RAM is utilized, then allocate TxFIFOs from + * RAM0. otherwise, allocate them from RAM1. + */ + ram_depth = is_single_port_ram ? DWC3_RAM0_DEPTH(dwc->hwparams.hwparams6) : + DWC3_RAM1_DEPTH(dwc->hwparams.hwparams7); + + /* + * In a single port RAM configuration, the available RAM is shared + * between the RX and TX FIFOs. This means that the txfifo can begin + * at a non-zero address. + */ + if (is_single_port_ram) { + u32 reg; + + /* Check if TXFIFOs start at non-zero addr */ + reg = dwc3_readl(dwc->regs, DWC3_GTXFIFOSIZ(0)); + fifo_0_start = DWC3_GTXFIFOSIZ_TXFSTADDR(reg); + + ram_depth -= (fifo_0_start >> 16); + } + + return ram_depth; +} + /** * dwc3_gadget_clear_tx_fifos - Clears txfifo allocation * @dwc: pointer to the DWC3 context @@ -753,7 +791,7 @@ static int dwc3_gadget_resize_tx_fifos(struct dwc3_ep *dep) { struct dwc3 *dwc = dep->dwc; int fifo_0_start; - int ram1_depth; + int ram_depth; int fifo_size; int min_depth; int num_in_ep; @@ -773,7 +811,7 @@ static int dwc3_gadget_resize_tx_fifos(struct dwc3_ep *dep) if (dep->flags & DWC3_EP_TXFIFO_RESIZED) return 0;
- ram1_depth = DWC3_RAM1_DEPTH(dwc->hwparams.hwparams7); + ram_depth = dwc3_gadget_calc_ram_depth(dwc);
if ((dep->endpoint.maxburst > 1 && usb_endpoint_xfer_bulk(dep->endpoint.desc)) || @@ -794,7 +832,7 @@ static int dwc3_gadget_resize_tx_fifos(struct dwc3_ep *dep)
/* Reserve at least one FIFO for the number of IN EPs */ min_depth = num_in_ep * (fifo + 1); - remaining = ram1_depth - min_depth - dwc->last_fifo_depth; + remaining = ram_depth - min_depth - dwc->last_fifo_depth; remaining = max_t(int, 0, remaining); /* * We've already reserved 1 FIFO per EP, so check what we can fit in @@ -820,9 +858,9 @@ static int dwc3_gadget_resize_tx_fifos(struct dwc3_ep *dep) dwc->last_fifo_depth += DWC31_GTXFIFOSIZ_TXFDEP(fifo_size);
/* Check fifo size allocation doesn't exceed available RAM size. */ - if (dwc->last_fifo_depth >= ram1_depth) { + if (dwc->last_fifo_depth >= ram_depth) { dev_err(dwc->dev, "Fifosize(%d) > RAM size(%d) %s depth:%d\n", - dwc->last_fifo_depth, ram1_depth, + dwc->last_fifo_depth, ram_depth, dep->endpoint.name, fifo_size); if (DWC3_IP_IS(DWC3)) fifo_size = DWC3_GTXFIFOSIZ_TXFDEP(fifo_size); @@ -1177,11 +1215,14 @@ static u32 dwc3_calc_trbs_left(struct dwc3_ep *dep) * pending to be processed by the driver. */ if (dep->trb_enqueue == dep->trb_dequeue) { + struct dwc3_request *req; + /* - * If there is any request remained in the started_list at - * this point, that means there is no TRB available. + * If there is any request remained in the started_list with + * active TRBs at this point, then there is no TRB available. */ - if (!list_empty(&dep->started_list)) + req = next_request(&dep->started_list); + if (req && req->num_trbs) return 0;
return DWC3_TRB_NUM - 1; @@ -1414,8 +1455,8 @@ static int dwc3_prepare_trbs_sg(struct dwc3_ep *dep, struct scatterlist *s; int i; unsigned int length = req->request.length; - unsigned int remaining = req->request.num_mapped_sgs - - req->num_queued_sgs; + unsigned int remaining = req->num_pending_sgs; + unsigned int num_queued_sgs = req->request.num_mapped_sgs - remaining; unsigned int num_trbs = req->num_trbs; bool needs_extra_trb = dwc3_needs_extra_trb(dep, req);
@@ -1423,7 +1464,7 @@ static int dwc3_prepare_trbs_sg(struct dwc3_ep *dep, * If we resume preparing the request, then get the remaining length of * the request and resume where we left off. */ - for_each_sg(req->request.sg, s, req->num_queued_sgs, i) + for_each_sg(req->request.sg, s, num_queued_sgs, i) length -= sg_dma_len(s);
for_each_sg(sg, s, remaining, i) { @@ -3075,7 +3116,7 @@ static int dwc3_gadget_check_config(struct usb_gadget *g) struct dwc3 *dwc = gadget_to_dwc(g); struct usb_ep *ep; int fifo_size = 0; - int ram1_depth; + int ram_depth; int ep_num = 0;
if (!dwc->do_fifo_resize) @@ -3098,8 +3139,8 @@ static int dwc3_gadget_check_config(struct usb_gadget *g) fifo_size += dwc->max_cfg_eps;
/* Check if we can fit a single fifo per endpoint */ - ram1_depth = DWC3_RAM1_DEPTH(dwc->hwparams.hwparams7); - if (fifo_size > ram1_depth) + ram_depth = dwc3_gadget_calc_ram_depth(dwc); + if (fifo_size > ram_depth) return -ENOMEM;
return 0; diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c index f25dd2cb5d03..cec86c0c6369 100644 --- a/drivers/usb/gadget/composite.c +++ b/drivers/usb/gadget/composite.c @@ -2111,8 +2111,20 @@ composite_setup(struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl) memset(buf, 0, w_length); buf[5] = 0x01; switch (ctrl->bRequestType & USB_RECIP_MASK) { + /* + * The Microsoft CompatID OS Descriptor Spec(w_index = 0x4) and + * Extended Prop OS Desc Spec(w_index = 0x5) state that the + * HighByte of wValue is the InterfaceNumber and the LowByte is + * the PageNumber. This high/low byte ordering is incorrectly + * documented in the Spec. USB analyzer output on the below + * request packets show the high/low byte inverted i.e LowByte + * is the InterfaceNumber and the HighByte is the PageNumber. + * Since we dont support >64KB CompatID/ExtendedProp descriptors, + * PageNumber is set to 0. Hence verify that the HighByte is 0 + * for below two cases. + */ case USB_RECIP_DEVICE: - if (w_index != 0x4 || (w_value & 0xff)) + if (w_index != 0x4 || (w_value >> 8)) break; buf[6] = w_index; /* Number of ext compat interfaces */ @@ -2128,9 +2140,9 @@ composite_setup(struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl) } break; case USB_RECIP_INTERFACE: - if (w_index != 0x5 || (w_value & 0xff)) + if (w_index != 0x5 || (w_value >> 8)) break; - interface = w_value >> 8; + interface = w_value & 0xFF; if (interface >= MAX_CONFIG_INTERFACES || !os_desc_cfg->interface[interface]) break; diff --git a/drivers/usb/gadget/function/uvc_video.c b/drivers/usb/gadget/function/uvc_video.c index 57a851151225..002bf724d802 100644 --- a/drivers/usb/gadget/function/uvc_video.c +++ b/drivers/usb/gadget/function/uvc_video.c @@ -480,6 +480,10 @@ uvc_video_complete(struct usb_ep *ep, struct usb_request *req) * up later. */ list_add_tail(&to_queue->list, &video->req_free); + /* + * There is a new free request - wake up the pump. + */ + queue_work(video->async_wq, &video->pump); }
spin_unlock_irqrestore(&video->req_lock, flags); diff --git a/drivers/usb/host/ehci-spear.c b/drivers/usb/host/ehci-spear.c index d0e94e4c9fe2..11294f196ee3 100644 --- a/drivers/usb/host/ehci-spear.c +++ b/drivers/usb/host/ehci-spear.c @@ -105,7 +105,9 @@ static int spear_ehci_hcd_drv_probe(struct platform_device *pdev) /* registers start at offset 0x0 */ hcd_to_ehci(hcd)->caps = hcd->regs;
- clk_prepare_enable(sehci->clk); + retval = clk_prepare_enable(sehci->clk); + if (retval) + goto err_put_hcd; retval = usb_add_hcd(hcd, irq, IRQF_SHARED); if (retval) goto err_stop_ehci; @@ -130,8 +132,7 @@ static void spear_ehci_hcd_drv_remove(struct platform_device *pdev)
usb_remove_hcd(hcd);
- if (sehci->clk) - clk_disable_unprepare(sehci->clk); + clk_disable_unprepare(sehci->clk); usb_put_hcd(hcd); }
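The ehci-spear fix above is the usual "check what you enable, undo it on a later failure" probe pattern. A stand-alone sketch of that goto-unwind style, with invented helper names rather than the driver's real API:

#include <stdio.h>

/* Toy stand-ins for clk_prepare_enable()/usb_add_hcd() style calls. */
static int enable_clock(void)   { return 0; }        /* 0 on success */
static void disable_clock(void) { puts("clock off"); }
static int add_controller(void) { return -1; }       /* pretend this fails */

static int probe(void)
{
	int ret;

	ret = enable_clock();
	if (ret)
		return ret;             /* nothing to undo yet */

	ret = add_controller();
	if (ret)
		goto err_disable_clock; /* undo only what already succeeded */

	return 0;

err_disable_clock:
	disable_clock();
	return ret;
}

int main(void)
{
	printf("probe() = %d\n", probe());
	return 0;
}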
diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c index cb07cee9ed0c..3ba9902dd209 100644 --- a/drivers/usb/host/xhci-pci.c +++ b/drivers/usb/host/xhci-pci.c @@ -395,14 +395,12 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci) xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
 	if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
-	    pdev->device == PCI_DEVICE_ID_EJ168) {
-		xhci->quirks |= XHCI_RESET_ON_RESUME;
-		xhci->quirks |= XHCI_BROKEN_STREAMS;
-	}
-	if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
-	    pdev->device == PCI_DEVICE_ID_EJ188) {
+	    (pdev->device == PCI_DEVICE_ID_EJ168 ||
+	     pdev->device == PCI_DEVICE_ID_EJ188)) {
+		xhci->quirks |= XHCI_ETRON_HOST;
 		xhci->quirks |= XHCI_RESET_ON_RESUME;
 		xhci->quirks |= XHCI_BROKEN_STREAMS;
+		xhci->quirks |= XHCI_NO_SOFT_RETRY;
 	}
if (pdev->vendor == PCI_VENDOR_ID_RENESAS && diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c index 928b93ad1ee8..f318864732f2 100644 --- a/drivers/usb/host/xhci-ring.c +++ b/drivers/usb/host/xhci-ring.c @@ -52,6 +52,7 @@ * endpoint rings; it generates events on the event ring for these. */
+#include <linux/jiffies.h> #include <linux/scatterlist.h> #include <linux/slab.h> #include <linux/dma-mapping.h> @@ -972,6 +973,13 @@ static int xhci_invalidate_cancelled_tds(struct xhci_virt_ep *ep) unsigned int slot_id = ep->vdev->slot_id; int err;
+ /* + * This is not going to work if the hardware is changing its dequeue + * pointers as we look at them. Completion handler will call us later. + */ + if (ep->ep_state & SET_DEQ_PENDING) + return 0; + xhci = ep->xhci;
list_for_each_entry_safe(td, tmp_td, &ep->cancelled_td_list, cancelled_td_list) { @@ -1061,6 +1069,19 @@ static int xhci_invalidate_cancelled_tds(struct xhci_virt_ep *ep) return 0; }
+/* + * Erase queued TDs from transfer ring(s) and give back those the xHC didn't + * stop on. If necessary, queue commands to move the xHC off cancelled TDs it + * stopped on. Those will be given back later when the commands complete. + * + * Call under xhci->lock on a stopped endpoint. + */ +void xhci_process_cancelled_tds(struct xhci_virt_ep *ep) +{ + xhci_invalidate_cancelled_tds(ep); + xhci_giveback_invalidated_tds(ep); +} + /* * Returns the TD the endpoint ring halted on. * Only call for non-running rings without streams. @@ -1151,16 +1172,35 @@ static void xhci_handle_cmd_stop_ep(struct xhci_hcd *xhci, int slot_id, return; case EP_STATE_STOPPED: /* - * NEC uPD720200 sometimes sets this state and fails with - * Context Error while continuing to process TRBs. - * Be conservative and trust EP_CTX_STATE on other chips. + * Per xHCI 4.6.9, Stop Endpoint command on a Stopped + * EP is a Context State Error, and EP stays Stopped. + * + * But maybe it failed on Halted, and somebody ran Reset + * Endpoint later. EP state is now Stopped and EP_HALTED + * still set because Reset EP handler will run after us. + */ + if (ep->ep_state & EP_HALTED) + break; + /* + * On some HCs EP state remains Stopped for some tens of + * us to a few ms or more after a doorbell ring, and any + * new Stop Endpoint fails without aborting the restart. + * This handler may run quickly enough to still see this + * Stopped state, but it will soon change to Running. + * + * Assume this bug on unexpected Stop Endpoint failures. + * Keep retrying until the EP starts and stops again, on + * chips where this is known to help. Wait for 100ms. */ if (!(xhci->quirks & XHCI_NEC_HOST)) break; + if (time_is_before_jiffies(ep->stop_time + msecs_to_jiffies(100))) + break; fallthrough; case EP_STATE_RUNNING: /* Race, HW handled stop ep cmd before ep was running */ - xhci_dbg(xhci, "Stop ep completion ctx error, ep is running\n"); + xhci_dbg(xhci, "Stop ep completion ctx error, ctx_state %d\n", + GET_EP_CTX_STATE(ep_ctx));
command = xhci_alloc_command(xhci, false, GFP_ATOMIC); if (!command) { @@ -1339,7 +1379,6 @@ static void xhci_handle_cmd_set_deq(struct xhci_hcd *xhci, int slot_id, struct xhci_ep_ctx *ep_ctx; struct xhci_slot_ctx *slot_ctx; struct xhci_td *td, *tmp_td; - bool deferred = false;
ep_index = TRB_TO_EP_INDEX(le32_to_cpu(trb->generic.field[3])); stream_id = TRB_TO_STREAM_ID(le32_to_cpu(trb->generic.field[2])); @@ -1440,8 +1479,6 @@ static void xhci_handle_cmd_set_deq(struct xhci_hcd *xhci, int slot_id, xhci_dbg(ep->xhci, "%s: Giveback cancelled URB %p TD\n", __func__, td->urb); xhci_td_cleanup(ep->xhci, td, ep_ring, td->status); - } else if (td->cancel_status == TD_CLEARING_CACHE_DEFERRED) { - deferred = true; } else { xhci_dbg(ep->xhci, "%s: Keep cancelled URB %p TD as cancel_status is %d\n", __func__, td->urb, td->cancel_status); @@ -1452,11 +1489,15 @@ static void xhci_handle_cmd_set_deq(struct xhci_hcd *xhci, int slot_id, ep->queued_deq_seg = NULL; ep->queued_deq_ptr = NULL;
- if (deferred) { - /* We have more streams to clear */ + /* Check for deferred or newly cancelled TDs */ + if (!list_empty(&ep->cancelled_td_list)) { xhci_dbg(ep->xhci, "%s: Pending TDs to clear, continuing with invalidation\n", __func__); xhci_invalidate_cancelled_tds(ep); + /* Try to restart the endpoint if all is done */ + ring_doorbell_for_active_rings(xhci, slot_id, ep_index); + /* Start giving back any TDs invalidated above */ + xhci_giveback_invalidated_tds(ep); } else { /* Restart any rings with pending URBs */ xhci_dbg(ep->xhci, "%s: All TDs cleared, ring doorbell\n", __func__); @@ -3727,6 +3768,20 @@ int xhci_queue_ctrl_tx(struct xhci_hcd *xhci, gfp_t mem_flags, if (!urb->setup_packet) return -EINVAL;
+ if ((xhci->quirks & XHCI_ETRON_HOST) && + urb->dev->speed >= USB_SPEED_SUPER) { + /* + * If next available TRB is the Link TRB in the ring segment then + * enqueue a No Op TRB, this can prevent the Setup and Data Stage + * TRB to be breaked by the Link TRB. + */ + if (trb_is_link(ep_ring->enqueue + 1)) { + field = TRB_TYPE(TRB_TR_NOOP) | ep_ring->cycle_state; + queue_trb(xhci, ep_ring, false, 0, 0, + TRB_INTR_TARGET(0), field); + } + } + /* 1 TRB for setup, 1 for status */ num_trbs = 2; /* diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c index 899c0effb5d3..358ed674f782 100644 --- a/drivers/usb/host/xhci.c +++ b/drivers/usb/host/xhci.c @@ -8,6 +8,7 @@ * Some code borrowed from the Linux EHCI driver. */
+#include <linux/jiffies.h> #include <linux/pci.h> #include <linux/iommu.h> #include <linux/iopoll.h> @@ -1768,15 +1769,27 @@ static int xhci_urb_dequeue(struct usb_hcd *hcd, struct urb *urb, int status) } }
- /* Queue a stop endpoint command, but only if this is - * the first cancellation to be handled. - */ - if (!(ep->ep_state & EP_STOP_CMD_PENDING)) { + /* These completion handlers will sort out cancelled TDs for us */ + if (ep->ep_state & (EP_STOP_CMD_PENDING | EP_HALTED | SET_DEQ_PENDING)) { + xhci_dbg(xhci, "Not queuing Stop Endpoint on slot %d ep %d in state 0x%x\n", + urb->dev->slot_id, ep_index, ep->ep_state); + goto done; + } + + /* In this case no commands are pending but the endpoint is stopped */ + if (ep->ep_state & EP_CLEARING_TT) { + /* and cancelled TDs can be given back right away */ + xhci_dbg(xhci, "Invalidating TDs instantly on slot %d ep %d in state 0x%x\n", + urb->dev->slot_id, ep_index, ep->ep_state); + xhci_process_cancelled_tds(ep); + } else { + /* Otherwise, queue a new Stop Endpoint command */ command = xhci_alloc_command(xhci, false, GFP_ATOMIC); if (!command) { ret = -ENOMEM; goto done; } + ep->stop_time = jiffies; ep->ep_state |= EP_STOP_CMD_PENDING; xhci_queue_stop_endpoint(xhci, command, urb->dev->slot_id, ep_index, 0); @@ -3692,6 +3705,8 @@ void xhci_free_device_endpoint_resources(struct xhci_hcd *xhci, xhci->num_active_eps); }
+static void xhci_free_dev(struct usb_hcd *hcd, struct usb_device *udev); + /* * This submits a Reset Device Command, which will set the device state to 0, * set the device address to 0, and disable all the endpoints except the default @@ -3762,6 +3777,23 @@ static int xhci_discover_or_reset_device(struct usb_hcd *hcd, SLOT_STATE_DISABLED) return 0;
+ if (xhci->quirks & XHCI_ETRON_HOST) { + /* + * Obtaining a new device slot to inform the xHCI host that + * the USB device has been reset. + */ + ret = xhci_disable_slot(xhci, udev->slot_id); + xhci_free_virt_device(xhci, udev->slot_id); + if (!ret) { + ret = xhci_alloc_dev(hcd, udev); + if (ret == 1) + ret = 0; + else + ret = -EINVAL; + } + return ret; + } + trace_xhci_discover_or_reset_device(slot_ctx);
xhci_dbg(xhci, "Resetting device with slot ID %u\n", slot_id); diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h index f0fb696d5619..673179047eb8 100644 --- a/drivers/usb/host/xhci.h +++ b/drivers/usb/host/xhci.h @@ -690,6 +690,7 @@ struct xhci_virt_ep { /* Bandwidth checking storage */ struct xhci_bw_info bw_info; struct list_head bw_endpoint_list; + unsigned long stop_time; /* Isoch Frame ID checking storage */ int next_frame_id; /* Use new Isoch TRB layout needed for extended TBC support */ @@ -1624,6 +1625,7 @@ struct xhci_hcd { #define XHCI_ZHAOXIN_HOST BIT_ULL(46) #define XHCI_WRITE_64_HI_LO BIT_ULL(47) #define XHCI_CDNS_SCTX_QUIRK BIT_ULL(48) +#define XHCI_ETRON_HOST BIT_ULL(49)
unsigned int num_active_eps; unsigned int limit_active_eps; @@ -1913,6 +1915,7 @@ void xhci_ring_doorbell_for_active_rings(struct xhci_hcd *xhci, void xhci_cleanup_command_queue(struct xhci_hcd *xhci); void inc_deq(struct xhci_hcd *xhci, struct xhci_ring *ring); unsigned int count_trbs(u64 addr, u64 len); +void xhci_process_cancelled_tds(struct xhci_virt_ep *ep);
/* xHCI roothub code */ void xhci_set_link_state(struct xhci_hcd *xhci, struct xhci_port *port, diff --git a/drivers/usb/misc/chaoskey.c b/drivers/usb/misc/chaoskey.c index 6fb5140e29b9..225863321dc4 100644 --- a/drivers/usb/misc/chaoskey.c +++ b/drivers/usb/misc/chaoskey.c @@ -27,6 +27,8 @@ static struct usb_class_driver chaoskey_class; static int chaoskey_rng_read(struct hwrng *rng, void *data, size_t max, bool wait);
+static DEFINE_MUTEX(chaoskey_list_lock); + #define usb_dbg(usb_if, format, arg...) \ dev_dbg(&(usb_if)->dev, format, ## arg)
@@ -233,6 +235,7 @@ static void chaoskey_disconnect(struct usb_interface *interface) usb_deregister_dev(interface, &chaoskey_class);
usb_set_intfdata(interface, NULL); + mutex_lock(&chaoskey_list_lock); mutex_lock(&dev->lock);
dev->present = false; @@ -244,6 +247,7 @@ static void chaoskey_disconnect(struct usb_interface *interface) } else mutex_unlock(&dev->lock);
+ mutex_unlock(&chaoskey_list_lock); usb_dbg(interface, "disconnect done"); }
@@ -251,6 +255,7 @@ static int chaoskey_open(struct inode *inode, struct file *file) { struct chaoskey *dev; struct usb_interface *interface; + int rv = 0;
/* get the interface from minor number and driver information */ interface = usb_find_interface(&chaoskey_driver, iminor(inode)); @@ -266,18 +271,23 @@ static int chaoskey_open(struct inode *inode, struct file *file) }
file->private_data = dev; + mutex_lock(&chaoskey_list_lock); mutex_lock(&dev->lock); - ++dev->open; + if (dev->present) + ++dev->open; + else + rv = -ENODEV; mutex_unlock(&dev->lock); + mutex_unlock(&chaoskey_list_lock);
- usb_dbg(interface, "open success"); - return 0; + return rv; }
static int chaoskey_release(struct inode *inode, struct file *file) { struct chaoskey *dev = file->private_data; struct usb_interface *interface; + int rv = 0;
if (dev == NULL) return -ENODEV; @@ -286,14 +296,15 @@ static int chaoskey_release(struct inode *inode, struct file *file)
usb_dbg(interface, "release");
+ mutex_lock(&chaoskey_list_lock); mutex_lock(&dev->lock);
usb_dbg(interface, "open count at release is %d", dev->open);
if (dev->open <= 0) { usb_dbg(interface, "invalid open count (%d)", dev->open); - mutex_unlock(&dev->lock); - return -ENODEV; + rv = -ENODEV; + goto bail; }
--dev->open; @@ -302,13 +313,15 @@ static int chaoskey_release(struct inode *inode, struct file *file) if (dev->open == 0) { mutex_unlock(&dev->lock); chaoskey_free(dev); - } else - mutex_unlock(&dev->lock); - } else - mutex_unlock(&dev->lock); - + goto destruction; + } + } +bail: + mutex_unlock(&dev->lock); +destruction: + mutex_unlock(&chaoskey_list_lock); usb_dbg(interface, "release success"); - return 0; + return rv; }
static void chaos_read_callback(struct urb *urb) diff --git a/drivers/usb/misc/iowarrior.c b/drivers/usb/misc/iowarrior.c index 6d28467ce352..365c10069345 100644 --- a/drivers/usb/misc/iowarrior.c +++ b/drivers/usb/misc/iowarrior.c @@ -277,28 +277,45 @@ static ssize_t iowarrior_read(struct file *file, char __user *buffer, struct iowarrior *dev; int read_idx; int offset; + int retval;
dev = file->private_data;
+ if (file->f_flags & O_NONBLOCK) { + retval = mutex_trylock(&dev->mutex); + if (!retval) + return -EAGAIN; + } else { + retval = mutex_lock_interruptible(&dev->mutex); + if (retval) + return -ERESTARTSYS; + } + /* verify that the device wasn't unplugged */ - if (!dev || !dev->present) - return -ENODEV; + if (!dev->present) { + retval = -ENODEV; + goto exit; + }
dev_dbg(&dev->interface->dev, "minor %d, count = %zd\n", dev->minor, count);
/* read count must be packet size (+ time stamp) */ if ((count != dev->report_size) - && (count != (dev->report_size + 1))) - return -EINVAL; + && (count != (dev->report_size + 1))) { + retval = -EINVAL; + goto exit; + }
/* repeat until no buffer overrun in callback handler occur */ do { atomic_set(&dev->overflow_flag, 0); if ((read_idx = read_index(dev)) == -1) { /* queue empty */ - if (file->f_flags & O_NONBLOCK) - return -EAGAIN; + if (file->f_flags & O_NONBLOCK) { + retval = -EAGAIN; + goto exit; + } else { //next line will return when there is either new data, or the device is unplugged int r = wait_event_interruptible(dev->read_wait, @@ -309,28 +326,37 @@ static ssize_t iowarrior_read(struct file *file, char __user *buffer, -1)); if (r) { //we were interrupted by a signal - return -ERESTART; + retval = -ERESTART; + goto exit; } if (!dev->present) { //The device was unplugged - return -ENODEV; + retval = -ENODEV; + goto exit; } if (read_idx == -1) { // Can this happen ??? - return 0; + retval = 0; + goto exit; } } }
offset = read_idx * (dev->report_size + 1); if (copy_to_user(buffer, dev->read_queue + offset, count)) { - return -EFAULT; + retval = -EFAULT; + goto exit; } } while (atomic_read(&dev->overflow_flag));
read_idx = ++read_idx == MAX_INTERRUPT_BUFFER ? 0 : read_idx; atomic_set(&dev->read_idx, read_idx); + mutex_unlock(&dev->mutex); return count; + +exit: + mutex_unlock(&dev->mutex); + return retval; }
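The iowarrior_read() rework above serializes readers on dev->mutex, picks the lock flavour from O_NONBLOCK, and makes sure every exit path drops the lock. Roughly the same decision in a stand-alone pthread sketch (single reader shown, names invented):

#include <errno.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t dev_lock = PTHREAD_MUTEX_INITIALIZER;

/* Return 0 on success, -EAGAIN if non-blocking and the lock is busy. */
static int read_one(bool nonblock)
{
	if (nonblock) {
		if (pthread_mutex_trylock(&dev_lock))
			return -EAGAIN;         /* would block: bail out */
	} else {
		pthread_mutex_lock(&dev_lock);  /* wait for our turn */
	}

	/* ... copy one report to the caller here ... */

	pthread_mutex_unlock(&dev_lock);        /* every exit path unlocks */
	return 0;
}

int main(void)
{
	printf("blocking read:     %d\n", read_one(false));
	printf("non-blocking read: %d\n", read_one(true));
	return 0;
}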
/* @@ -885,7 +911,6 @@ static int iowarrior_probe(struct usb_interface *interface, static void iowarrior_disconnect(struct usb_interface *interface) { struct iowarrior *dev = usb_get_intfdata(interface); - int minor = dev->minor;
usb_deregister_dev(interface, &iowarrior_class);
@@ -909,9 +934,6 @@ static void iowarrior_disconnect(struct usb_interface *interface) mutex_unlock(&dev->mutex); iowarrior_delete(dev); } - - dev_info(&interface->dev, "I/O-Warror #%d now disconnected\n", - minor - IOWARRIOR_MINOR_BASE); }
/* usb specific object needed to register this driver with the usb subsystem */ diff --git a/drivers/usb/misc/usb-ljca.c b/drivers/usb/misc/usb-ljca.c index 01ceafc4ab78..d9c21f783055 100644 --- a/drivers/usb/misc/usb-ljca.c +++ b/drivers/usb/misc/usb-ljca.c @@ -332,14 +332,11 @@ static int ljca_send(struct ljca_adapter *adap, u8 type, u8 cmd,
ret = usb_bulk_msg(adap->usb_dev, adap->tx_pipe, header, msg_len, &transferred, LJCA_WRITE_TIMEOUT_MS); - - usb_autopm_put_interface(adap->intf); - if (ret < 0) - goto out; + goto out_put; if (transferred != msg_len) { ret = -EIO; - goto out; + goto out_put; }
if (ack) { @@ -347,11 +344,14 @@ static int ljca_send(struct ljca_adapter *adap, u8 type, u8 cmd, timeout); if (!ret) { ret = -ETIMEDOUT; - goto out; + goto out_put; } } ret = adap->actual_length;
+out_put: + usb_autopm_put_interface(adap->intf); + out: spin_lock_irqsave(&adap->lock, flags); adap->ex_buf = NULL; @@ -811,6 +811,14 @@ static int ljca_probe(struct usb_interface *interface, if (ret) goto err_free;
+ /* + * This works around problems with ov2740 initialization on some + * Lenovo platforms. The autosuspend delay, has to be smaller than + * the delay after setting the reset_gpio line in ov2740_resume(). + * Otherwise the sensor randomly fails to initialize. + */ + pm_runtime_set_autosuspend_delay(&usb_dev->dev, 10); + usb_enable_autosuspend(usb_dev);
return 0; diff --git a/drivers/usb/misc/yurex.c b/drivers/usb/misc/yurex.c index 6aebc736a80c..70dff0db5354 100644 --- a/drivers/usb/misc/yurex.c +++ b/drivers/usb/misc/yurex.c @@ -441,7 +441,10 @@ static ssize_t yurex_write(struct file *file, const char __user *user_buffer, if (count == 0) goto error;
- mutex_lock(&dev->io_mutex); + retval = mutex_lock_interruptible(&dev->io_mutex); + if (retval < 0) + return -EINTR; + if (dev->disconnected) { /* already disconnected */ mutex_unlock(&dev->io_mutex); retval = -ENODEV; diff --git a/drivers/usb/musb/musb_gadget.c b/drivers/usb/musb/musb_gadget.c index bdf13911a1e5..c6076df0d50c 100644 --- a/drivers/usb/musb/musb_gadget.c +++ b/drivers/usb/musb/musb_gadget.c @@ -1161,12 +1161,19 @@ void musb_free_request(struct usb_ep *ep, struct usb_request *req) */ void musb_ep_restart(struct musb *musb, struct musb_request *req) { + u16 csr; + void __iomem *epio = req->ep->hw_ep->regs; + trace_musb_req_start(req); musb_ep_select(musb->mregs, req->epnum); - if (req->tx) + if (req->tx) { txstate(musb, req); - else - rxstate(musb, req); + } else { + csr = musb_readw(epio, MUSB_RXCSR); + csr |= MUSB_RXCSR_FLUSHFIFO | MUSB_RXCSR_P_WZC_BITS; + musb_writew(epio, MUSB_RXCSR, csr); + musb_writew(epio, MUSB_RXCSR, csr); + } }
static int musb_ep_restart_resume_work(struct musb *musb, void *data) diff --git a/drivers/usb/typec/tcpm/wcove.c b/drivers/usb/typec/tcpm/wcove.c index cf719307b3f6..60b2766a69bf 100644 --- a/drivers/usb/typec/tcpm/wcove.c +++ b/drivers/usb/typec/tcpm/wcove.c @@ -621,10 +621,6 @@ static int wcove_typec_probe(struct platform_device *pdev) if (irq < 0) return irq;
- irq = regmap_irq_get_virq(pmic->irq_chip_data_chgr, irq); - if (irq < 0) - return irq; - ret = guid_parse(WCOVE_DSM_UUID, &wcove->guid); if (ret) return ret; diff --git a/drivers/usb/typec/ucsi/ucsi_ccg.c b/drivers/usb/typec/ucsi/ucsi_ccg.c index bccfc03b5986..fcb8e61136cf 100644 --- a/drivers/usb/typec/ucsi/ucsi_ccg.c +++ b/drivers/usb/typec/ucsi/ucsi_ccg.c @@ -644,6 +644,10 @@ static int ucsi_ccg_sync_control(struct ucsi *ucsi, u64 command) uc->has_multiple_dp) { con_index = (uc->last_cmd_sent >> 16) & UCSI_CMD_CONNECTOR_MASK; + if (con_index == 0) { + ret = -EINVAL; + goto unlock; + } con = &uc->ucsi->connector[con_index - 1]; ucsi_ccg_update_set_new_cam_cmd(uc, con, &command); } @@ -651,6 +655,7 @@ static int ucsi_ccg_sync_control(struct ucsi *ucsi, u64 command) ret = ucsi_sync_control_common(ucsi, command);
pm_runtime_put_sync(uc->dev); +unlock: mutex_unlock(&uc->lock);
return ret; diff --git a/drivers/usb/typec/ucsi/ucsi_glink.c b/drivers/usb/typec/ucsi/ucsi_glink.c index 03c0fa8edc8d..f7000d383a4e 100644 --- a/drivers/usb/typec/ucsi/ucsi_glink.c +++ b/drivers/usb/typec/ucsi/ucsi_glink.c @@ -185,7 +185,7 @@ static void pmic_glink_ucsi_connector_status(struct ucsi_connector *con) struct pmic_glink_ucsi *ucsi = ucsi_get_drvdata(con->ucsi); int orientation;
-	if (con->num >= PMIC_GLINK_MAX_PORTS ||
+	if (con->num > PMIC_GLINK_MAX_PORTS ||
 	    !ucsi->port_orientation[con->num - 1])
 		return;
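The ucsi_glink check above is a one-based/zero-based indexing fix: con->num is 1-based, so '>' keeps the last port usable while the old '>=' rejected it. A tiny stand-alone illustration (the MAX_PORTS value is invented):

#include <stdio.h>
#include <stdbool.h>

#define MAX_PORTS 3   /* illustrative stand-in for PMIC_GLINK_MAX_PORTS */

/* con_num is 1-based; the orientation table it indexes is 0-based. */
static bool port_valid(unsigned int con_num)
{
	/* '>' keeps port MAX_PORTS usable; '>=' would wrongly reject it. */
	return con_num >= 1 && con_num <= MAX_PORTS;
}

int main(void)
{
	for (unsigned int n = 0; n <= MAX_PORTS + 1; n++)
		printf("connector %u -> %s (table index %d)\n", n,
		       port_valid(n) ? "ok" : "rejected",
		       port_valid(n) ? (int)(n - 1) : -1);
	return 0;
}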
diff --git a/drivers/vdpa/mlx5/core/mr.c b/drivers/vdpa/mlx5/core/mr.c index 7d0c83b5b071..8455f08f5d40 100644 --- a/drivers/vdpa/mlx5/core/mr.c +++ b/drivers/vdpa/mlx5/core/mr.c @@ -368,7 +368,6 @@ static int map_direct_mr(struct mlx5_vdpa_dev *mvdev, struct mlx5_vdpa_direct_mr unsigned long lgcd = 0; int log_entity_size; unsigned long size; - u64 start = 0; int err; struct page *pg; unsigned int nsg; @@ -379,10 +378,9 @@ static int map_direct_mr(struct mlx5_vdpa_dev *mvdev, struct mlx5_vdpa_direct_mr struct device *dma = mvdev->vdev.dma_dev;
 	for (map = vhost_iotlb_itree_first(iotlb, mr->start, mr->end - 1);
-	     map; map = vhost_iotlb_itree_next(map, start, mr->end - 1)) {
+	     map; map = vhost_iotlb_itree_next(map, mr->start, mr->end - 1)) {
 		size = maplen(map, mr);
 		lgcd = gcd(lgcd, size);
-		start += size;
 	}
 	log_entity_size = ilog2(lgcd);
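The vdpa change above keeps walking the iotlb from mr->start instead of a stale running offset; the loop's purpose is to take the GCD of all mapping sizes and use its log2 as the direct-MR entity size. A stand-alone sketch of that size calculation, with invented mapping lengths:

#include <stdio.h>
#include <stdint.h>

static uint64_t gcd64(uint64_t a, uint64_t b)
{
	while (b) {
		uint64_t t = a % b;

		a = b;
		b = t;
	}
	return a;
}

static unsigned int ilog2_64(uint64_t v)
{
	unsigned int l = 0;

	while (v >>= 1)
		l++;
	return l;
}

int main(void)
{
	/* Hypothetical per-map lengths covered by one direct MR. */
	uint64_t sizes[] = { 0x200000, 0x10000, 0x4000 };
	uint64_t g = 0;

	for (size_t i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
		g = gcd64(g, sizes[i]);   /* gcd(0, x) == x seeds the loop */

	printf("gcd = 0x%llx, log_entity_size = %u\n",
	       (unsigned long long)g, ilog2_64(g));
	return 0;
}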
diff --git a/drivers/vfio/pci/mlx5/cmd.c b/drivers/vfio/pci/mlx5/cmd.c index 41a4b0cf4297..7527e277c898 100644 --- a/drivers/vfio/pci/mlx5/cmd.c +++ b/drivers/vfio/pci/mlx5/cmd.c @@ -423,6 +423,7 @@ static int mlx5vf_add_migration_pages(struct mlx5_vhca_data_buffer *buf, unsigned long filled; unsigned int to_fill; int ret; + int i;
to_fill = min_t(unsigned int, npages, PAGE_SIZE / sizeof(*page_list)); page_list = kvzalloc(to_fill * sizeof(*page_list), GFP_KERNEL_ACCOUNT); @@ -443,7 +444,7 @@ static int mlx5vf_add_migration_pages(struct mlx5_vhca_data_buffer *buf, GFP_KERNEL_ACCOUNT);
if (ret) - goto err; + goto err_append; buf->allocated_length += filled * PAGE_SIZE; /* clean input for another bulk allocation */ memset(page_list, 0, filled * sizeof(*page_list)); @@ -454,6 +455,9 @@ static int mlx5vf_add_migration_pages(struct mlx5_vhca_data_buffer *buf, kvfree(page_list); return 0;
+err_append: + for (i = filled - 1; i >= 0; i--) + __free_page(page_list[i]); err: kvfree(page_list); return ret; diff --git a/drivers/vfio/pci/mlx5/main.c b/drivers/vfio/pci/mlx5/main.c index 242c23eef452..8833e60d42f5 100644 --- a/drivers/vfio/pci/mlx5/main.c +++ b/drivers/vfio/pci/mlx5/main.c @@ -640,14 +640,11 @@ mlx5vf_pci_save_device_data(struct mlx5vf_pci_core_device *mvdev, bool track) O_RDONLY); if (IS_ERR(migf->filp)) { ret = PTR_ERR(migf->filp); - goto end; + kfree(migf); + return ERR_PTR(ret); }
migf->mvdev = mvdev; - ret = mlx5vf_cmd_alloc_pd(migf); - if (ret) - goto out_free; - stream_open(migf->filp->f_inode, migf->filp); mutex_init(&migf->lock); init_waitqueue_head(&migf->poll_wait); @@ -663,6 +660,11 @@ mlx5vf_pci_save_device_data(struct mlx5vf_pci_core_device *mvdev, bool track) INIT_LIST_HEAD(&migf->buf_list); INIT_LIST_HEAD(&migf->avail_list); spin_lock_init(&migf->list_lock); + + ret = mlx5vf_cmd_alloc_pd(migf); + if (ret) + goto out; + ret = mlx5vf_cmd_query_vhca_migration_state(mvdev, &length, &full_size, 0); if (ret) goto out_pd; @@ -692,10 +694,8 @@ mlx5vf_pci_save_device_data(struct mlx5vf_pci_core_device *mvdev, bool track) mlx5vf_free_data_buffer(buf); out_pd: mlx5fv_cmd_clean_migf_resources(migf); -out_free: +out: fput(migf->filp); -end: - kfree(migf); return ERR_PTR(ret); }
@@ -1016,13 +1016,19 @@ mlx5vf_pci_resume_device_data(struct mlx5vf_pci_core_device *mvdev) O_WRONLY); if (IS_ERR(migf->filp)) { ret = PTR_ERR(migf->filp); - goto end; + kfree(migf); + return ERR_PTR(ret); }
+ stream_open(migf->filp->f_inode, migf->filp); + mutex_init(&migf->lock); + INIT_LIST_HEAD(&migf->buf_list); + INIT_LIST_HEAD(&migf->avail_list); + spin_lock_init(&migf->list_lock); migf->mvdev = mvdev; ret = mlx5vf_cmd_alloc_pd(migf); if (ret) - goto out_free; + goto out;
buf = mlx5vf_alloc_data_buffer(migf, 0, DMA_TO_DEVICE); if (IS_ERR(buf)) { @@ -1041,20 +1047,13 @@ mlx5vf_pci_resume_device_data(struct mlx5vf_pci_core_device *mvdev) migf->buf_header[0] = buf; migf->load_state = MLX5_VF_LOAD_STATE_READ_HEADER;
- stream_open(migf->filp->f_inode, migf->filp); - mutex_init(&migf->lock); - INIT_LIST_HEAD(&migf->buf_list); - INIT_LIST_HEAD(&migf->avail_list); - spin_lock_init(&migf->list_lock); return migf; out_buf: mlx5vf_free_data_buffer(migf->buf[0]); out_pd: mlx5vf_cmd_dealloc_pd(migf); -out_free: +out: fput(migf->filp); -end: - kfree(migf); return ERR_PTR(ret); }
diff --git a/drivers/vfio/pci/vfio_pci_config.c b/drivers/vfio/pci/vfio_pci_config.c index 97422aafaa7b..ea2745c1ac5e 100644 --- a/drivers/vfio/pci/vfio_pci_config.c +++ b/drivers/vfio/pci/vfio_pci_config.c @@ -313,6 +313,10 @@ static int vfio_virt_config_read(struct vfio_pci_core_device *vdev, int pos, return count; }
+static struct perm_bits direct_ro_perms = { + .readfn = vfio_direct_config_read, +}; + /* Default capability regions to read-only, no-virtualization */ static struct perm_bits cap_perms[PCI_CAP_ID_MAX + 1] = { [0 ... PCI_CAP_ID_MAX] = { .readfn = vfio_direct_config_read } @@ -1897,9 +1901,17 @@ static ssize_t vfio_config_do_rw(struct vfio_pci_core_device *vdev, char __user cap_start = *ppos; } else { if (*ppos >= PCI_CFG_SPACE_SIZE) { - WARN_ON(cap_id > PCI_EXT_CAP_ID_MAX); + /* + * We can get a cap_id that exceeds PCI_EXT_CAP_ID_MAX + * if we're hiding an unknown capability at the start + * of the extended capability list. Use default, ro + * access, which will virtualize the id and next values. + */ + if (cap_id > PCI_EXT_CAP_ID_MAX) + perm = &direct_ro_perms; + else + perm = &ecap_perms[cap_id];
- perm = &ecap_perms[cap_id]; cap_start = vfio_find_cap_start(vdev, *ppos); } else { WARN_ON(cap_id > PCI_CAP_ID_MAX); diff --git a/drivers/video/fbdev/sh7760fb.c b/drivers/video/fbdev/sh7760fb.c index 3d2a27fefc87..130adef2e468 100644 --- a/drivers/video/fbdev/sh7760fb.c +++ b/drivers/video/fbdev/sh7760fb.c @@ -409,12 +409,11 @@ static int sh7760fb_alloc_mem(struct fb_info *info) vram = PAGE_SIZE;
fbmem = dma_alloc_coherent(info->device, vram, &par->fbdma, GFP_KERNEL); - if (!fbmem) return -ENOMEM;
if ((par->fbdma & SH7760FB_DMA_MASK) != SH7760FB_DMA_MASK) { - sh7760fb_free_mem(info); + dma_free_coherent(info->device, vram, fbmem, par->fbdma); dev_err(info->device, "kernel gave me memory at 0x%08lx, which is" "unusable for the LCDC\n", (unsigned long)par->fbdma); return -ENOMEM; diff --git a/drivers/watchdog/Kconfig b/drivers/watchdog/Kconfig index 684b9fe84fff..94c96bcfefe3 100644 --- a/drivers/watchdog/Kconfig +++ b/drivers/watchdog/Kconfig @@ -1509,7 +1509,7 @@ config 60XX_WDT
config SBC8360_WDT tristate "SBC8360 Watchdog Timer" - depends on X86_32 + depends on X86_32 && HAS_IOPORT help
This is the driver for the hardware watchdog on the SBC8360 Single @@ -1522,7 +1522,7 @@ config SBC8360_WDT
config SBC7240_WDT tristate "SBC Nano 7240 Watchdog Timer" - depends on X86_32 && !UML + depends on X86_32 && HAS_IOPORT help This is the driver for the hardware watchdog found on the IEI single board computers EPIC Nano 7240 (and likely others). This diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c index 9f097f1f4a4c..6d32ffb01136 100644 --- a/drivers/xen/xenbus/xenbus_probe.c +++ b/drivers/xen/xenbus/xenbus_probe.c @@ -313,7 +313,7 @@ int xenbus_dev_probe(struct device *_dev) if (err) { dev_warn(&dev->dev, "watch_otherend on %s failed.\n", dev->nodename); - return err; + goto fail_remove; }
dev->spurious_threshold = 1; @@ -322,6 +322,12 @@ int xenbus_dev_probe(struct device *_dev) dev->nodename);
return 0; +fail_remove: + if (drv->remove) { + down(&dev->reclaim_sem); + drv->remove(dev); + up(&dev->reclaim_sem); + } fail_put: module_put(drv->driver.owner); fail: diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c index 06dc4a57ba78..0a216a078c31 100644 --- a/fs/binfmt_elf.c +++ b/fs/binfmt_elf.c @@ -1251,6 +1251,7 @@ static int load_elf_binary(struct linux_binprm *bprm) } reloc_func_desc = interp_load_addr;
+ allow_write_access(interpreter); fput(interpreter);
kfree(interp_elf_ex); @@ -1347,6 +1348,7 @@ static int load_elf_binary(struct linux_binprm *bprm) kfree(interp_elf_ex); kfree(interp_elf_phdata); out_free_file: + allow_write_access(interpreter); if (interpreter) fput(interpreter); out_free_ph: diff --git a/fs/binfmt_elf_fdpic.c b/fs/binfmt_elf_fdpic.c index 4fe5bb9f1b1f..7d35f0e1bc76 100644 --- a/fs/binfmt_elf_fdpic.c +++ b/fs/binfmt_elf_fdpic.c @@ -394,6 +394,7 @@ static int load_elf_fdpic_binary(struct linux_binprm *bprm) goto error; }
+ allow_write_access(interpreter); fput(interpreter); interpreter = NULL; } @@ -465,8 +466,10 @@ static int load_elf_fdpic_binary(struct linux_binprm *bprm) retval = 0;
error: - if (interpreter) + if (interpreter) { + allow_write_access(interpreter); fput(interpreter); + } kfree(interpreter_name); kfree(exec_params.phdrs); kfree(exec_params.loadmap); diff --git a/fs/binfmt_misc.c b/fs/binfmt_misc.c index 31660d8cc2c6..6a3a16f91051 100644 --- a/fs/binfmt_misc.c +++ b/fs/binfmt_misc.c @@ -247,10 +247,13 @@ static int load_misc_binary(struct linux_binprm *bprm) if (retval < 0) goto ret;
- if (fmt->flags & MISC_FMT_OPEN_FILE) + if (fmt->flags & MISC_FMT_OPEN_FILE) { interp_file = file_clone_open(fmt->interp_file); - else + if (!IS_ERR(interp_file)) + deny_write_access(interp_file); + } else { interp_file = open_exec(fmt->interpreter); + } retval = PTR_ERR(interp_file); if (IS_ERR(interp_file)) goto ret; diff --git a/fs/cachefiles/interface.c b/fs/cachefiles/interface.c index 35ba2117a6f6..3e63cfe15874 100644 --- a/fs/cachefiles/interface.c +++ b/fs/cachefiles/interface.c @@ -327,6 +327,8 @@ static void cachefiles_commit_object(struct cachefiles_object *object, static void cachefiles_clean_up_object(struct cachefiles_object *object, struct cachefiles_cache *cache) { + struct file *file; + if (test_bit(FSCACHE_COOKIE_RETIRED, &object->cookie->flags)) { if (!test_bit(CACHEFILES_OBJECT_USING_TMPFILE, &object->flags)) { cachefiles_see_object(object, cachefiles_obj_see_clean_delete); @@ -342,10 +344,14 @@ static void cachefiles_clean_up_object(struct cachefiles_object *object, }
cachefiles_unmark_inode_in_use(object, object->file); - if (object->file) { - fput(object->file); - object->file = NULL; - } + + spin_lock(&object->lock); + file = object->file; + object->file = NULL; + spin_unlock(&object->lock); + + if (file) + fput(file); }
/* diff --git a/fs/cachefiles/ondemand.c b/fs/cachefiles/ondemand.c index 470c96658385..fe3de9ad57bf 100644 --- a/fs/cachefiles/ondemand.c +++ b/fs/cachefiles/ondemand.c @@ -60,26 +60,36 @@ static ssize_t cachefiles_ondemand_fd_write_iter(struct kiocb *kiocb, { struct cachefiles_object *object = kiocb->ki_filp->private_data; struct cachefiles_cache *cache = object->volume->cache; - struct file *file = object->file; - size_t len = iter->count; + struct file *file; + size_t len = iter->count, aligned_len = len; loff_t pos = kiocb->ki_pos; const struct cred *saved_cred; int ret;
- if (!file) + spin_lock(&object->lock); + file = object->file; + if (!file) { + spin_unlock(&object->lock); return -ENOBUFS; + } + get_file(file); + spin_unlock(&object->lock);
cachefiles_begin_secure(cache, &saved_cred); - ret = __cachefiles_prepare_write(object, file, &pos, &len, len, true); + ret = __cachefiles_prepare_write(object, file, &pos, &aligned_len, len, true); cachefiles_end_secure(cache, saved_cred); if (ret < 0) - return ret; + goto out;
trace_cachefiles_ondemand_fd_write(object, file_inode(file), pos, len); ret = __cachefiles_write(object, file, pos, iter, NULL, NULL); - if (!ret) + if (!ret) { ret = len; + kiocb->ki_pos += ret; + }
+out: + fput(file); return ret; }
@@ -87,12 +97,22 @@ static loff_t cachefiles_ondemand_fd_llseek(struct file *filp, loff_t pos, int whence) { struct cachefiles_object *object = filp->private_data; - struct file *file = object->file; + struct file *file; + loff_t ret;
- if (!file) + spin_lock(&object->lock); + file = object->file; + if (!file) { + spin_unlock(&object->lock); return -ENOBUFS; + } + get_file(file); + spin_unlock(&object->lock);
- return vfs_llseek(file, pos, whence); + ret = vfs_llseek(file, pos, whence); + fput(file); + + return ret; }
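Both cachefiles hunks above follow the same pattern: snapshot object->file and take a reference while holding object->lock, then do the write or llseek without the lock held and drop the reference afterwards. A single-threaded toy sketch of that pattern (in the kernel the file refcount itself is atomic):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy refcounted "file" and an object whose ->file may be cleared. */
struct file { int refs; };
struct object {
	pthread_mutex_t lock;
	struct file *file;
};

static void get_file(struct file *f) { f->refs++; }
static void put_file(struct file *f)
{
	if (--f->refs == 0)
		free(f);
}

/* Grab the pointer and a reference under the lock, work outside it. */
static int use_backing_file(struct object *obj)
{
	struct file *file;

	pthread_mutex_lock(&obj->lock);
	file = obj->file;
	if (!file) {
		pthread_mutex_unlock(&obj->lock);
		return -1;                /* -ENOBUFS in the driver */
	}
	get_file(file);
	pthread_mutex_unlock(&obj->lock);

	/* ... read/seek via 'file' here; a concurrent teardown can now
	 * set obj->file = NULL and drop its own reference safely ... */

	put_file(file);
	return 0;
}

int main(void)
{
	struct object obj = { .file = NULL };

	pthread_mutex_init(&obj.lock, NULL);
	obj.file = calloc(1, sizeof(*obj.file));
	obj.file->refs = 1;

	printf("use: %d\n", use_backing_file(&obj));
	put_file(obj.file);               /* owner drops its reference */
	return 0;
}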
static long cachefiles_ondemand_fd_ioctl(struct file *filp, unsigned int ioctl, diff --git a/fs/dlm/ast.c b/fs/dlm/ast.c index 742b30b61c19..0fe8d80ce5e8 100644 --- a/fs/dlm/ast.c +++ b/fs/dlm/ast.c @@ -30,7 +30,7 @@ static void dlm_run_callback(uint32_t ls_id, uint32_t lkb_id, int8_t mode, trace_dlm_bast(ls_id, lkb_id, mode, res_name, res_length); bastfn(astparam, mode); } else if (flags & DLM_CB_CAST) { - trace_dlm_ast(ls_id, lkb_id, sb_status, sb_flags, res_name, + trace_dlm_ast(ls_id, lkb_id, sb_flags, sb_status, res_name, res_length); lksb->sb_status = sb_status; lksb->sb_flags = sb_flags; diff --git a/fs/dlm/recoverd.c b/fs/dlm/recoverd.c index 34f4f9f49a6c..12272a8f6d75 100644 --- a/fs/dlm/recoverd.c +++ b/fs/dlm/recoverd.c @@ -151,7 +151,7 @@ static int ls_recover(struct dlm_ls *ls, struct dlm_recover *rv) error = dlm_recover_members(ls, rv, &neg); if (error) { log_rinfo(ls, "dlm_recover_members error %d", error); - goto fail; + goto fail_root_list; }
dlm_recover_dir_nodeid(ls, &root_list); diff --git a/fs/efs/super.c b/fs/efs/super.c index e4421c10caeb..c59086b7eabf 100644 --- a/fs/efs/super.c +++ b/fs/efs/super.c @@ -15,7 +15,6 @@ #include <linux/vfs.h> #include <linux/blkdev.h> #include <linux/fs_context.h> -#include <linux/fs_parser.h> #include "efs.h" #include <linux/efs_vh.h> #include <linux/efs_fs_sb.h> @@ -49,15 +48,6 @@ static struct pt_types sgi_pt_types[] = { {0, NULL} };
-enum { - Opt_explicit_open, -}; - -static const struct fs_parameter_spec efs_param_spec[] = { - fsparam_flag ("explicit-open", Opt_explicit_open), - {} -}; - /* * File system definition and registration. */ @@ -67,7 +57,6 @@ static struct file_system_type efs_fs_type = { .kill_sb = efs_kill_sb, .fs_flags = FS_REQUIRES_DEV, .init_fs_context = efs_init_fs_context, - .parameters = efs_param_spec, }; MODULE_ALIAS_FS("efs");
@@ -265,7 +254,8 @@ static int efs_fill_super(struct super_block *s, struct fs_context *fc) if (!sb_set_blocksize(s, EFS_BLOCKSIZE)) { pr_err("device does not support %d byte blocks\n", EFS_BLOCKSIZE); - return -EINVAL; + return invalf(fc, "device does not support %d byte blocks\n", + EFS_BLOCKSIZE); }
/* read the vh (volume header) block */ @@ -327,43 +317,22 @@ static int efs_fill_super(struct super_block *s, struct fs_context *fc) return 0; }
-static void efs_free_fc(struct fs_context *fc) -{ - kfree(fc->fs_private); -} - static int efs_get_tree(struct fs_context *fc) { return get_tree_bdev(fc, efs_fill_super); }
-static int efs_parse_param(struct fs_context *fc, struct fs_parameter *param) -{ - int token; - struct fs_parse_result result; - - token = fs_parse(fc, efs_param_spec, param, &result); - if (token < 0) - return token; - return 0; -} - static int efs_reconfigure(struct fs_context *fc) { sync_filesystem(fc->root->d_sb); + fc->sb_flags |= SB_RDONLY;
return 0; }
-struct efs_context { - unsigned long s_mount_opts; -}; - static const struct fs_context_operations efs_context_opts = { - .parse_param = efs_parse_param, .get_tree = efs_get_tree, .reconfigure = efs_reconfigure, - .free = efs_free_fc, };
/* @@ -371,12 +340,6 @@ static const struct fs_context_operations efs_context_opts = { */ static int efs_init_fs_context(struct fs_context *fc) { - struct efs_context *ctx; - - ctx = kzalloc(sizeof(struct efs_context), GFP_KERNEL); - if (!ctx) - return -ENOMEM; - fc->fs_private = ctx; fc->ops = &efs_context_opts;
return 0; diff --git a/fs/erofs/data.c b/fs/erofs/data.c index 61debd799cf9..fa51437e1d99 100644 --- a/fs/erofs/data.c +++ b/fs/erofs/data.c @@ -38,7 +38,7 @@ void *erofs_bread(struct erofs_buf *buf, erofs_off_t offset, } if (!folio || !folio_contains(folio, index)) { erofs_put_metabuf(buf); - folio = read_mapping_folio(buf->mapping, index, NULL); + folio = read_mapping_folio(buf->mapping, index, buf->file); if (IS_ERR(folio)) return folio; } @@ -61,9 +61,11 @@ void erofs_init_metabuf(struct erofs_buf *buf, struct super_block *sb) { struct erofs_sb_info *sbi = EROFS_SB(sb);
- if (erofs_is_fileio_mode(sbi)) - buf->mapping = file_inode(sbi->fdev)->i_mapping; - else if (erofs_is_fscache_mode(sb)) + buf->file = NULL; + if (erofs_is_fileio_mode(sbi)) { + buf->file = sbi->fdev; /* some fs like FUSE needs it */ + buf->mapping = buf->file->f_mapping; + } else if (erofs_is_fscache_mode(sb)) buf->mapping = sbi->s_fscache->inode->i_mapping; else buf->mapping = sb->s_bdev->bd_mapping; diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h index 4efd578d7c62..9b03c8f323a7 100644 --- a/fs/erofs/internal.h +++ b/fs/erofs/internal.h @@ -221,6 +221,7 @@ enum erofs_kmap_type {
struct erofs_buf { struct address_space *mapping; + struct file *file; struct page *page; void *base; enum erofs_kmap_type kmap_type; diff --git a/fs/erofs/super.c b/fs/erofs/super.c index bed3dbe5b7cb..2dd7d819572f 100644 --- a/fs/erofs/super.c +++ b/fs/erofs/super.c @@ -631,7 +631,11 @@ static int erofs_fc_fill_super(struct super_block *sb, struct fs_context *fc) errorfc(fc, "unsupported blksize for fscache mode"); return -EINVAL; } - if (!sb_set_blocksize(sb, 1 << sbi->blkszbits)) { + + if (erofs_is_fileio_mode(sbi)) { + sb->s_blocksize = 1 << sbi->blkszbits; + sb->s_blocksize_bits = sbi->blkszbits; + } else if (!sb_set_blocksize(sb, 1 << sbi->blkszbits)) { errorfc(fc, "failed to set erofs blksize"); return -EINVAL; } diff --git a/fs/erofs/zmap.c b/fs/erofs/zmap.c index a076cca1f547..4535f2f0a014 100644 --- a/fs/erofs/zmap.c +++ b/fs/erofs/zmap.c @@ -219,7 +219,7 @@ static int z_erofs_load_compact_lcluster(struct z_erofs_maprecorder *m, unsigned int amortizedshift; erofs_off_t pos;
- if (lcn >= totalidx) + if (lcn >= totalidx || vi->z_logical_clusterbits > 14) return -EINVAL;
m->lcn = lcn; @@ -390,7 +390,7 @@ static int z_erofs_get_extent_decompressedlen(struct z_erofs_maprecorder *m) u64 lcn = m->lcn, headlcn = map->m_la >> lclusterbits; int err;
- do { + while (1) { /* handle the last EOF pcluster (no next HEAD lcluster) */ if ((lcn << lclusterbits) >= inode->i_size) { map->m_llen = inode->i_size - map->m_la; @@ -402,14 +402,16 @@ static int z_erofs_get_extent_decompressedlen(struct z_erofs_maprecorder *m) return err;
if (m->type == Z_EROFS_LCLUSTER_TYPE_NONHEAD) { - DBG_BUGON(!m->delta[1] && - m->clusterofs != 1 << lclusterbits); + /* work around invalid d1 generated by pre-1.0 mkfs */ + if (unlikely(!m->delta[1])) { + m->delta[1] = 1; + DBG_BUGON(1); + } } else if (m->type == Z_EROFS_LCLUSTER_TYPE_PLAIN || m->type == Z_EROFS_LCLUSTER_TYPE_HEAD1 || m->type == Z_EROFS_LCLUSTER_TYPE_HEAD2) { - /* go on until the next HEAD lcluster */ if (lcn != headlcn) - break; + break; /* ends at the next HEAD lcluster */ m->delta[1] = 1; } else { erofs_err(inode->i_sb, "unknown type %u @ lcn %llu of nid %llu", @@ -418,8 +420,7 @@ static int z_erofs_get_extent_decompressedlen(struct z_erofs_maprecorder *m) return -EOPNOTSUPP; } lcn += m->delta[1]; - } while (m->delta[1]); - + } map->m_llen = (lcn << lclusterbits) + m->clusterofs - map->m_la; return 0; } diff --git a/fs/exec.c b/fs/exec.c index 6c53920795c2..9c349a74f385 100644 --- a/fs/exec.c +++ b/fs/exec.c @@ -883,7 +883,8 @@ EXPORT_SYMBOL(transfer_args_to_stack); */ static struct file *do_open_execat(int fd, struct filename *name, int flags) { - struct file *file; + int err; + struct file *file __free(fput) = NULL; struct open_flags open_exec_flags = { .open_flag = O_LARGEFILE | O_RDONLY | __FMODE_EXEC, .acc_mode = MAY_EXEC, @@ -908,12 +909,14 @@ static struct file *do_open_execat(int fd, struct filename *name, int flags) * an invariant that all non-regular files error out before we get here. */ if (WARN_ON_ONCE(!S_ISREG(file_inode(file)->i_mode)) || - path_noexec(&file->f_path)) { - fput(file); + path_noexec(&file->f_path)) return ERR_PTR(-EACCES); - }
- return file; + err = deny_write_access(file); + if (err) + return ERR_PTR(err); + + return no_free_ptr(file); }
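The do_open_execat() rework above leans on the kernel's scope-based cleanup helpers: __free(fput) releases the file on every early return, and no_free_ptr() hands ownership out on success. The same idea in plain GCC/Clang C with invented names:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Run free() on the pointed-to pointer when it goes out of scope. */
static void cleanup_free(char **p)
{
	free(*p);
}
#define AUTO_FREE __attribute__((cleanup(cleanup_free)))

/* Steal the value so the cleanup handler sees NULL and does nothing. */
static char *take_ptr(char **p)
{
	char *val = *p;

	*p = NULL;
	return val;
}

static char *make_greeting(int fail)
{
	AUTO_FREE char *buf = strdup("hello");

	if (!buf)
		return NULL;
	if (fail)
		return NULL;          /* buf is freed automatically here */

	return take_ptr(&buf);       /* success: hand ownership to caller */
}

int main(void)
{
	char *s = make_greeting(0);

	printf("%s\n", s ? s : "(failed)");
	free(s);
	return 0;
}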
/** @@ -923,7 +926,8 @@ static struct file *do_open_execat(int fd, struct filename *name, int flags) * * Returns ERR_PTR on failure or allocated struct file on success. * - * As this is a wrapper for the internal do_open_execat(). Also see + * As this is a wrapper for the internal do_open_execat(), callers + * must call allow_write_access() before fput() on release. Also see * do_close_execat(). */ struct file *open_exec(const char *name) @@ -1475,8 +1479,10 @@ static int prepare_bprm_creds(struct linux_binprm *bprm) /* Matches do_open_execat() */ static void do_close_execat(struct file *file) { - if (file) - fput(file); + if (!file) + return; + allow_write_access(file); + fput(file); }
static void free_bprm(struct linux_binprm *bprm) @@ -1801,6 +1807,7 @@ static int exec_binprm(struct linux_binprm *bprm) bprm->file = bprm->interpreter; bprm->interpreter = NULL;
+ allow_write_access(exec); if (unlikely(bprm->have_execfd)) { if (bprm->executable) { fput(exec); diff --git a/fs/exfat/file.c b/fs/exfat/file.c index a25d7eb789f4..fb38769c3e39 100644 --- a/fs/exfat/file.c +++ b/fs/exfat/file.c @@ -584,6 +584,16 @@ static ssize_t exfat_file_write_iter(struct kiocb *iocb, struct iov_iter *iter) if (ret < 0) goto unlock;
+ if (iocb->ki_flags & IOCB_DIRECT) { + unsigned long align = pos | iov_iter_alignment(iter); + + if (!IS_ALIGNED(align, i_blocksize(inode)) && + !IS_ALIGNED(align, bdev_logical_block_size(inode->i_sb->s_bdev))) { + ret = -EINVAL; + goto unlock; + } + } + if (pos > valid_size) { ret = exfat_extend_valid_size(file, pos); if (ret < 0 && ret != -ENOSPC) { diff --git a/fs/exfat/namei.c b/fs/exfat/namei.c index 2c4c44229352..337197ece599 100644 --- a/fs/exfat/namei.c +++ b/fs/exfat/namei.c @@ -345,6 +345,7 @@ static int exfat_find_empty_entry(struct inode *inode, if (ei->start_clu == EXFAT_EOF_CLUSTER) { ei->start_clu = clu.dir; p_dir->dir = clu.dir; + hint_femp.eidx = 0; }
/* append to the FAT chain */ @@ -637,14 +638,26 @@ static int exfat_find(struct inode *dir, struct qstr *qname, info->size = le64_to_cpu(ep2->dentry.stream.valid_size); info->valid_size = le64_to_cpu(ep2->dentry.stream.valid_size); info->size = le64_to_cpu(ep2->dentry.stream.size); + + info->start_clu = le32_to_cpu(ep2->dentry.stream.start_clu); + if (!is_valid_cluster(sbi, info->start_clu) && info->size) { + exfat_warn(sb, "start_clu is invalid cluster(0x%x)", + info->start_clu); + info->size = 0; + info->valid_size = 0; + } + + if (info->valid_size > info->size) { + exfat_warn(sb, "valid_size(%lld) is greater than size(%lld)", + info->valid_size, info->size); + info->valid_size = info->size; + } + if (info->size == 0) { info->flags = ALLOC_NO_FAT_CHAIN; info->start_clu = EXFAT_EOF_CLUSTER; - } else { + } else info->flags = ep2->dentry.stream.flags; - info->start_clu = - le32_to_cpu(ep2->dentry.stream.start_clu); - }
exfat_get_entry_time(sbi, &info->crtime, ep->dentry.file.create_tz, diff --git a/fs/ext4/balloc.c b/fs/ext4/balloc.c index 591fb3f710be..8042ad873808 100644 --- a/fs/ext4/balloc.c +++ b/fs/ext4/balloc.c @@ -550,7 +550,8 @@ ext4_read_block_bitmap_nowait(struct super_block *sb, ext4_group_t block_group, trace_ext4_read_block_bitmap_load(sb, block_group, ignore_locked); ext4_read_bh_nowait(bh, REQ_META | REQ_PRIO | (ignore_locked ? REQ_RAHEAD : 0), - ext4_end_bitmap_read); + ext4_end_bitmap_read, + ext4_simulate_fail(sb, EXT4_SIM_BBITMAP_EIO)); return bh; verify: err = ext4_validate_block_bitmap(sb, desc, block_group, bh); @@ -577,7 +578,6 @@ int ext4_wait_block_bitmap(struct super_block *sb, ext4_group_t block_group, if (!desc) return -EFSCORRUPTED; wait_on_buffer(bh); - ext4_simulate_fail_bh(sb, bh, EXT4_SIM_BBITMAP_EIO); if (!buffer_uptodate(bh)) { ext4_error_err(sb, EIO, "Cannot read block bitmap - " "block_group = %u, block_bitmap = %llu", diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h index 44b0d418143c..bbffb76d9a90 100644 --- a/fs/ext4/ext4.h +++ b/fs/ext4/ext4.h @@ -1865,14 +1865,6 @@ static inline bool ext4_simulate_fail(struct super_block *sb, return false; }
-static inline void ext4_simulate_fail_bh(struct super_block *sb, - struct buffer_head *bh, - unsigned long code) -{ - if (!IS_ERR(bh) && ext4_simulate_fail(sb, code)) - clear_buffer_uptodate(bh); -} - /* * Error number codes for s_{first,last}_error_errno * @@ -3100,9 +3092,9 @@ extern struct buffer_head *ext4_sb_bread(struct super_block *sb, extern struct buffer_head *ext4_sb_bread_unmovable(struct super_block *sb, sector_t block); extern void ext4_read_bh_nowait(struct buffer_head *bh, blk_opf_t op_flags, - bh_end_io_t *end_io); + bh_end_io_t *end_io, bool simu_fail); extern int ext4_read_bh(struct buffer_head *bh, blk_opf_t op_flags, - bh_end_io_t *end_io); + bh_end_io_t *end_io, bool simu_fail); extern int ext4_read_bh_lock(struct buffer_head *bh, blk_opf_t op_flags, bool wait); extern void ext4_sb_breadahead_unmovable(struct super_block *sb, sector_t block); extern int ext4_seq_options_show(struct seq_file *seq, void *offset); diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c index 34e25eee6521..88f98dc44027 100644 --- a/fs/ext4/extents.c +++ b/fs/ext4/extents.c @@ -568,7 +568,7 @@ __read_extent_tree_block(const char *function, unsigned int line,
if (!bh_uptodate_or_lock(bh)) { trace_ext4_ext_load_extent(inode, pblk, _RET_IP_); - err = ext4_read_bh(bh, 0, NULL); + err = ext4_read_bh(bh, 0, NULL, false); if (err < 0) goto errout; } diff --git a/fs/ext4/fsmap.c b/fs/ext4/fsmap.c index df853c4d3a8c..383c6edea6dd 100644 --- a/fs/ext4/fsmap.c +++ b/fs/ext4/fsmap.c @@ -185,6 +185,56 @@ static inline ext4_fsblk_t ext4_fsmap_next_pblk(struct ext4_fsmap *fmr) return fmr->fmr_physical + fmr->fmr_length; }
+static int ext4_getfsmap_meta_helper(struct super_block *sb, + ext4_group_t agno, ext4_grpblk_t start, + ext4_grpblk_t len, void *priv) +{ + struct ext4_getfsmap_info *info = priv; + struct ext4_fsmap *p; + struct ext4_fsmap *tmp; + struct ext4_sb_info *sbi = EXT4_SB(sb); + ext4_fsblk_t fsb, fs_start, fs_end; + int error; + + fs_start = fsb = (EXT4_C2B(sbi, start) + + ext4_group_first_block_no(sb, agno)); + fs_end = fs_start + EXT4_C2B(sbi, len); + + /* Return relevant extents from the meta_list */ + list_for_each_entry_safe(p, tmp, &info->gfi_meta_list, fmr_list) { + if (p->fmr_physical < info->gfi_next_fsblk) { + list_del(&p->fmr_list); + kfree(p); + continue; + } + if (p->fmr_physical <= fs_start || + p->fmr_physical + p->fmr_length <= fs_end) { + /* Emit the retained free extent record if present */ + if (info->gfi_lastfree.fmr_owner) { + error = ext4_getfsmap_helper(sb, info, + &info->gfi_lastfree); + if (error) + return error; + info->gfi_lastfree.fmr_owner = 0; + } + error = ext4_getfsmap_helper(sb, info, p); + if (error) + return error; + fsb = p->fmr_physical + p->fmr_length; + if (info->gfi_next_fsblk < fsb) + info->gfi_next_fsblk = fsb; + list_del(&p->fmr_list); + kfree(p); + continue; + } + } + if (info->gfi_next_fsblk < fsb) + info->gfi_next_fsblk = fsb; + + return 0; +} + + /* Transform a blockgroup's free record into a fsmap */ static int ext4_getfsmap_datadev_helper(struct super_block *sb, ext4_group_t agno, ext4_grpblk_t start, @@ -539,6 +589,7 @@ static int ext4_getfsmap_datadev(struct super_block *sb, error = ext4_mballoc_query_range(sb, info->gfi_agno, EXT4_B2C(sbi, info->gfi_low.fmr_physical), EXT4_B2C(sbi, info->gfi_high.fmr_physical), + ext4_getfsmap_meta_helper, ext4_getfsmap_datadev_helper, info); if (error) goto err; @@ -560,7 +611,8 @@ static int ext4_getfsmap_datadev(struct super_block *sb,
/* Report any gaps at the end of the bg */ info->gfi_last = true; - error = ext4_getfsmap_datadev_helper(sb, end_ag, last_cluster, 0, info); + error = ext4_getfsmap_datadev_helper(sb, end_ag, last_cluster + 1, + 0, info); if (error) goto err;
diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c index 7f1a5f90dbbd..21d228073d79 100644 --- a/fs/ext4/ialloc.c +++ b/fs/ext4/ialloc.c @@ -193,8 +193,9 @@ ext4_read_inode_bitmap(struct super_block *sb, ext4_group_t block_group) * submit the buffer_head for reading */ trace_ext4_load_inode_bitmap(sb, block_group); - ext4_read_bh(bh, REQ_META | REQ_PRIO, ext4_end_bitmap_read); - ext4_simulate_fail_bh(sb, bh, EXT4_SIM_IBITMAP_EIO); + ext4_read_bh(bh, REQ_META | REQ_PRIO, + ext4_end_bitmap_read, + ext4_simulate_fail(sb, EXT4_SIM_IBITMAP_EIO)); if (!buffer_uptodate(bh)) { put_bh(bh); ext4_error_err(sb, EIO, "Cannot read inode bitmap - " diff --git a/fs/ext4/indirect.c b/fs/ext4/indirect.c index 7404f0935c90..7de327fa7b1c 100644 --- a/fs/ext4/indirect.c +++ b/fs/ext4/indirect.c @@ -170,7 +170,7 @@ static Indirect *ext4_get_branch(struct inode *inode, int depth, }
if (!bh_uptodate_or_lock(bh)) { - if (ext4_read_bh(bh, 0, NULL) < 0) { + if (ext4_read_bh(bh, 0, NULL, false) < 0) { put_bh(bh); goto failure; } diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c index 54bdd4884fe6..99d09cd9c6a3 100644 --- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -4497,10 +4497,10 @@ static int __ext4_get_inode_loc(struct super_block *sb, unsigned long ino, * Read the block from disk. */ trace_ext4_load_inode(sb, ino); - ext4_read_bh_nowait(bh, REQ_META | REQ_PRIO, NULL); + ext4_read_bh_nowait(bh, REQ_META | REQ_PRIO, NULL, + ext4_simulate_fail(sb, EXT4_SIM_INODE_EIO)); blk_finish_plug(&plug); wait_on_buffer(bh); - ext4_simulate_fail_bh(sb, bh, EXT4_SIM_INODE_EIO); if (!buffer_uptodate(bh)) { if (ret_block) *ret_block = block; diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c index d73e38323879..92f49d7eb3c0 100644 --- a/fs/ext4/mballoc.c +++ b/fs/ext4/mballoc.c @@ -6999,13 +6999,14 @@ int ext4_mballoc_query_range( struct super_block *sb, ext4_group_t group, - ext4_grpblk_t start, + ext4_grpblk_t first, ext4_grpblk_t end, + ext4_mballoc_query_range_fn meta_formatter, ext4_mballoc_query_range_fn formatter, void *priv) { void *bitmap; - ext4_grpblk_t next; + ext4_grpblk_t start, next; struct ext4_buddy e4b; int error;
@@ -7016,10 +7017,19 @@ ext4_mballoc_query_range(
ext4_lock_group(sb, group);
- start = max(e4b.bd_info->bb_first_free, start); + start = max(e4b.bd_info->bb_first_free, first); if (end >= EXT4_CLUSTERS_PER_GROUP(sb)) end = EXT4_CLUSTERS_PER_GROUP(sb) - 1; - + if (meta_formatter && start != first) { + if (start > end) + start = end; + ext4_unlock_group(sb, group); + error = meta_formatter(sb, group, first, start - first, + priv); + if (error) + goto out_unload; + ext4_lock_group(sb, group); + } while (start <= end) { start = mb_find_next_zero_bit(bitmap, end + 1, start); if (start > end) diff --git a/fs/ext4/mballoc.h b/fs/ext4/mballoc.h index d8553f1498d3..f8280de3e882 100644 --- a/fs/ext4/mballoc.h +++ b/fs/ext4/mballoc.h @@ -259,6 +259,7 @@ ext4_mballoc_query_range( ext4_group_t agno, ext4_grpblk_t start, ext4_grpblk_t end, + ext4_mballoc_query_range_fn meta_formatter, ext4_mballoc_query_range_fn formatter, void *priv);
diff --git a/fs/ext4/mmp.c b/fs/ext4/mmp.c index bd946d0c71b7..d64c04ed061a 100644 --- a/fs/ext4/mmp.c +++ b/fs/ext4/mmp.c @@ -94,7 +94,7 @@ static int read_mmp_block(struct super_block *sb, struct buffer_head **bh, }
lock_buffer(*bh); - ret = ext4_read_bh(*bh, REQ_META | REQ_PRIO, NULL); + ret = ext4_read_bh(*bh, REQ_META | REQ_PRIO, NULL, false); if (ret) goto warn_exit;
diff --git a/fs/ext4/move_extent.c b/fs/ext4/move_extent.c index b64661ea6e0e..898443e98efc 100644 --- a/fs/ext4/move_extent.c +++ b/fs/ext4/move_extent.c @@ -213,7 +213,7 @@ static int mext_page_mkuptodate(struct folio *folio, size_t from, size_t to) unlock_buffer(bh); continue; } - ext4_read_bh_nowait(bh, 0, NULL); + ext4_read_bh_nowait(bh, 0, NULL, false); nr++; } while (block++, (bh = bh->b_this_page) != head);
diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c index a2704f064361..72f77f78ae8d 100644 --- a/fs/ext4/resize.c +++ b/fs/ext4/resize.c @@ -1300,7 +1300,7 @@ static struct buffer_head *ext4_get_bitmap(struct super_block *sb, __u64 block) if (unlikely(!bh)) return NULL; if (!bh_uptodate_or_lock(bh)) { - if (ext4_read_bh(bh, 0, NULL) < 0) { + if (ext4_read_bh(bh, 0, NULL, false) < 0) { brelse(bh); return NULL; } diff --git a/fs/ext4/super.c b/fs/ext4/super.c index 16a4ce704460..940ac1a49b72 100644 --- a/fs/ext4/super.c +++ b/fs/ext4/super.c @@ -161,8 +161,14 @@ MODULE_ALIAS("ext3");
static inline void __ext4_read_bh(struct buffer_head *bh, blk_opf_t op_flags, - bh_end_io_t *end_io) + bh_end_io_t *end_io, bool simu_fail) { + if (simu_fail) { + clear_buffer_uptodate(bh); + unlock_buffer(bh); + return; + } + /* * buffer's verified bit is no longer valid after reading from * disk again due to write out error, clear it to make sure we @@ -176,7 +182,7 @@ static inline void __ext4_read_bh(struct buffer_head *bh, blk_opf_t op_flags, }
void ext4_read_bh_nowait(struct buffer_head *bh, blk_opf_t op_flags, - bh_end_io_t *end_io) + bh_end_io_t *end_io, bool simu_fail) { BUG_ON(!buffer_locked(bh));
@@ -184,10 +190,11 @@ void ext4_read_bh_nowait(struct buffer_head *bh, blk_opf_t op_flags, unlock_buffer(bh); return; } - __ext4_read_bh(bh, op_flags, end_io); + __ext4_read_bh(bh, op_flags, end_io, simu_fail); }
-int ext4_read_bh(struct buffer_head *bh, blk_opf_t op_flags, bh_end_io_t *end_io) +int ext4_read_bh(struct buffer_head *bh, blk_opf_t op_flags, + bh_end_io_t *end_io, bool simu_fail) { BUG_ON(!buffer_locked(bh));
@@ -196,7 +203,7 @@ int ext4_read_bh(struct buffer_head *bh, blk_opf_t op_flags, bh_end_io_t *end_io return 0; }
- __ext4_read_bh(bh, op_flags, end_io); + __ext4_read_bh(bh, op_flags, end_io, simu_fail);
wait_on_buffer(bh); if (buffer_uptodate(bh)) @@ -208,10 +215,10 @@ int ext4_read_bh_lock(struct buffer_head *bh, blk_opf_t op_flags, bool wait) { lock_buffer(bh); if (!wait) { - ext4_read_bh_nowait(bh, op_flags, NULL); + ext4_read_bh_nowait(bh, op_flags, NULL, false); return 0; } - return ext4_read_bh(bh, op_flags, NULL); + return ext4_read_bh(bh, op_flags, NULL, false); }
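The ext4 hunks above thread a simu_fail flag down into __ext4_read_bh() so an injected failure takes the same path as a real read error: the buffer is left not up to date and the caller sees -EIO. A minimal user-space sketch of that shape, using a hypothetical read_block() helper rather than the real buffer_head API:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct buffer {
	bool uptodate;
	char data[16];
};

/* simu_fail plays the role of EXT4_SIM_*_EIO: pretend the device failed */
static int read_block(struct buffer *buf, bool simu_fail)
{
	if (simu_fail) {
		buf->uptodate = false;  /* same observable state as a real I/O error */
		return -5;              /* -EIO */
	}
	memcpy(buf->data, "payload", sizeof("payload"));
	buf->uptodate = true;
	return 0;
}

int main(void)
{
	struct buffer b = { 0 };

	printf("normal read:   %d\n", read_block(&b, false)); /* 0  */
	printf("injected fail: %d\n", read_block(&b, true));  /* -5 */
	return 0;
}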
/* @@ -266,7 +273,7 @@ void ext4_sb_breadahead_unmovable(struct super_block *sb, sector_t block)
if (likely(bh)) { if (trylock_buffer(bh)) - ext4_read_bh_nowait(bh, REQ_RAHEAD, NULL); + ext4_read_bh_nowait(bh, REQ_RAHEAD, NULL, false); brelse(bh); } } @@ -346,9 +353,9 @@ __u32 ext4_free_group_clusters(struct super_block *sb, __u32 ext4_free_inodes_count(struct super_block *sb, struct ext4_group_desc *bg) { - return le16_to_cpu(bg->bg_free_inodes_count_lo) | + return le16_to_cpu(READ_ONCE(bg->bg_free_inodes_count_lo)) | (EXT4_DESC_SIZE(sb) >= EXT4_MIN_DESC_SIZE_64BIT ? - (__u32)le16_to_cpu(bg->bg_free_inodes_count_hi) << 16 : 0); + (__u32)le16_to_cpu(READ_ONCE(bg->bg_free_inodes_count_hi)) << 16 : 0); }
__u32 ext4_used_dirs_count(struct super_block *sb, @@ -402,9 +409,9 @@ void ext4_free_group_clusters_set(struct super_block *sb, void ext4_free_inodes_set(struct super_block *sb, struct ext4_group_desc *bg, __u32 count) { - bg->bg_free_inodes_count_lo = cpu_to_le16((__u16)count); + WRITE_ONCE(bg->bg_free_inodes_count_lo, cpu_to_le16((__u16)count)); if (EXT4_DESC_SIZE(sb) >= EXT4_MIN_DESC_SIZE_64BIT) - bg->bg_free_inodes_count_hi = cpu_to_le16(count >> 16); + WRITE_ONCE(bg->bg_free_inodes_count_hi, cpu_to_le16(count >> 16)); }
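The READ_ONCE()/WRITE_ONCE() annotations above keep the compiler from tearing or re-reading the two 16-bit halves of the free-inodes counter when a lockless reader races with an update. A small user-space sketch of the same pattern; the READ_ONCE/WRITE_ONCE definitions below are simplified stand-ins for the kernel macros, and the le16 conversions are omitted:

#include <stdint.h>
#include <stdio.h>

#define READ_ONCE(x)     (*(const volatile typeof(x) *)&(x))
#define WRITE_ONCE(x, v) (*(volatile typeof(x) *)&(x) = (v))

struct group_desc {
	uint16_t free_inodes_lo;
	uint16_t free_inodes_hi;
};

/* Reader: each 16-bit half is loaded exactly once, so the compiler
 * cannot tear the access or reload it mid-expression. */
static uint32_t free_inodes_count(struct group_desc *bg)
{
	return READ_ONCE(bg->free_inodes_lo) |
	       ((uint32_t)READ_ONCE(bg->free_inodes_hi) << 16);
}

static void free_inodes_set(struct group_desc *bg, uint32_t count)
{
	WRITE_ONCE(bg->free_inodes_lo, (uint16_t)count);
	WRITE_ONCE(bg->free_inodes_hi, (uint16_t)(count >> 16));
}

int main(void)
{
	struct group_desc bg = { 0 };

	free_inodes_set(&bg, 70000);
	printf("%u\n", free_inodes_count(&bg)); /* 70000 */
	return 0;
}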
void ext4_used_dirs_set(struct super_block *sb, @@ -6518,9 +6525,6 @@ static int __ext4_remount(struct fs_context *fc, struct super_block *sb) goto restore_opts; }
- if (test_opt2(sb, ABORT)) - ext4_abort(sb, ESHUTDOWN, "Abort forced by user"); - sb->s_flags = (sb->s_flags & ~SB_POSIXACL) | (test_opt(sb, POSIX_ACL) ? SB_POSIXACL : 0);
@@ -6689,6 +6693,14 @@ static int __ext4_remount(struct fs_context *fc, struct super_block *sb) if (!ext4_has_feature_mmp(sb) || sb_rdonly(sb)) ext4_stop_mmpd(sbi);
+ /* + * Handle aborting the filesystem as the last thing during remount to + * avoid obscure errors during remount when some option changes fail to + * apply due to the filesystem being shut down. + */ + if (test_opt2(sb, ABORT)) + ext4_abort(sb, ESHUTDOWN, "Abort forced by user"); + return 0;
restore_opts: diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c index 7f76460b721f..efda9a022981 100644 --- a/fs/f2fs/checkpoint.c +++ b/fs/f2fs/checkpoint.c @@ -32,7 +32,7 @@ void f2fs_stop_checkpoint(struct f2fs_sb_info *sbi, bool end_io, f2fs_build_fault_attr(sbi, 0, 0); if (!end_io) f2fs_flush_merged_writes(sbi); - f2fs_handle_critical_error(sbi, reason, end_io); + f2fs_handle_critical_error(sbi, reason); }
/* diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c index 94f7b084f601..9efe4c00d75b 100644 --- a/fs/f2fs/data.c +++ b/fs/f2fs/data.c @@ -1676,7 +1676,8 @@ int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map, int flag) /* reserved delalloc block should be mapped for fiemap. */ if (blkaddr == NEW_ADDR) map->m_flags |= F2FS_MAP_DELALLOC; - if (flag != F2FS_GET_BLOCK_DIO || !is_hole) + /* DIO READ and hole case, should not map the blocks. */ + if (!(flag == F2FS_GET_BLOCK_DIO && is_hole && !map->m_may_create)) map->m_flags |= F2FS_MAP_MAPPED;
map->m_pblk = blkaddr; @@ -1901,25 +1902,6 @@ static int f2fs_xattr_fiemap(struct inode *inode, return (err < 0 ? err : 0); }
-static loff_t max_inode_blocks(struct inode *inode) -{ - loff_t result = ADDRS_PER_INODE(inode); - loff_t leaf_count = ADDRS_PER_BLOCK(inode); - - /* two direct node blocks */ - result += (leaf_count * 2); - - /* two indirect node blocks */ - leaf_count *= NIDS_PER_BLOCK; - result += (leaf_count * 2); - - /* one double indirect node block */ - leaf_count *= NIDS_PER_BLOCK; - result += leaf_count; - - return result; -} - int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo, u64 start, u64 len) { @@ -1992,8 +1974,7 @@ int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo, if (!compr_cluster && !(map.m_flags & F2FS_MAP_FLAGS)) { start_blk = next_pgofs;
- if (blks_to_bytes(inode, start_blk) < blks_to_bytes(inode, - max_inode_blocks(inode))) + if (blks_to_bytes(inode, start_blk) < maxbytes) goto prep_next;
flags |= FIEMAP_EXTENT_LAST; @@ -2385,10 +2366,10 @@ static int f2fs_mpage_readpages(struct inode *inode, .nr_cpages = 0, }; pgoff_t nc_cluster_idx = NULL_CLUSTER; + pgoff_t index; #endif unsigned nr_pages = rac ? readahead_count(rac) : 1; unsigned max_nr_pages = nr_pages; - pgoff_t index; int ret = 0;
map.m_pblk = 0; @@ -2406,9 +2387,9 @@ static int f2fs_mpage_readpages(struct inode *inode, prefetchw(&folio->flags); }
+#ifdef CONFIG_F2FS_FS_COMPRESSION index = folio_index(folio);
-#ifdef CONFIG_F2FS_FS_COMPRESSION if (!f2fs_compressed_file(inode)) goto read_single_page;
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h index 33f5449dc22d..93a5e1c24e56 100644 --- a/fs/f2fs/f2fs.h +++ b/fs/f2fs/f2fs.h @@ -3632,8 +3632,7 @@ int f2fs_quota_sync(struct super_block *sb, int type); loff_t max_file_blocks(struct inode *inode); void f2fs_quota_off_umount(struct super_block *sb); void f2fs_save_errors(struct f2fs_sb_info *sbi, unsigned char flag); -void f2fs_handle_critical_error(struct f2fs_sb_info *sbi, unsigned char reason, - bool irq_context); +void f2fs_handle_critical_error(struct f2fs_sb_info *sbi, unsigned char reason); void f2fs_handle_error(struct f2fs_sb_info *sbi, unsigned char error); void f2fs_handle_error_async(struct f2fs_sb_info *sbi, unsigned char error); int f2fs_commit_super(struct f2fs_sb_info *sbi, bool recover); diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c index 321d8ffbab6e..71ddecaf771f 100644 --- a/fs/f2fs/file.c +++ b/fs/f2fs/file.c @@ -863,7 +863,11 @@ static bool f2fs_force_buffered_io(struct inode *inode, int rw) return true; if (f2fs_compressed_file(inode)) return true; - if (f2fs_has_inline_data(inode)) + /* + * only force direct read to use buffered IO, for direct write, + * it expects inline data conversion before committing IO. + */ + if (f2fs_has_inline_data(inode) && rw == READ) return true;
/* disallow direct IO if any of devices has unaligned blksize */ @@ -2343,9 +2347,12 @@ int f2fs_do_shutdown(struct f2fs_sb_info *sbi, unsigned int flag, if (readonly) goto out;
- /* grab sb->s_umount to avoid racing w/ remount() */ + /* + * grab sb->s_umount to avoid racing w/ remount() and other shutdown + * paths. + */ if (need_lock) - down_read(&sbi->sb->s_umount); + down_write(&sbi->sb->s_umount);
f2fs_stop_gc_thread(sbi); f2fs_stop_discard_thread(sbi); @@ -2354,7 +2361,7 @@ int f2fs_do_shutdown(struct f2fs_sb_info *sbi, unsigned int flag, clear_opt(sbi, DISCARD);
if (need_lock) - up_read(&sbi->sb->s_umount); + up_write(&sbi->sb->s_umount);
f2fs_update_time(sbi, REQ_TIME); out: @@ -3792,7 +3799,7 @@ static int reserve_compress_blocks(struct dnode_of_data *dn, pgoff_t count, to_reserved = cluster_size - compr_blocks - reserved;
/* for the case all blocks in cluster were reserved */ - if (to_reserved == 1) { + if (reserved && to_reserved == 1) { dn->ofs_in_node += cluster_size; goto next; } diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c index 9322a7200e31..e0469316c7cd 100644 --- a/fs/f2fs/gc.c +++ b/fs/f2fs/gc.c @@ -257,6 +257,8 @@ static int select_gc_type(struct f2fs_sb_info *sbi, int gc_type)
switch (sbi->gc_mode) { case GC_IDLE_CB: + case GC_URGENT_LOW: + case GC_URGENT_MID: gc_mode = GC_CB; break; case GC_IDLE_GREEDY: diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c index 59b13ff243fa..af36c6d6542b 100644 --- a/fs/f2fs/node.c +++ b/fs/f2fs/node.c @@ -905,6 +905,16 @@ static int truncate_node(struct dnode_of_data *dn) if (err) return err;
+ if (ni.blk_addr != NEW_ADDR && + !f2fs_is_valid_blkaddr(sbi, ni.blk_addr, DATA_GENERIC_ENHANCE)) { + f2fs_err_ratelimited(sbi, + "nat entry is corrupted, run fsck to fix it, ino:%u, " + "nid:%u, blkaddr:%u", ni.ino, ni.nid, ni.blk_addr); + set_sbi_flag(sbi, SBI_NEED_FSCK); + f2fs_handle_error(sbi, ERROR_INCONSISTENT_NAT); + return -EFSCORRUPTED; + } + /* Deallocate node address */ f2fs_invalidate_blocks(sbi, ni.blk_addr); dec_valid_node_count(sbi, dn->inode, dn->nid == dn->inode->i_ino); diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c index 1766254279d2..edf205093f43 100644 --- a/fs/f2fs/segment.c +++ b/fs/f2fs/segment.c @@ -2926,7 +2926,8 @@ static int change_curseg(struct f2fs_sb_info *sbi, int type) struct f2fs_summary_block *sum_node; struct page *sum_page;
- write_sum_page(sbi, curseg->sum_blk, GET_SUM_BLOCK(sbi, curseg->segno)); + if (curseg->inited) + write_sum_page(sbi, curseg->sum_blk, GET_SUM_BLOCK(sbi, curseg->segno));
__set_test_and_inuse(sbi, new_segno);
@@ -3977,8 +3978,8 @@ void f2fs_do_replace_block(struct f2fs_sb_info *sbi, struct f2fs_summary *sum, } }
- f2fs_bug_on(sbi, !IS_DATASEG(type)); curseg = CURSEG_I(sbi, type); + f2fs_bug_on(sbi, !IS_DATASEG(curseg->seg_type));
mutex_lock(&curseg->curseg_mutex); down_write(&sit_i->sentry_lock); diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h index 71adb4a43bec..51b2b8c5c749 100644 --- a/fs/f2fs/segment.h +++ b/fs/f2fs/segment.h @@ -559,18 +559,21 @@ static inline int reserved_sections(struct f2fs_sb_info *sbi) }
static inline bool has_curseg_enough_space(struct f2fs_sb_info *sbi, - unsigned int node_blocks, unsigned int dent_blocks) + unsigned int node_blocks, unsigned int data_blocks, + unsigned int dent_blocks) {
- unsigned segno, left_blocks; + unsigned int segno, left_blocks, blocks; int i;
- /* check current node sections in the worst case. */ - for (i = CURSEG_HOT_NODE; i <= CURSEG_COLD_NODE; i++) { + /* check current data/node sections in the worst case. */ + for (i = CURSEG_HOT_DATA; i < NR_PERSISTENT_LOG; i++) { segno = CURSEG_I(sbi, i)->segno; left_blocks = CAP_BLKS_PER_SEC(sbi) - get_ckpt_valid_blocks(sbi, segno, true); - if (node_blocks > left_blocks) + + blocks = i <= CURSEG_COLD_DATA ? data_blocks : node_blocks; + if (blocks > left_blocks) return false; }
@@ -584,8 +587,9 @@ static inline bool has_curseg_enough_space(struct f2fs_sb_info *sbi, }
/* - * calculate needed sections for dirty node/dentry - * and call has_curseg_enough_space + * calculate needed sections for dirty node/dentry and call + * has_curseg_enough_space, please note that, it needs to account + * dirty data as well in lfs mode when checkpoint is disabled. */ static inline void __get_secs_required(struct f2fs_sb_info *sbi, unsigned int *lower_p, unsigned int *upper_p, bool *curseg_p) @@ -594,19 +598,30 @@ static inline void __get_secs_required(struct f2fs_sb_info *sbi, get_pages(sbi, F2FS_DIRTY_DENTS) + get_pages(sbi, F2FS_DIRTY_IMETA); unsigned int total_dent_blocks = get_pages(sbi, F2FS_DIRTY_DENTS); + unsigned int total_data_blocks = 0; unsigned int node_secs = total_node_blocks / CAP_BLKS_PER_SEC(sbi); unsigned int dent_secs = total_dent_blocks / CAP_BLKS_PER_SEC(sbi); + unsigned int data_secs = 0; unsigned int node_blocks = total_node_blocks % CAP_BLKS_PER_SEC(sbi); unsigned int dent_blocks = total_dent_blocks % CAP_BLKS_PER_SEC(sbi); + unsigned int data_blocks = 0; + + if (f2fs_lfs_mode(sbi) && + unlikely(is_sbi_flag_set(sbi, SBI_CP_DISABLED))) { + total_data_blocks = get_pages(sbi, F2FS_DIRTY_DATA); + data_secs = total_data_blocks / CAP_BLKS_PER_SEC(sbi); + data_blocks = total_data_blocks % CAP_BLKS_PER_SEC(sbi); + }
if (lower_p) - *lower_p = node_secs + dent_secs; + *lower_p = node_secs + dent_secs + data_secs; if (upper_p) *upper_p = node_secs + dent_secs + - (node_blocks ? 1 : 0) + (dent_blocks ? 1 : 0); + (node_blocks ? 1 : 0) + (dent_blocks ? 1 : 0) + + (data_blocks ? 1 : 0); if (curseg_p) *curseg_p = has_curseg_enough_space(sbi, - node_blocks, dent_blocks); + node_blocks, data_blocks, dent_blocks); }
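To make the section accounting above concrete, a small worked example; CAP_BLKS_PER_SEC and the dirty counts are made-up numbers, not real defaults. Whole sections go into the lower bound, and each non-empty remainder adds at most one extra section to the upper bound:

#include <stdio.h>

int main(void)
{
	const unsigned int cap_blks_per_sec = 512;  /* illustrative only */
	unsigned int dirty_node = 1030, dirty_dent = 40, dirty_data = 700;

	unsigned int node_secs = dirty_node / cap_blks_per_sec;   /* 2 */
	unsigned int dent_secs = dirty_dent / cap_blks_per_sec;   /* 0 */
	unsigned int data_secs = dirty_data / cap_blks_per_sec;   /* 1 */

	unsigned int node_blocks = dirty_node % cap_blks_per_sec; /* 6   */
	unsigned int dent_blocks = dirty_dent % cap_blks_per_sec; /* 40  */
	unsigned int data_blocks = dirty_data % cap_blks_per_sec; /* 188 */

	/* lower bound: whole sections only */
	unsigned int lower = node_secs + dent_secs + data_secs;
	/* upper bound: one extra section per non-empty remainder */
	unsigned int upper = lower + (node_blocks ? 1 : 0) +
			     (dent_blocks ? 1 : 0) + (data_blocks ? 1 : 0);

	printf("lower=%u upper=%u\n", lower, upper); /* lower=3 upper=6 */
	return 0;
}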
static inline bool has_not_enough_free_secs(struct f2fs_sb_info *sbi, diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c index 87ab5696bd48..983fdd98fc37 100644 --- a/fs/f2fs/super.c +++ b/fs/f2fs/super.c @@ -150,6 +150,8 @@ enum { Opt_mode, Opt_fault_injection, Opt_fault_type, + Opt_lazytime, + Opt_nolazytime, Opt_quota, Opt_noquota, Opt_usrquota, @@ -226,6 +228,8 @@ static match_table_t f2fs_tokens = { {Opt_mode, "mode=%s"}, {Opt_fault_injection, "fault_injection=%u"}, {Opt_fault_type, "fault_type=%u"}, + {Opt_lazytime, "lazytime"}, + {Opt_nolazytime, "nolazytime"}, {Opt_quota, "quota"}, {Opt_noquota, "noquota"}, {Opt_usrquota, "usrquota"}, @@ -918,6 +922,12 @@ static int parse_options(struct super_block *sb, char *options, bool is_remount) f2fs_info(sbi, "fault_type options not supported"); break; #endif + case Opt_lazytime: + sb->s_flags |= SB_LAZYTIME; + break; + case Opt_nolazytime: + sb->s_flags &= ~SB_LAZYTIME; + break; #ifdef CONFIG_QUOTA case Opt_quota: case Opt_usrquota: @@ -3322,7 +3332,7 @@ loff_t max_file_blocks(struct inode *inode) * fit within U32_MAX + 1 data units. */
- result = min(result, F2FS_BYTES_TO_BLK(((loff_t)U32_MAX + 1) * 4096)); + result = umin(result, F2FS_BYTES_TO_BLK(((loff_t)U32_MAX + 1) * 4096));
return result; } @@ -4155,8 +4165,7 @@ static bool system_going_down(void) || system_state == SYSTEM_RESTART; }
-void f2fs_handle_critical_error(struct f2fs_sb_info *sbi, unsigned char reason, - bool irq_context) +void f2fs_handle_critical_error(struct f2fs_sb_info *sbi, unsigned char reason) { struct super_block *sb = sbi->sb; bool shutdown = reason == STOP_CP_REASON_SHUTDOWN; @@ -4168,10 +4177,12 @@ void f2fs_handle_critical_error(struct f2fs_sb_info *sbi, unsigned char reason, if (!f2fs_hw_is_readonly(sbi)) { save_stop_reason(sbi, reason);
- if (irq_context && !shutdown) - schedule_work(&sbi->s_error_work); - else - f2fs_record_stop_reason(sbi); + /* + * always create an asynchronous task to record stop_reason + * in order to avoid a potential deadlock when calling + * f2fs_record_stop_reason() synchronously. + */ + schedule_work(&sbi->s_error_work); }
/* @@ -4991,9 +5002,6 @@ static int __init init_f2fs_fs(void) err = f2fs_init_shrinker(); if (err) goto free_sysfs; - err = register_filesystem(&f2fs_fs_type); - if (err) - goto free_shrinker; f2fs_create_root_stats(); err = f2fs_init_post_read_processing(); if (err) @@ -5016,7 +5024,12 @@ static int __init init_f2fs_fs(void) err = f2fs_create_casefold_cache(); if (err) goto free_compress_cache; + err = register_filesystem(&f2fs_fs_type); + if (err) + goto free_casefold_cache; return 0; +free_casefold_cache: + f2fs_destroy_casefold_cache(); free_compress_cache: f2fs_destroy_compress_cache(); free_compress_mempool: @@ -5031,8 +5044,6 @@ static int __init init_f2fs_fs(void) f2fs_destroy_post_read_processing(); free_root_stats: f2fs_destroy_root_stats(); - unregister_filesystem(&f2fs_fs_type); -free_shrinker: f2fs_exit_shrinker(); free_sysfs: f2fs_exit_sysfs(); @@ -5056,6 +5067,7 @@ static int __init init_f2fs_fs(void)
static void __exit exit_f2fs_fs(void) { + unregister_filesystem(&f2fs_fs_type); f2fs_destroy_casefold_cache(); f2fs_destroy_compress_cache(); f2fs_destroy_compress_mempool(); @@ -5064,7 +5076,6 @@ static void __exit exit_f2fs_fs(void) f2fs_destroy_iostat_processing(); f2fs_destroy_post_read_processing(); f2fs_destroy_root_stats(); - unregister_filesystem(&f2fs_fs_type); f2fs_exit_shrinker(); f2fs_exit_sysfs(); f2fs_destroy_garbage_collection_cache(); diff --git a/fs/fcntl.c b/fs/fcntl.c index 22dd9dcce7ec..3d89de31066a 100644 --- a/fs/fcntl.c +++ b/fs/fcntl.c @@ -397,6 +397,9 @@ static long f_dupfd_query(int fd, struct file *filp) { CLASS(fd_raw, f)(fd);
+ if (fd_empty(f)) + return -EBADF; + /* * We can do the 'fdput()' immediately, as the only thing that * matters is the pointer value which isn't changed by the fdput. diff --git a/fs/fuse/file.c b/fs/fuse/file.c index dafdf766b1d5..e20d91d0ae55 100644 --- a/fs/fuse/file.c +++ b/fs/fuse/file.c @@ -645,7 +645,7 @@ void fuse_read_args_fill(struct fuse_io_args *ia, struct file *file, loff_t pos, args->out_args[0].size = count; }
-static void fuse_release_user_pages(struct fuse_args_pages *ap, +static void fuse_release_user_pages(struct fuse_args_pages *ap, ssize_t nres, bool should_dirty) { unsigned int i; @@ -656,6 +656,9 @@ static void fuse_release_user_pages(struct fuse_args_pages *ap, if (ap->args.is_pinned) unpin_user_page(ap->pages[i]); } + + if (nres > 0 && ap->args.invalidate_vmap) + invalidate_kernel_vmap_range(ap->args.vmap_base, nres); }
static void fuse_io_release(struct kref *kref) @@ -754,25 +757,29 @@ static void fuse_aio_complete_req(struct fuse_mount *fm, struct fuse_args *args, struct fuse_io_args *ia = container_of(args, typeof(*ia), ap.args); struct fuse_io_priv *io = ia->io; ssize_t pos = -1; - - fuse_release_user_pages(&ia->ap, io->should_dirty); + size_t nres;
if (err) { /* Nothing */ } else if (io->write) { if (ia->write.out.size > ia->write.in.size) { err = -EIO; - } else if (ia->write.in.size != ia->write.out.size) { - pos = ia->write.in.offset - io->offset + - ia->write.out.size; + } else { + nres = ia->write.out.size; + if (ia->write.in.size != ia->write.out.size) + pos = ia->write.in.offset - io->offset + + ia->write.out.size; } } else { u32 outsize = args->out_args[0].size;
+ nres = outsize; if (ia->read.in.size != outsize) pos = ia->read.in.offset - io->offset + outsize; }
+ fuse_release_user_pages(&ia->ap, err ?: nres, io->should_dirty); + fuse_aio_complete(io, err, pos); fuse_io_free(ia); } @@ -1468,24 +1475,37 @@ static inline size_t fuse_get_frag_size(const struct iov_iter *ii,
static int fuse_get_user_pages(struct fuse_args_pages *ap, struct iov_iter *ii, size_t *nbytesp, int write, - unsigned int max_pages) + unsigned int max_pages, + bool use_pages_for_kvec_io) { + bool flush_or_invalidate = false; size_t nbytes = 0; /* # bytes already packed in req */ ssize_t ret = 0;
- /* Special case for kernel I/O: can copy directly into the buffer */ + /* Special case for kernel I/O: can copy directly into the buffer. + * However if the implementation of fuse_conn requires pages instead of + * pointer (e.g., virtio-fs), use iov_iter_extract_pages() instead. + */ if (iov_iter_is_kvec(ii)) { - unsigned long user_addr = fuse_get_user_addr(ii); - size_t frag_size = fuse_get_frag_size(ii, *nbytesp); + void *user_addr = (void *)fuse_get_user_addr(ii);
- if (write) - ap->args.in_args[1].value = (void *) user_addr; - else - ap->args.out_args[0].value = (void *) user_addr; + if (!use_pages_for_kvec_io) { + size_t frag_size = fuse_get_frag_size(ii, *nbytesp);
- iov_iter_advance(ii, frag_size); - *nbytesp = frag_size; - return 0; + if (write) + ap->args.in_args[1].value = user_addr; + else + ap->args.out_args[0].value = user_addr; + + iov_iter_advance(ii, frag_size); + *nbytesp = frag_size; + return 0; + } + + if (is_vmalloc_addr(user_addr)) { + ap->args.vmap_base = user_addr; + flush_or_invalidate = true; + } }
while (nbytes < *nbytesp && ap->num_pages < max_pages) { @@ -1514,6 +1534,10 @@ static int fuse_get_user_pages(struct fuse_args_pages *ap, struct iov_iter *ii, (PAGE_SIZE - ret) & (PAGE_SIZE - 1); }
+ if (write && flush_or_invalidate) + flush_kernel_vmap_range(ap->args.vmap_base, nbytes); + + ap->args.invalidate_vmap = !write && flush_or_invalidate; ap->args.is_pinned = iov_iter_extract_will_pin(ii); ap->args.user_pages = true; if (write) @@ -1582,7 +1606,7 @@ ssize_t fuse_direct_io(struct fuse_io_priv *io, struct iov_iter *iter, size_t nbytes = min(count, nmax);
err = fuse_get_user_pages(&ia->ap, iter, &nbytes, write, - max_pages); + max_pages, fc->use_pages_for_kvec_io); if (err && !nbytes) break;
@@ -1596,7 +1620,7 @@ ssize_t fuse_direct_io(struct fuse_io_priv *io, struct iov_iter *iter, }
if (!io->async || nres < 0) { - fuse_release_user_pages(&ia->ap, io->should_dirty); + fuse_release_user_pages(&ia->ap, nres, io->should_dirty); fuse_io_free(ia); } ia = NULL; diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h index e6cc3d552b13..28cf319c1c25 100644 --- a/fs/fuse/fuse_i.h +++ b/fs/fuse/fuse_i.h @@ -309,9 +309,12 @@ struct fuse_args { bool may_block:1; bool is_ext:1; bool is_pinned:1; + bool invalidate_vmap:1; struct fuse_in_arg in_args[3]; struct fuse_arg out_args[2]; void (*end)(struct fuse_mount *fm, struct fuse_args *args, int error); + /* Used for kvec iter backed by vmalloc address */ + void *vmap_base; };
struct fuse_args_pages { @@ -857,6 +860,9 @@ struct fuse_conn { /** Passthrough support for read/write IO */ unsigned int passthrough:1;
+ /* Use pages instead of pointer for kernel I/O */ + unsigned int use_pages_for_kvec_io:1; + /** Maximum stack depth for passthrough backing files */ int max_stack_depth;
diff --git a/fs/fuse/virtio_fs.c b/fs/fuse/virtio_fs.c index 6404a189e989..d220e28e755f 100644 --- a/fs/fuse/virtio_fs.c +++ b/fs/fuse/virtio_fs.c @@ -1691,6 +1691,7 @@ static int virtio_fs_get_tree(struct fs_context *fsc) fc->delete_stale = true; fc->auto_submounts = true; fc->sync_fs = true; + fc->use_pages_for_kvec_io = true;
/* Tell FUSE to split requests that exceed the virtqueue's size */ fc->max_pages_limit = min_t(unsigned int, fc->max_pages_limit, diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c index 269c3bc7fced..a51fe42732c4 100644 --- a/fs/gfs2/glock.c +++ b/fs/gfs2/glock.c @@ -1013,14 +1013,15 @@ bool gfs2_queue_try_to_evict(struct gfs2_glock *gl) &gl->gl_delete, 0); }
-static bool gfs2_queue_verify_evict(struct gfs2_glock *gl) +bool gfs2_queue_verify_delete(struct gfs2_glock *gl, bool later) { struct gfs2_sbd *sdp = gl->gl_name.ln_sbd; + unsigned long delay;
- if (test_and_set_bit(GLF_VERIFY_EVICT, &gl->gl_flags)) + if (test_and_set_bit(GLF_VERIFY_DELETE, &gl->gl_flags)) return false; - return queue_delayed_work(sdp->sd_delete_wq, - &gl->gl_delete, 5 * HZ); + delay = later ? 5 * HZ : 0; + return queue_delayed_work(sdp->sd_delete_wq, &gl->gl_delete, delay); }
static void delete_work_func(struct work_struct *work) @@ -1052,19 +1053,19 @@ static void delete_work_func(struct work_struct *work) if (gfs2_try_evict(gl)) { if (test_bit(SDF_KILL, &sdp->sd_flags)) goto out; - if (gfs2_queue_verify_evict(gl)) + if (gfs2_queue_verify_delete(gl, true)) return; } goto out; }
- if (test_and_clear_bit(GLF_VERIFY_EVICT, &gl->gl_flags)) { + if (test_and_clear_bit(GLF_VERIFY_DELETE, &gl->gl_flags)) { inode = gfs2_lookup_by_inum(sdp, no_addr, gl->gl_no_formal_ino, GFS2_BLKST_UNLINKED); if (IS_ERR(inode)) { if (PTR_ERR(inode) == -EAGAIN && !test_bit(SDF_KILL, &sdp->sd_flags) && - gfs2_queue_verify_evict(gl)) + gfs2_queue_verify_delete(gl, true)) return; } else { d_prune_aliases(inode); @@ -2118,7 +2119,7 @@ static void glock_hash_walk(glock_examiner examiner, const struct gfs2_sbd *sdp) void gfs2_cancel_delete_work(struct gfs2_glock *gl) { clear_bit(GLF_TRY_TO_EVICT, &gl->gl_flags); - clear_bit(GLF_VERIFY_EVICT, &gl->gl_flags); + clear_bit(GLF_VERIFY_DELETE, &gl->gl_flags); if (cancel_delayed_work(&gl->gl_delete)) gfs2_glock_put(gl); } @@ -2371,7 +2372,7 @@ static const char *gflags2str(char *buf, const struct gfs2_glock *gl) *p++ = 'N'; if (test_bit(GLF_TRY_TO_EVICT, gflags)) *p++ = 'e'; - if (test_bit(GLF_VERIFY_EVICT, gflags)) + if (test_bit(GLF_VERIFY_DELETE, gflags)) *p++ = 'E'; *p = 0; return buf; diff --git a/fs/gfs2/glock.h b/fs/gfs2/glock.h index adf0091cc98f..63e101d448e9 100644 --- a/fs/gfs2/glock.h +++ b/fs/gfs2/glock.h @@ -245,6 +245,7 @@ static inline int gfs2_glock_nq_init(struct gfs2_glock *gl, void gfs2_glock_cb(struct gfs2_glock *gl, unsigned int state); void gfs2_glock_complete(struct gfs2_glock *gl, int ret); bool gfs2_queue_try_to_evict(struct gfs2_glock *gl); +bool gfs2_queue_verify_delete(struct gfs2_glock *gl, bool later); void gfs2_cancel_delete_work(struct gfs2_glock *gl); void gfs2_flush_delete_work(struct gfs2_sbd *sdp); void gfs2_gl_hash_clear(struct gfs2_sbd *sdp); diff --git a/fs/gfs2/incore.h b/fs/gfs2/incore.h index aa4ef67a34e0..bd1348bff90e 100644 --- a/fs/gfs2/incore.h +++ b/fs/gfs2/incore.h @@ -329,7 +329,7 @@ enum { GLF_BLOCKING = 15, GLF_UNLOCKED = 16, /* Wait for glock to be unlocked */ GLF_TRY_TO_EVICT = 17, /* iopen glocks only */ - GLF_VERIFY_EVICT = 18, /* iopen glocks only */ + GLF_VERIFY_DELETE = 18, /* iopen glocks only */ };
struct gfs2_glock { diff --git a/fs/gfs2/rgrp.c b/fs/gfs2/rgrp.c index 29c772816765..539303129715 100644 --- a/fs/gfs2/rgrp.c +++ b/fs/gfs2/rgrp.c @@ -1879,7 +1879,7 @@ static void try_rgrp_unlink(struct gfs2_rgrpd *rgd, u64 *last_unlinked, u64 skip */ ip = gl->gl_object;
- if (ip || !gfs2_queue_try_to_evict(gl)) + if (ip || !gfs2_queue_verify_delete(gl, false)) gfs2_glock_put(gl); else found++; diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c index 6678060ed4d2..e22c1edc32b3 100644 --- a/fs/gfs2/super.c +++ b/fs/gfs2/super.c @@ -1045,7 +1045,7 @@ static int gfs2_drop_inode(struct inode *inode) struct gfs2_glock *gl = ip->i_iopen_gh.gh_gl;
gfs2_glock_hold(gl); - if (!gfs2_queue_try_to_evict(gl)) + if (!gfs2_queue_verify_delete(gl, true)) gfs2_glock_put_async(gl); return 0; } diff --git a/fs/hfsplus/hfsplus_fs.h b/fs/hfsplus/hfsplus_fs.h index 59ce81dca73f..5389918bbf29 100644 --- a/fs/hfsplus/hfsplus_fs.h +++ b/fs/hfsplus/hfsplus_fs.h @@ -156,6 +156,7 @@ struct hfsplus_sb_info {
/* Runtime variables */ u32 blockoffset; + u32 min_io_size; sector_t part_start; sector_t sect_count; int fs_shift; @@ -307,7 +308,7 @@ struct hfsplus_readdir_data { */ static inline unsigned short hfsplus_min_io_size(struct super_block *sb) { - return max_t(unsigned short, bdev_logical_block_size(sb->s_bdev), + return max_t(unsigned short, HFSPLUS_SB(sb)->min_io_size, HFSPLUS_SECTOR_SIZE); }
diff --git a/fs/hfsplus/wrapper.c b/fs/hfsplus/wrapper.c index 9592ffcb44e5..74801911bc1c 100644 --- a/fs/hfsplus/wrapper.c +++ b/fs/hfsplus/wrapper.c @@ -172,6 +172,8 @@ int hfsplus_read_wrapper(struct super_block *sb) if (!blocksize) goto out;
+ sbi->min_io_size = blocksize; + if (hfsplus_get_last_session(sb, &part_start, &part_size)) goto out;
diff --git a/fs/hostfs/hostfs_kern.c b/fs/hostfs/hostfs_kern.c index 6d1cf2436ead..084f6ed2dd7a 100644 --- a/fs/hostfs/hostfs_kern.c +++ b/fs/hostfs/hostfs_kern.c @@ -471,8 +471,8 @@ static int hostfs_write_begin(struct file *file, struct address_space *mapping,
*foliop = __filemap_get_folio(mapping, index, FGP_WRITEBEGIN, mapping_gfp_mask(mapping)); - if (!*foliop) - return -ENOMEM; + if (IS_ERR(*foliop)) + return PTR_ERR(*foliop); return 0; }
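The hostfs fix above matters because __filemap_get_folio() reports failure as an ERR_PTR-encoded errno, not as NULL, so the old NULL check could never fire. A simplified user-space sketch of the ERR_PTR/IS_ERR/PTR_ERR convention; the real definitions live in include/linux/err.h, and get_folio() here is a stand-in:

#include <errno.h>
#include <stdio.h>

#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
	/* errors live in the top 4095 values of the address space */
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

static void *get_folio(int fail)
{
	return fail ? ERR_PTR(-ENOMEM) : (void *)0x1000; /* fake folio */
}

int main(void)
{
	void *folio = get_folio(1);

	if (IS_ERR(folio)) {
		printf("error: %ld\n", PTR_ERR(folio)); /* -12 */
		return 1;
	}
	return 0;
}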
diff --git a/fs/isofs/inode.c b/fs/isofs/inode.c index f50311a6b429..47038e660812 100644 --- a/fs/isofs/inode.c +++ b/fs/isofs/inode.c @@ -948,8 +948,6 @@ static int isofs_fill_super(struct super_block *s, struct fs_context *fc) goto out_no_inode; }
- kfree(opt->iocharset); - return 0;
/* @@ -987,7 +985,6 @@ static int isofs_fill_super(struct super_block *s, struct fs_context *fc) brelse(bh); brelse(pri_bh); out_freesbi: - kfree(opt->iocharset); kfree(sbi); s->s_fs_info = NULL; return error; @@ -1528,7 +1525,10 @@ static int isofs_get_tree(struct fs_context *fc)
static void isofs_free_fc(struct fs_context *fc) { - kfree(fc->fs_private); + struct isofs_options *opt = fc->fs_private; + + kfree(opt->iocharset); + kfree(opt); }
static const struct fs_context_operations isofs_context_ops = { diff --git a/fs/jffs2/erase.c b/fs/jffs2/erase.c index acd32f05b519..ef3a1e1b6cb0 100644 --- a/fs/jffs2/erase.c +++ b/fs/jffs2/erase.c @@ -338,10 +338,9 @@ static int jffs2_block_check_erase(struct jffs2_sb_info *c, struct jffs2_erasebl } while(--retlen); mtd_unpoint(c->mtd, jeb->offset, c->sector_size); if (retlen) { - pr_warn("Newly-erased block contained word 0x%lx at offset 0x%08tx\n", - *wordebuf, - jeb->offset + - c->sector_size-retlen * sizeof(*wordebuf)); + *bad_offset = jeb->offset + c->sector_size - retlen * sizeof(*wordebuf); + pr_warn("Newly-erased block contained word 0x%lx at offset 0x%08x\n", + *wordebuf, *bad_offset); return -EIO; } return 0; diff --git a/fs/jfs/xattr.c b/fs/jfs/xattr.c index 0fb05e314edf..24afbae87225 100644 --- a/fs/jfs/xattr.c +++ b/fs/jfs/xattr.c @@ -559,7 +559,7 @@ static int ea_get(struct inode *inode, struct ea_buffer *ea_buf, int min_size)
size_check: if (EALIST_SIZE(ea_buf->xattr) != ea_size) { - int size = min_t(int, EALIST_SIZE(ea_buf->xattr), ea_size); + int size = clamp_t(int, ea_size, 0, EALIST_SIZE(ea_buf->xattr));
printk(KERN_ERR "ea_get: invalid extended attribute\n"); print_hex_dump(KERN_ERR, "", DUMP_PREFIX_ADDRESS, 16, 1, diff --git a/fs/netfs/fscache_volume.c b/fs/netfs/fscache_volume.c index cb75c07b5281..ced14ac78cc1 100644 --- a/fs/netfs/fscache_volume.c +++ b/fs/netfs/fscache_volume.c @@ -322,8 +322,7 @@ void fscache_create_volume(struct fscache_volume *volume, bool wait) } return; no_wait: - clear_bit_unlock(FSCACHE_VOLUME_CREATING, &volume->flags); - wake_up_bit(&volume->flags, FSCACHE_VOLUME_CREATING); + clear_and_wake_up_bit(FSCACHE_VOLUME_CREATING, &volume->flags); }
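On the jfs ea_get() change just above: if ea_size is negative (e.g. read from a corrupted EA list), min_t() returns it unchanged, while clamp_t() bounds the result to [0, EALIST_SIZE]. A tiny user-space illustration with simplified stand-ins for the kernel min_t/clamp_t macros:

#include <stdio.h>

#define min_t(type, a, b)  ((type)(a) < (type)(b) ? (type)(a) : (type)(b))
#define clamp_t(type, v, lo, hi) \
	((type)(v) < (type)(lo) ? (type)(lo) : \
	 (type)(v) > (type)(hi) ? (type)(hi) : (type)(v))

int main(void)
{
	int ealist_size = 128;  /* size of the in-memory EA list */
	int ea_size = -4;       /* bogus value from disk */

	printf("min_t:   %d\n", min_t(int, ealist_size, ea_size));      /* -4 */
	printf("clamp_t: %d\n", clamp_t(int, ea_size, 0, ealist_size)); /* 0  */
	return 0;
}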
/* diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c index 0becdec12970..47189476b553 100644 --- a/fs/nfs/blocklayout/blocklayout.c +++ b/fs/nfs/blocklayout/blocklayout.c @@ -571,19 +571,32 @@ bl_find_get_deviceid(struct nfs_server *server, if (!node) return ERR_PTR(-ENODEV);
+ /* + * Devices that are marked unavailable are left in the cache with a + * timeout to avoid sending GETDEVINFO after every LAYOUTGET, or + * constantly attempting to register the device. Once marked as + * unavailable they must be deleted and never reused. + */ if (test_bit(NFS_DEVICEID_UNAVAILABLE, &node->flags)) { unsigned long end = jiffies; unsigned long start = end - PNFS_DEVICE_RETRY_TIMEOUT;
if (!time_in_range(node->timestamp_unavailable, start, end)) { + /* Uncork subsequent GETDEVINFO operations for this device */ nfs4_delete_deviceid(node->ld, node->nfs_client, id); goto retry; } goto out_put; }
- if (!bl_register_dev(container_of(node, struct pnfs_block_dev, node))) + if (!bl_register_dev(container_of(node, struct pnfs_block_dev, node))) { + /* + * If we cannot register, treat this device as transient: + * Make a negative cache entry for the device + */ + nfs4_mark_deviceid_unavailable(node); goto out_put; + }
return node;
diff --git a/fs/nfs/blocklayout/dev.c b/fs/nfs/blocklayout/dev.c index 6252f4447945..cab8809f0e0f 100644 --- a/fs/nfs/blocklayout/dev.c +++ b/fs/nfs/blocklayout/dev.c @@ -20,9 +20,6 @@ static void bl_unregister_scsi(struct pnfs_block_dev *dev) const struct pr_ops *ops = bdev->bd_disk->fops->pr_ops; int status;
- if (!test_and_clear_bit(PNFS_BDEV_REGISTERED, &dev->flags)) - return; - status = ops->pr_register(bdev, dev->pr_key, 0, false); if (status) trace_bl_pr_key_unreg_err(bdev, dev->pr_key, status); @@ -58,7 +55,8 @@ static void bl_unregister_dev(struct pnfs_block_dev *dev) return; }
- if (dev->type == PNFS_BLOCK_VOLUME_SCSI) + if (dev->type == PNFS_BLOCK_VOLUME_SCSI && + test_and_clear_bit(PNFS_BDEV_REGISTERED, &dev->flags)) bl_unregister_scsi(dev); }
diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h index 430733e3eff2..6bcc4b0e00ab 100644 --- a/fs/nfs/internal.h +++ b/fs/nfs/internal.h @@ -12,7 +12,7 @@ #include <linux/nfslocalio.h> #include <linux/wait_bit.h>
-#define NFS_SB_MASK (SB_RDONLY|SB_NOSUID|SB_NODEV|SB_NOEXEC|SB_SYNCHRONOUS) +#define NFS_SB_MASK (SB_NOSUID|SB_NODEV|SB_NOEXEC|SB_SYNCHRONOUS)
extern const struct export_operations nfs_export_ops;
diff --git a/fs/nfs/localio.c b/fs/nfs/localio.c index 8f0ce82a677e..637528e6368e 100644 --- a/fs/nfs/localio.c +++ b/fs/nfs/localio.c @@ -354,6 +354,12 @@ nfs_local_read_done(struct nfs_local_kiocb *iocb, long status)
nfs_local_pgio_done(hdr, status);
+ /* + * Must clear replen otherwise NFSv3 data corruption will occur + * if/when switching from LOCALIO back to using normal RPC. + */ + hdr->res.replen = 0; + if (hdr->res.count != hdr->args.count || hdr->args.offset + hdr->res.count >= i_size_read(file_inode(filp))) hdr->res.eof = true; diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c index 9d40319e063d..405f17e6e0b4 100644 --- a/fs/nfs/nfs4proc.c +++ b/fs/nfs/nfs4proc.c @@ -2603,12 +2603,14 @@ static void nfs4_open_release(void *calldata) struct nfs4_opendata *data = calldata; struct nfs4_state *state = NULL;
+ /* In case of error, no cleanup! */ + if (data->rpc_status != 0 || !data->rpc_done) { + nfs_release_seqid(data->o_arg.seqid); + goto out_free; + } /* If this request hasn't been cancelled, do nothing */ if (!data->cancelled) goto out_free; - /* In case of error, no cleanup! */ - if (data->rpc_status != 0 || !data->rpc_done) - goto out_free; /* In case we need an open_confirm, no cleanup! */ if (data->o_res.rflags & NFS4_OPEN_RESULT_CONFIRM) goto out_free; diff --git a/fs/nfs/write.c b/fs/nfs/write.c index ead2dc55952d..82ae2b85d393 100644 --- a/fs/nfs/write.c +++ b/fs/nfs/write.c @@ -144,6 +144,31 @@ static void nfs_io_completion_put(struct nfs_io_completion *ioc) kref_put(&ioc->refcount, nfs_io_completion_release); }
+static void +nfs_page_set_inode_ref(struct nfs_page *req, struct inode *inode) +{ + if (!test_and_set_bit(PG_INODE_REF, &req->wb_flags)) { + kref_get(&req->wb_kref); + atomic_long_inc(&NFS_I(inode)->nrequests); + } +} + +static int +nfs_cancel_remove_inode(struct nfs_page *req, struct inode *inode) +{ + int ret; + + if (!test_bit(PG_REMOVE, &req->wb_flags)) + return 0; + ret = nfs_page_group_lock(req); + if (ret) + return ret; + if (test_and_clear_bit(PG_REMOVE, &req->wb_flags)) + nfs_page_set_inode_ref(req, inode); + nfs_page_group_unlock(req); + return 0; +} + /** * nfs_folio_find_head_request - find head request associated with a folio * @folio: pointer to folio @@ -540,7 +565,6 @@ static struct nfs_page *nfs_lock_and_join_requests(struct folio *folio) struct inode *inode = folio->mapping->host; struct nfs_page *head, *subreq; struct nfs_commit_info cinfo; - bool removed; int ret;
/* @@ -565,18 +589,18 @@ static struct nfs_page *nfs_lock_and_join_requests(struct folio *folio) goto retry; }
- ret = nfs_page_group_lock(head); + ret = nfs_cancel_remove_inode(head, inode); if (ret < 0) goto out_unlock;
- removed = test_bit(PG_REMOVE, &head->wb_flags); + ret = nfs_page_group_lock(head); + if (ret < 0) + goto out_unlock;
/* lock each request in the page group */ for (subreq = head->wb_this_page; subreq != head; subreq = subreq->wb_this_page) { - if (test_bit(PG_REMOVE, &subreq->wb_flags)) - removed = true; ret = nfs_page_group_lock_subreq(head, subreq); if (ret < 0) goto out_unlock; @@ -584,21 +608,6 @@ static struct nfs_page *nfs_lock_and_join_requests(struct folio *folio)
nfs_page_group_unlock(head);
- /* - * If PG_REMOVE is set on any request, I/O on that request has - * completed, but some requests were still under I/O at the time - * we locked the head request. - * - * In that case the above wait for all requests means that all I/O - * has now finished, and we can restart from a clean slate. Let the - * old requests go away and start from scratch instead. - */ - if (removed) { - nfs_unroll_locks(head, head); - nfs_unlock_and_release_request(head); - goto retry; - } - nfs_init_cinfo_from_inode(&cinfo, inode); nfs_join_page_group(head, &cinfo, inode); return head; diff --git a/fs/nfs_common/nfslocalio.c b/fs/nfs_common/nfslocalio.c index 09404d142d1a..a74ec08f6c96 100644 --- a/fs/nfs_common/nfslocalio.c +++ b/fs/nfs_common/nfslocalio.c @@ -155,11 +155,9 @@ struct nfsd_file *nfs_open_local_fh(nfs_uuid_t *uuid, /* We have an implied reference to net thanks to nfsd_serv_try_get */ localio = nfs_to->nfsd_open_local_fh(net, uuid->dom, rpc_clnt, cred, nfs_fh, fmode); - if (IS_ERR(localio)) { - rcu_read_lock(); - nfs_to->nfsd_serv_put(net); - rcu_read_unlock(); - } + if (IS_ERR(localio)) + nfs_to_nfsd_net_put(net); + return localio; } EXPORT_SYMBOL_GPL(nfs_open_local_fh); diff --git a/fs/nfsd/export.c b/fs/nfsd/export.c index c82d8e3e0d4f..984f8e6379dd 100644 --- a/fs/nfsd/export.c +++ b/fs/nfsd/export.c @@ -40,15 +40,24 @@ #define EXPKEY_HASHMAX (1 << EXPKEY_HASHBITS) #define EXPKEY_HASHMASK (EXPKEY_HASHMAX -1)
-static void expkey_put(struct kref *ref) +static void expkey_put_work(struct work_struct *work) { - struct svc_expkey *key = container_of(ref, struct svc_expkey, h.ref); + struct svc_expkey *key = + container_of(to_rcu_work(work), struct svc_expkey, ek_rcu_work);
if (test_bit(CACHE_VALID, &key->h.flags) && !test_bit(CACHE_NEGATIVE, &key->h.flags)) path_put(&key->ek_path); auth_domain_put(key->ek_client); - kfree_rcu(key, ek_rcu); + kfree(key); +} + +static void expkey_put(struct kref *ref) +{ + struct svc_expkey *key = container_of(ref, struct svc_expkey, h.ref); + + INIT_RCU_WORK(&key->ek_rcu_work, expkey_put_work); + queue_rcu_work(system_wq, &key->ek_rcu_work); }
static int expkey_upcall(struct cache_detail *cd, struct cache_head *h) @@ -355,16 +364,26 @@ static void export_stats_destroy(struct export_stats *stats) EXP_STATS_COUNTERS_NUM); }
-static void svc_export_put(struct kref *ref) +static void svc_export_put_work(struct work_struct *work) { - struct svc_export *exp = container_of(ref, struct svc_export, h.ref); + struct svc_export *exp = + container_of(to_rcu_work(work), struct svc_export, ex_rcu_work); + path_put(&exp->ex_path); auth_domain_put(exp->ex_client); nfsd4_fslocs_free(&exp->ex_fslocs); export_stats_destroy(exp->ex_stats); kfree(exp->ex_stats); kfree(exp->ex_uuid); - kfree_rcu(exp, ex_rcu); + kfree(exp); +} + +static void svc_export_put(struct kref *ref) +{ + struct svc_export *exp = container_of(ref, struct svc_export, h.ref); + + INIT_RCU_WORK(&exp->ex_rcu_work, svc_export_put_work); + queue_rcu_work(system_wq, &exp->ex_rcu_work); }
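The expkey/export changes above replace kfree_rcu() with the queue_rcu_work() pattern, so the teardown (path_put(), auth_domain_put(), ...) runs from workqueue context after an RCU grace period instead of directly from wherever the last kref is dropped. A kernel-style sketch of the shape of that pattern, using a hypothetical struct foo; this is not a standalone program:

#include <linux/kref.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct foo {
	struct kref ref;
	struct rcu_work rcu_work;
	/* ... resources whose release may sleep ... */
};

static void foo_free_work(struct work_struct *work)
{
	struct foo *foo = container_of(to_rcu_work(work), struct foo, rcu_work);

	/* runs from a workqueue, after an RCU grace period: sleeping is fine */
	kfree(foo);
}

static void foo_release(struct kref *ref)
{
	struct foo *foo = container_of(ref, struct foo, ref);

	INIT_RCU_WORK(&foo->rcu_work, foo_free_work);
	queue_rcu_work(system_wq, &foo->rcu_work);
}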
static int svc_export_upcall(struct cache_detail *cd, struct cache_head *h) diff --git a/fs/nfsd/export.h b/fs/nfsd/export.h index 3794ae253a70..081afb68681e 100644 --- a/fs/nfsd/export.h +++ b/fs/nfsd/export.h @@ -75,7 +75,7 @@ struct svc_export { u32 ex_layout_types; struct nfsd4_deviceid_map *ex_devid_map; struct cache_detail *cd; - struct rcu_head ex_rcu; + struct rcu_work ex_rcu_work; unsigned long ex_xprtsec_modes; struct export_stats *ex_stats; }; @@ -92,7 +92,7 @@ struct svc_expkey { u32 ek_fsid[6];
struct path ek_path; - struct rcu_head ek_rcu; + struct rcu_work ek_rcu_work; };
#define EX_ISSYNC(exp) (!((exp)->ex_flags & NFSEXP_ASYNC)) diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c index 2e6783f63712..146a9463c3c2 100644 --- a/fs/nfsd/filecache.c +++ b/fs/nfsd/filecache.c @@ -391,19 +391,19 @@ nfsd_file_put(struct nfsd_file *nf) }
/** - * nfsd_file_put_local - put the reference to nfsd_file and local nfsd_serv - * @nf: nfsd_file of which to put the references + * nfsd_file_put_local - put nfsd_file reference and arm nfsd_serv_put in caller + * @nf: nfsd_file of which to put the reference * - * First put the reference of the nfsd_file and then put the - * reference to the associated nn->nfsd_serv. + * First save the associated net to return to caller, then put + * the reference of the nfsd_file. */ -void -nfsd_file_put_local(struct nfsd_file *nf) __must_hold(rcu) +struct net * +nfsd_file_put_local(struct nfsd_file *nf) { struct net *net = nf->nf_net;
nfsd_file_put(nf); - nfsd_serv_put(net); + return net; }
/** diff --git a/fs/nfsd/filecache.h b/fs/nfsd/filecache.h index cadf3c2689c4..d5db6b34ba30 100644 --- a/fs/nfsd/filecache.h +++ b/fs/nfsd/filecache.h @@ -55,7 +55,7 @@ void nfsd_file_cache_shutdown(void); int nfsd_file_cache_start_net(struct net *net); void nfsd_file_cache_shutdown_net(struct net *net); void nfsd_file_put(struct nfsd_file *nf); -void nfsd_file_put_local(struct nfsd_file *nf); +struct net *nfsd_file_put_local(struct nfsd_file *nf); struct nfsd_file *nfsd_file_get(struct nfsd_file *nf); struct file *nfsd_file_file(struct nfsd_file *nf); void nfsd_file_close_inode_sync(struct inode *inode); diff --git a/fs/nfsd/nfs4callback.c b/fs/nfsd/nfs4callback.c index b5b3ab9d719a..b8cbb1556004 100644 --- a/fs/nfsd/nfs4callback.c +++ b/fs/nfsd/nfs4callback.c @@ -287,17 +287,17 @@ static int decode_cb_compound4res(struct xdr_stream *xdr, u32 length; __be32 *p;
- p = xdr_inline_decode(xdr, 4 + 4); + p = xdr_inline_decode(xdr, XDR_UNIT); if (unlikely(p == NULL)) goto out_overflow; - hdr->status = be32_to_cpup(p++); + hdr->status = be32_to_cpup(p); /* Ignore the tag */ - length = be32_to_cpup(p++); - p = xdr_inline_decode(xdr, length + 4); - if (unlikely(p == NULL)) + if (xdr_stream_decode_u32(xdr, &length) < 0) + goto out_overflow; + if (xdr_inline_decode(xdr, length) == NULL) + goto out_overflow; + if (xdr_stream_decode_u32(xdr, &hdr->nops) < 0) goto out_overflow; - p += XDR_QUADLEN(length); - hdr->nops = be32_to_cpup(p); return 0; out_overflow: return -EIO; @@ -1461,6 +1461,8 @@ static void nfsd4_process_cb_update(struct nfsd4_callback *cb) ses = c->cn_session; } spin_unlock(&clp->cl_lock); + if (!c) + return;
err = setup_callback_client(clp, &conn, ses); if (err) { diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c index d32f2dfd148f..7a1fdafa42ea 100644 --- a/fs/nfsd/nfs4proc.c +++ b/fs/nfsd/nfs4proc.c @@ -1292,7 +1292,7 @@ static void nfsd4_stop_copy(struct nfsd4_copy *copy) nfs4_put_copy(copy); }
-static struct nfsd4_copy *nfsd4_get_copy(struct nfs4_client *clp) +static struct nfsd4_copy *nfsd4_unhash_copy(struct nfs4_client *clp) { struct nfsd4_copy *copy = NULL;
@@ -1301,6 +1301,9 @@ static struct nfsd4_copy *nfsd4_get_copy(struct nfs4_client *clp) copy = list_first_entry(&clp->async_copies, struct nfsd4_copy, copies); refcount_inc(©->refcount); + copy->cp_clp = NULL; + if (!list_empty(©->copies)) + list_del_init(©->copies); } spin_unlock(&clp->async_lock); return copy; @@ -1310,7 +1313,7 @@ void nfsd4_shutdown_copy(struct nfs4_client *clp) { struct nfsd4_copy *copy;
- while ((copy = nfsd4_get_copy(clp)) != NULL) + while ((copy = nfsd4_unhash_copy(clp)) != NULL) nfsd4_stop_copy(copy); } #ifdef CONFIG_NFSD_V4_2_INTER_SSC diff --git a/fs/nfsd/nfs4recover.c b/fs/nfsd/nfs4recover.c index b7d61eb8afe9..4a765555bf84 100644 --- a/fs/nfsd/nfs4recover.c +++ b/fs/nfsd/nfs4recover.c @@ -659,7 +659,8 @@ nfs4_reset_recoverydir(char *recdir) return status; status = -ENOTDIR; if (d_is_dir(path.dentry)) { - strcpy(user_recovery_dirname, recdir); + strscpy(user_recovery_dirname, recdir, + sizeof(user_recovery_dirname)); status = 0; } path_put(&path); diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c index 551d2958ec29..d3cfc6471539 100644 --- a/fs/nfsd/nfs4state.c +++ b/fs/nfsd/nfs4state.c @@ -5957,7 +5957,7 @@ nfs4_delegation_stat(struct nfs4_delegation *dp, struct svc_fh *currentfh, path.dentry = file_dentry(nf->nf_file);
rc = vfs_getattr(&path, stat, - (STATX_SIZE | STATX_CTIME | STATX_CHANGE_COOKIE), + (STATX_MODE | STATX_SIZE | STATX_CTIME | STATX_CHANGE_COOKIE), AT_STATX_SYNC_AS_STAT);
nfsd_file_put(nf); @@ -6041,8 +6041,7 @@ nfs4_open_delegation(struct nfsd4_open *open, struct nfs4_ol_stateid *stp, } open->op_delegate_type = NFS4_OPEN_DELEGATE_WRITE; dp->dl_cb_fattr.ncf_cur_fsize = stat.size; - dp->dl_cb_fattr.ncf_initial_cinfo = - nfsd4_change_attribute(&stat, d_inode(currentfh->fh_dentry)); + dp->dl_cb_fattr.ncf_initial_cinfo = nfsd4_change_attribute(&stat); trace_nfsd_deleg_write(&dp->dl_stid.sc_stateid); } else { open->op_delegate_type = NFS4_OPEN_DELEGATE_READ; diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c index f118921250c3..8d25aef51ad1 100644 --- a/fs/nfsd/nfs4xdr.c +++ b/fs/nfsd/nfs4xdr.c @@ -3040,7 +3040,7 @@ static __be32 nfsd4_encode_fattr4_change(struct xdr_stream *xdr, return nfs_ok; }
- c = nfsd4_change_attribute(&args->stat, d_inode(args->dentry)); + c = nfsd4_change_attribute(&args->stat); return nfsd4_encode_changeid4(xdr, c); }
diff --git a/fs/nfsd/nfsfh.c b/fs/nfsd/nfsfh.c index 40ad58a6a036..96e19c50a5d7 100644 --- a/fs/nfsd/nfsfh.c +++ b/fs/nfsd/nfsfh.c @@ -667,20 +667,18 @@ fh_update(struct svc_fh *fhp) __be32 __must_check fh_fill_pre_attrs(struct svc_fh *fhp) { bool v4 = (fhp->fh_maxsize == NFS4_FHSIZE); - struct inode *inode; struct kstat stat; __be32 err;
if (fhp->fh_no_wcc || fhp->fh_pre_saved) return nfs_ok;
- inode = d_inode(fhp->fh_dentry); err = fh_getattr(fhp, &stat); if (err) return err;
if (v4) - fhp->fh_pre_change = nfsd4_change_attribute(&stat, inode); + fhp->fh_pre_change = nfsd4_change_attribute(&stat);
fhp->fh_pre_mtime = stat.mtime; fhp->fh_pre_ctime = stat.ctime; @@ -697,7 +695,6 @@ __be32 __must_check fh_fill_pre_attrs(struct svc_fh *fhp) __be32 fh_fill_post_attrs(struct svc_fh *fhp) { bool v4 = (fhp->fh_maxsize == NFS4_FHSIZE); - struct inode *inode = d_inode(fhp->fh_dentry); __be32 err;
if (fhp->fh_no_wcc) @@ -713,7 +710,7 @@ __be32 fh_fill_post_attrs(struct svc_fh *fhp) fhp->fh_post_saved = true; if (v4) fhp->fh_post_change = - nfsd4_change_attribute(&fhp->fh_post_attr, inode); + nfsd4_change_attribute(&fhp->fh_post_attr); return nfs_ok; }
@@ -804,7 +801,14 @@ enum fsid_source fsid_source(const struct svc_fh *fhp) return FSIDSOURCE_DEV; }
-/* +/** + * nfsd4_change_attribute - Generate an NFSv4 change_attribute value + * @stat: inode attributes + * + * Caller must fill in @stat before calling, typically by invoking + * vfs_getattr() with STATX_MODE, STATX_CTIME, and STATX_CHANGE_COOKIE. + * Returns an unsigned 64-bit changeid4 value (RFC 8881 Section 3.2). + * * We could use i_version alone as the change attribute. However, i_version * can go backwards on a regular file after an unclean shutdown. On its own * that doesn't necessarily cause a problem, but if i_version goes backwards @@ -821,13 +825,13 @@ enum fsid_source fsid_source(const struct svc_fh *fhp) * assume that the new change attr is always logged to stable storage in some * fashion before the results can be seen. */ -u64 nfsd4_change_attribute(const struct kstat *stat, const struct inode *inode) +u64 nfsd4_change_attribute(const struct kstat *stat) { u64 chattr;
if (stat->result_mask & STATX_CHANGE_COOKIE) { chattr = stat->change_cookie; - if (S_ISREG(inode->i_mode) && + if (S_ISREG(stat->mode) && !(stat->attributes & STATX_ATTR_CHANGE_MONOTONIC)) { chattr += (u64)stat->ctime.tv_sec << 30; chattr += stat->ctime.tv_nsec; diff --git a/fs/nfsd/nfsfh.h b/fs/nfsd/nfsfh.h index 5b7394801dc4..876152a91f12 100644 --- a/fs/nfsd/nfsfh.h +++ b/fs/nfsd/nfsfh.h @@ -297,8 +297,7 @@ static inline void fh_clear_pre_post_attrs(struct svc_fh *fhp) fhp->fh_pre_saved = false; }
-u64 nfsd4_change_attribute(const struct kstat *stat, - const struct inode *inode); +u64 nfsd4_change_attribute(const struct kstat *stat); __be32 __must_check fh_fill_pre_attrs(struct svc_fh *fhp); __be32 fh_fill_post_attrs(struct svc_fh *fhp); __be32 __must_check fh_fill_both_attrs(struct svc_fh *fhp); diff --git a/fs/notify/fsnotify.c b/fs/notify/fsnotify.c index 82ae8254c068..f976949d2634 100644 --- a/fs/notify/fsnotify.c +++ b/fs/notify/fsnotify.c @@ -333,16 +333,19 @@ static int fsnotify_handle_event(struct fsnotify_group *group, __u32 mask, if (!inode_mark) return 0;
- if (mask & FS_EVENT_ON_CHILD) { - /* - * Some events can be sent on both parent dir and child marks - * (e.g. FS_ATTRIB). If both parent dir and child are - * watching, report the event once to parent dir with name (if - * interested) and once to child without name (if interested). - * The child watcher is expecting an event without a file name - * and without the FS_EVENT_ON_CHILD flag. - */ - mask &= ~FS_EVENT_ON_CHILD; + /* + * Some events can be sent on both parent dir and child marks (e.g. + * FS_ATTRIB). If both parent dir and child are watching, report the + * event once to parent dir with name (if interested) and once to child + * without name (if interested). + * + * In any case regardless whether the parent is watching or not, the + * child watcher is expecting an event without the FS_EVENT_ON_CHILD + * flag. The file name is expected if and only if this is a directory + * event. + */ + mask &= ~FS_EVENT_ON_CHILD; + if (!(mask & ALL_FSNOTIFY_DIRENT_EVENTS)) { dir = NULL; name = NULL; } diff --git a/fs/notify/mark.c b/fs/notify/mark.c index c45b222cf9c1..4981439e6209 100644 --- a/fs/notify/mark.c +++ b/fs/notify/mark.c @@ -138,8 +138,11 @@ static void fsnotify_get_sb_watched_objects(struct super_block *sb)
static void fsnotify_put_sb_watched_objects(struct super_block *sb) { - if (atomic_long_dec_and_test(fsnotify_sb_watched_objects(sb))) - wake_up_var(fsnotify_sb_watched_objects(sb)); + atomic_long_t *watched_objects = fsnotify_sb_watched_objects(sb); + + /* the superblock can go away after this decrement */ + if (atomic_long_dec_and_test(watched_objects)) + wake_up_var(watched_objects); }
static void fsnotify_get_inode_ref(struct inode *inode) @@ -150,8 +153,11 @@ static void fsnotify_get_inode_ref(struct inode *inode)
static void fsnotify_put_inode_ref(struct inode *inode) { - fsnotify_put_sb_watched_objects(inode->i_sb); + /* read ->i_sb before the inode can go away */ + struct super_block *sb = inode->i_sb; + iput(inode); + fsnotify_put_sb_watched_objects(sb); }
/* diff --git a/fs/ntfs3/file.c b/fs/ntfs3/file.c index e370eaf9bfe2..f704ceef9539 100644 --- a/fs/ntfs3/file.c +++ b/fs/ntfs3/file.c @@ -222,7 +222,7 @@ static int ntfs_extend_initialized_size(struct file *file, if (err) goto out;
- folio_zero_range(folio, zerofrom, folio_size(folio)); + folio_zero_range(folio, zerofrom, folio_size(folio) - zerofrom);
err = ntfs_write_end(file, mapping, pos, len, len, folio, NULL); if (err < 0) diff --git a/fs/ocfs2/aops.h b/fs/ocfs2/aops.h index 45db1781ea73..1d1b4b7edba0 100644 --- a/fs/ocfs2/aops.h +++ b/fs/ocfs2/aops.h @@ -70,6 +70,8 @@ enum ocfs2_iocb_lock_bits { OCFS2_IOCB_NUM_LOCKS };
+#define ocfs2_iocb_init_rw_locked(iocb) \ + (iocb->private = NULL) #define ocfs2_iocb_clear_rw_locked(iocb) \ clear_bit(OCFS2_IOCB_RW_LOCK, (unsigned long *)&iocb->private) #define ocfs2_iocb_rw_locked_level(iocb) \ diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c index 06af21982c16..cb09330a0861 100644 --- a/fs/ocfs2/file.c +++ b/fs/ocfs2/file.c @@ -2398,6 +2398,8 @@ static ssize_t ocfs2_file_write_iter(struct kiocb *iocb, } else inode_lock(inode);
+ ocfs2_iocb_init_rw_locked(iocb); + /* * Concurrent O_DIRECT writes are allowed with * mount_option "coherency=buffered". @@ -2544,6 +2546,8 @@ static ssize_t ocfs2_file_read_iter(struct kiocb *iocb, if (!direct_io && nowait) return -EOPNOTSUPP;
+ ocfs2_iocb_init_rw_locked(iocb); + /* * buffered reads protect themselves in ->read_folio(). O_DIRECT reads * need locks to protect pending reads from racing with truncate. diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c index 51446c59388f..7a85735d584f 100644 --- a/fs/proc/kcore.c +++ b/fs/proc/kcore.c @@ -493,13 +493,13 @@ static ssize_t read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter) * the previous entry, search for a matching entry. */ if (!m || start < m->addr || start >= m->addr + m->size) { - struct kcore_list *iter; + struct kcore_list *pos;
m = NULL; - list_for_each_entry(iter, &kclist_head, list) { - if (start >= iter->addr && - start < iter->addr + iter->size) { - m = iter; + list_for_each_entry(pos, &kclist_head, list) { + if (start >= pos->addr && + start < pos->addr + pos->size) { + m = pos; break; } } diff --git a/fs/read_write.c b/fs/read_write.c index 64dc24afdb3a..befec0b5c537 100644 --- a/fs/read_write.c +++ b/fs/read_write.c @@ -1830,18 +1830,21 @@ int generic_file_rw_checks(struct file *file_in, struct file *file_out) return 0; }
-bool generic_atomic_write_valid(struct iov_iter *iter, loff_t pos) +int generic_atomic_write_valid(struct kiocb *iocb, struct iov_iter *iter) { size_t len = iov_iter_count(iter);
if (!iter_is_ubuf(iter)) - return false; + return -EINVAL;
if (!is_power_of_2(len)) - return false; + return -EINVAL; + + if (!IS_ALIGNED(iocb->ki_pos, len)) + return -EINVAL;
- if (!IS_ALIGNED(pos, len)) - return false; + if (!(iocb->ki_flags & IOCB_DIRECT)) + return -EOPNOTSUPP;
- return true; + return 0; } diff --git a/fs/smb/client/cached_dir.c b/fs/smb/client/cached_dir.c index 0ff2491c311d..9c0ef4195b58 100644 --- a/fs/smb/client/cached_dir.c +++ b/fs/smb/client/cached_dir.c @@ -17,6 +17,11 @@ static void free_cached_dir(struct cached_fid *cfid); static void smb2_close_cached_fid(struct kref *ref); static void cfids_laundromat_worker(struct work_struct *work);
+struct cached_dir_dentry { + struct list_head entry; + struct dentry *dentry; +}; + static struct cached_fid *find_or_create_cached_dir(struct cached_fids *cfids, const char *path, bool lookup_only, @@ -59,6 +64,16 @@ static struct cached_fid *find_or_create_cached_dir(struct cached_fids *cfids, list_add(&cfid->entry, &cfids->entries); cfid->on_list = true; kref_get(&cfid->refcount); + /* + * Set @cfid->has_lease to true during construction so that the lease + * reference can be put in cached_dir_lease_break() due to a potential + * lease break right after the request is sent or while @cfid is still + * being cached, or if a reconnection is triggered during construction. + * Concurrent processes won't be to use it yet due to @cfid->time being + * zero. + */ + cfid->has_lease = true; + spin_unlock(&cfids->cfid_list_lock); return cfid; } @@ -176,12 +191,12 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon, return -ENOENT; } /* - * Return cached fid if it has a lease. Otherwise, it is either a new - * entry or laundromat worker removed it from @cfids->entries. Caller - * will put last reference if the latter. + * Return cached fid if it is valid (has a lease and has a time). + * Otherwise, it is either a new entry or laundromat worker removed it + * from @cfids->entries. Caller will put last reference if the latter. */ spin_lock(&cfids->cfid_list_lock); - if (cfid->has_lease) { + if (cfid->has_lease && cfid->time) { spin_unlock(&cfids->cfid_list_lock); *ret_cfid = cfid; kfree(utf16_path); @@ -212,6 +227,7 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon, } } cfid->dentry = dentry; + cfid->tcon = tcon;
/* * We do not hold the lock for the open because in case @@ -267,15 +283,6 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
smb2_set_related(&rqst[1]);
- /* - * Set @cfid->has_lease to true before sending out compounded request so - * its lease reference can be put in cached_dir_lease_break() due to a - * potential lease break right after the request is sent or while @cfid - * is still being cached. Concurrent processes won't be to use it yet - * due to @cfid->time being zero. - */ - cfid->has_lease = true; - if (retries) { smb2_set_replay(server, &rqst[0]); smb2_set_replay(server, &rqst[1]); @@ -292,7 +299,6 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon, } goto oshr_free; } - cfid->tcon = tcon; cfid->is_open = true;
spin_lock(&cfids->cfid_list_lock); @@ -347,6 +353,7 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon, SMB2_query_info_free(&rqst[1]); free_rsp_buf(resp_buftype[0], rsp_iov[0].iov_base); free_rsp_buf(resp_buftype[1], rsp_iov[1].iov_base); +out: if (rc) { spin_lock(&cfids->cfid_list_lock); if (cfid->on_list) { @@ -358,23 +365,14 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon, /* * We are guaranteed to have two references at this * point. One for the caller and one for a potential - * lease. Release the Lease-ref so that the directory - * will be closed when the caller closes the cached - * handle. + * lease. Release one here, and the second below. */ cfid->has_lease = false; - spin_unlock(&cfids->cfid_list_lock); kref_put(&cfid->refcount, smb2_close_cached_fid); - goto out; } spin_unlock(&cfids->cfid_list_lock); - } -out: - if (rc) { - if (cfid->is_open) - SMB2_close(0, cfid->tcon, cfid->fid.persistent_fid, - cfid->fid.volatile_fid); - free_cached_dir(cfid); + + kref_put(&cfid->refcount, smb2_close_cached_fid); } else { *ret_cfid = cfid; atomic_inc(&tcon->num_remote_opens); @@ -401,7 +399,7 @@ int open_cached_dir_by_dentry(struct cifs_tcon *tcon, spin_lock(&cfids->cfid_list_lock); list_for_each_entry(cfid, &cfids->entries, entry) { if (dentry && cfid->dentry == dentry) { - cifs_dbg(FYI, "found a cached root file handle by dentry\n"); + cifs_dbg(FYI, "found a cached file handle by dentry\n"); kref_get(&cfid->refcount); *ret_cfid = cfid; spin_unlock(&cfids->cfid_list_lock); @@ -477,7 +475,10 @@ void close_all_cached_dirs(struct cifs_sb_info *cifs_sb) struct cifs_tcon *tcon; struct tcon_link *tlink; struct cached_fids *cfids; + struct cached_dir_dentry *tmp_list, *q; + LIST_HEAD(entry);
+ spin_lock(&cifs_sb->tlink_tree_lock); for (node = rb_first(root); node; node = rb_next(node)) { tlink = rb_entry(node, struct tcon_link, tl_rbnode); tcon = tlink_tcon(tlink); @@ -486,11 +487,30 @@ void close_all_cached_dirs(struct cifs_sb_info *cifs_sb) cfids = tcon->cfids; if (cfids == NULL) continue; + spin_lock(&cfids->cfid_list_lock); list_for_each_entry(cfid, &cfids->entries, entry) { - dput(cfid->dentry); + tmp_list = kmalloc(sizeof(*tmp_list), GFP_ATOMIC); + if (tmp_list == NULL) + break; + spin_lock(&cfid->fid_lock); + tmp_list->dentry = cfid->dentry; cfid->dentry = NULL; + spin_unlock(&cfid->fid_lock); + + list_add_tail(&tmp_list->entry, &entry); } + spin_unlock(&cfids->cfid_list_lock); + } + spin_unlock(&cifs_sb->tlink_tree_lock); + + list_for_each_entry_safe(tmp_list, q, &entry, entry) { + list_del(&tmp_list->entry); + dput(tmp_list->dentry); + kfree(tmp_list); } + + /* Flush any pending work that will drop dentries */ + flush_workqueue(cfid_put_wq); }
/* @@ -501,50 +521,71 @@ void invalidate_all_cached_dirs(struct cifs_tcon *tcon) { struct cached_fids *cfids = tcon->cfids; struct cached_fid *cfid, *q; - LIST_HEAD(entry);
if (cfids == NULL) return;
+ /* + * Mark all the cfids as closed, and move them to the cfids->dying list. + * They'll be cleaned up later by cfids_invalidation_worker. Take + * a reference to each cfid during this process. + */ spin_lock(&cfids->cfid_list_lock); list_for_each_entry_safe(cfid, q, &cfids->entries, entry) { - list_move(&cfid->entry, &entry); + list_move(&cfid->entry, &cfids->dying); cfids->num_entries--; cfid->is_open = false; cfid->on_list = false; - /* To prevent race with smb2_cached_lease_break() */ - kref_get(&cfid->refcount); - } - spin_unlock(&cfids->cfid_list_lock); - - list_for_each_entry_safe(cfid, q, &entry, entry) { - list_del(&cfid->entry); - cancel_work_sync(&cfid->lease_break); if (cfid->has_lease) { /* - * We lease was never cancelled from the server so we - * need to drop the reference. + * The lease was never cancelled from the server, + * so steal that reference. */ - spin_lock(&cfids->cfid_list_lock); cfid->has_lease = false; - spin_unlock(&cfids->cfid_list_lock); - kref_put(&cfid->refcount, smb2_close_cached_fid); - } - /* Drop the extra reference opened above*/ - kref_put(&cfid->refcount, smb2_close_cached_fid); + } else + kref_get(&cfid->refcount); } + /* + * Queue dropping of the dentries once locks have been dropped + */ + if (!list_empty(&cfids->dying)) + queue_work(cfid_put_wq, &cfids->invalidation_work); + spin_unlock(&cfids->cfid_list_lock); }
static void -smb2_cached_lease_break(struct work_struct *work) +cached_dir_offload_close(struct work_struct *work) { struct cached_fid *cfid = container_of(work, - struct cached_fid, lease_break); + struct cached_fid, close_work); + struct cifs_tcon *tcon = cfid->tcon; + + WARN_ON(cfid->on_list);
- spin_lock(&cfid->cfids->cfid_list_lock); - cfid->has_lease = false; - spin_unlock(&cfid->cfids->cfid_list_lock); kref_put(&cfid->refcount, smb2_close_cached_fid); + cifs_put_tcon(tcon, netfs_trace_tcon_ref_put_cached_close); +} + +/* + * Release the cached directory's dentry, and then queue work to drop cached + * directory itself (closing on server if needed). + * + * Must be called with a reference to the cached_fid and a reference to the + * tcon. + */ +static void cached_dir_put_work(struct work_struct *work) +{ + struct cached_fid *cfid = container_of(work, struct cached_fid, + put_work); + struct dentry *dentry; + + spin_lock(&cfid->fid_lock); + dentry = cfid->dentry; + cfid->dentry = NULL; + spin_unlock(&cfid->fid_lock); + + dput(dentry); + queue_work(serverclose_wq, &cfid->close_work); }
int cached_dir_lease_break(struct cifs_tcon *tcon, __u8 lease_key[16]) @@ -561,6 +602,7 @@ int cached_dir_lease_break(struct cifs_tcon *tcon, __u8 lease_key[16]) !memcmp(lease_key, cfid->fid.lease_key, SMB2_LEASE_KEY_SIZE)) { + cfid->has_lease = false; cfid->time = 0; /* * We found a lease remove it from the list @@ -570,8 +612,10 @@ int cached_dir_lease_break(struct cifs_tcon *tcon, __u8 lease_key[16]) cfid->on_list = false; cfids->num_entries--;
- queue_work(cifsiod_wq, - &cfid->lease_break); + ++tcon->tc_count; + trace_smb3_tcon_ref(tcon->debug_id, tcon->tc_count, + netfs_trace_tcon_ref_get_cached_lease_break); + queue_work(cfid_put_wq, &cfid->put_work); spin_unlock(&cfids->cfid_list_lock); return true; } @@ -593,7 +637,8 @@ static struct cached_fid *init_cached_dir(const char *path) return NULL; }
- INIT_WORK(&cfid->lease_break, smb2_cached_lease_break); + INIT_WORK(&cfid->close_work, cached_dir_offload_close); + INIT_WORK(&cfid->put_work, cached_dir_put_work); INIT_LIST_HEAD(&cfid->entry); INIT_LIST_HEAD(&cfid->dirents.entries); mutex_init(&cfid->dirents.de_mutex); @@ -606,6 +651,9 @@ static void free_cached_dir(struct cached_fid *cfid) { struct cached_dirent *dirent, *q;
+ WARN_ON(work_pending(&cfid->close_work)); + WARN_ON(work_pending(&cfid->put_work)); + dput(cfid->dentry); cfid->dentry = NULL;
@@ -623,10 +671,30 @@ static void free_cached_dir(struct cached_fid *cfid) kfree(cfid); }
+static void cfids_invalidation_worker(struct work_struct *work) +{ + struct cached_fids *cfids = container_of(work, struct cached_fids, + invalidation_work); + struct cached_fid *cfid, *q; + LIST_HEAD(entry); + + spin_lock(&cfids->cfid_list_lock); + /* move cfids->dying to the local list */ + list_cut_before(&entry, &cfids->dying, &cfids->dying); + spin_unlock(&cfids->cfid_list_lock); + + list_for_each_entry_safe(cfid, q, &entry, entry) { + list_del(&cfid->entry); + /* Drop the ref-count acquired in invalidate_all_cached_dirs */ + kref_put(&cfid->refcount, smb2_close_cached_fid); + } +} + static void cfids_laundromat_worker(struct work_struct *work) { struct cached_fids *cfids; struct cached_fid *cfid, *q; + struct dentry *dentry; LIST_HEAD(entry);
cfids = container_of(work, struct cached_fids, laundromat_work.work); @@ -638,33 +706,42 @@ static void cfids_laundromat_worker(struct work_struct *work) cfid->on_list = false; list_move(&cfid->entry, &entry); cfids->num_entries--; - /* To prevent race with smb2_cached_lease_break() */ - kref_get(&cfid->refcount); + if (cfid->has_lease) { + /* + * Our lease has not yet been cancelled from the + * server. Steal that reference. + */ + cfid->has_lease = false; + } else + kref_get(&cfid->refcount); } } spin_unlock(&cfids->cfid_list_lock);
list_for_each_entry_safe(cfid, q, &entry, entry) { list_del(&cfid->entry); - /* - * Cancel and wait for the work to finish in case we are racing - * with it. - */ - cancel_work_sync(&cfid->lease_break); - if (cfid->has_lease) { + + spin_lock(&cfid->fid_lock); + dentry = cfid->dentry; + cfid->dentry = NULL; + spin_unlock(&cfid->fid_lock); + + dput(dentry); + if (cfid->is_open) { + spin_lock(&cifs_tcp_ses_lock); + ++cfid->tcon->tc_count; + trace_smb3_tcon_ref(cfid->tcon->debug_id, cfid->tcon->tc_count, + netfs_trace_tcon_ref_get_cached_laundromat); + spin_unlock(&cifs_tcp_ses_lock); + queue_work(serverclose_wq, &cfid->close_work); + } else /* - * Our lease has not yet been cancelled from the server - * so we need to drop the reference. + * Drop the ref-count from above, either the lease-ref (if there + * was one) or the extra one acquired. */ - spin_lock(&cfids->cfid_list_lock); - cfid->has_lease = false; - spin_unlock(&cfids->cfid_list_lock); kref_put(&cfid->refcount, smb2_close_cached_fid); - } - /* Drop the extra reference opened above */ - kref_put(&cfid->refcount, smb2_close_cached_fid); } - queue_delayed_work(cifsiod_wq, &cfids->laundromat_work, + queue_delayed_work(cfid_put_wq, &cfids->laundromat_work, dir_cache_timeout * HZ); }
@@ -677,9 +754,11 @@ struct cached_fids *init_cached_dirs(void) return NULL; spin_lock_init(&cfids->cfid_list_lock); INIT_LIST_HEAD(&cfids->entries); + INIT_LIST_HEAD(&cfids->dying);
+ INIT_WORK(&cfids->invalidation_work, cfids_invalidation_worker); INIT_DELAYED_WORK(&cfids->laundromat_work, cfids_laundromat_worker); - queue_delayed_work(cifsiod_wq, &cfids->laundromat_work, + queue_delayed_work(cfid_put_wq, &cfids->laundromat_work, dir_cache_timeout * HZ);
return cfids; @@ -698,6 +777,7 @@ void free_cached_dirs(struct cached_fids *cfids) return;
cancel_delayed_work_sync(&cfids->laundromat_work); + cancel_work_sync(&cfids->invalidation_work);
spin_lock(&cfids->cfid_list_lock); list_for_each_entry_safe(cfid, q, &cfids->entries, entry) { @@ -705,6 +785,11 @@ void free_cached_dirs(struct cached_fids *cfids) cfid->is_open = false; list_move(&cfid->entry, &entry); } + list_for_each_entry_safe(cfid, q, &cfids->dying, entry) { + cfid->on_list = false; + cfid->is_open = false; + list_move(&cfid->entry, &entry); + } spin_unlock(&cfids->cfid_list_lock);
list_for_each_entry_safe(cfid, q, &entry, entry) { diff --git a/fs/smb/client/cached_dir.h b/fs/smb/client/cached_dir.h index 81ba0fd5cc16..1dfe79d947a6 100644 --- a/fs/smb/client/cached_dir.h +++ b/fs/smb/client/cached_dir.h @@ -44,7 +44,8 @@ struct cached_fid { spinlock_t fid_lock; struct cifs_tcon *tcon; struct dentry *dentry; - struct work_struct lease_break; + struct work_struct put_work; + struct work_struct close_work; struct smb2_file_all_info file_all_info; struct cached_dirents dirents; }; @@ -53,10 +54,13 @@ struct cached_fid { struct cached_fids { /* Must be held when: * - accessing the cfids->entries list + * - accessing the cfids->dying list */ spinlock_t cfid_list_lock; int num_entries; struct list_head entries; + struct list_head dying; + struct work_struct invalidation_work; struct delayed_work laundromat_work; };
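The cached_dir.c/cached_dir.h changes above repeatedly apply one pattern: collect the dentries or cfids to be released while cfid_list_lock (or the tlink tree lock) is held, then do the dput()/close work only after every lock has been dropped, or from the new cfid_put_wq. A minimal userspace sketch of that pattern follows; struct entry, drain_entries and payload are hypothetical stand-ins, not the kernel structures.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct entry {
	struct entry *next;
	char *payload;
};

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static struct entry *entries;	/* protected by list_lock */

/* Steal the whole list under the lock, then clean up outside it. */
static void drain_entries(void)
{
	struct entry *local, *e;

	pthread_mutex_lock(&list_lock);
	local = entries;
	entries = NULL;
	pthread_mutex_unlock(&list_lock);

	while ((e = local) != NULL) {
		local = e->next;
		free(e->payload);	/* the expensive part stays lock-free */
		free(e);
	}
}

int main(void)
{
	for (int i = 0; i < 3; i++) {
		struct entry *e = malloc(sizeof(*e));

		e->payload = strdup("cached item");
		pthread_mutex_lock(&list_lock);
		e->next = entries;
		entries = e;
		pthread_mutex_unlock(&list_lock);
	}
	drain_entries();
	printf("drained\n");
	return 0;
}

close_all_cached_dirs() and cfids_laundromat_worker() above are the in-kernel counterparts: references are moved to a private list under cfid_list_lock and only dropped once the locks are released.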
diff --git a/fs/smb/client/cifsacl.c b/fs/smb/client/cifsacl.c index 1d294d53f662..c68ad526a4de 100644 --- a/fs/smb/client/cifsacl.c +++ b/fs/smb/client/cifsacl.c @@ -885,12 +885,17 @@ unsigned int setup_authusers_ACE(struct smb_ace *pntace) * Fill in the special SID based on the mode. See * https://technet.microsoft.com/en-us/library/hh509017(v=ws.10).aspx */ -unsigned int setup_special_mode_ACE(struct smb_ace *pntace, __u64 nmode) +unsigned int setup_special_mode_ACE(struct smb_ace *pntace, + bool posix, + __u64 nmode) { int i; unsigned int ace_size = 28;
- pntace->type = ACCESS_DENIED_ACE_TYPE; + if (posix) + pntace->type = ACCESS_ALLOWED_ACE_TYPE; + else + pntace->type = ACCESS_DENIED_ACE_TYPE; pntace->flags = 0x0; pntace->access_req = 0; pntace->sid.num_subauth = 3; @@ -933,7 +938,8 @@ static void populate_new_aces(char *nacl_base, struct smb_sid *pownersid, struct smb_sid *pgrpsid, __u64 *pnmode, u32 *pnum_aces, u16 *pnsize, - bool modefromsid) + bool modefromsid, + bool posix) { __u64 nmode; u32 num_aces = 0; @@ -950,13 +956,15 @@ static void populate_new_aces(char *nacl_base, num_aces = *pnum_aces; nsize = *pnsize;
- if (modefromsid) { - pnntace = (struct smb_ace *) (nacl_base + nsize); - nsize += setup_special_mode_ACE(pnntace, nmode); - num_aces++; + if (modefromsid || posix) { pnntace = (struct smb_ace *) (nacl_base + nsize); - nsize += setup_authusers_ACE(pnntace); + nsize += setup_special_mode_ACE(pnntace, posix, nmode); num_aces++; + if (modefromsid) { + pnntace = (struct smb_ace *) (nacl_base + nsize); + nsize += setup_authusers_ACE(pnntace); + num_aces++; + } goto set_size; }
@@ -1076,7 +1084,7 @@ static __u16 replace_sids_and_copy_aces(struct smb_acl *pdacl, struct smb_acl *p
static int set_chmod_dacl(struct smb_acl *pdacl, struct smb_acl *pndacl, struct smb_sid *pownersid, struct smb_sid *pgrpsid, - __u64 *pnmode, bool mode_from_sid) + __u64 *pnmode, bool mode_from_sid, bool posix) { int i; u16 size = 0; @@ -1094,11 +1102,11 @@ static int set_chmod_dacl(struct smb_acl *pdacl, struct smb_acl *pndacl, nsize = sizeof(struct smb_acl);
/* If pdacl is NULL, we don't have a src. Simply populate new ACL. */ - if (!pdacl) { + if (!pdacl || posix) { populate_new_aces(nacl_base, pownersid, pgrpsid, pnmode, &num_aces, &nsize, - mode_from_sid); + mode_from_sid, posix); goto finalize_dacl; }
@@ -1115,7 +1123,7 @@ static int set_chmod_dacl(struct smb_acl *pdacl, struct smb_acl *pndacl, populate_new_aces(nacl_base, pownersid, pgrpsid, pnmode, &num_aces, &nsize, - mode_from_sid); + mode_from_sid, posix);
new_aces_set = true; } @@ -1144,7 +1152,7 @@ static int set_chmod_dacl(struct smb_acl *pdacl, struct smb_acl *pndacl, populate_new_aces(nacl_base, pownersid, pgrpsid, pnmode, &num_aces, &nsize, - mode_from_sid); + mode_from_sid, posix);
new_aces_set = true; } @@ -1251,7 +1259,7 @@ static int parse_sec_desc(struct cifs_sb_info *cifs_sb, /* Convert permission bits from mode to equivalent CIFS ACL */ static int build_sec_desc(struct smb_ntsd *pntsd, struct smb_ntsd *pnntsd, __u32 secdesclen, __u32 *pnsecdesclen, __u64 *pnmode, kuid_t uid, kgid_t gid, - bool mode_from_sid, bool id_from_sid, int *aclflag) + bool mode_from_sid, bool id_from_sid, bool posix, int *aclflag) { int rc = 0; __u32 dacloffset; @@ -1288,7 +1296,7 @@ static int build_sec_desc(struct smb_ntsd *pntsd, struct smb_ntsd *pnntsd, ndacl_ptr->num_aces = cpu_to_le32(0);
rc = set_chmod_dacl(dacl_ptr, ndacl_ptr, owner_sid_ptr, group_sid_ptr, - pnmode, mode_from_sid); + pnmode, mode_from_sid, posix);
sidsoffset = ndacloffset + le16_to_cpu(ndacl_ptr->size); /* copy the non-dacl portion of secdesc */ @@ -1587,6 +1595,7 @@ id_mode_to_cifs_acl(struct inode *inode, const char *path, __u64 *pnmode, struct tcon_link *tlink = cifs_sb_tlink(cifs_sb); struct smb_version_operations *ops; bool mode_from_sid, id_from_sid; + bool posix = tlink_tcon(tlink)->posix_extensions; const u32 info = 0;
if (IS_ERR(tlink)) @@ -1622,12 +1631,13 @@ id_mode_to_cifs_acl(struct inode *inode, const char *path, __u64 *pnmode, id_from_sid = false;
/* Potentially, five new ACEs can be added to the ACL for U,G,O mapping */ - nsecdesclen = secdesclen; if (pnmode && *pnmode != NO_CHANGE_64) { /* chmod */ - if (mode_from_sid) - nsecdesclen += 2 * sizeof(struct smb_ace); + if (posix) + nsecdesclen = 1 * sizeof(struct smb_ace); + else if (mode_from_sid) + nsecdesclen = secdesclen + (2 * sizeof(struct smb_ace)); else /* cifsacl */ - nsecdesclen += 5 * sizeof(struct smb_ace); + nsecdesclen = secdesclen + (5 * sizeof(struct smb_ace)); } else { /* chown */ /* When ownership changes, changes new owner sid length could be different */ nsecdesclen = sizeof(struct smb_ntsd) + (sizeof(struct smb_sid) * 2); @@ -1657,7 +1667,7 @@ id_mode_to_cifs_acl(struct inode *inode, const char *path, __u64 *pnmode, }
rc = build_sec_desc(pntsd, pnntsd, secdesclen, &nsecdesclen, pnmode, uid, gid, - mode_from_sid, id_from_sid, &aclflag); + mode_from_sid, id_from_sid, posix, &aclflag);
cifs_dbg(NOISY, "build_sec_desc rc: %d\n", rc);
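In the cifsacl.c changes above, the space reserved for the rebuilt security descriptor during chmod now depends on which mode-mapping scheme is in use. A rough standalone sketch of just that sizing rule, assuming a fixed 28-byte ACE where the patch uses sizeof(struct smb_ace); ACE_SIZE and chmod_secdesc_len are made-up names for illustration only.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define ACE_SIZE 28u	/* stand-in for sizeof(struct smb_ace) */

/* Mirrors the nsecdesclen choice made in id_mode_to_cifs_acl() above. */
static size_t chmod_secdesc_len(size_t secdesclen, bool posix, bool mode_from_sid)
{
	if (posix)		/* one ACCESS_ALLOWED ACE carrying the mode */
		return 1 * ACE_SIZE;
	if (mode_from_sid)	/* special mode ACE plus the authusers ACE */
		return secdesclen + 2 * ACE_SIZE;
	return secdesclen + 5 * ACE_SIZE;	/* classic cifsacl: up to five new ACEs */
}

int main(void)
{
	printf("posix:       %zu\n", chmod_secdesc_len(120, true, false));
	printf("modefromsid: %zu\n", chmod_secdesc_len(120, false, true));
	printf("cifsacl:     %zu\n", chmod_secdesc_len(120, false, false));
	return 0;
}

With POSIX extensions the descriptor is rebuilt from scratch around a single mode-carrying ACCESS_ALLOWED ACE, which is why the existing secdesclen is not carried over in that branch.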
diff --git a/fs/smb/client/cifsfs.c b/fs/smb/client/cifsfs.c index 20cafdff5081..bf909c2f6b96 100644 --- a/fs/smb/client/cifsfs.c +++ b/fs/smb/client/cifsfs.c @@ -157,6 +157,7 @@ struct workqueue_struct *fileinfo_put_wq; struct workqueue_struct *cifsoplockd_wq; struct workqueue_struct *deferredclose_wq; struct workqueue_struct *serverclose_wq; +struct workqueue_struct *cfid_put_wq; __u32 cifs_lock_secret;
/* @@ -1895,9 +1896,16 @@ init_cifs(void) goto out_destroy_deferredclose_wq; }
+ cfid_put_wq = alloc_workqueue("cfid_put_wq", + WQ_FREEZABLE|WQ_MEM_RECLAIM, 0); + if (!cfid_put_wq) { + rc = -ENOMEM; + goto out_destroy_serverclose_wq; + } + rc = cifs_init_inodecache(); if (rc) - goto out_destroy_serverclose_wq; + goto out_destroy_cfid_put_wq;
rc = cifs_init_netfs(); if (rc) @@ -1965,6 +1973,8 @@ init_cifs(void) cifs_destroy_netfs(); out_destroy_inodecache: cifs_destroy_inodecache(); +out_destroy_cfid_put_wq: + destroy_workqueue(cfid_put_wq); out_destroy_serverclose_wq: destroy_workqueue(serverclose_wq); out_destroy_deferredclose_wq: diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h index 5041b1ffc244..9a4b3608b7d6 100644 --- a/fs/smb/client/cifsglob.h +++ b/fs/smb/client/cifsglob.h @@ -588,6 +588,7 @@ struct smb_version_operations { /* Check for STATUS_NETWORK_NAME_DELETED */ bool (*is_network_name_deleted)(char *buf, struct TCP_Server_Info *srv); int (*parse_reparse_point)(struct cifs_sb_info *cifs_sb, + const char *full_path, struct kvec *rsp_iov, struct cifs_open_info_data *data); int (*create_reparse_symlink)(const unsigned int xid, @@ -1983,7 +1984,7 @@ require use of the stronger protocol */ * cifsInodeInfo->lock_sem cifsInodeInfo->llist cifs_init_once * ->can_cache_brlcks * cifsInodeInfo->deferred_lock cifsInodeInfo->deferred_closes cifsInodeInfo_alloc - * cached_fid->fid_mutex cifs_tcon->crfid tcon_info_alloc + * cached_fids->cfid_list_lock cifs_tcon->cfids->entries init_cached_dirs * cifsFileInfo->fh_mutex cifsFileInfo cifs_new_fileinfo * cifsFileInfo->file_info_lock cifsFileInfo->count cifs_new_fileinfo * ->invalidHandle initiate_cifs_search @@ -2071,6 +2072,7 @@ extern struct workqueue_struct *fileinfo_put_wq; extern struct workqueue_struct *cifsoplockd_wq; extern struct workqueue_struct *deferredclose_wq; extern struct workqueue_struct *serverclose_wq; +extern struct workqueue_struct *cfid_put_wq; extern __u32 cifs_lock_secret;
extern mempool_t *cifs_sm_req_poolp; diff --git a/fs/smb/client/cifsproto.h b/fs/smb/client/cifsproto.h index 1d3470bca45e..0c6468844c4b 100644 --- a/fs/smb/client/cifsproto.h +++ b/fs/smb/client/cifsproto.h @@ -244,7 +244,9 @@ extern int cifs_set_acl(struct mnt_idmap *idmap, extern int set_cifs_acl(struct smb_ntsd *pntsd, __u32 len, struct inode *ino, const char *path, int flag); extern unsigned int setup_authusers_ACE(struct smb_ace *pace); -extern unsigned int setup_special_mode_ACE(struct smb_ace *pace, __u64 nmode); +extern unsigned int setup_special_mode_ACE(struct smb_ace *pace, + bool posix, + __u64 nmode); extern unsigned int setup_special_user_owner_ACE(struct smb_ace *pace);
extern void dequeue_mid(struct mid_q_entry *mid, bool malformed); @@ -666,6 +668,7 @@ char *extract_hostname(const char *unc); char *extract_sharename(const char *unc); int parse_reparse_point(struct reparse_data_buffer *buf, u32 plen, struct cifs_sb_info *cifs_sb, + const char *full_path, bool unicode, struct cifs_open_info_data *data); int __cifs_sfu_make_node(unsigned int xid, struct inode *inode, struct dentry *dentry, struct cifs_tcon *tcon, diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c index 0ce2d704b1f3..a94c538ff863 100644 --- a/fs/smb/client/connect.c +++ b/fs/smb/client/connect.c @@ -1897,11 +1897,35 @@ static int match_session(struct cifs_ses *ses, CIFS_MAX_USERNAME_LEN)) return 0; if ((ctx->username && strlen(ctx->username) != 0) && - ses->password != NULL && - strncmp(ses->password, - ctx->password ? ctx->password : "", - CIFS_MAX_PASSWORD_LEN)) - return 0; + ses->password != NULL) { + + /* New mount can only share sessions with an existing mount if: + * 1. Both password and password2 match, or + * 2. password2 of the old mount matches password of the new mount + * and password of the old mount matches password2 of the new + * mount + */ + if (ses->password2 != NULL && ctx->password2 != NULL) { + if (!((strncmp(ses->password, ctx->password ? + ctx->password : "", CIFS_MAX_PASSWORD_LEN) == 0 && + strncmp(ses->password2, ctx->password2, + CIFS_MAX_PASSWORD_LEN) == 0) || + (strncmp(ses->password, ctx->password2, + CIFS_MAX_PASSWORD_LEN) == 0 && + strncmp(ses->password2, ctx->password ? + ctx->password : "", CIFS_MAX_PASSWORD_LEN) == 0))) + return 0; + + } else if ((ses->password2 == NULL && ctx->password2 != NULL) || + (ses->password2 != NULL && ctx->password2 == NULL)) { + return 0; + + } else { + if (strncmp(ses->password, ctx->password ? + ctx->password : "", CIFS_MAX_PASSWORD_LEN)) + return 0; + } + } }
if (strcmp(ctx->local_nls->charset, ses->local_nls->charset)) @@ -2244,6 +2268,7 @@ struct cifs_ses * cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx) { int rc = 0; + int retries = 0; unsigned int xid; struct cifs_ses *ses; struct sockaddr_in *addr = (struct sockaddr_in *)&server->dstaddr; @@ -2262,6 +2287,8 @@ cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx) cifs_dbg(FYI, "Session needs reconnect\n");
mutex_lock(&ses->session_mutex); + +retry_old_session: rc = cifs_negotiate_protocol(xid, ses, server); if (rc) { mutex_unlock(&ses->session_mutex); @@ -2274,6 +2301,13 @@ cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx) rc = cifs_setup_session(xid, ses, server, ctx->local_nls); if (rc) { + if (((rc == -EACCES) || (rc == -EKEYEXPIRED) || + (rc == -EKEYREVOKED)) && !retries && ses->password2) { + retries++; + cifs_dbg(FYI, "Session reconnect failed, retrying with alternate password\n"); + swap(ses->password, ses->password2); + goto retry_old_session; + } mutex_unlock(&ses->session_mutex); /* problem -- put our reference */ cifs_put_smb_ses(ses); @@ -2349,6 +2383,7 @@ cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx) ses->chans_need_reconnect = 1; spin_unlock(&ses->chan_lock);
+retry_new_session: mutex_lock(&ses->session_mutex); rc = cifs_negotiate_protocol(xid, ses, server); if (!rc) @@ -2361,8 +2396,16 @@ cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx) sizeof(ses->smb3signingkey)); spin_unlock(&ses->chan_lock);
- if (rc) - goto get_ses_fail; + if (rc) { + if (((rc == -EACCES) || (rc == -EKEYEXPIRED) || + (rc == -EKEYREVOKED)) && !retries && ses->password2) { + retries++; + cifs_dbg(FYI, "Session setup failed, retrying with alternate password\n"); + swap(ses->password, ses->password2); + goto retry_new_session; + } else + goto get_ses_fail; + }
/* * success, put it on the list and add it as first channel @@ -2551,7 +2594,7 @@ cifs_get_tcon(struct cifs_ses *ses, struct smb3_fs_context *ctx)
if (ses->server->dialect >= SMB20_PROT_ID && (ses->server->capabilities & SMB2_GLOBAL_CAP_DIRECTORY_LEASING)) - nohandlecache = ctx->nohandlecache; + nohandlecache = ctx->nohandlecache || !dir_cache_timeout; else nohandlecache = true; tcon = tcon_info_alloc(!nohandlecache, netfs_trace_tcon_ref_new); diff --git a/fs/smb/client/fs_context.c b/fs/smb/client/fs_context.c index 5c5a52019efa..48606e2ddffd 100644 --- a/fs/smb/client/fs_context.c +++ b/fs/smb/client/fs_context.c @@ -890,12 +890,37 @@ do { \ cifs_sb->ctx->field = NULL; \ } while (0)
+int smb3_sync_session_ctx_passwords(struct cifs_sb_info *cifs_sb, struct cifs_ses *ses) +{ + if (ses->password && + cifs_sb->ctx->password && + strcmp(ses->password, cifs_sb->ctx->password)) { + kfree_sensitive(cifs_sb->ctx->password); + cifs_sb->ctx->password = kstrdup(ses->password, GFP_KERNEL); + if (!cifs_sb->ctx->password) + return -ENOMEM; + } + if (ses->password2 && + cifs_sb->ctx->password2 && + strcmp(ses->password2, cifs_sb->ctx->password2)) { + kfree_sensitive(cifs_sb->ctx->password2); + cifs_sb->ctx->password2 = kstrdup(ses->password2, GFP_KERNEL); + if (!cifs_sb->ctx->password2) { + kfree_sensitive(cifs_sb->ctx->password); + cifs_sb->ctx->password = NULL; + return -ENOMEM; + } + } + return 0; +} + static int smb3_reconfigure(struct fs_context *fc) { struct smb3_fs_context *ctx = smb3_fc2context(fc); struct dentry *root = fc->root; struct cifs_sb_info *cifs_sb = CIFS_SB(root->d_sb); struct cifs_ses *ses = cifs_sb_master_tcon(cifs_sb)->ses; + char *new_password = NULL, *new_password2 = NULL; bool need_recon = false; int rc;
@@ -915,21 +940,63 @@ static int smb3_reconfigure(struct fs_context *fc) STEAL_STRING(cifs_sb, ctx, UNC); STEAL_STRING(cifs_sb, ctx, source); STEAL_STRING(cifs_sb, ctx, username); + if (need_recon == false) STEAL_STRING_SENSITIVE(cifs_sb, ctx, password); else { - kfree_sensitive(ses->password); - ses->password = kstrdup(ctx->password, GFP_KERNEL); - if (!ses->password) - return -ENOMEM; - kfree_sensitive(ses->password2); - ses->password2 = kstrdup(ctx->password2, GFP_KERNEL); - if (!ses->password2) { - kfree_sensitive(ses->password); - ses->password = NULL; + if (ctx->password) { + new_password = kstrdup(ctx->password, GFP_KERNEL); + if (!new_password) + return -ENOMEM; + } else + STEAL_STRING_SENSITIVE(cifs_sb, ctx, password); + } + + /* + * if a new password2 has been specified, then reset its value + * inside the ses struct + */ + if (ctx->password2) { + new_password2 = kstrdup(ctx->password2, GFP_KERNEL); + if (!new_password2) { + kfree_sensitive(new_password); return -ENOMEM; } + } else + STEAL_STRING_SENSITIVE(cifs_sb, ctx, password2); + + /* + * we may update the passwords in the ses struct below. Make sure we do + * not race with smb2_reconnect + */ + mutex_lock(&ses->session_mutex); + + /* + * smb2_reconnect may swap password and password2 in case session setup + * failed. First get ctx passwords in sync with ses passwords. It should + * be okay to do this even if this function were to return an error at a + * later stage + */ + rc = smb3_sync_session_ctx_passwords(cifs_sb, ses); + if (rc) { + mutex_unlock(&ses->session_mutex); + return rc; } + + /* + * now that allocations for passwords are done, commit them + */ + if (new_password) { + kfree_sensitive(ses->password); + ses->password = new_password; + } + if (new_password2) { + kfree_sensitive(ses->password2); + ses->password2 = new_password2; + } + + mutex_unlock(&ses->session_mutex); + STEAL_STRING(cifs_sb, ctx, domainname); STEAL_STRING(cifs_sb, ctx, nodename); STEAL_STRING(cifs_sb, ctx, iocharset); diff --git a/fs/smb/client/fs_context.h b/fs/smb/client/fs_context.h index 890d6d9d4a59..c8c8b4451b3b 100644 --- a/fs/smb/client/fs_context.h +++ b/fs/smb/client/fs_context.h @@ -299,6 +299,7 @@ static inline struct smb3_fs_context *smb3_fc2context(const struct fs_context *f }
extern int smb3_fs_context_dup(struct smb3_fs_context *new_ctx, struct smb3_fs_context *ctx); +extern int smb3_sync_session_ctx_passwords(struct cifs_sb_info *cifs_sb, struct cifs_ses *ses); extern void smb3_update_mnt_flags(struct cifs_sb_info *cifs_sb);
/* diff --git a/fs/smb/client/inode.c b/fs/smb/client/inode.c index eff3f57235ee..6d567b169981 100644 --- a/fs/smb/client/inode.c +++ b/fs/smb/client/inode.c @@ -1115,6 +1115,7 @@ static int reparse_info_to_fattr(struct cifs_open_info_data *data, rc = 0; } else if (iov && server->ops->parse_reparse_point) { rc = server->ops->parse_reparse_point(cifs_sb, + full_path, iov, data); } break; @@ -2473,13 +2474,10 @@ cifs_dentry_needs_reval(struct dentry *dentry) return true;
if (!open_cached_dir_by_dentry(tcon, dentry->d_parent, &cfid)) { - spin_lock(&cfid->fid_lock); if (cfid->time && cifs_i->time > cfid->time) { - spin_unlock(&cfid->fid_lock); close_cached_dir(cfid); return false; } - spin_unlock(&cfid->fid_lock); close_cached_dir(cfid); } /* @@ -3062,6 +3060,7 @@ cifs_setattr_nounix(struct dentry *direntry, struct iattr *attrs) int rc = -EACCES; __u32 dosattr = 0; __u64 mode = NO_CHANGE_64; + bool posix = cifs_sb_master_tcon(cifs_sb)->posix_extensions;
xid = get_xid();
@@ -3152,7 +3151,8 @@ cifs_setattr_nounix(struct dentry *direntry, struct iattr *attrs) mode = attrs->ia_mode; rc = 0; if ((cifs_sb->mnt_cifs_flags & CIFS_MOUNT_CIFS_ACL) || - (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MODE_FROM_SID)) { + (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MODE_FROM_SID) || + posix) { rc = id_mode_to_cifs_acl(inode, full_path, &mode, INVALID_UID, INVALID_GID); if (rc) { diff --git a/fs/smb/client/reparse.c b/fs/smb/client/reparse.c index 74abbdf5026c..f74d0a86f44a 100644 --- a/fs/smb/client/reparse.c +++ b/fs/smb/client/reparse.c @@ -35,6 +35,9 @@ int smb2_create_reparse_symlink(const unsigned int xid, struct inode *inode, u16 len, plen; int rc = 0;
+ if (strlen(symname) > REPARSE_SYM_PATH_MAX) + return -ENAMETOOLONG; + sym = kstrdup(symname, GFP_KERNEL); if (!sym) return -ENOMEM; @@ -64,7 +67,7 @@ int smb2_create_reparse_symlink(const unsigned int xid, struct inode *inode, if (rc < 0) goto out;
- plen = 2 * UniStrnlen((wchar_t *)path, PATH_MAX); + plen = 2 * UniStrnlen((wchar_t *)path, REPARSE_SYM_PATH_MAX); len = sizeof(*buf) + plen * 2; buf = kzalloc(len, GFP_KERNEL); if (!buf) { @@ -532,9 +535,76 @@ static int parse_reparse_posix(struct reparse_posix_data *buf, return 0; }
+int smb2_parse_native_symlink(char **target, const char *buf, unsigned int len, + bool unicode, bool relative, + const char *full_path, + struct cifs_sb_info *cifs_sb) +{ + char sep = CIFS_DIR_SEP(cifs_sb); + char *linux_target = NULL; + char *smb_target = NULL; + int levels; + int rc; + int i; + + smb_target = cifs_strndup_from_utf16(buf, len, unicode, cifs_sb->local_nls); + if (!smb_target) { + rc = -ENOMEM; + goto out; + } + + if (smb_target[0] == sep && relative) { + /* + * This is a relative SMB symlink from the top of the share, + * which is the top level directory of the Linux mount point. + * Linux does not support such relative symlinks, so convert + * it to the relative symlink from the current directory. + * full_path is the SMB path to the symlink (from which is + * extracted current directory) and smb_target is the SMB path + * where symlink points, therefore full_path must always be on + * the SMB share. + */ + int smb_target_len = strlen(smb_target)+1; + levels = 0; + for (i = 1; full_path[i]; i++) { /* i=1 to skip leading sep */ + if (full_path[i] == sep) + levels++; + } + linux_target = kmalloc(levels*3 + smb_target_len, GFP_KERNEL); + if (!linux_target) { + rc = -ENOMEM; + goto out; + } + for (i = 0; i < levels; i++) { + linux_target[i*3 + 0] = '.'; + linux_target[i*3 + 1] = '.'; + linux_target[i*3 + 2] = sep; + } + memcpy(linux_target + levels*3, smb_target+1, smb_target_len); /* +1 to skip leading sep */ + } else { + linux_target = smb_target; + smb_target = NULL; + } + + if (sep == '\\') + convert_delimiter(linux_target, '/'); + + rc = 0; + *target = linux_target; + + cifs_dbg(FYI, "%s: symlink target: %s\n", __func__, *target); + +out: + if (rc != 0) + kfree(linux_target); + kfree(smb_target); + return rc; +} + static int parse_reparse_symlink(struct reparse_symlink_data_buffer *sym, u32 plen, bool unicode, struct cifs_sb_info *cifs_sb, + const char *full_path, struct cifs_open_info_data *data) { unsigned int len; @@ -549,20 +619,18 @@ static int parse_reparse_symlink(struct reparse_symlink_data_buffer *sym, return -EIO; }
- data->symlink_target = cifs_strndup_from_utf16(sym->PathBuffer + offs, - len, unicode, - cifs_sb->local_nls); - if (!data->symlink_target) - return -ENOMEM; - - convert_delimiter(data->symlink_target, '/'); - cifs_dbg(FYI, "%s: target path: %s\n", __func__, data->symlink_target); - - return 0; + return smb2_parse_native_symlink(&data->symlink_target, + sym->PathBuffer + offs, + len, + unicode, + le32_to_cpu(sym->Flags) & SYMLINK_FLAG_RELATIVE, + full_path, + cifs_sb); }
int parse_reparse_point(struct reparse_data_buffer *buf, u32 plen, struct cifs_sb_info *cifs_sb, + const char *full_path, bool unicode, struct cifs_open_info_data *data) { struct cifs_tcon *tcon = cifs_sb_master_tcon(cifs_sb); @@ -577,7 +645,7 @@ int parse_reparse_point(struct reparse_data_buffer *buf, case IO_REPARSE_TAG_SYMLINK: return parse_reparse_symlink( (struct reparse_symlink_data_buffer *)buf, - plen, unicode, cifs_sb, data); + plen, unicode, cifs_sb, full_path, data); case IO_REPARSE_TAG_LX_SYMLINK: case IO_REPARSE_TAG_AF_UNIX: case IO_REPARSE_TAG_LX_FIFO: @@ -593,6 +661,7 @@ int parse_reparse_point(struct reparse_data_buffer *buf, }
int smb2_parse_reparse_point(struct cifs_sb_info *cifs_sb, + const char *full_path, struct kvec *rsp_iov, struct cifs_open_info_data *data) { @@ -602,7 +671,7 @@ int smb2_parse_reparse_point(struct cifs_sb_info *cifs_sb,
buf = (struct reparse_data_buffer *)((u8 *)io + le32_to_cpu(io->OutputOffset)); - return parse_reparse_point(buf, plen, cifs_sb, true, data); + return parse_reparse_point(buf, plen, cifs_sb, full_path, true, data); }
static void wsl_to_fattr(struct cifs_open_info_data *data, diff --git a/fs/smb/client/reparse.h b/fs/smb/client/reparse.h index 158e7b7aae64..ff05b0e75c92 100644 --- a/fs/smb/client/reparse.h +++ b/fs/smb/client/reparse.h @@ -12,6 +12,8 @@ #include "fs_context.h" #include "cifsglob.h"
+#define REPARSE_SYM_PATH_MAX 4060 + /* * Used only by cifs.ko to ignore reparse points from files when client or * server doesn't support FSCTL_GET_REPARSE_POINT. @@ -115,7 +117,9 @@ int smb2_create_reparse_symlink(const unsigned int xid, struct inode *inode, int smb2_mknod_reparse(unsigned int xid, struct inode *inode, struct dentry *dentry, struct cifs_tcon *tcon, const char *full_path, umode_t mode, dev_t dev); -int smb2_parse_reparse_point(struct cifs_sb_info *cifs_sb, struct kvec *rsp_iov, +int smb2_parse_reparse_point(struct cifs_sb_info *cifs_sb, + const char *full_path, + struct kvec *rsp_iov, struct cifs_open_info_data *data);
#endif /* _CIFS_REPARSE_H */ diff --git a/fs/smb/client/smb1ops.c b/fs/smb/client/smb1ops.c index 9a6ece66c4d3..db3695eddcf9 100644 --- a/fs/smb/client/smb1ops.c +++ b/fs/smb/client/smb1ops.c @@ -994,17 +994,17 @@ static int cifs_query_symlink(const unsigned int xid, }
static int cifs_parse_reparse_point(struct cifs_sb_info *cifs_sb, + const char *full_path, struct kvec *rsp_iov, struct cifs_open_info_data *data) { struct reparse_data_buffer *buf; TRANSACT_IOCTL_RSP *io = rsp_iov->iov_base; - bool unicode = !!(io->hdr.Flags2 & SMBFLG2_UNICODE); u32 plen = le16_to_cpu(io->ByteCount);
buf = (struct reparse_data_buffer *)((__u8 *)&io->hdr.Protocol + le32_to_cpu(io->DataOffset)); - return parse_reparse_point(buf, plen, cifs_sb, unicode, data); + return parse_reparse_point(buf, plen, cifs_sb, full_path, true, data); }
static bool diff --git a/fs/smb/client/smb2file.c b/fs/smb/client/smb2file.c index e301349b0078..e836bc2193dd 100644 --- a/fs/smb/client/smb2file.c +++ b/fs/smb/client/smb2file.c @@ -63,12 +63,12 @@ static struct smb2_symlink_err_rsp *symlink_data(const struct kvec *iov) return sym; }
-int smb2_parse_symlink_response(struct cifs_sb_info *cifs_sb, const struct kvec *iov, char **path) +int smb2_parse_symlink_response(struct cifs_sb_info *cifs_sb, const struct kvec *iov, + const char *full_path, char **path) { struct smb2_symlink_err_rsp *sym; unsigned int sub_offs, sub_len; unsigned int print_offs, print_len; - char *s;
if (!cifs_sb || !iov || !iov->iov_base || !iov->iov_len || !path) return -EINVAL; @@ -86,15 +86,13 @@ int smb2_parse_symlink_response(struct cifs_sb_info *cifs_sb, const struct kvec iov->iov_len < SMB2_SYMLINK_STRUCT_SIZE + print_offs + print_len) return -EINVAL;
- s = cifs_strndup_from_utf16((char *)sym->PathBuffer + sub_offs, sub_len, true, - cifs_sb->local_nls); - if (!s) - return -ENOMEM; - convert_delimiter(s, '/'); - cifs_dbg(FYI, "%s: symlink target: %s\n", __func__, s); - - *path = s; - return 0; + return smb2_parse_native_symlink(path, + (char *)sym->PathBuffer + sub_offs, + sub_len, + true, + le32_to_cpu(sym->Flags) & SYMLINK_FLAG_RELATIVE, + full_path, + cifs_sb); }
int smb2_open_file(const unsigned int xid, struct cifs_open_parms *oparms, __u32 *oplock, void *buf) @@ -126,6 +124,7 @@ int smb2_open_file(const unsigned int xid, struct cifs_open_parms *oparms, __u32 goto out; if (hdr->Status == STATUS_STOPPED_ON_SYMLINK) { rc = smb2_parse_symlink_response(oparms->cifs_sb, &err_iov, + oparms->path, &data->symlink_target); if (!rc) { memset(smb2_data, 0, sizeof(*smb2_data)); diff --git a/fs/smb/client/smb2inode.c b/fs/smb/client/smb2inode.c index e49d0c25eb03..a188908914fe 100644 --- a/fs/smb/client/smb2inode.c +++ b/fs/smb/client/smb2inode.c @@ -828,6 +828,7 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
static int parse_create_response(struct cifs_open_info_data *data, struct cifs_sb_info *cifs_sb, + const char *full_path, const struct kvec *iov) { struct smb2_create_rsp *rsp = iov->iov_base; @@ -841,6 +842,7 @@ static int parse_create_response(struct cifs_open_info_data *data, break; case STATUS_STOPPED_ON_SYMLINK: rc = smb2_parse_symlink_response(cifs_sb, iov, + full_path, &data->symlink_target); if (rc) return rc; @@ -930,14 +932,14 @@ int smb2_query_path_info(const unsigned int xid,
switch (rc) { case 0: - rc = parse_create_response(data, cifs_sb, &out_iov[0]); + rc = parse_create_response(data, cifs_sb, full_path, &out_iov[0]); break; case -EOPNOTSUPP: /* * BB TODO: When support for special files added to Samba * re-verify this path. */ - rc = parse_create_response(data, cifs_sb, &out_iov[0]); + rc = parse_create_response(data, cifs_sb, full_path, &out_iov[0]); if (rc || !data->reparse_point) goto out;
diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c index 24a2aa04a108..7571fefeb83a 100644 --- a/fs/smb/client/smb2ops.c +++ b/fs/smb/client/smb2ops.c @@ -4080,7 +4080,7 @@ map_oplock_to_lease(u8 oplock) if (oplock == SMB2_OPLOCK_LEVEL_EXCLUSIVE) return SMB2_LEASE_WRITE_CACHING_LE | SMB2_LEASE_READ_CACHING_LE; else if (oplock == SMB2_OPLOCK_LEVEL_II) - return SMB2_LEASE_READ_CACHING_LE; + return SMB2_LEASE_READ_CACHING_LE | SMB2_LEASE_HANDLE_CACHING_LE; else if (oplock == SMB2_OPLOCK_LEVEL_BATCH) return SMB2_LEASE_HANDLE_CACHING_LE | SMB2_LEASE_READ_CACHING_LE | SMB2_LEASE_WRITE_CACHING_LE; diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c index 6584b5cddc28..d1bd69cbfe09 100644 --- a/fs/smb/client/smb2pdu.c +++ b/fs/smb/client/smb2pdu.c @@ -1231,7 +1231,9 @@ SMB2_negotiate(const unsigned int xid, * SMB3.0 supports only 1 cipher and doesn't have a encryption neg context * Set the cipher type manually. */ - if (server->dialect == SMB30_PROT_ID && (server->capabilities & SMB2_GLOBAL_CAP_ENCRYPTION)) + if ((server->dialect == SMB30_PROT_ID || + server->dialect == SMB302_PROT_ID) && + (server->capabilities & SMB2_GLOBAL_CAP_ENCRYPTION)) server->cipher_type = SMB2_ENCRYPTION_AES128_CCM;
security_blob = smb2_get_data_area_len(&blob_offset, &blob_length, @@ -2683,7 +2685,7 @@ create_sd_buf(umode_t mode, bool set_owner, unsigned int *len) ptr += sizeof(struct smb3_acl);
/* create one ACE to hold the mode embedded in reserved special SID */ - acelen = setup_special_mode_ACE((struct smb_ace *)ptr, (__u64)mode); + acelen = setup_special_mode_ACE((struct smb_ace *)ptr, false, (__u64)mode); ptr += acelen; acl_size = acelen + sizeof(struct smb3_acl); ace_count = 1; diff --git a/fs/smb/client/smb2proto.h b/fs/smb/client/smb2proto.h index 6f9885e4f66c..09349fa8da03 100644 --- a/fs/smb/client/smb2proto.h +++ b/fs/smb/client/smb2proto.h @@ -37,8 +37,6 @@ extern struct mid_q_entry *smb2_setup_request(struct cifs_ses *ses, struct smb_rqst *rqst); extern struct mid_q_entry *smb2_setup_async_request( struct TCP_Server_Info *server, struct smb_rqst *rqst); -extern struct cifs_ses *smb2_find_smb_ses(struct TCP_Server_Info *server, - __u64 ses_id); extern struct cifs_tcon *smb2_find_smb_tcon(struct TCP_Server_Info *server, __u64 ses_id, __u32 tid); extern int smb2_calc_signature(struct smb_rqst *rqst, @@ -113,7 +111,14 @@ extern int smb3_query_mf_symlink(unsigned int xid, struct cifs_tcon *tcon, struct cifs_sb_info *cifs_sb, const unsigned char *path, char *pbuf, unsigned int *pbytes_read); -int smb2_parse_symlink_response(struct cifs_sb_info *cifs_sb, const struct kvec *iov, char **path); +int smb2_parse_native_symlink(char **target, const char *buf, unsigned int len, + bool unicode, bool relative, + const char *full_path, + struct cifs_sb_info *cifs_sb); +int smb2_parse_symlink_response(struct cifs_sb_info *cifs_sb, + const struct kvec *iov, + const char *full_path, + char **path); int smb2_open_file(const unsigned int xid, struct cifs_open_parms *oparms, __u32 *oplock, void *buf); extern int smb2_unlock_range(struct cifsFileInfo *cfile, diff --git a/fs/smb/client/smb2transport.c b/fs/smb/client/smb2transport.c index b486b14bb330..475b36c27f65 100644 --- a/fs/smb/client/smb2transport.c +++ b/fs/smb/client/smb2transport.c @@ -74,7 +74,7 @@ smb311_crypto_shash_allocate(struct TCP_Server_Info *server)
static -int smb2_get_sign_key(__u64 ses_id, struct TCP_Server_Info *server, u8 *key) +int smb3_get_sign_key(__u64 ses_id, struct TCP_Server_Info *server, u8 *key) { struct cifs_chan *chan; struct TCP_Server_Info *pserver; @@ -168,16 +168,41 @@ smb2_find_smb_ses_unlocked(struct TCP_Server_Info *server, __u64 ses_id) return NULL; }
-struct cifs_ses * -smb2_find_smb_ses(struct TCP_Server_Info *server, __u64 ses_id) +static int smb2_get_sign_key(struct TCP_Server_Info *server, + __u64 ses_id, u8 *key) { struct cifs_ses *ses; + int rc = -ENOENT; + + if (SERVER_IS_CHAN(server)) + server = server->primary_server;
spin_lock(&cifs_tcp_ses_lock); - ses = smb2_find_smb_ses_unlocked(server, ses_id); - spin_unlock(&cifs_tcp_ses_lock); + list_for_each_entry(ses, &server->smb_ses_list, smb_ses_list) { + if (ses->Suid != ses_id) + continue;
- return ses; + rc = 0; + spin_lock(&ses->ses_lock); + switch (ses->ses_status) { + case SES_EXITING: /* SMB2_LOGOFF */ + case SES_GOOD: + if (likely(ses->auth_key.response)) { + memcpy(key, ses->auth_key.response, + SMB2_NTLMV2_SESSKEY_SIZE); + } else { + rc = -EIO; + } + break; + default: + rc = -EAGAIN; + break; + } + spin_unlock(&ses->ses_lock); + break; + } + spin_unlock(&cifs_tcp_ses_lock); + return rc; }
static struct cifs_tcon * @@ -236,14 +261,16 @@ smb2_calc_signature(struct smb_rqst *rqst, struct TCP_Server_Info *server, unsigned char *sigptr = smb2_signature; struct kvec *iov = rqst->rq_iov; struct smb2_hdr *shdr = (struct smb2_hdr *)iov[0].iov_base; - struct cifs_ses *ses; struct shash_desc *shash = NULL; struct smb_rqst drqst; + __u64 sid = le64_to_cpu(shdr->SessionId); + u8 key[SMB2_NTLMV2_SESSKEY_SIZE];
- ses = smb2_find_smb_ses(server, le64_to_cpu(shdr->SessionId)); - if (unlikely(!ses)) { - cifs_server_dbg(FYI, "%s: Could not find session\n", __func__); - return -ENOENT; + rc = smb2_get_sign_key(server, sid, key); + if (unlikely(rc)) { + cifs_server_dbg(FYI, "%s: [sesid=0x%llx] couldn't find signing key: %d\n", + __func__, sid, rc); + return rc; }
memset(smb2_signature, 0x0, SMB2_HMACSHA256_SIZE); @@ -260,8 +287,7 @@ smb2_calc_signature(struct smb_rqst *rqst, struct TCP_Server_Info *server, shash = server->secmech.hmacsha256; }
- rc = crypto_shash_setkey(shash->tfm, ses->auth_key.response, - SMB2_NTLMV2_SESSKEY_SIZE); + rc = crypto_shash_setkey(shash->tfm, key, sizeof(key)); if (rc) { cifs_server_dbg(VFS, "%s: Could not update with response\n", @@ -303,8 +329,6 @@ smb2_calc_signature(struct smb_rqst *rqst, struct TCP_Server_Info *server, out: if (allocate_crypto) cifs_free_hash(&shash); - if (ses) - cifs_put_smb_ses(ses); return rc; }
@@ -570,7 +594,7 @@ smb3_calc_signature(struct smb_rqst *rqst, struct TCP_Server_Info *server, struct smb_rqst drqst; u8 key[SMB3_SIGN_KEY_SIZE];
- rc = smb2_get_sign_key(le64_to_cpu(shdr->SessionId), server, key); + rc = smb3_get_sign_key(le64_to_cpu(shdr->SessionId), server, key); if (unlikely(rc)) { cifs_server_dbg(FYI, "%s: Could not get signing key\n", __func__); return rc; diff --git a/fs/smb/client/trace.h b/fs/smb/client/trace.h index 0b52d22a91a0..12cbd3428a6d 100644 --- a/fs/smb/client/trace.h +++ b/fs/smb/client/trace.h @@ -44,6 +44,8 @@ EM(netfs_trace_tcon_ref_free_ipc, "FRE Ipc ") \ EM(netfs_trace_tcon_ref_free_ipc_fail, "FRE Ipc-F ") \ EM(netfs_trace_tcon_ref_free_reconnect_server, "FRE Reconn") \ + EM(netfs_trace_tcon_ref_get_cached_laundromat, "GET Ch-Lau") \ + EM(netfs_trace_tcon_ref_get_cached_lease_break, "GET Ch-Lea") \ EM(netfs_trace_tcon_ref_get_cancelled_close, "GET Cn-Cls") \ EM(netfs_trace_tcon_ref_get_dfs_refer, "GET DfsRef") \ EM(netfs_trace_tcon_ref_get_find, "GET Find ") \ @@ -52,6 +54,7 @@ EM(netfs_trace_tcon_ref_new, "NEW ") \ EM(netfs_trace_tcon_ref_new_ipc, "NEW Ipc ") \ EM(netfs_trace_tcon_ref_new_reconnect_server, "NEW Reconn") \ + EM(netfs_trace_tcon_ref_put_cached_close, "PUT Ch-Cls") \ EM(netfs_trace_tcon_ref_put_cancelled_close, "PUT Cn-Cls") \ EM(netfs_trace_tcon_ref_put_cancelled_close_fid, "PUT Cn-Fid") \ EM(netfs_trace_tcon_ref_put_cancelled_mid, "PUT Cn-Mid") \ diff --git a/fs/smb/server/server.c b/fs/smb/server/server.c index e6cfedba9992..c8cc6fa6fc3e 100644 --- a/fs/smb/server/server.c +++ b/fs/smb/server/server.c @@ -276,8 +276,12 @@ static void handle_ksmbd_work(struct work_struct *wk) * disconnection. waitqueue_active is safe because it * uses atomic operation for condition. */ + atomic_inc(&conn->refcnt); if (!atomic_dec_return(&conn->r_count) && waitqueue_active(&conn->r_count_q)) wake_up(&conn->r_count_q); + + if (atomic_dec_and_test(&conn->refcnt)) + kfree(conn); }
/** diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c index 291583005dd1..245a10cc1eeb 100644 --- a/fs/ubifs/super.c +++ b/fs/ubifs/super.c @@ -773,10 +773,10 @@ static void init_constants_master(struct ubifs_info *c) * necessary to report something for the 'statfs()' call. * * Subtract the LEB reserved for GC, the LEB which is reserved for - * deletions, minimum LEBs for the index, and assume only one journal - * head is available. + * deletions, minimum LEBs for the index, the LEBs which are reserved + * for each journal head. */ - tmp64 = c->main_lebs - 1 - 1 - MIN_INDEX_LEBS - c->jhead_cnt + 1; + tmp64 = c->main_lebs - 1 - 1 - MIN_INDEX_LEBS - c->jhead_cnt; tmp64 *= (long long)c->leb_size - c->leb_overhead; tmp64 = ubifs_reported_space(c, tmp64); c->block_cnt = tmp64 >> UBIFS_BLOCK_SHIFT; diff --git a/fs/ubifs/tnc_commit.c b/fs/ubifs/tnc_commit.c index a55e04822d16..7c43e0ccf6d4 100644 --- a/fs/ubifs/tnc_commit.c +++ b/fs/ubifs/tnc_commit.c @@ -657,6 +657,8 @@ static int get_znodes_to_commit(struct ubifs_info *c) znode->alt = 0; cnext = find_next_dirty(znode); if (!cnext) { + ubifs_assert(c, !znode->parent); + znode->cparent = NULL; znode->cnext = c->cnext; break; } diff --git a/fs/unicode/utf8-core.c b/fs/unicode/utf8-core.c index 8395066341a4..0400824ef493 100644 --- a/fs/unicode/utf8-core.c +++ b/fs/unicode/utf8-core.c @@ -198,7 +198,7 @@ struct unicode_map *utf8_load(unsigned int version) return um;
out_symbol_put: - symbol_put(um->tables); + symbol_put(utf8_data_table); out_free_um: kfree(um); return ERR_PTR(-EINVAL); diff --git a/fs/xfs/xfs_bmap_util.c b/fs/xfs/xfs_bmap_util.c index 4719ec90029c..edaf193dbd5c 100644 --- a/fs/xfs/xfs_bmap_util.c +++ b/fs/xfs/xfs_bmap_util.c @@ -546,10 +546,14 @@ xfs_can_free_eofblocks( return false;
/* - * Check if there is an post-EOF extent to free. + * Check if there is an post-EOF extent to free. If there are any + * delalloc blocks attached to the inode (data fork delalloc + * reservations or CoW extents of any kind), we need to free them so + * that inactivation doesn't fail to erase them. */ xfs_ilock(ip, XFS_ILOCK_SHARED); - if (xfs_iext_lookup_extent(ip, &ip->i_df, end_fsb, &icur, &imap)) + if (ip->i_delayed_blks || + xfs_iext_lookup_extent(ip, &ip->i_df, end_fsb, &icur, &imap)) found_blocks = true; xfs_iunlock(ip, XFS_ILOCK_SHARED); return found_blocks; diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h index eeadbaeccf88..fa284b64b2de 100644 --- a/include/asm-generic/vmlinux.lds.h +++ b/include/asm-generic/vmlinux.lds.h @@ -350,9 +350,9 @@ *(.data..decrypted) \ *(.ref.data) \ *(.data..shared_aligned) /* percpu related */ \ - *(.data.unlikely) \ + *(.data..unlikely) \ __start_once = .; \ - *(.data.once) \ + *(.data..once) \ __end_once = .; \ STRUCT_ALIGN(); \ *(__tracepoints) \ diff --git a/include/kunit/skbuff.h b/include/kunit/skbuff.h index 44d12370939a..345e1e8f0312 100644 --- a/include/kunit/skbuff.h +++ b/include/kunit/skbuff.h @@ -29,7 +29,7 @@ static void kunit_action_kfree_skb(void *p) static inline struct sk_buff *kunit_zalloc_skb(struct kunit *test, int len, gfp_t gfp) { - struct sk_buff *res = alloc_skb(len, GFP_KERNEL); + struct sk_buff *res = alloc_skb(len, gfp);
if (!res || skb_pad(res, len)) return NULL; diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h index 4fecf46ef681..c5063e0a38a0 100644 --- a/include/linux/blk-mq.h +++ b/include/linux/blk-mq.h @@ -925,6 +925,8 @@ void blk_freeze_queue_start(struct request_queue *q); void blk_mq_freeze_queue_wait(struct request_queue *q); int blk_mq_freeze_queue_wait_timeout(struct request_queue *q, unsigned long timeout); +void blk_mq_unfreeze_queue_non_owner(struct request_queue *q); +void blk_freeze_queue_start_non_owner(struct request_queue *q);
void blk_mq_map_queues(struct blk_mq_queue_map *qmap); void blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set, int nr_hw_queues); diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index 50c3b959da28..e84a93c40132 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -25,6 +25,7 @@ #include <linux/uuid.h> #include <linux/xarray.h> #include <linux/file.h> +#include <linux/lockdep.h>
struct module; struct request_queue; @@ -471,6 +472,11 @@ struct request_queue { struct xarray hctx_table;
struct percpu_ref q_usage_counter; + struct lock_class_key io_lock_cls_key; + struct lockdep_map io_lockdep_map; + + struct lock_class_key q_lock_cls_key; + struct lockdep_map q_lockdep_map;
struct request *last_merge;
@@ -566,6 +572,10 @@ struct request_queue { struct throtl_data *td; #endif struct rcu_head rcu_head; +#ifdef CONFIG_LOCKDEP + struct task_struct *mq_freeze_owner; + int mq_freeze_owner_depth; +#endif wait_queue_head_t mq_freeze_wq; /* * Protect concurrent access to q_usage_counter by @@ -1247,7 +1257,7 @@ static inline unsigned int queue_io_min(const struct request_queue *q) return q->limits.io_min; }
-static inline int bdev_io_min(struct block_device *bdev) +static inline unsigned int bdev_io_min(struct block_device *bdev) { return queue_io_min(bdev_get_queue(bdev)); } diff --git a/include/linux/bpf.h b/include/linux/bpf.h index bdadb0bb6cec..bc2e3dab0487 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -1373,7 +1373,8 @@ int arch_prepare_bpf_dispatcher(void *image, void *buf, s64 *funcs, int num_func void bpf_dispatcher_change_prog(struct bpf_dispatcher *d, struct bpf_prog *from, struct bpf_prog *to); /* Called only from JIT-enabled code, so there's no need for stubs. */ -void bpf_image_ksym_add(void *data, unsigned int size, struct bpf_ksym *ksym); +void bpf_image_ksym_init(void *data, unsigned int size, struct bpf_ksym *ksym); +void bpf_image_ksym_add(struct bpf_ksym *ksym); void bpf_image_ksym_del(struct bpf_ksym *ksym); void bpf_ksym_add(struct bpf_ksym *ksym); void bpf_ksym_del(struct bpf_ksym *ksym); @@ -3461,4 +3462,10 @@ static inline bool bpf_is_subprog(const struct bpf_prog *prog) return prog->aux->func_idx != 0; }
+static inline bool bpf_prog_is_raw_tp(const struct bpf_prog *prog) +{ + return prog->type == BPF_PROG_TYPE_TRACING && + prog->expected_attach_type == BPF_TRACE_RAW_TP; +} + #endif /* _LINUX_BPF_H */ diff --git a/include/linux/cleanup.h b/include/linux/cleanup.h index 038b2d523bf8..518bd1fd86fb 100644 --- a/include/linux/cleanup.h +++ b/include/linux/cleanup.h @@ -290,7 +290,7 @@ static inline class_##_name##_t class_##_name##ext##_constructor(_init_args) \ #define DEFINE_GUARD(_name, _type, _lock, _unlock) \ DEFINE_CLASS(_name, _type, if (_T) { _unlock; }, ({ _lock; _T; }), _type _T); \ static inline void * class_##_name##_lock_ptr(class_##_name##_t *_T) \ - { return *_T; } + { return (void *)(__force unsigned long)*_T; }
#define DEFINE_GUARD_COND(_name, _ext, _condlock) \ EXTEND_CLASS(_name, _ext, \ @@ -347,7 +347,7 @@ static inline void class_##_name##_destructor(class_##_name##_t *_T) \ \ static inline void *class_##_name##_lock_ptr(class_##_name##_t *_T) \ { \ - return _T->lock; \ + return (void *)(__force unsigned long)_T->lock; \ }
diff --git a/include/linux/compiler_attributes.h b/include/linux/compiler_attributes.h index 32284cd26d52..c16d4199bf92 100644 --- a/include/linux/compiler_attributes.h +++ b/include/linux/compiler_attributes.h @@ -94,19 +94,6 @@ # define __copy(symbol) #endif
-/* - * Optional: only supported since gcc >= 15 - * Optional: only supported since clang >= 18 - * - * gcc: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108896 - * clang: https://github.com/llvm/llvm-project/pull/76348 - */ -#if __has_attribute(__counted_by__) -# define __counted_by(member) __attribute__((__counted_by__(member))) -#else -# define __counted_by(member) -#endif - /* * Optional: not supported by gcc * Optional: only supported since clang >= 14.0 diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h index 1a957ea2f4fe..639be0f30b45 100644 --- a/include/linux/compiler_types.h +++ b/include/linux/compiler_types.h @@ -323,6 +323,25 @@ struct ftrace_likely_data { #define __no_sanitize_or_inline __always_inline #endif
+/* + * Optional: only supported since gcc >= 15 + * Optional: only supported since clang >= 18 + * + * gcc: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108896 + * clang: https://github.com/llvm/llvm-project/pull/76348 + * + * __bdos on clang < 19.1.2 can erroneously return 0: + * https://github.com/llvm/llvm-project/pull/110497 + * + * __bdos on clang < 19.1.3 can be off by 4: + * https://github.com/llvm/llvm-project/pull/112636 + */ +#ifdef CONFIG_CC_HAS_COUNTED_BY +# define __counted_by(member) __attribute__((__counted_by__(member))) +#else +# define __counted_by(member) +#endif + /* * Apply __counted_by() when the Endianness matches to increase test coverage. */ diff --git a/include/linux/f2fs_fs.h b/include/linux/f2fs_fs.h index b0b821edfd97..3b2ad444c002 100644 --- a/include/linux/f2fs_fs.h +++ b/include/linux/f2fs_fs.h @@ -24,10 +24,10 @@ #define NEW_ADDR ((block_t)-1) /* used as block_t addresses */ #define COMPRESS_ADDR ((block_t)-2) /* used as compressed data flag */
-#define F2FS_BYTES_TO_BLK(bytes) ((bytes) >> F2FS_BLKSIZE_BITS) -#define F2FS_BLK_TO_BYTES(blk) ((blk) << F2FS_BLKSIZE_BITS) +#define F2FS_BYTES_TO_BLK(bytes) ((unsigned long long)(bytes) >> F2FS_BLKSIZE_BITS) +#define F2FS_BLK_TO_BYTES(blk) ((unsigned long long)(blk) << F2FS_BLKSIZE_BITS) #define F2FS_BLK_END_BYTES(blk) (F2FS_BLK_TO_BYTES(blk + 1) - 1) -#define F2FS_BLK_ALIGN(x) (F2FS_BYTES_TO_BLK((x) + F2FS_BLKSIZE - 1)) +#define F2FS_BLK_ALIGN(x) (F2FS_BYTES_TO_BLK((x) + F2FS_BLKSIZE - 1))
/* 0, 1(node nid), 2(meta nid) are reserved node id */ #define F2FS_RESERVED_NODE_NUM 3 diff --git a/include/linux/fs.h b/include/linux/fs.h index 3559446279c1..4b5cad44a126 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -3726,6 +3726,6 @@ static inline bool vfs_empty_path(int dfd, const char __user *path) return !c; }
-bool generic_atomic_write_valid(struct iov_iter *iter, loff_t pos); +int generic_atomic_write_valid(struct kiocb *iocb, struct iov_iter *iter);
#endif /* _LINUX_FS_H */ diff --git a/include/linux/hisi_acc_qm.h b/include/linux/hisi_acc_qm.h index 9d7754ad5e9b..43ad280935e3 100644 --- a/include/linux/hisi_acc_qm.h +++ b/include/linux/hisi_acc_qm.h @@ -229,6 +229,12 @@ struct hisi_qm_status {
struct hisi_qm;
+enum acc_err_result { + ACC_ERR_NONE, + ACC_ERR_NEED_RESET, + ACC_ERR_RECOVERED, +}; + struct hisi_qm_err_info { char *acpi_rst; u32 msi_wr_port; @@ -257,9 +263,9 @@ struct hisi_qm_err_ini { void (*close_axi_master_ooo)(struct hisi_qm *qm); void (*open_sva_prefetch)(struct hisi_qm *qm); void (*close_sva_prefetch)(struct hisi_qm *qm); - void (*log_dev_hw_err)(struct hisi_qm *qm, u32 err_sts); void (*show_last_dfx_regs)(struct hisi_qm *qm); void (*err_info_init)(struct hisi_qm *qm); + enum acc_err_result (*get_err_result)(struct hisi_qm *qm); };
struct hisi_qm_cap_info { diff --git a/include/linux/intel_vsec.h b/include/linux/intel_vsec.h index 11ee185566c3..b94beab64610 100644 --- a/include/linux/intel_vsec.h +++ b/include/linux/intel_vsec.h @@ -74,10 +74,11 @@ enum intel_vsec_quirks { * @pdev: PCI device reference for the callback's use * @guid: ID of data to acccss * @data: buffer for the data to be copied + * @off: offset into the requested buffer * @count: size of buffer */ struct pmt_callbacks { - int (*read_telem)(struct pci_dev *pdev, u32 guid, u64 *data, u32 count); + int (*read_telem)(struct pci_dev *pdev, u32 guid, u64 *data, loff_t off, u32 count); };
/** diff --git a/include/linux/jiffies.h b/include/linux/jiffies.h index 1220f0fbe5bf..5d21dacd62bc 100644 --- a/include/linux/jiffies.h +++ b/include/linux/jiffies.h @@ -502,7 +502,7 @@ static inline unsigned long _msecs_to_jiffies(const unsigned int m) * - all other values are converted to jiffies by either multiplying * the input value by a factor or dividing it with a factor and * handling any 32-bit overflows. - * for the details see __msecs_to_jiffies() + * for the details see _msecs_to_jiffies() * * msecs_to_jiffies() checks for the passed in value being a constant * via __builtin_constant_p() allowing gcc to eliminate most of the diff --git a/include/linux/kfifo.h b/include/linux/kfifo.h index 564868bdce89..fd743d4c4b4b 100644 --- a/include/linux/kfifo.h +++ b/include/linux/kfifo.h @@ -37,7 +37,6 @@ */
#include <linux/array_size.h> -#include <linux/dma-mapping.h> #include <linux/spinlock.h> #include <linux/stddef.h> #include <linux/types.h> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 45be36e5285f..85fe9d0ebb91 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -2382,12 +2382,6 @@ static inline int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu) } #endif /* CONFIG_HAVE_KVM_VCPU_RUN_PID_CHANGE */
-typedef int (*kvm_vm_thread_fn_t)(struct kvm *kvm, uintptr_t data); - -int kvm_vm_create_worker_thread(struct kvm *kvm, kvm_vm_thread_fn_t thread_fn, - uintptr_t data, const char *name, - struct task_struct **thread_ptr); - #ifdef CONFIG_KVM_XFER_TO_GUEST_WORK static inline void kvm_handle_signal_exit(struct kvm_vcpu *vcpu) { diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h index 217f7abf2cbf..67964dc4db95 100644 --- a/include/linux/lockdep.h +++ b/include/linux/lockdep.h @@ -173,7 +173,7 @@ static inline void lockdep_init_map(struct lockdep_map *lock, const char *name, (lock)->dep_map.lock_type)
#define lockdep_set_subclass(lock, sub) \ - lockdep_init_map_type(&(lock)->dep_map, #lock, (lock)->dep_map.key, sub,\ + lockdep_init_map_type(&(lock)->dep_map, (lock)->dep_map.name, (lock)->dep_map.key, sub,\ (lock)->dep_map.wait_type_inner, \ (lock)->dep_map.wait_type_outer, \ (lock)->dep_map.lock_type) diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h index 39a7714605a7..d7cb1e5ecbda 100644 --- a/include/linux/mmdebug.h +++ b/include/linux/mmdebug.h @@ -46,7 +46,7 @@ void vma_iter_dump_tree(const struct vma_iterator *vmi); } \ } while (0) #define VM_WARN_ON_ONCE_PAGE(cond, page) ({ \ - static bool __section(".data.once") __warned; \ + static bool __section(".data..once") __warned; \ int __ret_warn_once = !!(cond); \ \ if (unlikely(__ret_warn_once && !__warned)) { \ @@ -66,7 +66,7 @@ void vma_iter_dump_tree(const struct vma_iterator *vmi); unlikely(__ret_warn); \ }) #define VM_WARN_ON_ONCE_FOLIO(cond, folio) ({ \ - static bool __section(".data.once") __warned; \ + static bool __section(".data..once") __warned; \ int __ret_warn_once = !!(cond); \ \ if (unlikely(__ret_warn_once && !__warned)) { \ @@ -77,7 +77,7 @@ void vma_iter_dump_tree(const struct vma_iterator *vmi); unlikely(__ret_warn_once); \ }) #define VM_WARN_ON_ONCE_MM(cond, mm) ({ \ - static bool __section(".data.once") __warned; \ + static bool __section(".data..once") __warned; \ int __ret_warn_once = !!(cond); \ \ if (unlikely(__ret_warn_once && !__warned)) { \ diff --git a/include/linux/netpoll.h b/include/linux/netpoll.h index cd4e28db0cbd..959a4daacea1 100644 --- a/include/linux/netpoll.h +++ b/include/linux/netpoll.h @@ -72,7 +72,7 @@ static inline void *netpoll_poll_lock(struct napi_struct *napi) { struct net_device *dev = napi->dev;
- if (dev && dev->npinfo) { + if (dev && rcu_access_pointer(dev->npinfo)) { int owner = smp_processor_id();
while (cmpxchg(&napi->poll_owner, -1, owner) != -1) diff --git a/include/linux/nfslocalio.h b/include/linux/nfslocalio.h index 3982fea79919..9202f4b24343 100644 --- a/include/linux/nfslocalio.h +++ b/include/linux/nfslocalio.h @@ -55,7 +55,7 @@ struct nfsd_localio_operations { const struct cred *, const struct nfs_fh *, const fmode_t); - void (*nfsd_file_put_local)(struct nfsd_file *); + struct net *(*nfsd_file_put_local)(struct nfsd_file *); struct file *(*nfsd_file_file)(struct nfsd_file *); } ____cacheline_aligned;
@@ -66,7 +66,7 @@ struct nfsd_file *nfs_open_local_fh(nfs_uuid_t *, struct rpc_clnt *, const struct cred *, const struct nfs_fh *, const fmode_t);
-static inline void nfs_to_nfsd_file_put_local(struct nfsd_file *localio) +static inline void nfs_to_nfsd_net_put(struct net *net) { /* * Once reference to nfsd_serv is dropped, NFSD could be @@ -74,10 +74,22 @@ static inline void nfs_to_nfsd_file_put_local(struct nfsd_file *localio) * by always taking RCU. */ rcu_read_lock(); - nfs_to->nfsd_file_put_local(localio); + nfs_to->nfsd_serv_put(net); rcu_read_unlock(); }
+static inline void nfs_to_nfsd_file_put_local(struct nfsd_file *localio) +{ + /* + * Must not hold RCU otherwise nfsd_file_put() can easily trigger: + * "Voluntary context switch within RCU read-side critical section!" + * by scheduling deep in underlying filesystem (e.g. XFS). + */ + struct net *net = nfs_to->nfsd_file_put_local(localio); + + nfs_to_nfsd_net_put(net); +} + #else /* CONFIG_NFS_LOCALIO */ static inline void nfsd_localio_ops_init(void) { diff --git a/include/linux/of_fdt.h b/include/linux/of_fdt.h index d69ad5bb1eb1..b8d6c0c20876 100644 --- a/include/linux/of_fdt.h +++ b/include/linux/of_fdt.h @@ -31,6 +31,7 @@ extern void *of_fdt_unflatten_tree(const unsigned long *blob, extern int __initdata dt_root_addr_cells; extern int __initdata dt_root_size_cells; extern void *initial_boot_params; +extern phys_addr_t initial_boot_params_pa;
extern char __dtb_start[]; extern char __dtb_end[]; @@ -70,8 +71,8 @@ extern u64 dt_mem_next_cell(int s, const __be32 **cellp); /* Early flat tree scan hooks */ extern int early_init_dt_scan_root(void);
-extern bool early_init_dt_scan(void *params); -extern bool early_init_dt_verify(void *params); +extern bool early_init_dt_scan(void *dt_virt, phys_addr_t dt_phys); +extern bool early_init_dt_verify(void *dt_virt, phys_addr_t dt_phys); extern void early_init_dt_scan_nodes(void);
extern const char *of_flat_dt_get_machine_name(void); diff --git a/include/linux/once.h b/include/linux/once.h index bc714d414448..30346fcdc799 100644 --- a/include/linux/once.h +++ b/include/linux/once.h @@ -46,7 +46,7 @@ void __do_once_sleepable_done(bool *done, struct static_key_true *once_key, #define DO_ONCE(func, ...) \ ({ \ bool ___ret = false; \ - static bool __section(".data.once") ___done = false; \ + static bool __section(".data..once") ___done = false; \ static DEFINE_STATIC_KEY_TRUE(___once_key); \ if (static_branch_unlikely(&___once_key)) { \ unsigned long ___flags; \ @@ -64,7 +64,7 @@ void __do_once_sleepable_done(bool *done, struct static_key_true *once_key, #define DO_ONCE_SLEEPABLE(func, ...) \ ({ \ bool ___ret = false; \ - static bool __section(".data.once") ___done = false; \ + static bool __section(".data..once") ___done = false; \ static DEFINE_STATIC_KEY_TRUE(___once_key); \ if (static_branch_unlikely(&___once_key)) { \ ___ret = __do_once_sleepable_start(&___done); \ diff --git a/include/linux/once_lite.h b/include/linux/once_lite.h index b7bce4983638..27de7bc32a06 100644 --- a/include/linux/once_lite.h +++ b/include/linux/once_lite.h @@ -12,7 +12,7 @@
#define __ONCE_LITE_IF(condition) \ ({ \ - static bool __section(".data.once") __already_done; \ + static bool __section(".data..once") __already_done; \ bool __ret_cond = !!(condition); \ bool __ret_once = false; \ \ diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h index 58d84c59f3dd..48e5c03df1dd 100644 --- a/include/linux/rcupdate.h +++ b/include/linux/rcupdate.h @@ -401,7 +401,7 @@ static inline int debug_lockdep_rcu_enabled(void) */ #define RCU_LOCKDEP_WARN(c, s) \ do { \ - static bool __section(".data.unlikely") __warned; \ + static bool __section(".data..unlikely") __warned; \ if (debug_lockdep_rcu_enabled() && (c) && \ debug_lockdep_rcu_enabled() && !__warned) { \ __warned = true; \ diff --git a/include/linux/rwlock_rt.h b/include/linux/rwlock_rt.h index 8544ff05e594..7d81fc6918ee 100644 --- a/include/linux/rwlock_rt.h +++ b/include/linux/rwlock_rt.h @@ -24,13 +24,13 @@ do { \ __rt_rwlock_init(rwl, #rwl, &__key); \ } while (0)
-extern void rt_read_lock(rwlock_t *rwlock); +extern void rt_read_lock(rwlock_t *rwlock) __acquires(rwlock); extern int rt_read_trylock(rwlock_t *rwlock); -extern void rt_read_unlock(rwlock_t *rwlock); -extern void rt_write_lock(rwlock_t *rwlock); -extern void rt_write_lock_nested(rwlock_t *rwlock, int subclass); +extern void rt_read_unlock(rwlock_t *rwlock) __releases(rwlock); +extern void rt_write_lock(rwlock_t *rwlock) __acquires(rwlock); +extern void rt_write_lock_nested(rwlock_t *rwlock, int subclass) __acquires(rwlock); extern int rt_write_trylock(rwlock_t *rwlock); -extern void rt_write_unlock(rwlock_t *rwlock); +extern void rt_write_unlock(rwlock_t *rwlock) __releases(rwlock);
static __always_inline void read_lock(rwlock_t *rwlock) { diff --git a/include/linux/sched/ext.h b/include/linux/sched/ext.h index 1ddbde64a31b..2799e7284fff 100644 --- a/include/linux/sched/ext.h +++ b/include/linux/sched/ext.h @@ -199,7 +199,6 @@ struct sched_ext_entity { #ifdef CONFIG_EXT_GROUP_SCHED struct cgroup *cgrp_moving_from; #endif - /* must be the last field, see init_scx_entity() */ struct list_head tasks_node; };
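For context, the __acquires()/__releases() markers added to the RT rwlock prototypes above (and to the RT spinlock prototypes further below) are sparse context annotations: with "make C=1", sparse checks that every path through a caller acquires and releases the lock a balanced number of times. A hedged sketch of the pattern with illustrative names:

/* Illustrative only: annotated helpers let sparse flag callers that
 * return with the lock still held or drop it twice.
 */
extern void my_table_lock(rwlock_t *rwlock) __acquires(rwlock);
extern void my_table_unlock(rwlock_t *rwlock) __releases(rwlock);

static void my_table_bump(rwlock_t *rwlock, int *counter)
{
	my_table_lock(rwlock);		/* sparse: context +1 */
	(*counter)++;
	my_table_unlock(rwlock);	/* sparse: context -1, balanced on return */
}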
diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h index fffeb754880f..5298765d6ca4 100644 --- a/include/linux/seqlock.h +++ b/include/linux/seqlock.h @@ -621,6 +621,23 @@ static __always_inline unsigned raw_read_seqcount_latch(const seqcount_latch_t * return READ_ONCE(s->seqcount.sequence); }
+/** + * read_seqcount_latch() - pick even/odd latch data copy + * @s: Pointer to seqcount_latch_t + * + * See write_seqcount_latch() for details and a full reader/writer usage + * example. + * + * Return: sequence counter raw value. Use the lowest bit as an index for + * picking which data copy to read. The full counter must then be checked + * with read_seqcount_latch_retry(). + */ +static __always_inline unsigned read_seqcount_latch(const seqcount_latch_t *s) +{ + kcsan_atomic_next(KCSAN_SEQLOCK_REGION_MAX); + return raw_read_seqcount_latch(s); +} + /** * raw_read_seqcount_latch_retry() - end a seqcount_latch_t read section * @s: Pointer to seqcount_latch_t @@ -635,9 +652,34 @@ raw_read_seqcount_latch_retry(const seqcount_latch_t *s, unsigned start) return unlikely(READ_ONCE(s->seqcount.sequence) != start); }
+/** + * read_seqcount_latch_retry() - end a seqcount_latch_t read section + * @s: Pointer to seqcount_latch_t + * @start: count, from read_seqcount_latch() + * + * Return: true if a read section retry is required, else false + */ +static __always_inline int +read_seqcount_latch_retry(const seqcount_latch_t *s, unsigned start) +{ + kcsan_atomic_next(0); + return raw_read_seqcount_latch_retry(s, start); +} + /** * raw_write_seqcount_latch() - redirect latch readers to even/odd copy * @s: Pointer to seqcount_latch_t + */ +static __always_inline void raw_write_seqcount_latch(seqcount_latch_t *s) +{ + smp_wmb(); /* prior stores before incrementing "sequence" */ + s->seqcount.sequence++; + smp_wmb(); /* increment "sequence" before following stores */ +} + +/** + * write_seqcount_latch_begin() - redirect latch readers to odd copy + * @s: Pointer to seqcount_latch_t * * The latch technique is a multiversion concurrency control method that allows * queries during non-atomic modifications. If you can guarantee queries never @@ -665,17 +707,11 @@ raw_read_seqcount_latch_retry(const seqcount_latch_t *s, unsigned start) * * void latch_modify(struct latch_struct *latch, ...) * { - * smp_wmb(); // Ensure that the last data[1] update is visible - * latch->seq.sequence++; - * smp_wmb(); // Ensure that the seqcount update is visible - * + * write_seqcount_latch_begin(&latch->seq); * modify(latch->data[0], ...); - * - * smp_wmb(); // Ensure that the data[0] update is visible - * latch->seq.sequence++; - * smp_wmb(); // Ensure that the seqcount update is visible - * + * write_seqcount_latch(&latch->seq); * modify(latch->data[1], ...); + * write_seqcount_latch_end(&latch->seq); * } * * The query will have a form like:: @@ -686,13 +722,13 @@ raw_read_seqcount_latch_retry(const seqcount_latch_t *s, unsigned start) * unsigned seq, idx; * * do { - * seq = raw_read_seqcount_latch(&latch->seq); + * seq = read_seqcount_latch(&latch->seq); * * idx = seq & 0x01; * entry = data_query(latch->data[idx], ...); * * // This includes needed smp_rmb() - * } while (raw_read_seqcount_latch_retry(&latch->seq, seq)); + * } while (read_seqcount_latch_retry(&latch->seq, seq)); * * return entry; * } @@ -716,11 +752,31 @@ raw_read_seqcount_latch_retry(const seqcount_latch_t *s, unsigned start) * When data is a dynamic data structure; one should use regular RCU * patterns to manage the lifetimes of the objects within. */ -static inline void raw_write_seqcount_latch(seqcount_latch_t *s) +static __always_inline void write_seqcount_latch_begin(seqcount_latch_t *s) { - smp_wmb(); /* prior stores before incrementing "sequence" */ - s->seqcount.sequence++; - smp_wmb(); /* increment "sequence" before following stores */ + kcsan_nestable_atomic_begin(); + raw_write_seqcount_latch(s); +} + +/** + * write_seqcount_latch() - redirect latch readers to even copy + * @s: Pointer to seqcount_latch_t + */ +static __always_inline void write_seqcount_latch(seqcount_latch_t *s) +{ + raw_write_seqcount_latch(s); +} + +/** + * write_seqcount_latch_end() - end a seqcount_latch_t write section + * @s: Pointer to seqcount_latch_t + * + * Marks the end of a seqcount_latch_t writer section, after all copies of the + * latch-protected data have been updated. + */ +static __always_inline void write_seqcount_latch_end(seqcount_latch_t *s) +{ + kcsan_nestable_atomic_end(); }
#define __SEQLOCK_UNLOCKED(lockname) \ @@ -754,11 +810,7 @@ static inline void raw_write_seqcount_latch(seqcount_latch_t *s) */ static inline unsigned read_seqbegin(const seqlock_t *sl) { - unsigned ret = read_seqcount_begin(&sl->seqcount); - - kcsan_atomic_next(0); /* non-raw usage, assume closing read_seqretry() */ - kcsan_flat_atomic_begin(); - return ret; + return read_seqcount_begin(&sl->seqcount); }
/** @@ -774,12 +826,6 @@ static inline unsigned read_seqbegin(const seqlock_t *sl) */ static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start) { - /* - * Assume not nested: read_seqretry() may be called multiple times when - * completing read critical section. - */ - kcsan_flat_atomic_end(); - return read_seqcount_retry(&sl->seqcount, start); }
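The seqlock.h additions above pair write_seqcount_latch_begin()/write_seqcount_latch()/write_seqcount_latch_end() on the writer side with read_seqcount_latch()/read_seqcount_latch_retry() on the reader side. A minimal, hedged instantiation of the two-copy pattern from the kernel-doc example (the struct and helper names are illustrative, not kernel APIs):

struct latched_u64 {
	seqcount_latch_t	seq;
	u64			copy[2];
};

/* Writer: update both copies, flipping readers between them. */
static void latched_u64_set(struct latched_u64 *lv, u64 val)
{
	write_seqcount_latch_begin(&lv->seq);	/* readers now pick copy[1] */
	lv->copy[0] = val;
	write_seqcount_latch(&lv->seq);		/* readers now pick copy[0] */
	lv->copy[1] = val;
	write_seqcount_latch_end(&lv->seq);
}

/* Reader: lock-free, retries only if it raced with a writer. */
static u64 latched_u64_get(struct latched_u64 *lv)
{
	unsigned int seq;
	u64 val;

	do {
		seq = read_seqcount_latch(&lv->seq);
		val = lv->copy[seq & 1];
	} while (read_seqcount_latch_retry(&lv->seq, seq));

	return val;
}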
diff --git a/include/linux/spinlock_rt.h b/include/linux/spinlock_rt.h index 61c49b16f69a..6175cd682ca0 100644 --- a/include/linux/spinlock_rt.h +++ b/include/linux/spinlock_rt.h @@ -16,26 +16,25 @@ static inline void __rt_spin_lock_init(spinlock_t *lock, const char *name, } #endif
-#define spin_lock_init(slock) \ +#define __spin_lock_init(slock, name, key, percpu) \ do { \ - static struct lock_class_key __key; \ - \ rt_mutex_base_init(&(slock)->lock); \ - __rt_spin_lock_init(slock, #slock, &__key, false); \ + __rt_spin_lock_init(slock, name, key, percpu); \ } while (0)
-#define local_spin_lock_init(slock) \ +#define _spin_lock_init(slock, percpu) \ do { \ static struct lock_class_key __key; \ - \ - rt_mutex_base_init(&(slock)->lock); \ - __rt_spin_lock_init(slock, #slock, &__key, true); \ + __spin_lock_init(slock, #slock, &__key, percpu); \ } while (0)
-extern void rt_spin_lock(spinlock_t *lock); -extern void rt_spin_lock_nested(spinlock_t *lock, int subclass); -extern void rt_spin_lock_nest_lock(spinlock_t *lock, struct lockdep_map *nest_lock); -extern void rt_spin_unlock(spinlock_t *lock); +#define spin_lock_init(slock) _spin_lock_init(slock, false) +#define local_spin_lock_init(slock) _spin_lock_init(slock, true) + +extern void rt_spin_lock(spinlock_t *lock) __acquires(lock); +extern void rt_spin_lock_nested(spinlock_t *lock, int subclass) __acquires(lock); +extern void rt_spin_lock_nest_lock(spinlock_t *lock, struct lockdep_map *nest_lock) __acquires(lock); +extern void rt_spin_unlock(spinlock_t *lock) __releases(lock); extern void rt_spin_lock_unlock(spinlock_t *lock); extern int rt_spin_trylock_bh(spinlock_t *lock); extern int rt_spin_trylock(spinlock_t *lock); diff --git a/include/media/v4l2-dv-timings.h b/include/media/v4l2-dv-timings.h index 8fa963326bf6..c64096b5c782 100644 --- a/include/media/v4l2-dv-timings.h +++ b/include/media/v4l2-dv-timings.h @@ -146,15 +146,18 @@ void v4l2_print_dv_timings(const char *dev_prefix, const char *prefix, * @polarities: the horizontal and vertical polarities (same as struct * v4l2_bt_timings polarities). * @interlaced: if this flag is true, it indicates interlaced format + * @cap: the v4l2_dv_timings_cap capabilities. * @fmt: the resulting timings. * * This function will attempt to detect if the given values correspond to a * valid CVT format. If so, then it will return true, and fmt will be filled * in with the found CVT timings. */ -bool v4l2_detect_cvt(unsigned frame_height, unsigned hfreq, unsigned vsync, - unsigned active_width, u32 polarities, bool interlaced, - struct v4l2_dv_timings *fmt); +bool v4l2_detect_cvt(unsigned int frame_height, unsigned int hfreq, + unsigned int vsync, unsigned int active_width, + u32 polarities, bool interlaced, + const struct v4l2_dv_timings_cap *cap, + struct v4l2_dv_timings *fmt);
/** * v4l2_detect_gtf - detect if the given timings follow the GTF standard @@ -170,15 +173,18 @@ bool v4l2_detect_cvt(unsigned frame_height, unsigned hfreq, unsigned vsync, * image height, so it has to be passed explicitly. Usually * the native screen aspect ratio is used for this. If it * is not filled in correctly, then 16:9 will be assumed. + * @cap: the v4l2_dv_timings_cap capabilities. * @fmt: the resulting timings. * * This function will attempt to detect if the given values correspond to a * valid GTF format. If so, then it will return true, and fmt will be filled * in with the found GTF timings. */ -bool v4l2_detect_gtf(unsigned frame_height, unsigned hfreq, unsigned vsync, - u32 polarities, bool interlaced, struct v4l2_fract aspect, - struct v4l2_dv_timings *fmt); +bool v4l2_detect_gtf(unsigned int frame_height, unsigned int hfreq, + unsigned int vsync, u32 polarities, bool interlaced, + struct v4l2_fract aspect, + const struct v4l2_dv_timings_cap *cap, + struct v4l2_dv_timings *fmt);
/** * v4l2_calc_aspect_ratio - calculate the aspect ratio based on bytes diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h index bab1e3d7452a..a1864cff616a 100644 --- a/include/net/bluetooth/hci.h +++ b/include/net/bluetooth/hci.h @@ -1,7 +1,7 @@ /* BlueZ - Bluetooth protocol stack for Linux Copyright (C) 2000-2001 Qualcomm Incorporated - Copyright 2023 NXP + Copyright 2023-2024 NXP
Written 2000,2001 by Maxim Krasnyansky maxk@qualcomm.com
@@ -29,6 +29,7 @@ #define HCI_MAX_ACL_SIZE 1024 #define HCI_MAX_SCO_SIZE 255 #define HCI_MAX_ISO_SIZE 251 +#define HCI_MAX_ISO_BIS 31 #define HCI_MAX_EVENT_SIZE 260 #define HCI_MAX_FRAME_SIZE (HCI_MAX_ACL_SIZE + 4)
@@ -683,6 +684,7 @@ enum { #define HCI_RSSI_INVALID 127
#define HCI_SYNC_HANDLE_INVALID 0xffff +#define HCI_SID_INVALID 0xff
#define HCI_ROLE_MASTER 0x00 #define HCI_ROLE_SLAVE 0x01 diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h index 88265d37aa72..4c185a08c3a3 100644 --- a/include/net/bluetooth/hci_core.h +++ b/include/net/bluetooth/hci_core.h @@ -668,6 +668,7 @@ struct hci_conn { __u8 adv_instance; __u16 handle; __u16 sync_handle; + __u8 sid; __u16 state; __u16 mtu; __u8 mode; @@ -710,6 +711,9 @@ struct hci_conn { __s8 tx_power; __s8 max_tx_power; struct bt_iso_qos iso_qos; + __u8 num_bis; + __u8 bis[HCI_MAX_ISO_BIS]; + unsigned long flags;
enum conn_reasons conn_reason; @@ -945,8 +949,10 @@ enum { HCI_CONN_PER_ADV, HCI_CONN_BIG_CREATED, HCI_CONN_CREATE_CIS, + HCI_CONN_CREATE_BIG_SYNC, HCI_CONN_BIG_SYNC, HCI_CONN_BIG_SYNC_FAILED, + HCI_CONN_CREATE_PA_SYNC, HCI_CONN_PA_SYNC, HCI_CONN_PA_SYNC_FAILED, }; @@ -1099,6 +1105,30 @@ static inline struct hci_conn *hci_conn_hash_lookup_bis(struct hci_dev *hdev, return NULL; }
+static inline struct hci_conn *hci_conn_hash_lookup_sid(struct hci_dev *hdev, + __u8 sid, + bdaddr_t *dst, + __u8 dst_type) +{ + struct hci_conn_hash *h = &hdev->conn_hash; + struct hci_conn *c; + + rcu_read_lock(); + + list_for_each_entry_rcu(c, &h->list, list) { + if (c->type != ISO_LINK || bacmp(&c->dst, dst) || + c->dst_type != dst_type || c->sid != sid) + continue; + + rcu_read_unlock(); + return c; + } + + rcu_read_unlock(); + + return NULL; +} + static inline struct hci_conn * hci_conn_hash_lookup_per_adv_bis(struct hci_dev *hdev, bdaddr_t *ba, @@ -1269,6 +1299,30 @@ static inline struct hci_conn *hci_conn_hash_lookup_big(struct hci_dev *hdev, return NULL; }
+static inline struct hci_conn * +hci_conn_hash_lookup_big_sync_pend(struct hci_dev *hdev, + __u8 handle, __u8 num_bis) +{ + struct hci_conn_hash *h = &hdev->conn_hash; + struct hci_conn *c; + + rcu_read_lock(); + + list_for_each_entry_rcu(c, &h->list, list) { + if (c->type != ISO_LINK) + continue; + + if (handle == c->iso_qos.bcast.big && num_bis == c->num_bis) { + rcu_read_unlock(); + return c; + } + } + + rcu_read_unlock(); + + return NULL; +} + static inline struct hci_conn * hci_conn_hash_lookup_big_state(struct hci_dev *hdev, __u8 handle, __u16 state) { @@ -1328,6 +1382,13 @@ hci_conn_hash_lookup_pa_sync_handle(struct hci_dev *hdev, __u16 sync_handle) if (c->type != ISO_LINK) continue;
+ /* Ignore the listen hcon, we are looking + * for the child hcon that was created as + * a result of the PA sync established event. + */ + if (c->state == BT_LISTEN) + continue; + if (c->sync_handle == sync_handle) { rcu_read_unlock(); return c; @@ -1445,6 +1506,8 @@ bool hci_setup_sync(struct hci_conn *conn, __u16 handle); void hci_sco_setup(struct hci_conn *conn, __u8 status); bool hci_iso_setup_path(struct hci_conn *conn); int hci_le_create_cis_pending(struct hci_dev *hdev); +int hci_pa_create_sync_pending(struct hci_dev *hdev); +int hci_le_big_create_sync_pending(struct hci_dev *hdev); int hci_conn_check_create_cis(struct hci_conn *conn);
struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst, diff --git a/include/net/net_debug.h b/include/net/net_debug.h index 1e74684cbbdb..4a79204c8d30 100644 --- a/include/net/net_debug.h +++ b/include/net/net_debug.h @@ -27,7 +27,7 @@ void netdev_info(const struct net_device *dev, const char *format, ...);
#define netdev_level_once(level, dev, fmt, ...) \ do { \ - static bool __section(".data.once") __print_once; \ + static bool __section(".data..once") __print_once; \ \ if (!__print_once) { \ __print_once = true; \ diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h index aa8ede439905..67551133b522 100644 --- a/include/rdma/ib_verbs.h +++ b/include/rdma/ib_verbs.h @@ -2948,6 +2948,14 @@ int rdma_user_mmap_entry_insert_range(struct ib_ucontext *ucontext, size_t length, u32 min_pgoff, u32 max_pgoff);
+#if IS_ENABLED(CONFIG_INFINIBAND_USER_ACCESS) +void rdma_user_mmap_disassociate(struct ib_device *device); +#else +static inline void rdma_user_mmap_disassociate(struct ib_device *device) +{ +} +#endif + static inline int rdma_user_mmap_entry_insert_exact(struct ib_ucontext *ucontext, struct rdma_user_mmap_entry *entry, @@ -4726,6 +4734,9 @@ ib_get_vector_affinity(struct ib_device *device, int comp_vector) * @device: the rdma device */ void rdma_roce_rescan_device(struct ib_device *ibdev); +void rdma_roce_rescan_port(struct ib_device *ib_dev, u32 port); +void roce_del_all_netdev_gids(struct ib_device *ib_dev, + u32 port, struct net_device *ndev);
struct ib_ucontext *ib_uverbs_get_ucontext_file(struct ib_uverbs_file *ufile);
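The rdma_user_mmap_disassociate() addition above follows the usual compiled-out-stub convention: when the option is disabled, the header supplies an empty static inline so callers need no #ifdef of their own. A hedged, generic illustration of that pattern (names are made up):

#if IS_ENABLED(CONFIG_MY_FEATURE)
void my_feature_detach(struct my_dev *dev);
#else
static inline void my_feature_detach(struct my_dev *dev)
{
	/* feature compiled out: nothing to do */
}
#endif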
diff --git a/include/uapi/linux/rtnetlink.h b/include/uapi/linux/rtnetlink.h index 3b687d20c9ed..db7254d52d93 100644 --- a/include/uapi/linux/rtnetlink.h +++ b/include/uapi/linux/rtnetlink.h @@ -174,7 +174,7 @@ enum { #define RTM_GETLINKPROP RTM_GETLINKPROP
RTM_NEWVLAN = 112, -#define RTM_NEWNVLAN RTM_NEWVLAN +#define RTM_NEWVLAN RTM_NEWVLAN RTM_DELVLAN, #define RTM_DELVLAN RTM_DELVLAN RTM_GETVLAN, diff --git a/init/Kconfig b/init/Kconfig index c521e1421ad4..7256fa127530 100644 --- a/init/Kconfig +++ b/init/Kconfig @@ -120,6 +120,15 @@ config CC_HAS_ASM_INLINE config CC_HAS_NO_PROFILE_FN_ATTR def_bool $(success,echo '__attribute__((no_profile_instrument_function)) int x();' | $(CC) -x c - -c -o /dev/null -Werror)
+config CC_HAS_COUNTED_BY + # TODO: when gcc 15 is released remove the build test and add + # a gcc version check + def_bool $(success,echo 'struct flex { int count; int array[] __attribute__((__counted_by__(count))); };' | $(CC) $(CLANG_FLAGS) -x c - -c -o /dev/null -Werror) + # clang needs to be at least 19.1.3 to avoid __bdos miscalculations + # https://github.com/llvm/llvm-project/pull/110497 + # https://github.com/llvm/llvm-project/pull/112636 + depends on !(CC_IS_CLANG && CLANG_VERSION < 190103) + config PAHOLE_VERSION int default $(shell,$(srctree)/scripts/pahole-version.sh $(PAHOLE)) diff --git a/init/initramfs.c b/init/initramfs.c index bc911e466d5b..b2f7583bb1f5 100644 --- a/init/initramfs.c +++ b/init/initramfs.c @@ -360,6 +360,15 @@ static int __init do_name(void) { state = SkipIt; next_state = Reset; + + /* name_len > 0 && name_len <= PATH_MAX checked in do_header */ + if (collected[name_len - 1] != '\0') { + pr_err("initramfs name without nulterm: %.*s\n", + (int)name_len, collected); + error("malformed archive"); + return 1; + } + if (strcmp(collected, "TRAILER!!!") == 0) { free_hash(); return 0; @@ -424,6 +433,12 @@ static int __init do_copy(void)
static int __init do_symlink(void) { + if (collected[name_len - 1] != '\0') { + pr_err("initramfs symlink without nulterm: %.*s\n", + (int)name_len, collected); + error("malformed archive"); + return 1; + } collected[N_ALIGN(name_len) + body_len] = '\0'; clean_path(collected, 0); init_symlink(collected + N_ALIGN(name_len), collected); diff --git a/io_uring/memmap.c b/io_uring/memmap.c index a0f32a255fd1..6d151e46f3d6 100644 --- a/io_uring/memmap.c +++ b/io_uring/memmap.c @@ -72,6 +72,8 @@ void *io_pages_map(struct page ***out_pages, unsigned short *npages, ret = io_mem_alloc_compound(pages, nr_pages, size, gfp); if (!IS_ERR(ret)) goto done; + if (nr_pages == 1) + goto fail;
ret = io_mem_alloc_single(pages, nr_pages, size, gfp); if (!IS_ERR(ret)) { @@ -80,7 +82,7 @@ void *io_pages_map(struct page ***out_pages, unsigned short *npages, *npages = nr_pages; return ret; } - +fail: kvfree(pages); *out_pages = NULL; *npages = 0; @@ -135,7 +137,12 @@ struct page **io_pin_pages(unsigned long uaddr, unsigned long len, int *npages) struct page **pages; int ret;
- end = (uaddr + len + PAGE_SIZE - 1) >> PAGE_SHIFT; + if (check_add_overflow(uaddr, len, &end)) + return ERR_PTR(-EOVERFLOW); + if (check_add_overflow(end, PAGE_SIZE - 1, &end)) + return ERR_PTR(-EOVERFLOW); + + end = end >> PAGE_SHIFT; start = uaddr >> PAGE_SHIFT; nr_pages = end - start; if (WARN_ON_ONCE(!nr_pages)) diff --git a/ipc/namespace.c b/ipc/namespace.c index 6ecc30effd3e..4df91ceeeafe 100644 --- a/ipc/namespace.c +++ b/ipc/namespace.c @@ -83,13 +83,15 @@ static struct ipc_namespace *create_ipc_ns(struct user_namespace *user_ns,
err = msg_init_ns(ns); if (err) - goto fail_put; + goto fail_ipc;
sem_init_ns(ns); shm_init_ns(ns);
return ns;
+fail_ipc: + retire_ipc_sysctls(ns); fail_mq: retire_mq_sysctls(ns);
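For reference, the io_pin_pages() change a few hunks above replaces the open-coded "uaddr + len + PAGE_SIZE - 1" rounding with overflow-checked additions, so a crafted address/length pair can no longer wrap past the top of the address space and yield a bogus page count. A hedged, self-contained sketch of the same guard, using the compiler builtin that check_add_overflow() wraps:

#include <stdbool.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

/* Returns false (leaving *nr_pages untouched) if the range calculation wraps. */
static bool range_to_nr_pages(unsigned long uaddr, unsigned long len,
			      unsigned long *nr_pages)
{
	unsigned long end;

	if (__builtin_add_overflow(uaddr, len, &end))
		return false;			/* uaddr + len wraps */
	if (__builtin_add_overflow(end, PAGE_SIZE - 1, &end))
		return false;			/* rounding up wraps */

	*nr_pages = (end >> PAGE_SHIFT) - (uaddr >> PAGE_SHIFT);
	return true;
}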
diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c index fda3dd2ee984..b3a2ce1e5e22 100644 --- a/kernel/bpf/bpf_struct_ops.c +++ b/kernel/bpf/bpf_struct_ops.c @@ -32,7 +32,9 @@ struct bpf_struct_ops_map { * (in kvalue.data). */ struct bpf_link **links; - u32 links_cnt; + /* ksyms for bpf trampolines */ + struct bpf_ksym **ksyms; + u32 funcs_cnt; u32 image_pages_cnt; /* image_pages is an array of pages that has all the trampolines * that stores the func args before calling the bpf_prog. @@ -481,11 +483,11 @@ static void bpf_struct_ops_map_put_progs(struct bpf_struct_ops_map *st_map) { u32 i;
- for (i = 0; i < st_map->links_cnt; i++) { - if (st_map->links[i]) { - bpf_link_put(st_map->links[i]); - st_map->links[i] = NULL; - } + for (i = 0; i < st_map->funcs_cnt; i++) { + if (!st_map->links[i]) + break; + bpf_link_put(st_map->links[i]); + st_map->links[i] = NULL; } }
@@ -586,6 +588,49 @@ int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_links *tlinks, return 0; }
+static void bpf_struct_ops_ksym_init(const char *tname, const char *mname, + void *image, unsigned int size, + struct bpf_ksym *ksym) +{ + snprintf(ksym->name, KSYM_NAME_LEN, "bpf__%s_%s", tname, mname); + INIT_LIST_HEAD_RCU(&ksym->lnode); + bpf_image_ksym_init(image, size, ksym); +} + +static void bpf_struct_ops_map_add_ksyms(struct bpf_struct_ops_map *st_map) +{ + u32 i; + + for (i = 0; i < st_map->funcs_cnt; i++) { + if (!st_map->ksyms[i]) + break; + bpf_image_ksym_add(st_map->ksyms[i]); + } +} + +static void bpf_struct_ops_map_del_ksyms(struct bpf_struct_ops_map *st_map) +{ + u32 i; + + for (i = 0; i < st_map->funcs_cnt; i++) { + if (!st_map->ksyms[i]) + break; + bpf_image_ksym_del(st_map->ksyms[i]); + } +} + +static void bpf_struct_ops_map_free_ksyms(struct bpf_struct_ops_map *st_map) +{ + u32 i; + + for (i = 0; i < st_map->funcs_cnt; i++) { + if (!st_map->ksyms[i]) + break; + kfree(st_map->ksyms[i]); + st_map->ksyms[i] = NULL; + } +} + static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key, void *value, u64 flags) { @@ -601,6 +646,9 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key, int prog_fd, err; u32 i, trampoline_start, image_off = 0; void *cur_image = NULL, *image = NULL; + struct bpf_link **plink; + struct bpf_ksym **pksym; + const char *tname, *mname;
if (flags) return -EINVAL; @@ -639,14 +687,19 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key, udata = &uvalue->data; kdata = &kvalue->data;
+ plink = st_map->links; + pksym = st_map->ksyms; + tname = btf_name_by_offset(st_map->btf, t->name_off); module_type = btf_type_by_id(btf_vmlinux, st_ops_ids[IDX_MODULE_ID]); for_each_member(i, t, member) { const struct btf_type *mtype, *ptype; struct bpf_prog *prog; struct bpf_tramp_link *link; + struct bpf_ksym *ksym; u32 moff;
moff = __btf_member_bit_offset(t, member) / 8; + mname = btf_name_by_offset(st_map->btf, member->name_off); ptype = btf_type_resolve_ptr(st_map->btf, member->type, NULL); if (ptype == module_type) { if (*(void **)(udata + moff)) @@ -714,7 +767,14 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key, } bpf_link_init(&link->link, BPF_LINK_TYPE_STRUCT_OPS, &bpf_struct_ops_link_lops, prog); - st_map->links[i] = &link->link; + *plink++ = &link->link; + + ksym = kzalloc(sizeof(*ksym), GFP_USER); + if (!ksym) { + err = -ENOMEM; + goto reset_unlock; + } + *pksym++ = ksym;
trampoline_start = image_off; err = bpf_struct_ops_prepare_trampoline(tlinks, link, @@ -735,6 +795,12 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
/* put prog_id to udata */ *(unsigned long *)(udata + moff) = prog->aux->id; + + /* init ksym for this trampoline */ + bpf_struct_ops_ksym_init(tname, mname, + image + trampoline_start, + image_off - trampoline_start, + ksym); }
if (st_ops->validate) { @@ -783,6 +849,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key, */
reset_unlock: + bpf_struct_ops_map_free_ksyms(st_map); bpf_struct_ops_map_free_image(st_map); bpf_struct_ops_map_put_progs(st_map); memset(uvalue, 0, map->value_size); @@ -790,6 +857,8 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key, unlock: kfree(tlinks); mutex_unlock(&st_map->lock); + if (!err) + bpf_struct_ops_map_add_ksyms(st_map); return err; }
@@ -849,7 +918,10 @@ static void __bpf_struct_ops_map_free(struct bpf_map *map)
if (st_map->links) bpf_struct_ops_map_put_progs(st_map); + if (st_map->ksyms) + bpf_struct_ops_map_free_ksyms(st_map); bpf_map_area_free(st_map->links); + bpf_map_area_free(st_map->ksyms); bpf_struct_ops_map_free_image(st_map); bpf_map_area_free(st_map->uvalue); bpf_map_area_free(st_map); @@ -866,6 +938,8 @@ static void bpf_struct_ops_map_free(struct bpf_map *map) if (btf_is_module(st_map->btf)) module_put(st_map->st_ops_desc->st_ops->owner);
+ bpf_struct_ops_map_del_ksyms(st_map); + /* The struct_ops's function may switch to another struct_ops. * * For example, bpf_tcp_cc_x->init() may switch to @@ -895,6 +969,19 @@ static int bpf_struct_ops_map_alloc_check(union bpf_attr *attr) return 0; }
+static u32 count_func_ptrs(const struct btf *btf, const struct btf_type *t) +{ + int i; + u32 count; + const struct btf_member *member; + + count = 0; + for_each_member(i, t, member) + if (btf_type_resolve_func_ptr(btf, member->type, NULL)) + count++; + return count; +} + static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr) { const struct bpf_struct_ops_desc *st_ops_desc; @@ -961,11 +1048,15 @@ static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr) map = &st_map->map;
st_map->uvalue = bpf_map_area_alloc(vt->size, NUMA_NO_NODE); - st_map->links_cnt = btf_type_vlen(t); + st_map->funcs_cnt = count_func_ptrs(btf, t); st_map->links = - bpf_map_area_alloc(st_map->links_cnt * sizeof(struct bpf_links *), + bpf_map_area_alloc(st_map->funcs_cnt * sizeof(struct bpf_link *), + NUMA_NO_NODE); + + st_map->ksyms = + bpf_map_area_alloc(st_map->funcs_cnt * sizeof(struct bpf_ksym *), NUMA_NO_NODE); - if (!st_map->uvalue || !st_map->links) { + if (!st_map->uvalue || !st_map->links || !st_map->ksyms) { ret = -ENOMEM; goto errout_free; } @@ -994,7 +1085,8 @@ static u64 bpf_struct_ops_map_mem_usage(const struct bpf_map *map) usage = sizeof(*st_map) + vt->size - sizeof(struct bpf_struct_ops_value); usage += vt->size; - usage += btf_type_vlen(vt) * sizeof(struct bpf_links *); + usage += st_map->funcs_cnt * sizeof(struct bpf_link *); + usage += st_map->funcs_cnt * sizeof(struct bpf_ksym *); usage += PAGE_SIZE; return usage; } diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index 5cd1c7a23848..346826e3c933 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -6564,7 +6564,10 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type, if (prog_args_trusted(prog)) info->reg_type |= PTR_TRUSTED;
- if (btf_param_match_suffix(btf, &args[arg], "__nullable")) + /* Raw tracepoint arguments always get marked as maybe NULL */ + if (bpf_prog_is_raw_tp(prog)) + info->reg_type |= PTR_MAYBE_NULL; + else if (btf_param_match_suffix(btf, &args[arg], "__nullable")) info->reg_type |= PTR_MAYBE_NULL;
if (tgt_prog) { diff --git a/kernel/bpf/dispatcher.c b/kernel/bpf/dispatcher.c index 70fb82bf1637..b77db7413f8c 100644 --- a/kernel/bpf/dispatcher.c +++ b/kernel/bpf/dispatcher.c @@ -154,7 +154,8 @@ void bpf_dispatcher_change_prog(struct bpf_dispatcher *d, struct bpf_prog *from, d->image = NULL; goto out; } - bpf_image_ksym_add(d->image, PAGE_SIZE, &d->ksym); + bpf_image_ksym_init(d->image, PAGE_SIZE, &d->ksym); + bpf_image_ksym_add(&d->ksym); }
prev_num_progs = d->num_progs; diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c index f8302a5ca400..1166d9dd3e8b 100644 --- a/kernel/bpf/trampoline.c +++ b/kernel/bpf/trampoline.c @@ -115,10 +115,14 @@ bool bpf_prog_has_trampoline(const struct bpf_prog *prog) (ptype == BPF_PROG_TYPE_LSM && eatype == BPF_LSM_MAC); }
-void bpf_image_ksym_add(void *data, unsigned int size, struct bpf_ksym *ksym) +void bpf_image_ksym_init(void *data, unsigned int size, struct bpf_ksym *ksym) { ksym->start = (unsigned long) data; ksym->end = ksym->start + size; +} + +void bpf_image_ksym_add(struct bpf_ksym *ksym) +{ bpf_ksym_add(ksym); perf_event_ksymbol(PERF_RECORD_KSYMBOL_TYPE_BPF, ksym->start, PAGE_SIZE, false, ksym->name); @@ -377,7 +381,8 @@ static struct bpf_tramp_image *bpf_tramp_image_alloc(u64 key, int size) ksym = &im->ksym; INIT_LIST_HEAD_RCU(&ksym->lnode); snprintf(ksym->name, KSYM_NAME_LEN, "bpf_trampoline_%llu", key); - bpf_image_ksym_add(image, size, ksym); + bpf_image_ksym_init(image, size, ksym); + bpf_image_ksym_add(ksym); return im;
out_free_image: diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index bb99bada7e2e..91317857ea3e 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -418,6 +418,25 @@ static struct btf_record *reg_btf_record(const struct bpf_reg_state *reg) return rec; }
+static bool mask_raw_tp_reg_cond(const struct bpf_verifier_env *env, struct bpf_reg_state *reg) { + return reg->type == (PTR_TO_BTF_ID | PTR_TRUSTED | PTR_MAYBE_NULL) && + bpf_prog_is_raw_tp(env->prog) && !reg->ref_obj_id; +} + +static bool mask_raw_tp_reg(const struct bpf_verifier_env *env, struct bpf_reg_state *reg) +{ + if (!mask_raw_tp_reg_cond(env, reg)) + return false; + reg->type &= ~PTR_MAYBE_NULL; + return true; +} + +static void unmask_raw_tp_reg(struct bpf_reg_state *reg, bool result) +{ + if (result) + reg->type |= PTR_MAYBE_NULL; +} + static bool subprog_is_global(const struct bpf_verifier_env *env, int subprog) { struct bpf_func_info_aux *aux = env->prog->aux->func_info_aux; @@ -6595,6 +6614,7 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env, const char *field_name = NULL; enum bpf_type_flag flag = 0; u32 btf_id = 0; + bool mask; int ret;
if (!env->allow_ptr_leaks) { @@ -6666,7 +6686,21 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
if (ret < 0) return ret; - + /* For raw_tp progs, we allow dereference of PTR_MAYBE_NULL + * trusted PTR_TO_BTF_ID, these are the ones that are possibly + * arguments to the raw_tp. Since internal checks in for trusted + * reg in check_ptr_to_btf_access would consider PTR_MAYBE_NULL + * modifier as problematic, mask it out temporarily for the + * check. Don't apply this to pointers with ref_obj_id > 0, as + * those won't be raw_tp args. + * + * We may end up applying this relaxation to other trusted + * PTR_TO_BTF_ID with maybe null flag, since we cannot + * distinguish PTR_MAYBE_NULL tagged for arguments vs normal + * tagging, but that should expand allowed behavior, and not + * cause regression for existing behavior. + */ + mask = mask_raw_tp_reg(env, reg); if (ret != PTR_TO_BTF_ID) { /* just mark; */
@@ -6727,8 +6761,13 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env, clear_trusted_flags(&flag); }
- if (atype == BPF_READ && value_regno >= 0) + if (atype == BPF_READ && value_regno >= 0) { mark_btf_ld_reg(env, regs, value_regno, ret, reg->btf, btf_id, flag); + /* We've assigned a new type to regno, so don't undo masking. */ + if (regno == value_regno) + mask = false; + } + unmask_raw_tp_reg(reg, mask);
return 0; } @@ -7103,7 +7142,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn if (!err && t == BPF_READ && value_regno >= 0) mark_reg_unknown(env, regs, value_regno); } else if (base_type(reg->type) == PTR_TO_BTF_ID && - !type_may_be_null(reg->type)) { + (mask_raw_tp_reg_cond(env, reg) || !type_may_be_null(reg->type))) { err = check_ptr_to_btf_access(env, regs, regno, off, size, t, value_regno); } else if (reg->type == CONST_PTR_TO_MAP) { @@ -8796,6 +8835,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg, enum bpf_reg_type type = reg->type; u32 *arg_btf_id = NULL; int err = 0; + bool mask;
if (arg_type == ARG_DONTCARE) return 0; @@ -8836,11 +8876,11 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg, base_type(arg_type) == ARG_PTR_TO_SPIN_LOCK) arg_btf_id = fn->arg_btf_id[arg];
+ mask = mask_raw_tp_reg(env, reg); err = check_reg_type(env, regno, arg_type, arg_btf_id, meta); - if (err) - return err;
- err = check_func_arg_reg_off(env, reg, regno, arg_type); + err = err ?: check_func_arg_reg_off(env, reg, regno, arg_type); + unmask_raw_tp_reg(reg, mask); if (err) return err;
@@ -9635,14 +9675,17 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog, return ret; } else if (base_type(arg->arg_type) == ARG_PTR_TO_BTF_ID) { struct bpf_call_arg_meta meta; + bool mask; int err;
if (register_is_null(reg) && type_may_be_null(arg->arg_type)) continue;
memset(&meta, 0, sizeof(meta)); /* leave func_id as zero */ + mask = mask_raw_tp_reg(env, reg); err = check_reg_type(env, regno, arg->arg_type, &arg->btf_id, &meta); err = err ?: check_func_arg_reg_off(env, reg, regno, arg->arg_type); + unmask_raw_tp_reg(reg, mask); if (err) return err; } else { @@ -10583,11 +10626,26 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
switch (func_id) { case BPF_FUNC_tail_call: + if (env->cur_state->active_lock.ptr) { + verbose(env, "tail_call cannot be used inside bpf_spin_lock-ed region\n"); + return -EINVAL; + } + err = check_reference_leak(env, false); if (err) { verbose(env, "tail_call would lead to reference leak\n"); return err; } + + if (env->cur_state->active_rcu_lock) { + verbose(env, "tail_call cannot be used inside bpf_rcu_read_lock-ed region\n"); + return -EINVAL; + } + + if (env->cur_state->active_preempt_lock) { + verbose(env, "tail_call cannot be used inside bpf_preempt_disable-ed region\n"); + return -EINVAL; + } break; case BPF_FUNC_get_local_storage: /* check that flags argument in get_local_storage(map, flags) is 0, @@ -11942,6 +12000,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_ enum bpf_arg_type arg_type = ARG_DONTCARE; u32 regno = i + 1, ref_id, type_size; bool is_ret_buf_sz = false; + bool mask = false; int kf_arg_type;
t = btf_type_skip_modifiers(btf, args[i].type, NULL); @@ -12000,12 +12059,15 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_ return -EINVAL; }
+ mask = mask_raw_tp_reg(env, reg); if ((is_kfunc_trusted_args(meta) || is_kfunc_rcu(meta)) && (register_is_null(reg) || type_may_be_null(reg->type)) && !is_kfunc_arg_nullable(meta->btf, &args[i])) { verbose(env, "Possibly NULL pointer passed to trusted arg%d\n", i); + unmask_raw_tp_reg(reg, mask); return -EACCES; } + unmask_raw_tp_reg(reg, mask);
if (reg->ref_obj_id) { if (is_kfunc_release(meta) && meta->ref_obj_id) { @@ -12063,16 +12125,24 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_ if (!is_kfunc_trusted_args(meta) && !is_kfunc_rcu(meta)) break;
+ /* Allow passing maybe NULL raw_tp arguments to + * kfuncs for compatibility. Don't apply this to + * arguments with ref_obj_id > 0. + */ + mask = mask_raw_tp_reg(env, reg); if (!is_trusted_reg(reg)) { if (!is_kfunc_rcu(meta)) { verbose(env, "R%d must be referenced or trusted\n", regno); + unmask_raw_tp_reg(reg, mask); return -EINVAL; } if (!is_rcu_reg(reg)) { verbose(env, "R%d must be a rcu pointer\n", regno); + unmask_raw_tp_reg(reg, mask); return -EINVAL; } } + unmask_raw_tp_reg(reg, mask); fallthrough; case KF_ARG_PTR_TO_CTX: case KF_ARG_PTR_TO_DYNPTR: @@ -12095,7 +12165,9 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
if (is_kfunc_release(meta) && reg->ref_obj_id) arg_type |= OBJ_RELEASE; + mask = mask_raw_tp_reg(env, reg); ret = check_func_arg_reg_off(env, reg, regno, arg_type); + unmask_raw_tp_reg(reg, mask); if (ret < 0) return ret;
@@ -12272,6 +12344,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_ ref_tname = btf_name_by_offset(btf, ref_t->name_off); fallthrough; case KF_ARG_PTR_TO_BTF_ID: + mask = mask_raw_tp_reg(env, reg); /* Only base_type is checked, further checks are done here */ if ((base_type(reg->type) != PTR_TO_BTF_ID || (bpf_type_has_unsafe_modifiers(reg->type) && !is_rcu_reg(reg))) && @@ -12280,9 +12353,11 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_ verbose(env, "expected %s or socket\n", reg_type_str(env, base_type(reg->type) | (type_flag(reg->type) & BPF_REG_TRUSTED_MODIFIERS))); + unmask_raw_tp_reg(reg, mask); return -EINVAL; } ret = process_kf_arg_ptr_to_btf_id(env, reg, ref_t, ref_tname, ref_id, meta, i); + unmask_raw_tp_reg(reg, mask); if (ret < 0) return ret; break; @@ -13252,7 +13327,7 @@ static int sanitize_check_bounds(struct bpf_verifier_env *env, */ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env, struct bpf_insn *insn, - const struct bpf_reg_state *ptr_reg, + struct bpf_reg_state *ptr_reg, const struct bpf_reg_state *off_reg) { struct bpf_verifier_state *vstate = env->cur_state; @@ -13266,6 +13341,7 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env, struct bpf_sanitize_info info = {}; u8 opcode = BPF_OP(insn->code); u32 dst = insn->dst_reg; + bool mask; int ret;
dst_reg = ®s[dst]; @@ -13292,11 +13368,14 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env, return -EACCES; }
+ mask = mask_raw_tp_reg(env, ptr_reg); if (ptr_reg->type & PTR_MAYBE_NULL) { verbose(env, "R%d pointer arithmetic on %s prohibited, null-check it first\n", dst, reg_type_str(env, ptr_reg->type)); + unmask_raw_tp_reg(ptr_reg, mask); return -EACCES; } + unmask_raw_tp_reg(ptr_reg, mask);
switch (base_type(ptr_reg->type)) { case PTR_TO_CTX: @@ -15909,6 +15988,15 @@ static int check_return_code(struct bpf_verifier_env *env, int regno, const char return -ENOTSUPP; } break; + case BPF_PROG_TYPE_KPROBE: + switch (env->prog->expected_attach_type) { + case BPF_TRACE_KPROBE_SESSION: + range = retval_range(0, 1); + break; + default: + return 0; + } + break; case BPF_PROG_TYPE_SK_LOOKUP: range = retval_range(SK_DROP, SK_PASS); break; @@ -19837,6 +19925,7 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env) * for this case. */ case PTR_TO_BTF_ID | MEM_ALLOC | PTR_UNTRUSTED: + case PTR_TO_BTF_ID | PTR_TRUSTED | PTR_MAYBE_NULL: if (type == BPF_READ) { if (BPF_MODE(insn->code) == BPF_MEM) insn->code = BPF_LDX | BPF_PROBE_MEM | diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c index 044c7ba1cc48..e275eaf2de7f 100644 --- a/kernel/cgroup/cgroup.c +++ b/kernel/cgroup/cgroup.c @@ -2140,8 +2140,10 @@ int cgroup_setup_root(struct cgroup_root *root, u16 ss_mask) if (ret) goto exit_stats;
- ret = cgroup_bpf_inherit(root_cgrp); - WARN_ON_ONCE(ret); + if (root == &cgrp_dfl_root) { + ret = cgroup_bpf_inherit(root_cgrp); + WARN_ON_ONCE(ret); + }
trace_cgroup_setup_root(root);
@@ -2314,10 +2316,8 @@ static void cgroup_kill_sb(struct super_block *sb) * And don't kill the default root. */ if (list_empty(&root->cgrp.self.children) && root != &cgrp_dfl_root && - !percpu_ref_is_dying(&root->cgrp.self.refcnt)) { - cgroup_bpf_offline(&root->cgrp); + !percpu_ref_is_dying(&root->cgrp.self.refcnt)) percpu_ref_kill(&root->cgrp.self.refcnt); - } cgroup_put(&root->cgrp); kernfs_kill_sb(sb); } @@ -5710,9 +5710,11 @@ static struct cgroup *cgroup_create(struct cgroup *parent, const char *name, if (ret) goto out_kernfs_remove;
- ret = cgroup_bpf_inherit(cgrp); - if (ret) - goto out_psi_free; + if (cgrp->root == &cgrp_dfl_root) { + ret = cgroup_bpf_inherit(cgrp); + if (ret) + goto out_psi_free; + }
/* * New cgroup inherits effective freeze counter, and @@ -6026,7 +6028,8 @@ static int cgroup_destroy_locked(struct cgroup *cgrp)
cgroup1_check_for_release(parent);
- cgroup_bpf_offline(cgrp); + if (cgrp->root == &cgrp_dfl_root) + cgroup_bpf_offline(cgrp);
/* put the base reference */ percpu_ref_kill(&cgrp->self.refcnt); diff --git a/kernel/fork.c b/kernel/fork.c index 22f43721d031..ce8be55e5e04 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -622,6 +622,12 @@ static void dup_mm_exe_file(struct mm_struct *mm, struct mm_struct *oldmm)
exe_file = get_mm_exe_file(oldmm); RCU_INIT_POINTER(mm->exe_file, exe_file); + /* + * We depend on the oldmm having properly denied write access to the + * exe_file already. + */ + if (exe_file && deny_write_access(exe_file)) + pr_warn_once("deny_write_access() failed in %s\n", __func__); }
#ifdef CONFIG_MMU @@ -1414,11 +1420,20 @@ int set_mm_exe_file(struct mm_struct *mm, struct file *new_exe_file) */ old_exe_file = rcu_dereference_raw(mm->exe_file);
- if (new_exe_file) + if (new_exe_file) { + /* + * We expect the caller (i.e., sys_execve) to already denied + * write access, so this is unlikely to fail. + */ + if (unlikely(deny_write_access(new_exe_file))) + return -EACCES; get_file(new_exe_file); + } rcu_assign_pointer(mm->exe_file, new_exe_file); - if (old_exe_file) + if (old_exe_file) { + allow_write_access(old_exe_file); fput(old_exe_file); + } return 0; }
@@ -1457,6 +1472,9 @@ int replace_mm_exe_file(struct mm_struct *mm, struct file *new_exe_file) return ret; }
+ ret = deny_write_access(new_exe_file); + if (ret) + return -EACCES; get_file(new_exe_file);
/* set the new file */ @@ -1465,8 +1483,10 @@ int replace_mm_exe_file(struct mm_struct *mm, struct file *new_exe_file) rcu_assign_pointer(mm->exe_file, new_exe_file); mmap_write_unlock(mm);
- if (old_exe_file) + if (old_exe_file) { + allow_write_access(old_exe_file); fput(old_exe_file); + } return 0; }
diff --git a/kernel/rcu/rcuscale.c b/kernel/rcu/rcuscale.c index 6d37596deb1f..d360fa44b234 100644 --- a/kernel/rcu/rcuscale.c +++ b/kernel/rcu/rcuscale.c @@ -890,13 +890,15 @@ kfree_scale_init(void) if (WARN_ON_ONCE(jiffies_at_lazy_cb - jif_start < 2 * HZ)) { pr_alert("ERROR: call_rcu() CBs are not being lazy as expected!\n"); WARN_ON_ONCE(1); - return -1; + firsterr = -1; + goto unwind; }
if (WARN_ON_ONCE(jiffies_at_lazy_cb - jif_start > 3 * HZ)) { pr_alert("ERROR: call_rcu() CBs are being too lazy!\n"); WARN_ON_ONCE(1); - return -1; + firsterr = -1; + goto unwind; } }
diff --git a/kernel/rcu/srcutiny.c b/kernel/rcu/srcutiny.c index 549c03336ee9..4dcbf8aa80ff 100644 --- a/kernel/rcu/srcutiny.c +++ b/kernel/rcu/srcutiny.c @@ -122,8 +122,8 @@ void srcu_drive_gp(struct work_struct *wp) ssp = container_of(wp, struct srcu_struct, srcu_work); preempt_disable(); // Needed for PREEMPT_AUTO if (ssp->srcu_gp_running || ULONG_CMP_GE(ssp->srcu_idx, READ_ONCE(ssp->srcu_idx_max))) { - return; /* Already running or nothing to do. */ preempt_enable(); + return; /* Already running or nothing to do. */ }
/* Remove recently arrived callbacks and wait for readers. */ diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index b1f883fcd918..3e486ccaa4ca 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -3511,7 +3511,7 @@ static int krc_count(struct kfree_rcu_cpu *krcp) }
static void -schedule_delayed_monitor_work(struct kfree_rcu_cpu *krcp) +__schedule_delayed_monitor_work(struct kfree_rcu_cpu *krcp) { long delay, delay_left;
@@ -3525,6 +3525,16 @@ schedule_delayed_monitor_work(struct kfree_rcu_cpu *krcp) queue_delayed_work(system_unbound_wq, &krcp->monitor_work, delay); }
+static void +schedule_delayed_monitor_work(struct kfree_rcu_cpu *krcp) +{ + unsigned long flags; + + raw_spin_lock_irqsave(&krcp->lock, flags); + __schedule_delayed_monitor_work(krcp); + raw_spin_unlock_irqrestore(&krcp->lock, flags); +} + static void kvfree_rcu_drain_ready(struct kfree_rcu_cpu *krcp) { @@ -3836,7 +3846,7 @@ void kvfree_call_rcu(struct rcu_head *head, void *ptr)
// Set timer to drain after KFREE_DRAIN_JIFFIES. if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING) - schedule_delayed_monitor_work(krcp); + __schedule_delayed_monitor_work(krcp);
unlock_return: krc_this_cpu_unlock(krcp, flags); diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h index 16865475120b..2605dd234a13 100644 --- a/kernel/rcu/tree_nocb.h +++ b/kernel/rcu/tree_nocb.h @@ -891,7 +891,18 @@ static void nocb_cb_wait(struct rcu_data *rdp) swait_event_interruptible_exclusive(rdp->nocb_cb_wq, nocb_cb_wait_cond(rdp)); if (kthread_should_park()) { - kthread_parkme(); + /* + * kthread_park() must be preceded by an rcu_barrier(). + * But yet another rcu_barrier() might have sneaked in between + * the barrier callback execution and the callbacks counter + * decrement. + */ + if (rdp->nocb_cb_sleep) { + rcu_nocb_lock_irqsave(rdp, flags); + WARN_ON_ONCE(rcu_segcblist_n_cbs(&rdp->cblist)); + rcu_nocb_unlock_irqrestore(rdp, flags); + kthread_parkme(); + } } else if (READ_ONCE(rdp->nocb_cb_sleep)) { WARN_ON(signal_pending(current)); trace_rcu_nocb_wake(rcu_state.name, rdp->cpu, TPS("WokeEmpty")); diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c index c6ba15388ea7..28c77904ea74 100644 --- a/kernel/sched/cpufreq_schedutil.c +++ b/kernel/sched/cpufreq_schedutil.c @@ -783,9 +783,8 @@ static int sugov_init(struct cpufreq_policy *policy) if (ret) goto fail;
- sugov_eas_rebuild_sd(); - out: + sugov_eas_rebuild_sd(); mutex_unlock(&global_tunables_lock); return 0;
diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c index 751d73d500e5..16613631543f 100644 --- a/kernel/sched/ext.c +++ b/kernel/sched/ext.c @@ -3567,12 +3567,7 @@ static void scx_ops_exit_task(struct task_struct *p)
void init_scx_entity(struct sched_ext_entity *scx) { - /* - * init_idle() calls this function again after fork sequence is - * complete. Don't touch ->tasks_node as it's already linked. - */ - memset(scx, 0, offsetof(struct sched_ext_entity, tasks_node)); - + memset(scx, 0, sizeof(*scx)); INIT_LIST_HEAD(&scx->dsq_list.node); RB_CLEAR_NODE(&scx->dsq_priq); scx->sticky_cpu = -1; @@ -6478,6 +6473,8 @@ __bpf_kfunc_end_defs();
BTF_KFUNCS_START(scx_kfunc_ids_unlocked) BTF_ID_FLAGS(func, scx_bpf_create_dsq, KF_SLEEPABLE) +BTF_ID_FLAGS(func, scx_bpf_dispatch_from_dsq_set_slice) +BTF_ID_FLAGS(func, scx_bpf_dispatch_from_dsq_set_vtime) BTF_ID_FLAGS(func, scx_bpf_dispatch_from_dsq, KF_RCU) BTF_ID_FLAGS(func, scx_bpf_dispatch_vtime_from_dsq, KF_RCU) BTF_KFUNCS_END(scx_kfunc_ids_unlocked) diff --git a/kernel/time/time.c b/kernel/time/time.c index 642647f5046b..1ad88e97b4eb 100644 --- a/kernel/time/time.c +++ b/kernel/time/time.c @@ -556,9 +556,9 @@ EXPORT_SYMBOL(ns_to_timespec64); * - all other values are converted to jiffies by either multiplying * the input value by a factor or dividing it with a factor and * handling any 32-bit overflows. - * for the details see __msecs_to_jiffies() + * for the details see _msecs_to_jiffies() * - * __msecs_to_jiffies() checks for the passed in value being a constant + * msecs_to_jiffies() checks for the passed in value being a constant * via __builtin_constant_p() allowing gcc to eliminate most of the * code, __msecs_to_jiffies() is called if the value passed does not * allow constant folding and the actual conversion must be done at diff --git a/kernel/time/timer.c b/kernel/time/timer.c index 0fc9d066a7be..7835f9b376e7 100644 --- a/kernel/time/timer.c +++ b/kernel/time/timer.c @@ -2422,7 +2422,8 @@ static inline void __run_timers(struct timer_base *base)
static void __run_timer_base(struct timer_base *base) { - if (time_before(jiffies, base->next_expiry)) + /* Can race against a remote CPU updating next_expiry under the lock */ + if (time_before(jiffies, READ_ONCE(base->next_expiry))) return;
timer_base_lock_expiry(base); diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c index 630b763e5240..792dc35414a3 100644 --- a/kernel/trace/bpf_trace.c +++ b/kernel/trace/bpf_trace.c @@ -3205,7 +3205,6 @@ static int uprobe_prog_run(struct bpf_uprobe *uprobe, struct bpf_prog *prog = link->link.prog; bool sleepable = prog->sleepable; struct bpf_run_ctx *old_run_ctx; - int err = 0;
if (link->task && !same_thread_group(current, link->task)) return 0; @@ -3218,7 +3217,7 @@ static int uprobe_prog_run(struct bpf_uprobe *uprobe, migrate_disable();
old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx); - err = bpf_prog_run(link->link.prog, regs); + bpf_prog_run(link->link.prog, regs); bpf_reset_run_ctx(old_run_ctx);
migrate_enable(); @@ -3227,7 +3226,7 @@ static int uprobe_prog_run(struct bpf_uprobe *uprobe, rcu_read_unlock_trace(); else rcu_read_unlock(); - return err; + return 0; }
static bool diff --git a/kernel/trace/trace_event_perf.c b/kernel/trace/trace_event_perf.c index 05e791241812..3ff9caa4a71b 100644 --- a/kernel/trace/trace_event_perf.c +++ b/kernel/trace/trace_event_perf.c @@ -352,10 +352,16 @@ void perf_uprobe_destroy(struct perf_event *p_event) int perf_trace_add(struct perf_event *p_event, int flags) { struct trace_event_call *tp_event = p_event->tp_event; + struct hw_perf_event *hwc = &p_event->hw;
if (!(flags & PERF_EF_START)) p_event->hw.state = PERF_HES_STOPPED;
+ if (is_sampling_event(p_event)) { + hwc->last_period = hwc->sample_period; + perf_swevent_set_period(p_event); + } + /* * If TRACE_REG_PERF_ADD returns false; no custom action was performed * and we need to take the default action of enqueueing our event on diff --git a/lib/overflow_kunit.c b/lib/overflow_kunit.c index 2abc78367dd1..5222c6393f11 100644 --- a/lib/overflow_kunit.c +++ b/lib/overflow_kunit.c @@ -1187,7 +1187,7 @@ static void DEFINE_FLEX_test(struct kunit *test) { /* Using _RAW_ on a __counted_by struct will initialize "counter" to zero */ DEFINE_RAW_FLEX(struct foo, two_but_zero, array, 2); -#if __has_attribute(__counted_by__) +#ifdef CONFIG_CC_HAS_COUNTED_BY int expected_raw_size = sizeof(struct foo); #else int expected_raw_size = sizeof(struct foo) + 2 * sizeof(s16); diff --git a/lib/string_helpers.c b/lib/string_helpers.c index 4f887aa62fa0..91fa37b5c510 100644 --- a/lib/string_helpers.c +++ b/lib/string_helpers.c @@ -57,7 +57,7 @@ int string_get_size(u64 size, u64 blk_size, const enum string_size_units units, static const unsigned int rounding[] = { 500, 50, 5 }; int i = 0, j; u32 remainder = 0, sf_cap; - char tmp[8]; + char tmp[12]; const char *unit;
tmp[0] = '\0'; diff --git a/lib/strncpy_from_user.c b/lib/strncpy_from_user.c index 989a12a67872..6dc234913dd5 100644 --- a/lib/strncpy_from_user.c +++ b/lib/strncpy_from_user.c @@ -120,6 +120,9 @@ long strncpy_from_user(char *dst, const char __user *src, long count) if (unlikely(count <= 0)) return 0;
+ kasan_check_write(dst, count); + check_object_size(dst, count, false); + if (can_do_masked_user_access()) { long retval;
@@ -142,8 +145,6 @@ long strncpy_from_user(char *dst, const char __user *src, long count) if (max > count) max = count;
- kasan_check_write(dst, count); - check_object_size(dst, count, false); if (user_read_access_begin(src, max)) { retval = do_strncpy_from_user(dst, src, count, max); user_read_access_end(); diff --git a/mm/internal.h b/mm/internal.h index 64c2eb0b160e..9bb098e78f15 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -48,7 +48,7 @@ struct folio_batch; * when we specify __GFP_NOWARN. */ #define WARN_ON_ONCE_GFP(cond, gfp) ({ \ - static bool __section(".data.once") __warned; \ + static bool __section(".data..once") __warned; \ int __ret_warn_once = !!(cond); \ \ if (unlikely(!(gfp & __GFP_NOWARN) && __ret_warn_once && !__warned)) { \ diff --git a/net/9p/trans_usbg.c b/net/9p/trans_usbg.c index 975b76839dca..6b694f117aef 100644 --- a/net/9p/trans_usbg.c +++ b/net/9p/trans_usbg.c @@ -909,9 +909,9 @@ static struct usb_function_instance *usb9pfs_alloc_instance(void) usb9pfs_opts->buflen = DEFAULT_BUFLEN;
dev = kzalloc(sizeof(*dev), GFP_KERNEL); - if (IS_ERR(dev)) { + if (!dev) { kfree(usb9pfs_opts); - return ERR_CAST(dev); + return ERR_PTR(-ENOMEM); }
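[Editor's sketch] The trans_usbg fix above matters because kzalloc() reports failure as NULL; testing the result with IS_ERR() can never be true, so the old code treated a failed allocation as success. A minimal kernel-style fragment of the two conventions, with "demo" names as placeholders:

    struct demo_dev { int dummy; };

    /* kzalloc() -> NULL on failure; ERR_PTR()/IS_ERR()/PTR_ERR() are for
     * interfaces that encode an errno in the returned pointer. */
    static struct demo_dev *demo_alloc(void)
    {
            struct demo_dev *dev = kzalloc(sizeof(*dev), GFP_KERNEL);

            if (!dev)                          /* not IS_ERR(dev) */
                    return ERR_PTR(-ENOMEM);   /* callers check IS_ERR() */

            return dev;
    }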
usb9pfs_opts->dev = dev; diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c index dfdbe1ca5338..b9ff69c7522a 100644 --- a/net/9p/trans_xen.c +++ b/net/9p/trans_xen.c @@ -286,7 +286,7 @@ static void xen_9pfs_front_free(struct xen_9pfs_front_priv *priv) if (!priv->rings[i].intf) break; if (priv->rings[i].irq > 0) - unbind_from_irqhandler(priv->rings[i].irq, priv->dev); + unbind_from_irqhandler(priv->rings[i].irq, ring); if (priv->rings[i].data.in) { for (j = 0; j < (1 << priv->rings[i].intf->ring_order); @@ -465,6 +465,7 @@ static int xen_9pfs_front_init(struct xenbus_device *dev) goto error; }
+ xenbus_switch_state(dev, XenbusStateInitialised); return 0;
error_xenbus: @@ -512,8 +513,10 @@ static void xen_9pfs_front_changed(struct xenbus_device *dev, break;
case XenbusStateInitWait: - if (!xen_9pfs_front_init(dev)) - xenbus_switch_state(dev, XenbusStateInitialised); + if (dev->state != XenbusStateInitialising) + break; + + xen_9pfs_front_init(dev); break;
case XenbusStateConnected: diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c index c4c74b82ed21..6354cdf9c2b3 100644 --- a/net/bluetooth/hci_conn.c +++ b/net/bluetooth/hci_conn.c @@ -952,6 +952,7 @@ static struct hci_conn *__hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t conn->tx_power = HCI_TX_POWER_INVALID; conn->max_tx_power = HCI_TX_POWER_INVALID; conn->sync_handle = HCI_SYNC_HANDLE_INVALID; + conn->sid = HCI_SID_INVALID;
set_bit(HCI_CONN_POWER_SAVE, &conn->flags); conn->disc_timeout = HCI_DISCONN_TIMEOUT; @@ -2062,105 +2063,225 @@ static int create_big_sync(struct hci_dev *hdev, void *data)
static void create_pa_complete(struct hci_dev *hdev, void *data, int err) { - struct hci_cp_le_pa_create_sync *cp = data; - bt_dev_dbg(hdev, "");
if (err) bt_dev_err(hdev, "Unable to create PA: %d", err); +}
- kfree(cp); +static bool hci_conn_check_create_pa_sync(struct hci_conn *conn) +{ + if (conn->type != ISO_LINK || conn->sid == HCI_SID_INVALID) + return false; + + return true; }
static int create_pa_sync(struct hci_dev *hdev, void *data) { - struct hci_cp_le_pa_create_sync *cp = data; - int err; + struct hci_cp_le_pa_create_sync *cp = NULL; + struct hci_conn *conn; + int err = 0;
- err = __hci_cmd_sync_status(hdev, HCI_OP_LE_PA_CREATE_SYNC, - sizeof(*cp), cp, HCI_CMD_TIMEOUT); - if (err) { - hci_dev_clear_flag(hdev, HCI_PA_SYNC); - return err; + hci_dev_lock(hdev); + + rcu_read_lock(); + + /* The spec allows only one pending LE Periodic Advertising Create + * Sync command at a time. If the command is pending now, don't do + * anything. We check for pending connections after each PA Sync + * Established event. + * + * BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 4, Part E + * page 2493: + * + * If the Host issues this command when another HCI_LE_Periodic_ + * Advertising_Create_Sync command is pending, the Controller shall + * return the error code Command Disallowed (0x0C). + */ + list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) { + if (test_bit(HCI_CONN_CREATE_PA_SYNC, &conn->flags)) + goto unlock; + } + + list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) { + if (hci_conn_check_create_pa_sync(conn)) { + struct bt_iso_qos *qos = &conn->iso_qos; + + cp = kzalloc(sizeof(*cp), GFP_KERNEL); + if (!cp) { + err = -ENOMEM; + goto unlock; + } + + cp->options = qos->bcast.options; + cp->sid = conn->sid; + cp->addr_type = conn->dst_type; + bacpy(&cp->addr, &conn->dst); + cp->skip = cpu_to_le16(qos->bcast.skip); + cp->sync_timeout = cpu_to_le16(qos->bcast.sync_timeout); + cp->sync_cte_type = qos->bcast.sync_cte_type; + + break; + } }
- return hci_update_passive_scan_sync(hdev); +unlock: + rcu_read_unlock(); + + hci_dev_unlock(hdev); + + if (cp) { + hci_dev_set_flag(hdev, HCI_PA_SYNC); + set_bit(HCI_CONN_CREATE_PA_SYNC, &conn->flags); + + err = __hci_cmd_sync_status(hdev, HCI_OP_LE_PA_CREATE_SYNC, + sizeof(*cp), cp, HCI_CMD_TIMEOUT); + if (!err) + err = hci_update_passive_scan_sync(hdev); + + kfree(cp); + + if (err) { + hci_dev_clear_flag(hdev, HCI_PA_SYNC); + clear_bit(HCI_CONN_CREATE_PA_SYNC, &conn->flags); + } + } + + return err; +} + +int hci_pa_create_sync_pending(struct hci_dev *hdev) +{ + /* Queue start pa_create_sync and scan */ + return hci_cmd_sync_queue(hdev, create_pa_sync, + NULL, create_pa_complete); }
struct hci_conn *hci_pa_create_sync(struct hci_dev *hdev, bdaddr_t *dst, __u8 dst_type, __u8 sid, struct bt_iso_qos *qos) { - struct hci_cp_le_pa_create_sync *cp; struct hci_conn *conn; - int err; - - if (hci_dev_test_and_set_flag(hdev, HCI_PA_SYNC)) - return ERR_PTR(-EBUSY);
conn = hci_conn_add_unset(hdev, ISO_LINK, dst, HCI_ROLE_SLAVE); if (IS_ERR(conn)) return conn;
conn->iso_qos = *qos; + conn->dst_type = dst_type; + conn->sid = sid; conn->state = BT_LISTEN;
hci_conn_hold(conn);
- cp = kzalloc(sizeof(*cp), GFP_KERNEL); - if (!cp) { - hci_dev_clear_flag(hdev, HCI_PA_SYNC); - hci_conn_drop(conn); - return ERR_PTR(-ENOMEM); + hci_pa_create_sync_pending(hdev); + + return conn; +} + +static bool hci_conn_check_create_big_sync(struct hci_conn *conn) +{ + if (!conn->num_bis) + return false; + + return true; +} + +static void big_create_sync_complete(struct hci_dev *hdev, void *data, int err) +{ + bt_dev_dbg(hdev, ""); + + if (err) + bt_dev_err(hdev, "Unable to create BIG sync: %d", err); +} + +static int big_create_sync(struct hci_dev *hdev, void *data) +{ + DEFINE_FLEX(struct hci_cp_le_big_create_sync, pdu, bis, num_bis, 0x11); + struct hci_conn *conn; + + rcu_read_lock(); + + pdu->num_bis = 0; + + /* The spec allows only one pending LE BIG Create Sync command at + * a time. If the command is pending now, don't do anything. We + * check for pending connections after each BIG Sync Established + * event. + * + * BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 4, Part E + * page 2586: + * + * If the Host sends this command when the Controller is in the + * process of synchronizing to any BIG, i.e. the HCI_LE_BIG_Sync_ + * Established event has not been generated, the Controller shall + * return the error code Command Disallowed (0x0C). + */ + list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) { + if (test_bit(HCI_CONN_CREATE_BIG_SYNC, &conn->flags)) + goto unlock; }
- cp->options = qos->bcast.options; - cp->sid = sid; - cp->addr_type = dst_type; - bacpy(&cp->addr, dst); - cp->skip = cpu_to_le16(qos->bcast.skip); - cp->sync_timeout = cpu_to_le16(qos->bcast.sync_timeout); - cp->sync_cte_type = qos->bcast.sync_cte_type; + list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) { + if (hci_conn_check_create_big_sync(conn)) { + struct bt_iso_qos *qos = &conn->iso_qos;
- /* Queue start pa_create_sync and scan */ - err = hci_cmd_sync_queue(hdev, create_pa_sync, cp, create_pa_complete); - if (err < 0) { - hci_conn_drop(conn); - kfree(cp); - return ERR_PTR(err); + set_bit(HCI_CONN_CREATE_BIG_SYNC, &conn->flags); + + pdu->handle = qos->bcast.big; + pdu->sync_handle = cpu_to_le16(conn->sync_handle); + pdu->encryption = qos->bcast.encryption; + memcpy(pdu->bcode, qos->bcast.bcode, + sizeof(pdu->bcode)); + pdu->mse = qos->bcast.mse; + pdu->timeout = cpu_to_le16(qos->bcast.timeout); + pdu->num_bis = conn->num_bis; + memcpy(pdu->bis, conn->bis, conn->num_bis); + + break; + } }
- return conn; +unlock: + rcu_read_unlock(); + + if (!pdu->num_bis) + return 0; + + return hci_send_cmd(hdev, HCI_OP_LE_BIG_CREATE_SYNC, + struct_size(pdu, bis, pdu->num_bis), pdu); +} + +int hci_le_big_create_sync_pending(struct hci_dev *hdev) +{ + /* Queue big_create_sync */ + return hci_cmd_sync_queue_once(hdev, big_create_sync, + NULL, big_create_sync_complete); }
int hci_le_big_create_sync(struct hci_dev *hdev, struct hci_conn *hcon, struct bt_iso_qos *qos, __u16 sync_handle, __u8 num_bis, __u8 bis[]) { - DEFINE_FLEX(struct hci_cp_le_big_create_sync, pdu, bis, num_bis, 0x11); int err;
- if (num_bis < 0x01 || num_bis > pdu->num_bis) + if (num_bis < 0x01 || num_bis > ISO_MAX_NUM_BIS) return -EINVAL;
err = qos_set_big(hdev, qos); if (err) return err;
- if (hcon) - hcon->iso_qos.bcast.big = qos->bcast.big; + if (hcon) { + /* Update hcon QoS */ + hcon->iso_qos = *qos;
- pdu->handle = qos->bcast.big; - pdu->sync_handle = cpu_to_le16(sync_handle); - pdu->encryption = qos->bcast.encryption; - memcpy(pdu->bcode, qos->bcast.bcode, sizeof(pdu->bcode)); - pdu->mse = qos->bcast.mse; - pdu->timeout = cpu_to_le16(qos->bcast.timeout); - pdu->num_bis = num_bis; - memcpy(pdu->bis, bis, num_bis); + hcon->num_bis = num_bis; + memcpy(hcon->bis, bis, num_bis); + }
- return hci_send_cmd(hdev, HCI_OP_LE_BIG_CREATE_SYNC, - struct_size(pdu, bis, num_bis), pdu); + return hci_le_big_create_sync_pending(hdev); }
static void create_big_complete(struct hci_dev *hdev, void *data, int err) diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c index 0bbad90ddd6f..2e4bd3e961ce 100644 --- a/net/bluetooth/hci_event.c +++ b/net/bluetooth/hci_event.c @@ -6345,7 +6345,7 @@ static void hci_le_pa_sync_estabilished_evt(struct hci_dev *hdev, void *data, struct hci_ev_le_pa_sync_established *ev = data; int mask = hdev->link_mode; __u8 flags = 0; - struct hci_conn *pa_sync; + struct hci_conn *pa_sync, *conn;
bt_dev_dbg(hdev, "status 0x%2.2x", ev->status);
@@ -6353,6 +6353,20 @@ static void hci_le_pa_sync_estabilished_evt(struct hci_dev *hdev, void *data,
hci_dev_clear_flag(hdev, HCI_PA_SYNC);
+ conn = hci_conn_hash_lookup_sid(hdev, ev->sid, &ev->bdaddr, + ev->bdaddr_type); + if (!conn) { + bt_dev_err(hdev, + "Unable to find connection for dst %pMR sid 0x%2.2x", + &ev->bdaddr, ev->sid); + goto unlock; + } + + clear_bit(HCI_CONN_CREATE_PA_SYNC, &conn->flags); + + conn->sync_handle = le16_to_cpu(ev->handle); + conn->sid = HCI_SID_INVALID; + mask |= hci_proto_connect_ind(hdev, &ev->bdaddr, ISO_LINK, &flags); if (!(mask & HCI_LM_ACCEPT)) { hci_le_pa_term_sync(hdev, ev->handle); @@ -6379,6 +6393,9 @@ static void hci_le_pa_sync_estabilished_evt(struct hci_dev *hdev, void *data, }
unlock: + /* Handle any other pending PA sync command */ + hci_pa_create_sync_pending(hdev); + hci_dev_unlock(hdev); }
@@ -6896,7 +6913,7 @@ static void hci_le_big_sync_established_evt(struct hci_dev *hdev, void *data, struct sk_buff *skb) { struct hci_evt_le_big_sync_estabilished *ev = data; - struct hci_conn *bis; + struct hci_conn *bis, *conn; int i;
bt_dev_dbg(hdev, "status 0x%2.2x", ev->status); @@ -6907,6 +6924,20 @@ static void hci_le_big_sync_established_evt(struct hci_dev *hdev, void *data,
hci_dev_lock(hdev);
+ conn = hci_conn_hash_lookup_big_sync_pend(hdev, ev->handle, + ev->num_bis); + if (!conn) { + bt_dev_err(hdev, + "Unable to find connection for big 0x%2.2x", + ev->handle); + goto unlock; + } + + clear_bit(HCI_CONN_CREATE_BIG_SYNC, &conn->flags); + + conn->num_bis = 0; + memset(conn->bis, 0, sizeof(conn->num_bis)); + for (i = 0; i < ev->num_bis; i++) { u16 handle = le16_to_cpu(ev->bis[i]); __le32 interval; @@ -6956,6 +6987,10 @@ static void hci_le_big_sync_established_evt(struct hci_dev *hdev, void *data, hci_connect_cfm(bis, ev->status); }
+unlock: + /* Handle any other pending BIG sync command */ + hci_le_big_create_sync_pending(hdev); + hci_dev_unlock(hdev); }
diff --git a/net/bluetooth/hci_sysfs.c b/net/bluetooth/hci_sysfs.c index 367e32fe30eb..4b54dbbf0729 100644 --- a/net/bluetooth/hci_sysfs.c +++ b/net/bluetooth/hci_sysfs.c @@ -21,16 +21,6 @@ static const struct device_type bt_link = { .release = bt_link_release, };
-/* - * The rfcomm tty device will possibly retain even when conn - * is down, and sysfs doesn't support move zombie device, - * so we should move the device before conn device is destroyed. - */ -static int __match_tty(struct device *dev, void *data) -{ - return !strncmp(dev_name(dev), "rfcomm", 6); -} - void hci_conn_init_sysfs(struct hci_conn *conn) { struct hci_dev *hdev = conn->hdev; @@ -73,10 +63,13 @@ void hci_conn_del_sysfs(struct hci_conn *conn) return; }
+ /* If there are devices using the connection as parent reset it to NULL + * before unregistering the device. + */ while (1) { struct device *dev;
- dev = device_find_child(&conn->dev, NULL, __match_tty); + dev = device_find_any_child(&conn->dev); if (!dev) break; device_move(dev, NULL, DPM_ORDER_DEV_LAST); diff --git a/net/bluetooth/iso.c b/net/bluetooth/iso.c index 7a83e400ac77..5e2d9758bd3c 100644 --- a/net/bluetooth/iso.c +++ b/net/bluetooth/iso.c @@ -35,6 +35,7 @@ struct iso_conn { struct sk_buff *rx_skb; __u32 rx_len; __u16 tx_sn; + struct kref ref; };
#define iso_conn_lock(c) spin_lock(&(c)->lock) @@ -93,6 +94,49 @@ static struct sock *iso_get_sock(bdaddr_t *src, bdaddr_t *dst, #define ISO_CONN_TIMEOUT (HZ * 40) #define ISO_DISCONN_TIMEOUT (HZ * 2)
+static void iso_conn_free(struct kref *ref) +{ + struct iso_conn *conn = container_of(ref, struct iso_conn, ref); + + BT_DBG("conn %p", conn); + + if (conn->sk) + iso_pi(conn->sk)->conn = NULL; + + if (conn->hcon) { + conn->hcon->iso_data = NULL; + hci_conn_drop(conn->hcon); + } + + /* Ensure no more work items will run since hci_conn has been dropped */ + disable_delayed_work_sync(&conn->timeout_work); + + kfree(conn); +} + +static void iso_conn_put(struct iso_conn *conn) +{ + if (!conn) + return; + + BT_DBG("conn %p refcnt %d", conn, kref_read(&conn->ref)); + + kref_put(&conn->ref, iso_conn_free); +} + +static struct iso_conn *iso_conn_hold_unless_zero(struct iso_conn *conn) +{ + if (!conn) + return NULL; + + BT_DBG("conn %p refcnt %u", conn, kref_read(&conn->ref)); + + if (!kref_get_unless_zero(&conn->ref)) + return NULL; + + return conn; +} + static struct sock *iso_sock_hold(struct iso_conn *conn) { if (!conn || !bt_sock_linked(&iso_sk_list, conn->sk)) @@ -109,9 +153,14 @@ static void iso_sock_timeout(struct work_struct *work) timeout_work.work); struct sock *sk;
+ conn = iso_conn_hold_unless_zero(conn); + if (!conn) + return; + iso_conn_lock(conn); sk = iso_sock_hold(conn); iso_conn_unlock(conn); + iso_conn_put(conn);
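[Editor's sketch] The iso.c changes in this and the following hunks put struct iso_conn behind a kref so the timeout worker, socket teardown and hci_conn teardown can race without one path freeing the object under another: every asynchronous user brackets its access with a hold-unless-zero/put pair, and the last put runs the release function. A condensed kernel-style sketch of the pattern ("demo_conn" is illustrative):

    struct demo_conn {
            struct kref     ref;
            /* ... payload ... */
    };

    static void demo_conn_free(struct kref *ref)
    {
            struct demo_conn *conn = container_of(ref, struct demo_conn, ref);

            kfree(conn);
    }

    /* Take a reference only if the object is still live. */
    static struct demo_conn *demo_conn_hold_unless_zero(struct demo_conn *conn)
    {
            if (!conn || !kref_get_unless_zero(&conn->ref))
                    return NULL;
            return conn;
    }

    static void demo_conn_put(struct demo_conn *conn)
    {
            if (conn)
                    kref_put(&conn->ref, demo_conn_free);
    }

A freshly allocated object starts at a count of one via kref_init(), which is what the later iso_conn_add() hunk adds.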
if (!sk) return; @@ -149,9 +198,14 @@ static struct iso_conn *iso_conn_add(struct hci_conn *hcon) { struct iso_conn *conn = hcon->iso_data;
+ conn = iso_conn_hold_unless_zero(conn); if (conn) { - if (!conn->hcon) + if (!conn->hcon) { + iso_conn_lock(conn); conn->hcon = hcon; + iso_conn_unlock(conn); + } + iso_conn_put(conn); return conn; }
@@ -159,6 +213,7 @@ static struct iso_conn *iso_conn_add(struct hci_conn *hcon) if (!conn) return NULL;
+ kref_init(&conn->ref); spin_lock_init(&conn->lock); INIT_DELAYED_WORK(&conn->timeout_work, iso_sock_timeout);
@@ -178,17 +233,15 @@ static void iso_chan_del(struct sock *sk, int err) struct sock *parent;
conn = iso_pi(sk)->conn; + iso_pi(sk)->conn = NULL;
BT_DBG("sk %p, conn %p, err %d", sk, conn, err);
if (conn) { iso_conn_lock(conn); conn->sk = NULL; - iso_pi(sk)->conn = NULL; iso_conn_unlock(conn); - - if (conn->hcon) - hci_conn_drop(conn->hcon); + iso_conn_put(conn); }
sk->sk_state = BT_CLOSED; @@ -210,6 +263,7 @@ static void iso_conn_del(struct hci_conn *hcon, int err) struct iso_conn *conn = hcon->iso_data; struct sock *sk;
+ conn = iso_conn_hold_unless_zero(conn); if (!conn) return;
@@ -219,20 +273,18 @@ static void iso_conn_del(struct hci_conn *hcon, int err) iso_conn_lock(conn); sk = iso_sock_hold(conn); iso_conn_unlock(conn); + iso_conn_put(conn);
- if (sk) { - lock_sock(sk); - iso_sock_clear_timer(sk); - iso_chan_del(sk, err); - release_sock(sk); - sock_put(sk); + if (!sk) { + iso_conn_put(conn); + return; }
- /* Ensure no more work items will run before freeing conn. */ - cancel_delayed_work_sync(&conn->timeout_work); - - hcon->iso_data = NULL; - kfree(conn); + lock_sock(sk); + iso_sock_clear_timer(sk); + iso_chan_del(sk, err); + release_sock(sk); + sock_put(sk); }
static int __iso_chan_add(struct iso_conn *conn, struct sock *sk, @@ -652,6 +704,8 @@ static void iso_sock_destruct(struct sock *sk) { BT_DBG("sk %p", sk);
+ iso_conn_put(iso_pi(sk)->conn); + skb_queue_purge(&sk->sk_receive_queue); skb_queue_purge(&sk->sk_write_queue); } @@ -711,6 +765,7 @@ static void iso_sock_disconn(struct sock *sk) */ if (bis_sk) { hcon->state = BT_OPEN; + hcon->iso_data = NULL; iso_pi(sk)->conn->hcon = NULL; iso_sock_clear_timer(sk); iso_chan_del(sk, bt_to_errno(hcon->abort_reason)); @@ -720,7 +775,6 @@ static void iso_sock_disconn(struct sock *sk) }
sk->sk_state = BT_DISCONN; - iso_sock_set_timer(sk, ISO_DISCONN_TIMEOUT); iso_conn_lock(iso_pi(sk)->conn); hci_conn_drop(iso_pi(sk)->conn->hcon); iso_pi(sk)->conn->hcon = NULL; @@ -1338,6 +1392,13 @@ static void iso_conn_big_sync(struct sock *sk) if (!hdev) return;
+ /* hci_le_big_create_sync requires hdev lock to be held, since + * it enqueues the HCI LE BIG Create Sync command via + * hci_cmd_sync_queue_once, which checks hdev flags that might + * change. + */ + hci_dev_lock(hdev); + if (!test_and_set_bit(BT_SK_BIG_SYNC, &iso_pi(sk)->flags)) { err = hci_le_big_create_sync(hdev, iso_pi(sk)->conn->hcon, &iso_pi(sk)->qos, @@ -1348,6 +1409,8 @@ static void iso_conn_big_sync(struct sock *sk) bt_dev_err(hdev, "hci_le_big_create_sync: %d", err); } + + hci_dev_unlock(hdev); }
static int iso_sock_recvmsg(struct socket *sock, struct msghdr *msg, @@ -1942,6 +2005,7 @@ int iso_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr, __u8 *flags)
if (sk) { int err; + struct hci_conn *hcon = iso_pi(sk)->conn->hcon;
iso_pi(sk)->qos.bcast.encryption = ev2->encryption;
@@ -1950,7 +2014,8 @@ int iso_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr, __u8 *flags)
if (!test_bit(BT_SK_DEFER_SETUP, &bt_sk(sk)->flags) && !test_and_set_bit(BT_SK_BIG_SYNC, &iso_pi(sk)->flags)) { - err = hci_le_big_create_sync(hdev, NULL, + err = hci_le_big_create_sync(hdev, + hcon, &iso_pi(sk)->qos, iso_pi(sk)->sync_handle, iso_pi(sk)->bc_num_bis, diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c index a429661b676a..2343e15f8938 100644 --- a/net/bluetooth/mgmt.c +++ b/net/bluetooth/mgmt.c @@ -1317,7 +1317,8 @@ static void mgmt_set_powered_complete(struct hci_dev *hdev, void *data, int err) struct mgmt_mode *cp;
/* Make sure cmd still outstanding. */ - if (cmd != pending_find(MGMT_OP_SET_POWERED, hdev)) + if (err == -ECANCELED || + cmd != pending_find(MGMT_OP_SET_POWERED, hdev)) return;
cp = cmd->param; @@ -1350,7 +1351,13 @@ static void mgmt_set_powered_complete(struct hci_dev *hdev, void *data, int err) static int set_powered_sync(struct hci_dev *hdev, void *data) { struct mgmt_pending_cmd *cmd = data; - struct mgmt_mode *cp = cmd->param; + struct mgmt_mode *cp; + + /* Make sure cmd still outstanding. */ + if (cmd != pending_find(MGMT_OP_SET_POWERED, hdev)) + return -ECANCELED; + + cp = cmd->param;
BT_DBG("%s", hdev->name);
@@ -1510,7 +1517,8 @@ static void mgmt_set_discoverable_complete(struct hci_dev *hdev, void *data, bt_dev_dbg(hdev, "err %d", err);
/* Make sure cmd still outstanding. */ - if (cmd != pending_find(MGMT_OP_SET_DISCOVERABLE, hdev)) + if (err == -ECANCELED || + cmd != pending_find(MGMT_OP_SET_DISCOVERABLE, hdev)) return;
hci_dev_lock(hdev); @@ -1684,7 +1692,8 @@ static void mgmt_set_connectable_complete(struct hci_dev *hdev, void *data, bt_dev_dbg(hdev, "err %d", err);
/* Make sure cmd still outstanding. */ - if (cmd != pending_find(MGMT_OP_SET_CONNECTABLE, hdev)) + if (err == -ECANCELED || + cmd != pending_find(MGMT_OP_SET_CONNECTABLE, hdev)) return;
hci_dev_lock(hdev); @@ -1916,7 +1925,7 @@ static void set_ssp_complete(struct hci_dev *hdev, void *data, int err) bool changed;
/* Make sure cmd still outstanding. */ - if (cmd != pending_find(MGMT_OP_SET_SSP, hdev)) + if (err == -ECANCELED || cmd != pending_find(MGMT_OP_SET_SSP, hdev)) return;
if (err) { @@ -3782,7 +3791,8 @@ static void set_name_complete(struct hci_dev *hdev, void *data, int err)
bt_dev_dbg(hdev, "err %d", err);
- if (cmd != pending_find(MGMT_OP_SET_LOCAL_NAME, hdev)) + if (err == -ECANCELED || + cmd != pending_find(MGMT_OP_SET_LOCAL_NAME, hdev)) return;
if (status) { @@ -3957,7 +3967,8 @@ static void set_default_phy_complete(struct hci_dev *hdev, void *data, int err) struct sk_buff *skb = cmd->skb; u8 status = mgmt_status(err);
- if (cmd != pending_find(MGMT_OP_SET_PHY_CONFIGURATION, hdev)) + if (err == -ECANCELED || + cmd != pending_find(MGMT_OP_SET_PHY_CONFIGURATION, hdev)) return;
if (!status) { @@ -5848,13 +5859,16 @@ static void start_discovery_complete(struct hci_dev *hdev, void *data, int err) { struct mgmt_pending_cmd *cmd = data;
+ bt_dev_dbg(hdev, "err %d", err); + + if (err == -ECANCELED) + return; + if (cmd != pending_find(MGMT_OP_START_DISCOVERY, hdev) && cmd != pending_find(MGMT_OP_START_LIMITED_DISCOVERY, hdev) && cmd != pending_find(MGMT_OP_START_SERVICE_DISCOVERY, hdev)) return;
- bt_dev_dbg(hdev, "err %d", err); - mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, mgmt_status(err), cmd->param, 1); mgmt_pending_remove(cmd); @@ -6087,7 +6101,8 @@ static void stop_discovery_complete(struct hci_dev *hdev, void *data, int err) { struct mgmt_pending_cmd *cmd = data;
- if (cmd != pending_find(MGMT_OP_STOP_DISCOVERY, hdev)) + if (err == -ECANCELED || + cmd != pending_find(MGMT_OP_STOP_DISCOVERY, hdev)) return;
bt_dev_dbg(hdev, "err %d", err); @@ -8078,7 +8093,8 @@ static void read_local_oob_ext_data_complete(struct hci_dev *hdev, void *data, u8 status = mgmt_status(err); u16 eir_len;
- if (cmd != pending_find(MGMT_OP_READ_LOCAL_OOB_EXT_DATA, hdev)) + if (err == -ECANCELED || + cmd != pending_find(MGMT_OP_READ_LOCAL_OOB_EXT_DATA, hdev)) return;
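[Editor's sketch] The mgmt.c hunks above all add the same guard: when a queued command is aborted (for example while powering the controller down), its completion callback still runs, but with err set to -ECANCELED and with the pending command possibly already torn down, so the callback must bail out before touching cmd->param. A compact sketch of that shape, reusing the helpers visible in the diff ("demo_complete" itself is illustrative):

    static void demo_complete(struct hci_dev *hdev, void *data, int err)
    {
            struct mgmt_pending_cmd *cmd = data;

            /* Cancelled, or no longer the outstanding command: nothing to do. */
            if (err == -ECANCELED ||
                cmd != pending_find(MGMT_OP_SET_POWERED, hdev))
                    return;

            /* Only past this point is cmd->param safe to use. */
    }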
if (!status) { diff --git a/net/bluetooth/rfcomm/sock.c b/net/bluetooth/rfcomm/sock.c index f48250e3f2e1..8af1bf518321 100644 --- a/net/bluetooth/rfcomm/sock.c +++ b/net/bluetooth/rfcomm/sock.c @@ -729,7 +729,8 @@ static int rfcomm_sock_getsockopt_old(struct socket *sock, int optname, char __u struct sock *l2cap_sk; struct l2cap_conn *conn; struct rfcomm_conninfo cinfo; - int len, err = 0; + int err = 0; + size_t len; u32 opt;
BT_DBG("sk %p", sk); @@ -783,7 +784,7 @@ static int rfcomm_sock_getsockopt_old(struct socket *sock, int optname, char __u cinfo.hci_handle = conn->hcon->handle; memcpy(cinfo.dev_class, conn->hcon->dev_class, 3);
- len = min_t(unsigned int, len, sizeof(cinfo)); + len = min(len, sizeof(cinfo)); if (copy_to_user(optval, (char *) &cinfo, len)) err = -EFAULT;
@@ -802,7 +803,8 @@ static int rfcomm_sock_getsockopt(struct socket *sock, int level, int optname, c { struct sock *sk = sock->sk; struct bt_security sec; - int len, err = 0; + int err = 0; + size_t len;
BT_DBG("sk %p", sk);
@@ -827,7 +829,7 @@ static int rfcomm_sock_getsockopt(struct socket *sock, int level, int optname, c sec.level = rfcomm_pi(sk)->sec_level; sec.key_size = 0;
- len = min_t(unsigned int, len, sizeof(sec)); + len = min(len, sizeof(sec)); if (copy_to_user(optval, (char *) &sec, len)) err = -EFAULT;
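[Editor's sketch] The rfcomm getsockopt hunks switch the user-supplied length to size_t and replace min_t(unsigned int, ...) with plain min(). min_t() forces both operands through a cast, so a bogus signed length is silently reinterpreted before the comparison; keeping the length unsigned end to end lets min(), which expects matching types, do the clamp with no hidden conversion. Illustrative arithmetic only, not a claim about what userspace can reach here:

    int bad_len = -1;
    size_t limit = sizeof(struct bt_security);

    /* (unsigned int)-1 == 4294967295, so the "clamp" quietly returns 'limit'
     * even though the caller handed us a nonsensical length: */
    size_t n = min_t(unsigned int, bad_len, limit);

    /* With size_t end to end there is no hidden cast and min() type-checks: */
    size_t len = limit;
    size_t m = min(len, limit);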
diff --git a/net/core/filter.c b/net/core/filter.c index fb56567c551e..9a459213d283 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -2621,18 +2621,16 @@ BPF_CALL_2(bpf_msg_cork_bytes, struct sk_msg *, msg, u32, bytes)
static void sk_msg_reset_curr(struct sk_msg *msg) { - u32 i = msg->sg.start; - u32 len = 0; - - do { - len += sk_msg_elem(msg, i)->length; - sk_msg_iter_var_next(i); - if (len >= msg->sg.size) - break; - } while (i != msg->sg.end); + if (!msg->sg.size) { + msg->sg.curr = msg->sg.start; + msg->sg.copybreak = 0; + } else { + u32 i = msg->sg.end;
- msg->sg.curr = i; - msg->sg.copybreak = 0; + sk_msg_iter_var_prev(i); + msg->sg.curr = i; + msg->sg.copybreak = msg->sg.data[i].length; + } }
static const struct bpf_func_proto bpf_msg_cork_bytes_proto = { @@ -2795,7 +2793,7 @@ BPF_CALL_4(bpf_msg_push_data, struct sk_msg *, msg, u32, start, sk_msg_iter_var_next(i); } while (i != msg->sg.end);
- if (start >= offset + l) + if (start > offset + l) return -EINVAL;
space = MAX_MSG_FRAGS - sk_msg_elem_used(msg); @@ -2820,6 +2818,8 @@ BPF_CALL_4(bpf_msg_push_data, struct sk_msg *, msg, u32, start,
raw = page_address(page);
+ if (i == msg->sg.end) + sk_msg_iter_var_prev(i); psge = sk_msg_elem(msg, i); front = start - offset; back = psge->length - front; @@ -2836,7 +2836,13 @@ BPF_CALL_4(bpf_msg_push_data, struct sk_msg *, msg, u32, start, }
put_page(sg_page(psge)); - } else if (start - offset) { + new = i; + goto place_new; + } + + if (start - offset) { + if (i == msg->sg.end) + sk_msg_iter_var_prev(i); psge = sk_msg_elem(msg, i); rsge = sk_msg_elem_cpy(msg, i);
@@ -2847,39 +2853,44 @@ BPF_CALL_4(bpf_msg_push_data, struct sk_msg *, msg, u32, start, sk_msg_iter_var_next(i); sg_unmark_end(psge); sg_unmark_end(&rsge); - sk_msg_iter_next(msg, end); }
/* Slot(s) to place newly allocated data */ + sk_msg_iter_next(msg, end); new = i; + sk_msg_iter_var_next(i); + + if (i == msg->sg.end) { + if (!rsge.length) + goto place_new; + sk_msg_iter_next(msg, end); + goto place_new; + }
/* Shift one or two slots as needed */ - if (!copy) { - sge = sk_msg_elem_cpy(msg, i); + sge = sk_msg_elem_cpy(msg, new); + sg_unmark_end(&sge);
+ nsge = sk_msg_elem_cpy(msg, i); + if (rsge.length) { sk_msg_iter_var_next(i); - sg_unmark_end(&sge); + nnsge = sk_msg_elem_cpy(msg, i); sk_msg_iter_next(msg, end); + }
- nsge = sk_msg_elem_cpy(msg, i); + while (i != msg->sg.end) { + msg->sg.data[i] = sge; + sge = nsge; + sk_msg_iter_var_next(i); if (rsge.length) { - sk_msg_iter_var_next(i); + nsge = nnsge; nnsge = sk_msg_elem_cpy(msg, i); - } - - while (i != msg->sg.end) { - msg->sg.data[i] = sge; - sge = nsge; - sk_msg_iter_var_next(i); - if (rsge.length) { - nsge = nnsge; - nnsge = sk_msg_elem_cpy(msg, i); - } else { - nsge = sk_msg_elem_cpy(msg, i); - } + } else { + nsge = sk_msg_elem_cpy(msg, i); } }
+place_new: /* Place newly allocated data buffer */ sk_mem_charge(msg->sk, len); msg->sg.size += len; @@ -2908,8 +2919,10 @@ static const struct bpf_func_proto bpf_msg_push_data_proto = {
static void sk_msg_shift_left(struct sk_msg *msg, int i) { + struct scatterlist *sge = sk_msg_elem(msg, i); int prev;
+ put_page(sg_page(sge)); do { prev = i; sk_msg_iter_var_next(i); @@ -2946,6 +2959,9 @@ BPF_CALL_4(bpf_msg_pop_data, struct sk_msg *, msg, u32, start, if (unlikely(flags)) return -EINVAL;
+ if (unlikely(len == 0)) + return 0; + /* First find the starting scatterlist element */ i = msg->sg.start; do { @@ -2958,7 +2974,7 @@ BPF_CALL_4(bpf_msg_pop_data, struct sk_msg *, msg, u32, start, } while (i != msg->sg.end);
/* Bounds checks: start and pop must be inside message */ - if (start >= offset + l || last >= msg->sg.size) + if (start >= offset + l || last > msg->sg.size) return -EINVAL;
space = MAX_MSG_FRAGS - sk_msg_elem_used(msg); @@ -2987,12 +3003,12 @@ BPF_CALL_4(bpf_msg_pop_data, struct sk_msg *, msg, u32, start, */ if (start != offset) { struct scatterlist *nsge, *sge = sk_msg_elem(msg, i); - int a = start; + int a = start - offset; int b = sge->length - pop - a;
sk_msg_iter_var_next(i);
- if (pop < sge->length - a) { + if (b > 0) { if (space) { sge->length = a; sk_msg_shift_right(msg, i); @@ -3011,7 +3027,6 @@ BPF_CALL_4(bpf_msg_pop_data, struct sk_msg *, msg, u32, start, if (unlikely(!page)) return -ENOMEM;
- sge->length = a; orig = sg_page(sge); from = sg_virt(sge); to = page_address(page); @@ -3021,7 +3036,7 @@ BPF_CALL_4(bpf_msg_pop_data, struct sk_msg *, msg, u32, start, put_page(orig); } pop = 0; - } else if (pop >= sge->length - a) { + } else { pop -= (sge->length - a); sge->length = a; } @@ -3055,7 +3070,6 @@ BPF_CALL_4(bpf_msg_pop_data, struct sk_msg *, msg, u32, start, pop -= sge->length; sk_msg_shift_left(msg, i); } - sk_msg_iter_var_next(i); }
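[Editor's sketch] One of the bpf_msg_pop_data() fixes above replaces "int a = start" with "int a = start - offset". The element-walk loop leaves "offset" at the absolute byte where the current scatterlist element begins, so the number of bytes to keep in front of the popped range is relative to that element, not to the whole message. Worked numbers, matching the variables in the hunk:

    /* Element i covers absolute bytes [100, 160): offset == 100, l == 60.
     * The caller pops 10 bytes starting at absolute byte 130. */
    u32 offset = 100, l = 60, start = 130, pop = 10;
    u32 a = start - offset;         /* 30 bytes kept in front */
    u32 b = l - pop - a;            /* 20 bytes kept behind   */

    /* With the old "a = start" the code would try to keep 130 bytes of a
     * 60-byte element, corrupting the split. */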
sk_mem_uncharge(msg->sk, len - pop); diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c index 1cb954f2d39e..d2baa1af9df0 100644 --- a/net/core/netdev-genl.c +++ b/net/core/netdev-genl.c @@ -215,6 +215,7 @@ int netdev_nl_napi_get_doit(struct sk_buff *skb, struct genl_info *info) return -ENOMEM;
rtnl_lock(); + rcu_read_lock();
napi = napi_by_id(napi_id); if (napi) { @@ -224,6 +225,7 @@ int netdev_nl_napi_get_doit(struct sk_buff *skb, struct genl_info *info) err = -ENOENT; }
+ rcu_read_unlock(); rtnl_unlock();
if (err) diff --git a/net/core/skmsg.c b/net/core/skmsg.c index b1dcbd3be89e..e90fbab703b2 100644 --- a/net/core/skmsg.c +++ b/net/core/skmsg.c @@ -1117,9 +1117,9 @@ static void sk_psock_strp_data_ready(struct sock *sk) if (tls_sw_has_ctx_rx(sk)) { psock->saved_data_ready(sk); } else { - write_lock_bh(&sk->sk_callback_lock); + read_lock_bh(&sk->sk_callback_lock); strp_data_ready(&psock->strp); - write_unlock_bh(&sk->sk_callback_lock); + read_unlock_bh(&sk->sk_callback_lock); } } rcu_read_unlock(); diff --git a/net/hsr/hsr_device.c b/net/hsr/hsr_device.c index ebdfd5b64e17..f630d6645636 100644 --- a/net/hsr/hsr_device.c +++ b/net/hsr/hsr_device.c @@ -268,6 +268,8 @@ static struct sk_buff *hsr_init_skb(struct hsr_port *master) skb->dev = master->dev; skb->priority = TC_PRIO_CONTROL;
+ skb_reset_network_header(skb); + skb_reset_transport_header(skb); if (dev_hard_header(skb, skb->dev, ETH_P_PRP, hsr->sup_multicast_addr, skb->dev->dev_addr, skb->len) <= 0) @@ -275,8 +277,6 @@ static struct sk_buff *hsr_init_skb(struct hsr_port *master)
skb_reset_mac_header(skb); skb_reset_mac_len(skb); - skb_reset_network_header(skb); - skb_reset_transport_header(skb);
return skb; out: diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c index 2b698f8419fe..fe7947f77406 100644 --- a/net/ipv4/inet_connection_sock.c +++ b/net/ipv4/inet_connection_sock.c @@ -1189,7 +1189,7 @@ static void reqsk_timer_handler(struct timer_list *t)
drop: __inet_csk_reqsk_queue_drop(sk_listener, oreq, true); - reqsk_put(req); + reqsk_put(oreq); }
static bool reqsk_queue_hash_req(struct request_sock *req, diff --git a/net/ipv4/ipmr.c b/net/ipv4/ipmr.c index 089864c6a35e..449a2ac40bdc 100644 --- a/net/ipv4/ipmr.c +++ b/net/ipv4/ipmr.c @@ -137,7 +137,7 @@ static struct mr_table *ipmr_mr_table_iter(struct net *net, return ret; }
-static struct mr_table *ipmr_get_table(struct net *net, u32 id) +static struct mr_table *__ipmr_get_table(struct net *net, u32 id) { struct mr_table *mrt;
@@ -148,6 +148,16 @@ static struct mr_table *ipmr_get_table(struct net *net, u32 id) return NULL; }
+static struct mr_table *ipmr_get_table(struct net *net, u32 id) +{ + struct mr_table *mrt; + + rcu_read_lock(); + mrt = __ipmr_get_table(net, id); + rcu_read_unlock(); + return mrt; +} + static int ipmr_fib_lookup(struct net *net, struct flowi4 *flp4, struct mr_table **mrt) { @@ -189,7 +199,7 @@ static int ipmr_rule_action(struct fib_rule *rule, struct flowi *flp,
arg->table = fib_rule_get_table(rule, arg);
- mrt = ipmr_get_table(rule->fr_net, arg->table); + mrt = __ipmr_get_table(rule->fr_net, arg->table); if (!mrt) return -EAGAIN; res->mrt = mrt; @@ -315,6 +325,8 @@ static struct mr_table *ipmr_get_table(struct net *net, u32 id) return net->ipv4.mrt; }
+#define __ipmr_get_table ipmr_get_table + static int ipmr_fib_lookup(struct net *net, struct flowi4 *flp4, struct mr_table **mrt) { @@ -403,7 +415,7 @@ static struct mr_table *ipmr_new_table(struct net *net, u32 id) if (id != RT_TABLE_DEFAULT && id >= 1000000000) return ERR_PTR(-EINVAL);
- mrt = ipmr_get_table(net, id); + mrt = __ipmr_get_table(net, id); if (mrt) return mrt;
@@ -1374,7 +1386,7 @@ int ip_mroute_setsockopt(struct sock *sk, int optname, sockptr_t optval, goto out_unlock; }
- mrt = ipmr_get_table(net, raw_sk(sk)->ipmr_table ? : RT_TABLE_DEFAULT); + mrt = __ipmr_get_table(net, raw_sk(sk)->ipmr_table ? : RT_TABLE_DEFAULT); if (!mrt) { ret = -ENOENT; goto out_unlock; @@ -2262,11 +2274,13 @@ int ipmr_get_route(struct net *net, struct sk_buff *skb, struct mr_table *mrt; int err;
- mrt = ipmr_get_table(net, RT_TABLE_DEFAULT); - if (!mrt) + rcu_read_lock(); + mrt = __ipmr_get_table(net, RT_TABLE_DEFAULT); + if (!mrt) { + rcu_read_unlock(); return -ENOENT; + }
- rcu_read_lock(); cache = ipmr_cache_find(mrt, saddr, daddr); if (!cache && skb->dev) { int vif = ipmr_find_vif(mrt, skb->dev); @@ -2550,7 +2564,7 @@ static int ipmr_rtm_getroute(struct sk_buff *in_skb, struct nlmsghdr *nlh, grp = tb[RTA_DST] ? nla_get_in_addr(tb[RTA_DST]) : 0; tableid = tb[RTA_TABLE] ? nla_get_u32(tb[RTA_TABLE]) : 0;
- mrt = ipmr_get_table(net, tableid ? tableid : RT_TABLE_DEFAULT); + mrt = __ipmr_get_table(net, tableid ? tableid : RT_TABLE_DEFAULT); if (!mrt) { err = -ENOENT; goto errout_free; @@ -2604,7 +2618,7 @@ static int ipmr_rtm_dumproute(struct sk_buff *skb, struct netlink_callback *cb) if (filter.table_id) { struct mr_table *mrt;
- mrt = ipmr_get_table(sock_net(skb->sk), filter.table_id); + mrt = __ipmr_get_table(sock_net(skb->sk), filter.table_id); if (!mrt) { if (rtnl_msg_family(cb->nlh) != RTNL_FAMILY_IPMR) return skb->len; @@ -2712,7 +2726,7 @@ static int rtm_to_ipmr_mfcc(struct net *net, struct nlmsghdr *nlh, break; } } - mrt = ipmr_get_table(net, tblid); + mrt = __ipmr_get_table(net, tblid); if (!mrt) { ret = -ENOENT; goto out; @@ -2920,13 +2934,15 @@ static void *ipmr_vif_seq_start(struct seq_file *seq, loff_t *pos) struct net *net = seq_file_net(seq); struct mr_table *mrt;
- mrt = ipmr_get_table(net, RT_TABLE_DEFAULT); - if (!mrt) + rcu_read_lock(); + mrt = __ipmr_get_table(net, RT_TABLE_DEFAULT); + if (!mrt) { + rcu_read_unlock(); return ERR_PTR(-ENOENT); + }
iter->mrt = mrt;
- rcu_read_lock(); return mr_vif_seq_start(seq, pos); }
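[Editor's sketch] The ipmr rework splits table lookup in two: __ipmr_get_table() assumes the caller is already inside an RCU read-side section (or holds RTNL, per the lockdep annotation), while ipmr_get_table() takes and drops the read lock itself; the /proc seq_file start and ipmr_get_route() paths now take the lock before the lookup and keep it held while the table is used. A generic sketch of that split ("demo" names and the list head are illustrative):

    struct demo_table {
            struct list_head        list;
            u32                     id;
    };

    static struct demo_table *__demo_get_table(struct list_head *tables, u32 id)
    {
            struct demo_table *t;

            /* Caller must be in rcu_read_lock() or hold the writer-side lock. */
            list_for_each_entry_rcu(t, tables, list, lockdep_rtnl_is_held()) {
                    if (t->id == id)
                            return t;
            }
            return NULL;
    }

    static struct demo_table *demo_get_table(struct list_head *tables, u32 id)
    {
            struct demo_table *t;

            rcu_read_lock();
            t = __demo_get_table(tables, id);
            rcu_read_unlock();
            return t;
    }

Handing the pointer back after rcu_read_unlock() is only valid when something else pins the table's lifetime (RTNL, in the ipmr case); callers that rely purely on RCU keep the lock across the use, which is exactly what the converted seq_file and route-lookup paths do.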
diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c index 94dceac52884..01115e1a34cb 100644 --- a/net/ipv6/addrconf.c +++ b/net/ipv6/addrconf.c @@ -2570,6 +2570,24 @@ static struct inet6_dev *addrconf_add_dev(struct net_device *dev) return idev; }
+static void delete_tempaddrs(struct inet6_dev *idev, + struct inet6_ifaddr *ifp) +{ + struct inet6_ifaddr *ift, *tmp; + + write_lock_bh(&idev->lock); + list_for_each_entry_safe(ift, tmp, &idev->tempaddr_list, tmp_list) { + if (ift->ifpub != ifp) + continue; + + in6_ifa_hold(ift); + write_unlock_bh(&idev->lock); + ipv6_del_addr(ift); + write_lock_bh(&idev->lock); + } + write_unlock_bh(&idev->lock); +} + static void manage_tempaddrs(struct inet6_dev *idev, struct inet6_ifaddr *ifp, __u32 valid_lft, __u32 prefered_lft, @@ -3124,11 +3142,12 @@ static int inet6_addr_del(struct net *net, int ifindex, u32 ifa_flags, in6_ifa_hold(ifp); read_unlock_bh(&idev->lock);
- if (!(ifp->flags & IFA_F_TEMPORARY) && - (ifa_flags & IFA_F_MANAGETEMPADDR)) - manage_tempaddrs(idev, ifp, 0, 0, false, - jiffies); ipv6_del_addr(ifp); + + if (!(ifp->flags & IFA_F_TEMPORARY) && + (ifp->flags & IFA_F_MANAGETEMPADDR)) + delete_tempaddrs(idev, ifp); + addrconf_verify_rtnl(net); if (ipv6_addr_is_multicast(pfx)) { ipv6_mc_config(net->ipv6.mc_autojoin_sk, @@ -4952,14 +4971,12 @@ static int inet6_addr_modify(struct net *net, struct inet6_ifaddr *ifp, }
if (was_managetempaddr || ifp->flags & IFA_F_MANAGETEMPADDR) { - if (was_managetempaddr && - !(ifp->flags & IFA_F_MANAGETEMPADDR)) { - cfg->valid_lft = 0; - cfg->preferred_lft = 0; - } - manage_tempaddrs(ifp->idev, ifp, cfg->valid_lft, - cfg->preferred_lft, !was_managetempaddr, - jiffies); + if (was_managetempaddr && !(ifp->flags & IFA_F_MANAGETEMPADDR)) + delete_tempaddrs(ifp->idev, ifp); + else + manage_tempaddrs(ifp->idev, ifp, cfg->valid_lft, + cfg->preferred_lft, !was_managetempaddr, + jiffies); }
addrconf_verify_rtnl(net); diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c index eb111d20615c..9a1c59275a10 100644 --- a/net/ipv6/ip6_fib.c +++ b/net/ipv6/ip6_fib.c @@ -1190,8 +1190,8 @@ static int fib6_add_rt2node(struct fib6_node *fn, struct fib6_info *rt, while (sibling) { if (sibling->fib6_metric == rt->fib6_metric && rt6_qualify_for_ecmp(sibling)) { - list_add_tail(&rt->fib6_siblings, - &sibling->fib6_siblings); + list_add_tail_rcu(&rt->fib6_siblings, + &sibling->fib6_siblings); break; } sibling = rcu_dereference_protected(sibling->fib6_next, @@ -1252,7 +1252,7 @@ static int fib6_add_rt2node(struct fib6_node *fn, struct fib6_info *rt, fib6_siblings) sibling->fib6_nsiblings--; rt->fib6_nsiblings = 0; - list_del_init(&rt->fib6_siblings); + list_del_rcu(&rt->fib6_siblings); rt6_multipath_rebalance(next_sibling); return err; } @@ -1970,7 +1970,7 @@ static void fib6_del_route(struct fib6_table *table, struct fib6_node *fn, &rt->fib6_siblings, fib6_siblings) sibling->fib6_nsiblings--; rt->fib6_nsiblings = 0; - list_del_init(&rt->fib6_siblings); + list_del_rcu(&rt->fib6_siblings); rt6_multipath_rebalance(next_sibling); }
diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c index 2ce4ae0d8dc3..d5057401701c 100644 --- a/net/ipv6/ip6mr.c +++ b/net/ipv6/ip6mr.c @@ -125,7 +125,7 @@ static struct mr_table *ip6mr_mr_table_iter(struct net *net, return ret; }
-static struct mr_table *ip6mr_get_table(struct net *net, u32 id) +static struct mr_table *__ip6mr_get_table(struct net *net, u32 id) { struct mr_table *mrt;
@@ -136,6 +136,16 @@ static struct mr_table *ip6mr_get_table(struct net *net, u32 id) return NULL; }
+static struct mr_table *ip6mr_get_table(struct net *net, u32 id) +{ + struct mr_table *mrt; + + rcu_read_lock(); + mrt = __ip6mr_get_table(net, id); + rcu_read_unlock(); + return mrt; +} + static int ip6mr_fib_lookup(struct net *net, struct flowi6 *flp6, struct mr_table **mrt) { @@ -177,7 +187,7 @@ static int ip6mr_rule_action(struct fib_rule *rule, struct flowi *flp,
arg->table = fib_rule_get_table(rule, arg);
- mrt = ip6mr_get_table(rule->fr_net, arg->table); + mrt = __ip6mr_get_table(rule->fr_net, arg->table); if (!mrt) return -EAGAIN; res->mrt = mrt; @@ -304,6 +314,8 @@ static struct mr_table *ip6mr_get_table(struct net *net, u32 id) return net->ipv6.mrt6; }
+#define __ip6mr_get_table ip6mr_get_table + static int ip6mr_fib_lookup(struct net *net, struct flowi6 *flp6, struct mr_table **mrt) { @@ -382,7 +394,7 @@ static struct mr_table *ip6mr_new_table(struct net *net, u32 id) { struct mr_table *mrt;
- mrt = ip6mr_get_table(net, id); + mrt = __ip6mr_get_table(net, id); if (mrt) return mrt;
@@ -411,13 +423,15 @@ static void *ip6mr_vif_seq_start(struct seq_file *seq, loff_t *pos) struct net *net = seq_file_net(seq); struct mr_table *mrt;
- mrt = ip6mr_get_table(net, RT6_TABLE_DFLT); - if (!mrt) + rcu_read_lock(); + mrt = __ip6mr_get_table(net, RT6_TABLE_DFLT); + if (!mrt) { + rcu_read_unlock(); return ERR_PTR(-ENOENT); + }
iter->mrt = mrt;
- rcu_read_lock(); return mr_vif_seq_start(seq, pos); }
@@ -2275,11 +2289,13 @@ int ip6mr_get_route(struct net *net, struct sk_buff *skb, struct rtmsg *rtm, struct mfc6_cache *cache; struct rt6_info *rt = dst_rt6_info(skb_dst(skb));
- mrt = ip6mr_get_table(net, RT6_TABLE_DFLT); - if (!mrt) + rcu_read_lock(); + mrt = __ip6mr_get_table(net, RT6_TABLE_DFLT); + if (!mrt) { + rcu_read_unlock(); return -ENOENT; + }
- rcu_read_lock(); cache = ip6mr_cache_find(mrt, &rt->rt6i_src.addr, &rt->rt6i_dst.addr); if (!cache && skb->dev) { int vif = ip6mr_find_vif(mrt, skb->dev); @@ -2559,7 +2575,7 @@ static int ip6mr_rtm_getroute(struct sk_buff *in_skb, struct nlmsghdr *nlh, grp = nla_get_in6_addr(tb[RTA_DST]); tableid = tb[RTA_TABLE] ? nla_get_u32(tb[RTA_TABLE]) : 0;
- mrt = ip6mr_get_table(net, tableid ?: RT_TABLE_DEFAULT); + mrt = __ip6mr_get_table(net, tableid ?: RT_TABLE_DEFAULT); if (!mrt) { NL_SET_ERR_MSG_MOD(extack, "MR table does not exist"); return -ENOENT; @@ -2606,7 +2622,7 @@ static int ip6mr_rtm_dumproute(struct sk_buff *skb, struct netlink_callback *cb) if (filter.table_id) { struct mr_table *mrt;
- mrt = ip6mr_get_table(sock_net(skb->sk), filter.table_id); + mrt = __ip6mr_get_table(sock_net(skb->sk), filter.table_id); if (!mrt) { if (rtnl_msg_family(cb->nlh) != RTNL_FAMILY_IP6MR) return skb->len; diff --git a/net/ipv6/route.c b/net/ipv6/route.c index b4251915585f..cff4fbbc66ef 100644 --- a/net/ipv6/route.c +++ b/net/ipv6/route.c @@ -374,6 +374,7 @@ static void ip6_dst_ifdown(struct dst_entry *dst, struct net_device *dev) { struct rt6_info *rt = dst_rt6_info(dst); struct inet6_dev *idev = rt->rt6i_idev; + struct fib6_info *from;
if (idev && idev->dev != blackhole_netdev) { struct inet6_dev *blackhole_idev = in6_dev_get(blackhole_netdev); @@ -383,6 +384,8 @@ static void ip6_dst_ifdown(struct dst_entry *dst, struct net_device *dev) in6_dev_put(idev); } } + from = unrcu_pointer(xchg(&rt->from, NULL)); + fib6_info_release(from); }
static bool __rt6_check_expired(const struct rt6_info *rt) @@ -413,8 +416,8 @@ void fib6_select_path(const struct net *net, struct fib6_result *res, struct flowi6 *fl6, int oif, bool have_oif_match, const struct sk_buff *skb, int strict) { - struct fib6_info *sibling, *next_sibling; struct fib6_info *match = res->f6i; + struct fib6_info *sibling;
if (!match->nh && (!match->fib6_nsiblings || have_oif_match)) goto out; @@ -440,8 +443,8 @@ void fib6_select_path(const struct net *net, struct fib6_result *res, if (fl6->mp_hash <= atomic_read(&match->fib6_nh->fib_nh_upper_bound)) goto out;
- list_for_each_entry_safe(sibling, next_sibling, &match->fib6_siblings, - fib6_siblings) { + list_for_each_entry_rcu(sibling, &match->fib6_siblings, + fib6_siblings) { const struct fib6_nh *nh = sibling->fib6_nh; int nh_upper_bound;
@@ -1455,7 +1458,6 @@ static DEFINE_SPINLOCK(rt6_exception_lock); static void rt6_remove_exception(struct rt6_exception_bucket *bucket, struct rt6_exception *rt6_ex) { - struct fib6_info *from; struct net *net;
if (!bucket || !rt6_ex) @@ -1467,8 +1469,6 @@ static void rt6_remove_exception(struct rt6_exception_bucket *bucket, /* purge completely the exception to allow releasing the held resources: * some [sk] cache may keep the dst around for unlimited time */ - from = unrcu_pointer(xchg(&rt6_ex->rt6i->from, NULL)); - fib6_info_release(from); dst_dev_put(&rt6_ex->rt6i->dst);
hlist_del_rcu(&rt6_ex->hlist); @@ -5195,14 +5195,18 @@ static void ip6_route_mpath_notify(struct fib6_info *rt, * nexthop. Since sibling routes are always added at the end of * the list, find the first sibling of the last route appended */ + rcu_read_lock(); + if ((nlflags & NLM_F_APPEND) && rt_last && rt_last->fib6_nsiblings) { - rt = list_first_entry(&rt_last->fib6_siblings, - struct fib6_info, - fib6_siblings); + rt = list_first_or_null_rcu(&rt_last->fib6_siblings, + struct fib6_info, + fib6_siblings); }
if (rt) inet6_rt_notify(RTM_NEWROUTE, rt, info, nlflags); + + rcu_read_unlock(); }
static bool ip6_route_mpath_should_notify(const struct fib6_info *rt) @@ -5547,17 +5551,21 @@ static size_t rt6_nlmsg_size(struct fib6_info *f6i) nexthop_for_each_fib6_nh(f6i->nh, rt6_nh_nlmsg_size, &nexthop_len); } else { - struct fib6_info *sibling, *next_sibling; struct fib6_nh *nh = f6i->fib6_nh; + struct fib6_info *sibling;
nexthop_len = 0; if (f6i->fib6_nsiblings) { rt6_nh_nlmsg_size(nh, &nexthop_len);
- list_for_each_entry_safe(sibling, next_sibling, - &f6i->fib6_siblings, fib6_siblings) { + rcu_read_lock(); + + list_for_each_entry_rcu(sibling, &f6i->fib6_siblings, + fib6_siblings) { rt6_nh_nlmsg_size(sibling->fib6_nh, &nexthop_len); } + + rcu_read_unlock(); } nexthop_len += lwtunnel_get_encap_size(nh->fib_nh_lws); } @@ -5721,7 +5729,7 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb, lwtunnel_fill_encap(skb, dst->lwtstate, RTA_ENCAP, RTA_ENCAP_TYPE) < 0) goto nla_put_failure; } else if (rt->fib6_nsiblings) { - struct fib6_info *sibling, *next_sibling; + struct fib6_info *sibling; struct nlattr *mp;
mp = nla_nest_start_noflag(skb, RTA_MULTIPATH); @@ -5733,14 +5741,21 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb, 0) < 0) goto nla_put_failure;
- list_for_each_entry_safe(sibling, next_sibling, - &rt->fib6_siblings, fib6_siblings) { + rcu_read_lock(); + + list_for_each_entry_rcu(sibling, &rt->fib6_siblings, + fib6_siblings) { if (fib_add_nexthop(skb, &sibling->fib6_nh->nh_common, sibling->fib6_nh->fib_nh_weight, - AF_INET6, 0) < 0) + AF_INET6, 0) < 0) { + rcu_read_unlock(); + goto nla_put_failure; + } }
+ rcu_read_unlock(); + nla_nest_end(skb, mp); } else if (rt->nh) { if (nla_put_u32(skb, RTA_NH_ID, rt->nh->id)) @@ -6177,7 +6192,7 @@ void inet6_rt_notify(int event, struct fib6_info *rt, struct nl_info *info, err = -ENOBUFS; seq = info->nlh ? info->nlh->nlmsg_seq : 0;
- skb = nlmsg_new(rt6_nlmsg_size(rt), gfp_any()); + skb = nlmsg_new(rt6_nlmsg_size(rt), GFP_ATOMIC); if (!skb) goto errout;
@@ -6190,7 +6205,7 @@ void inet6_rt_notify(int event, struct fib6_info *rt, struct nl_info *info, goto errout; } rtnl_notify(skb, net, info->portid, RTNLGRP_IPV6_ROUTE, - info->nlh, gfp_any()); + info->nlh, GFP_ATOMIC); return; errout: rtnl_set_sk_err(net, RTNLGRP_IPV6_ROUTE, err); diff --git a/net/iucv/af_iucv.c b/net/iucv/af_iucv.c index c00323fa9eb6..7929df08d4e0 100644 --- a/net/iucv/af_iucv.c +++ b/net/iucv/af_iucv.c @@ -1236,7 +1236,9 @@ static int iucv_sock_recvmsg(struct socket *sock, struct msghdr *msg, return -EOPNOTSUPP;
/* receive/dequeue next skb: - * the function understands MSG_PEEK and, thus, does not dequeue skb */ + * the function understands MSG_PEEK and, thus, does not dequeue skb + * only refcount is increased. + */ skb = skb_recv_datagram(sk, flags, &err); if (!skb) { if (sk->sk_shutdown & RCV_SHUTDOWN) @@ -1252,9 +1254,8 @@ static int iucv_sock_recvmsg(struct socket *sock, struct msghdr *msg,
cskb = skb; if (skb_copy_datagram_msg(cskb, offset, msg, copied)) { - if (!(flags & MSG_PEEK)) - skb_queue_head(&sk->sk_receive_queue, skb); - return -EFAULT; + err = -EFAULT; + goto err_out; }
/* SOCK_SEQPACKET: set MSG_TRUNC if recv buf size is too small */ @@ -1271,11 +1272,8 @@ static int iucv_sock_recvmsg(struct socket *sock, struct msghdr *msg, err = put_cmsg(msg, SOL_IUCV, SCM_IUCV_TRGCLS, sizeof(IUCV_SKB_CB(skb)->class), (void *)&IUCV_SKB_CB(skb)->class); - if (err) { - if (!(flags & MSG_PEEK)) - skb_queue_head(&sk->sk_receive_queue, skb); - return err; - } + if (err) + goto err_out;
/* Mark read part of skb as used */ if (!(flags & MSG_PEEK)) { @@ -1331,8 +1329,18 @@ static int iucv_sock_recvmsg(struct socket *sock, struct msghdr *msg, /* SOCK_SEQPACKET: return real length if MSG_TRUNC is set */ if (sk->sk_type == SOCK_SEQPACKET && (flags & MSG_TRUNC)) copied = rlen; + if (flags & MSG_PEEK) + skb_unref(skb);
return copied; + +err_out: + if (!(flags & MSG_PEEK)) + skb_queue_head(&sk->sk_receive_queue, skb); + else + skb_unref(skb); + + return err; }
static inline __poll_t iucv_accept_poll(struct sock *parent) diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c index 3eec23ac5ab1..369a2f2e459c 100644 --- a/net/l2tp/l2tp_core.c +++ b/net/l2tp/l2tp_core.c @@ -1870,15 +1870,31 @@ static __net_exit void l2tp_pre_exit_net(struct net *net) } }
+static int l2tp_idr_item_unexpected(int id, void *p, void *data) +{ + const char *idr_name = data; + + pr_err("l2tp: %s IDR not empty at net %d exit\n", idr_name, id); + WARN_ON_ONCE(1); + return 1; +} + static __net_exit void l2tp_exit_net(struct net *net) { struct l2tp_net *pn = l2tp_pernet(net);
- WARN_ON_ONCE(!idr_is_empty(&pn->l2tp_v2_session_idr)); + /* Our per-net IDRs should be empty. Check that is so, to + * help catch cleanup races or refcnt leaks. + */ + idr_for_each(&pn->l2tp_v2_session_idr, l2tp_idr_item_unexpected, + "v2_session"); + idr_for_each(&pn->l2tp_v3_session_idr, l2tp_idr_item_unexpected, + "v3_session"); + idr_for_each(&pn->l2tp_tunnel_idr, l2tp_idr_item_unexpected, + "tunnel"); + idr_destroy(&pn->l2tp_v2_session_idr); - WARN_ON_ONCE(!idr_is_empty(&pn->l2tp_v3_session_idr)); idr_destroy(&pn->l2tp_v3_session_idr); - WARN_ON_ONCE(!idr_is_empty(&pn->l2tp_tunnel_idr)); idr_destroy(&pn->l2tp_tunnel_idr); }
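[Editor's sketch] Rather than a single WARN_ON_ONCE(!idr_is_empty(...)), the l2tp exit path now walks each per-net IDR and complains about every entry it finds, which identifies exactly what leaked. idr_for_each() invokes the callback as fn(id, ptr, data) and stops as soon as it returns non-zero. A sketch of the same teardown check with illustrative "demo" names:

    static int demo_idr_leak(int id, void *ptr, void *data)
    {
            const char *what = data;

            pr_err("demo: %s id %d (%p) still allocated at exit\n", what, id, ptr);
            WARN_ON_ONCE(1);
            return 0;       /* keep walking so every leaked entry is reported */
    }

    static void demo_net_exit(struct idr *sessions)
    {
            idr_for_each(sessions, demo_idr_leak, "session");
            idr_destroy(sessions);
    }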
diff --git a/net/llc/af_llc.c b/net/llc/af_llc.c index 4eb52add7103..0259cde394ba 100644 --- a/net/llc/af_llc.c +++ b/net/llc/af_llc.c @@ -1098,7 +1098,7 @@ static int llc_ui_setsockopt(struct socket *sock, int level, int optname, lock_sock(sk); if (unlikely(level != SOL_LLC || optlen != sizeof(int))) goto out; - rc = copy_from_sockptr(&opt, optval, sizeof(opt)); + rc = copy_safe_from_sockptr(&opt, sizeof(opt), optval, optlen); if (rc) goto out; rc = -EINVAL; diff --git a/net/netfilter/ipset/ip_set_bitmap_ip.c b/net/netfilter/ipset/ip_set_bitmap_ip.c index e4fa00abde6a..5988b9bb9029 100644 --- a/net/netfilter/ipset/ip_set_bitmap_ip.c +++ b/net/netfilter/ipset/ip_set_bitmap_ip.c @@ -163,11 +163,8 @@ bitmap_ip_uadt(struct ip_set *set, struct nlattr *tb[], ret = ip_set_get_hostipaddr4(tb[IPSET_ATTR_IP_TO], &ip_to); if (ret) return ret; - if (ip > ip_to) { + if (ip > ip_to) swap(ip, ip_to); - if (ip < map->first_ip) - return -IPSET_ERR_BITMAP_RANGE; - } } else if (tb[IPSET_ATTR_CIDR]) { u8 cidr = nla_get_u8(tb[IPSET_ATTR_CIDR]);
@@ -178,7 +175,7 @@ bitmap_ip_uadt(struct ip_set *set, struct nlattr *tb[], ip_to = ip; }
- if (ip_to > map->last_ip) + if (ip < map->first_ip || ip_to > map->last_ip) return -IPSET_ERR_BITMAP_RANGE;
for (; !before(ip_to, ip); ip += map->hosts) { diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c index 588a2757986c..4a137afaf0b8 100644 --- a/net/netfilter/nf_tables_api.c +++ b/net/netfilter/nf_tables_api.c @@ -3295,25 +3295,37 @@ int nft_expr_inner_parse(const struct nft_ctx *ctx, const struct nlattr *nla, if (!tb[NFTA_EXPR_DATA] || !tb[NFTA_EXPR_NAME]) return -EINVAL;
+ rcu_read_lock(); + type = __nft_expr_type_get(ctx->family, tb[NFTA_EXPR_NAME]); - if (!type) - return -ENOENT; + if (!type) { + err = -ENOENT; + goto out_unlock; + }
- if (!type->inner_ops) - return -EOPNOTSUPP; + if (!type->inner_ops) { + err = -EOPNOTSUPP; + goto out_unlock; + }
err = nla_parse_nested_deprecated(info->tb, type->maxattr, tb[NFTA_EXPR_DATA], type->policy, NULL); if (err < 0) - goto err_nla_parse; + goto out_unlock;
info->attr = nla; info->ops = type->inner_ops;
+ /* No module reference will be taken on type->owner. + * Presence of type->inner_ops implies that the expression + * is builtin, so it cannot go away. + */ + rcu_read_unlock(); return 0;
-err_nla_parse: +out_unlock: + rcu_read_unlock(); return err; }
@@ -3412,13 +3424,15 @@ void nft_expr_destroy(const struct nft_ctx *ctx, struct nft_expr *expr) * Rules */
-static struct nft_rule *__nft_rule_lookup(const struct nft_chain *chain, +static struct nft_rule *__nft_rule_lookup(const struct net *net, + const struct nft_chain *chain, u64 handle) { struct nft_rule *rule;
// FIXME: this sucks - list_for_each_entry_rcu(rule, &chain->rules, list) { + list_for_each_entry_rcu(rule, &chain->rules, list, + lockdep_commit_lock_is_held(net)) { if (handle == rule->handle) return rule; } @@ -3426,13 +3440,14 @@ static struct nft_rule *__nft_rule_lookup(const struct nft_chain *chain, return ERR_PTR(-ENOENT); }
-static struct nft_rule *nft_rule_lookup(const struct nft_chain *chain, +static struct nft_rule *nft_rule_lookup(const struct net *net, + const struct nft_chain *chain, const struct nlattr *nla) { if (nla == NULL) return ERR_PTR(-EINVAL);
- return __nft_rule_lookup(chain, be64_to_cpu(nla_get_be64(nla))); + return __nft_rule_lookup(net, chain, be64_to_cpu(nla_get_be64(nla))); }
static const struct nla_policy nft_rule_policy[NFTA_RULE_MAX + 1] = { @@ -3733,7 +3748,7 @@ static int nf_tables_dump_rules_done(struct netlink_callback *cb) return 0; }
-/* called with rcu_read_lock held */ +/* Caller must hold rcu read lock or transaction mutex */ static struct sk_buff * nf_tables_getrule_single(u32 portid, const struct nfnl_info *info, const struct nlattr * const nla[], bool reset) @@ -3760,7 +3775,7 @@ nf_tables_getrule_single(u32 portid, const struct nfnl_info *info, return ERR_CAST(chain); }
- rule = nft_rule_lookup(chain, nla[NFTA_RULE_HANDLE]); + rule = nft_rule_lookup(net, chain, nla[NFTA_RULE_HANDLE]); if (IS_ERR(rule)) { NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_HANDLE]); return ERR_CAST(rule); @@ -4058,7 +4073,7 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info,
if (nla[NFTA_RULE_HANDLE]) { handle = be64_to_cpu(nla_get_be64(nla[NFTA_RULE_HANDLE])); - rule = __nft_rule_lookup(chain, handle); + rule = __nft_rule_lookup(net, chain, handle); if (IS_ERR(rule)) { NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_HANDLE]); return PTR_ERR(rule); @@ -4080,7 +4095,7 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info,
if (nla[NFTA_RULE_POSITION]) { pos_handle = be64_to_cpu(nla_get_be64(nla[NFTA_RULE_POSITION])); - old_rule = __nft_rule_lookup(chain, pos_handle); + old_rule = __nft_rule_lookup(net, chain, pos_handle); if (IS_ERR(old_rule)) { NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_POSITION]); return PTR_ERR(old_rule); @@ -4297,7 +4312,7 @@ static int nf_tables_delrule(struct sk_buff *skb, const struct nfnl_info *info,
if (chain) { if (nla[NFTA_RULE_HANDLE]) { - rule = nft_rule_lookup(chain, nla[NFTA_RULE_HANDLE]); + rule = nft_rule_lookup(info->net, chain, nla[NFTA_RULE_HANDLE]); if (IS_ERR(rule)) { if (PTR_ERR(rule) == -ENOENT && NFNL_MSG_TYPE(info->nlh->nlmsg_type) == NFT_MSG_DESTROYRULE) @@ -7790,9 +7805,7 @@ static int nf_tables_updobj(const struct nft_ctx *ctx, struct nft_trans *trans; int err = -ENOMEM;
- if (!try_module_get(type->owner)) - return -ENOENT; - + /* caller must have obtained type->owner reference. */ trans = nft_trans_alloc(ctx, NFT_MSG_NEWOBJ, sizeof(struct nft_trans_obj)); if (!trans) @@ -7860,15 +7873,16 @@ static int nf_tables_newobj(struct sk_buff *skb, const struct nfnl_info *info, if (info->nlh->nlmsg_flags & NLM_F_REPLACE) return -EOPNOTSUPP;
- type = __nft_obj_type_get(objtype, family); - if (WARN_ON_ONCE(!type)) - return -ENOENT; - if (!obj->ops->update) return 0;
+ type = nft_obj_type_get(net, objtype, family); + if (WARN_ON_ONCE(IS_ERR(type))) + return PTR_ERR(type); + nft_ctx_init(&ctx, net, skb, info->nlh, family, table, NULL, nla);
+ /* type->owner reference is put when transaction object is released. */ return nf_tables_updobj(&ctx, type, nla[NFTA_OBJ_DATA], obj); }
@@ -8104,7 +8118,7 @@ static int nf_tables_dump_obj_done(struct netlink_callback *cb) return 0; }
-/* called with rcu_read_lock held */ +/* Caller must hold rcu read lock or transaction mutex */ static struct sk_buff * nf_tables_getobj_single(u32 portid, const struct nfnl_info *info, const struct nlattr * const nla[], bool reset) diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c index f84aad420d44..775d707ec708 100644 --- a/net/netlink/af_netlink.c +++ b/net/netlink/af_netlink.c @@ -2176,9 +2176,14 @@ netlink_ack_tlv_len(struct netlink_sock *nlk, int err, return tlvlen; }
+static bool nlmsg_check_in_payload(const struct nlmsghdr *nlh, const void *addr) +{ + return !WARN_ON(addr < nlmsg_data(nlh) || + addr - (const void *) nlh >= nlh->nlmsg_len); +} + static void -netlink_ack_tlv_fill(struct sk_buff *in_skb, struct sk_buff *skb, - const struct nlmsghdr *nlh, int err, +netlink_ack_tlv_fill(struct sk_buff *skb, const struct nlmsghdr *nlh, int err, const struct netlink_ext_ack *extack) { if (extack->_msg) @@ -2190,9 +2195,7 @@ netlink_ack_tlv_fill(struct sk_buff *in_skb, struct sk_buff *skb, if (!err) return;
- if (extack->bad_attr && - !WARN_ON((u8 *)extack->bad_attr < in_skb->data || - (u8 *)extack->bad_attr >= in_skb->data + in_skb->len)) + if (extack->bad_attr && nlmsg_check_in_payload(nlh, extack->bad_attr)) WARN_ON(nla_put_u32(skb, NLMSGERR_ATTR_OFFS, (u8 *)extack->bad_attr - (const u8 *)nlh)); if (extack->policy) @@ -2201,9 +2204,7 @@ netlink_ack_tlv_fill(struct sk_buff *in_skb, struct sk_buff *skb, if (extack->miss_type) WARN_ON(nla_put_u32(skb, NLMSGERR_ATTR_MISS_TYPE, extack->miss_type)); - if (extack->miss_nest && - !WARN_ON((u8 *)extack->miss_nest < in_skb->data || - (u8 *)extack->miss_nest > in_skb->data + in_skb->len)) + if (extack->miss_nest && nlmsg_check_in_payload(nlh, extack->miss_nest)) WARN_ON(nla_put_u32(skb, NLMSGERR_ATTR_MISS_NEST, (u8 *)extack->miss_nest - (const u8 *)nlh)); } @@ -2232,7 +2233,7 @@ static int netlink_dump_done(struct netlink_sock *nlk, struct sk_buff *skb, if (extack_len) { nlh->nlmsg_flags |= NLM_F_ACK_TLVS; if (skb_tailroom(skb) >= extack_len) { - netlink_ack_tlv_fill(cb->skb, skb, cb->nlh, + netlink_ack_tlv_fill(skb, cb->nlh, nlk->dump_done_errno, extack); nlmsg_end(skb, nlh); } @@ -2491,7 +2492,7 @@ void netlink_ack(struct sk_buff *in_skb, struct nlmsghdr *nlh, int err, }
if (tlvlen) - netlink_ack_tlv_fill(in_skb, skb, nlh, err, extack); + netlink_ack_tlv_fill(skb, nlh, err, extack);
nlmsg_end(skb, rep);
diff --git a/net/rfkill/rfkill-gpio.c b/net/rfkill/rfkill-gpio.c index c268c2b011f4..a8e21060112f 100644 --- a/net/rfkill/rfkill-gpio.c +++ b/net/rfkill/rfkill-gpio.c @@ -32,8 +32,12 @@ static int rfkill_gpio_set_power(void *data, bool blocked) { struct rfkill_gpio_data *rfkill = data;
- if (!blocked && !IS_ERR(rfkill->clk) && !rfkill->clk_enabled) - clk_enable(rfkill->clk); + if (!blocked && !IS_ERR(rfkill->clk) && !rfkill->clk_enabled) { + int ret = clk_enable(rfkill->clk); + + if (ret) + return ret; + }
gpiod_set_value_cansleep(rfkill->shutdown_gpio, !blocked); gpiod_set_value_cansleep(rfkill->reset_gpio, !blocked); diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c index f4844683e120..9d8bd0b37e41 100644 --- a/net/rxrpc/af_rxrpc.c +++ b/net/rxrpc/af_rxrpc.c @@ -707,9 +707,10 @@ static int rxrpc_setsockopt(struct socket *sock, int level, int optname, ret = -EISCONN; if (rx->sk.sk_state != RXRPC_UNBOUND) goto error; - ret = copy_from_sockptr(&min_sec_level, optval, - sizeof(unsigned int)); - if (ret < 0) + ret = copy_safe_from_sockptr(&min_sec_level, + sizeof(min_sec_level), + optval, optlen); + if (ret) goto error; ret = -EINVAL; if (min_sec_level > RXRPC_SECURITY_MAX) diff --git a/net/sched/sch_fq.c b/net/sched/sch_fq.c index 19a49af5a9e5..afefe124d903 100644 --- a/net/sched/sch_fq.c +++ b/net/sched/sch_fq.c @@ -331,6 +331,12 @@ static bool fq_fastpath_check(const struct Qdisc *sch, struct sk_buff *skb, */ if (q->internal.qlen >= 8) return false; + + /* Ordering invariants fall apart if some delayed flows + * are ready but we haven't serviced them, yet. + */ + if (q->time_next_delayed_flow <= now) + return false; }
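[Editor's sketch] The af_llc and rxrpc setsockopt fixes in the hunks above make the same change: copy_from_sockptr() copies sizeof(opt) bytes regardless of how short a buffer the caller described, while copy_safe_from_sockptr() first checks optlen against the kernel-side size and fails with -EINVAL. A sketch of the intended shape ("demo_setsockopt" is illustrative):

    static int demo_setsockopt(sockptr_t optval, unsigned int optlen)
    {
            unsigned int opt;
            int rc;

            /* Old form; reads past what the caller supplied whenever optlen
             * is smaller than sizeof(opt):
             *      rc = copy_from_sockptr(&opt, optval, sizeof(opt));
             */
            rc = copy_safe_from_sockptr(&opt, sizeof(opt), optval, optlen);
            if (rc)
                    return rc;      /* -EINVAL on short optlen, -EFAULT on a bad pointer */

            /* ... validate and apply opt ... */
            return 0;
    }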
sk = skb->sk; diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c index 1bd3e531b0e0..059f6ef1ad18 100644 --- a/net/sunrpc/cache.c +++ b/net/sunrpc/cache.c @@ -1427,7 +1427,9 @@ static int c_show(struct seq_file *m, void *p) seq_printf(m, "# expiry=%lld refcnt=%d flags=%lx\n", convert_to_wallclock(cp->expiry_time), kref_read(&cp->ref), cp->flags); - cache_get(cp); + if (!cache_get_rcu(cp)) + return 0; + if (cache_check(cd, cp, NULL)) /* cache_check does a cache_put on failure */ seq_puts(m, "# "); diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c index 825ec5357691..59e2c46240f5 100644 --- a/net/sunrpc/svcsock.c +++ b/net/sunrpc/svcsock.c @@ -1551,6 +1551,10 @@ static struct svc_xprt *svc_create_socket(struct svc_serv *serv, newlen = error;
if (protocol == IPPROTO_TCP) { + __netns_tracker_free(net, &sock->sk->ns_tracker, false); + sock->sk->sk_net_refcnt = 1; + get_net_track(net, &sock->sk->ns_tracker, GFP_KERNEL); + sock_inuse_add(net, 1); if ((error = kernel_listen(sock, 64)) < 0) goto bummer; } diff --git a/net/sunrpc/xprtrdma/svc_rdma.c b/net/sunrpc/xprtrdma/svc_rdma.c index 58ae6ec4f25b..415c0310101f 100644 --- a/net/sunrpc/xprtrdma/svc_rdma.c +++ b/net/sunrpc/xprtrdma/svc_rdma.c @@ -233,25 +233,34 @@ static int svc_rdma_proc_init(void)
rc = percpu_counter_init(&svcrdma_stat_read, 0, GFP_KERNEL); if (rc) - goto out_err; + goto err; rc = percpu_counter_init(&svcrdma_stat_recv, 0, GFP_KERNEL); if (rc) - goto out_err; + goto err_read; rc = percpu_counter_init(&svcrdma_stat_sq_starve, 0, GFP_KERNEL); if (rc) - goto out_err; + goto err_recv; rc = percpu_counter_init(&svcrdma_stat_write, 0, GFP_KERNEL); if (rc) - goto out_err; + goto err_sq;
svcrdma_table_header = register_sysctl("sunrpc/svc_rdma", svcrdma_parm_table); + if (!svcrdma_table_header) + goto err_write; + return 0;
-out_err: +err_write: + rc = -ENOMEM; + percpu_counter_destroy(&svcrdma_stat_write); +err_sq: percpu_counter_destroy(&svcrdma_stat_sq_starve); +err_recv: percpu_counter_destroy(&svcrdma_stat_recv); +err_read: percpu_counter_destroy(&svcrdma_stat_read); +err: return rc; }
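
The svc_rdma_proc_init() hunk above converts the single catch-all out_err label into a ladder of labels, so each failure point unwinds exactly the counters registered before it, in reverse order, and a failed register_sysctl() is now reported as -ENOMEM instead of being ignored. The same unwind structure, reduced to a minimal standalone C sketch with malloc() standing in for the percpu counters:

#include <stdlib.h>

struct ctx {
    void *a, *b, *c;
};

/* Initialize three resources; on failure, free only what was set up. */
static int ctx_init(struct ctx *ctx)
{
    ctx->a = malloc(32);
    if (!ctx->a)
        goto err;

    ctx->b = malloc(32);
    if (!ctx->b)
        goto err_a;

    ctx->c = malloc(32);
    if (!ctx->c)
        goto err_b;

    return 0;

err_b:
    free(ctx->b);
err_a:
    free(ctx->a);
err:
    return -1;
}

int main(void)
{
    struct ctx ctx;

    if (ctx_init(&ctx))
        return 1;
    free(ctx.c);
    free(ctx.b);
    free(ctx.a);
    return 0;
}
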
diff --git a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c index ae3fb9bc8a21..292022f0976e 100644 --- a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c +++ b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c @@ -493,7 +493,13 @@ static bool xdr_check_write_chunk(struct svc_rdma_recv_ctxt *rctxt) if (xdr_stream_decode_u32(&rctxt->rc_stream, &segcount)) return false;
- /* A bogus segcount causes this buffer overflow check to fail. */ + /* Before trusting the segcount value enough to use it in + * a computation, perform a simple range check. This is an + * arbitrary but sensible limit (ie, not architectural). + */ + if (unlikely(segcount > RPCSVC_MAXPAGES)) + return false; + p = xdr_inline_decode(&rctxt->rc_stream, segcount * rpcrdma_segment_maxsz * sizeof(*p)); return p != NULL; diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c index 1326fbf45a34..b69e6290acfa 100644 --- a/net/sunrpc/xprtsock.c +++ b/net/sunrpc/xprtsock.c @@ -1198,6 +1198,7 @@ static void xs_sock_reset_state_flags(struct rpc_xprt *xprt) clear_bit(XPRT_SOCK_WAKE_WRITE, &transport->sock_state); clear_bit(XPRT_SOCK_WAKE_DISCONNECT, &transport->sock_state); clear_bit(XPRT_SOCK_NOSPACE, &transport->sock_state); + clear_bit(XPRT_SOCK_UPD_TIMEOUT, &transport->sock_state); }
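
In the xdr_check_write_chunk() hunk above, the decoded segcount is capped (against RPCSVC_MAXPAGES, described in the patch comment as an arbitrary but sensible limit) before it is multiplied into a byte length; without such a cap, an untrusted 32-bit count makes the size computation meaningless and, in 32-bit arithmetic, can wrap. A standalone demonstration of that wrap, with made-up constants standing in for the kernel's limits:

#include <stdint.h>
#include <stdio.h>

#define MAX_SEGS 256u    /* stand-in for the kernel's RPCSVC_MAXPAGES cap */
#define SEG_SIZE 4u      /* stand-in for rpcrdma_segment_maxsz */

static size_t chunk_bytes(uint32_t segcount)
{
    /* The 32-bit product can wrap before any widening happens. */
    uint32_t words = segcount * SEG_SIZE;

    return (size_t)words * sizeof(uint32_t);
}

int main(void)
{
    uint32_t evil = 0x40000001u;    /* untrusted value from the wire */

    printf("unchecked: %zu bytes\n", chunk_bytes(evil)); /* tiny: wrapped */

    if (evil > MAX_SEGS) {
        printf("rejected: segcount %u exceeds cap %u\n", evil, MAX_SEGS);
        return 1;
    }
    printf("checked: %zu bytes\n", chunk_bytes(evil));
    return 0;
}
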
static void xs_run_error_worker(struct sock_xprt *transport, unsigned int nr) @@ -1939,6 +1940,13 @@ static struct socket *xs_create_sock(struct rpc_xprt *xprt, goto out; }
+ if (protocol == IPPROTO_TCP) { + __netns_tracker_free(xprt->xprt_net, &sock->sk->ns_tracker, false); + sock->sk->sk_net_refcnt = 1; + get_net_track(xprt->xprt_net, &sock->sk->ns_tracker, GFP_KERNEL); + sock_inuse_add(xprt->xprt_net, 1); + } + filp = sock_alloc_file(sock, O_NONBLOCK, NULL); if (IS_ERR(filp)) return ERR_CAST(filp); @@ -2614,11 +2622,10 @@ static int xs_tls_handshake_sync(struct rpc_xprt *lower_xprt, struct xprtsec_par rc = wait_for_completion_interruptible_timeout(&lower_transport->handshake_done, XS_TLS_HANDSHAKE_TO); if (rc <= 0) { - if (!tls_handshake_cancel(sk)) { - if (rc == 0) - rc = -ETIMEDOUT; - goto out_put_xprt; - } + tls_handshake_cancel(sk); + if (rc == 0) + rc = -ETIMEDOUT; + goto out_put_xprt; }
rc = lower_transport->xprt_err; diff --git a/net/wireless/core.c b/net/wireless/core.c index 74ca18833df1..7d313fb66d76 100644 --- a/net/wireless/core.c +++ b/net/wireless/core.c @@ -603,16 +603,20 @@ struct wiphy *wiphy_new_nm(const struct cfg80211_ops *ops, int sizeof_priv, } EXPORT_SYMBOL(wiphy_new_nm);
-static int wiphy_verify_combinations(struct wiphy *wiphy) +static +int wiphy_verify_iface_combinations(struct wiphy *wiphy, + const struct ieee80211_iface_combination *iface_comb, + int n_iface_comb, + bool combined_radio) { const struct ieee80211_iface_combination *c; int i, j;
- for (i = 0; i < wiphy->n_iface_combinations; i++) { + for (i = 0; i < n_iface_comb; i++) { u32 cnt = 0; u16 all_iftypes = 0;
- c = &wiphy->iface_combinations[i]; + c = &iface_comb[i];
/* * Combinations with just one interface aren't real, @@ -625,9 +629,13 @@ static int wiphy_verify_combinations(struct wiphy *wiphy) if (WARN_ON(!c->num_different_channels)) return -EINVAL;
- /* DFS only works on one channel. */ - if (WARN_ON(c->radar_detect_widths && - (c->num_different_channels > 1))) + /* DFS only works on one channel. Avoid this check + * for multi-radio global combination, since it hold + * the capabilities of all radio combinations. + */ + if (!combined_radio && + WARN_ON(c->radar_detect_widths && + c->num_different_channels > 1)) return -EINVAL;
if (WARN_ON(!c->n_limits)) @@ -648,13 +656,21 @@ static int wiphy_verify_combinations(struct wiphy *wiphy) if (WARN_ON(wiphy->software_iftypes & types)) return -EINVAL;
- /* Only a single P2P_DEVICE can be allowed */ - if (WARN_ON(types & BIT(NL80211_IFTYPE_P2P_DEVICE) && + /* Only a single P2P_DEVICE can be allowed, avoid this + * check for multi-radio global combination, since it + * hold the capabilities of all radio combinations. + */ + if (!combined_radio && + WARN_ON(types & BIT(NL80211_IFTYPE_P2P_DEVICE) && c->limits[j].max > 1)) return -EINVAL;
- /* Only a single NAN can be allowed */ - if (WARN_ON(types & BIT(NL80211_IFTYPE_NAN) && + /* Only a single NAN can be allowed, avoid this + * check for multi-radio global combination, since it + * hold the capabilities of all radio combinations. + */ + if (!combined_radio && + WARN_ON(types & BIT(NL80211_IFTYPE_NAN) && c->limits[j].max > 1)) return -EINVAL;
@@ -693,6 +709,34 @@ static int wiphy_verify_combinations(struct wiphy *wiphy) return 0; }
+static int wiphy_verify_combinations(struct wiphy *wiphy) +{ + int i, ret; + bool combined_radio = false; + + if (wiphy->n_radio) { + for (i = 0; i < wiphy->n_radio; i++) { + const struct wiphy_radio *radio = &wiphy->radio[i]; + + ret = wiphy_verify_iface_combinations(wiphy, + radio->iface_combinations, + radio->n_iface_combinations, + false); + if (ret) + return ret; + } + + combined_radio = true; + } + + ret = wiphy_verify_iface_combinations(wiphy, + wiphy->iface_combinations, + wiphy->n_iface_combinations, + combined_radio); + + return ret; +} + int wiphy_register(struct wiphy *wiphy) { struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy); diff --git a/net/wireless/mlme.c b/net/wireless/mlme.c index 4dac81854721..a5eb92d93074 100644 --- a/net/wireless/mlme.c +++ b/net/wireless/mlme.c @@ -340,12 +340,6 @@ cfg80211_mlme_check_mlo_compat(const struct ieee80211_multi_link_elem *mle_a, return -EINVAL; }
- if (ieee80211_mle_get_eml_med_sync_delay((const u8 *)mle_a) != - ieee80211_mle_get_eml_med_sync_delay((const u8 *)mle_b)) { - NL_SET_ERR_MSG(extack, "link EML medium sync delay mismatch"); - return -EINVAL; - } - if (ieee80211_mle_get_eml_cap((const u8 *)mle_a) != ieee80211_mle_get_eml_cap((const u8 *)mle_b)) { NL_SET_ERR_MSG(extack, "link EML capabilities mismatch"); diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c index d7d099f7118a..9b1b9dc5a7eb 100644 --- a/net/wireless/nl80211.c +++ b/net/wireless/nl80211.c @@ -9776,6 +9776,7 @@ nl80211_parse_sched_scan(struct wiphy *wiphy, struct wireless_dev *wdev, request = kzalloc(size, GFP_KERNEL); if (!request) return ERR_PTR(-ENOMEM); + request->n_channels = n_channels;
if (n_ssids) request->ssids = (void *)request + diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c index 1140b2a120ca..b57d5d2904eb 100644 --- a/net/xdp/xsk.c +++ b/net/xdp/xsk.c @@ -675,6 +675,8 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs, len = desc->len;
if (!skb) { + first_frag = true; + hr = max(NET_SKB_PAD, L1_CACHE_ALIGN(dev->needed_headroom)); tr = dev->needed_tailroom; skb = sock_alloc_send_skb(&xs->sk, hr + len + tr, 1, &err); @@ -685,12 +687,8 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs, skb_put(skb, len);
err = skb_store_bits(skb, 0, buffer, len); - if (unlikely(err)) { - kfree_skb(skb); + if (unlikely(err)) goto free_err; - } - - first_frag = true; } else { int nr_frags = skb_shinfo(skb)->nr_frags; struct page *page; @@ -758,6 +756,9 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs, return skb;
free_err: + if (first_frag && skb) + kfree_skb(skb); + if (err == -EOVERFLOW) { /* Drop the packet */ xsk_set_destructor_arg(xs->skb); diff --git a/rust/helpers/spinlock.c b/rust/helpers/spinlock.c index acc1376b833c..92f7fc418425 100644 --- a/rust/helpers/spinlock.c +++ b/rust/helpers/spinlock.c @@ -7,10 +7,14 @@ void rust_helper___spin_lock_init(spinlock_t *lock, const char *name, struct lock_class_key *key) { #ifdef CONFIG_DEBUG_SPINLOCK +# if defined(CONFIG_PREEMPT_RT) + __spin_lock_init(lock, name, key, false); +# else /*!CONFIG_PREEMPT_RT */ __raw_spin_lock_init(spinlock_check(lock), name, key, LD_WAIT_CONFIG); -#else +# endif /* CONFIG_PREEMPT_RT */ +#else /* !CONFIG_DEBUG_SPINLOCK */ spin_lock_init(lock); -#endif +#endif /* CONFIG_DEBUG_SPINLOCK */ }
void rust_helper_spin_lock(spinlock_t *lock) diff --git a/rust/kernel/block/mq/request.rs b/rust/kernel/block/mq/request.rs index a0e22827f3f4..7943f43b9575 100644 --- a/rust/kernel/block/mq/request.rs +++ b/rust/kernel/block/mq/request.rs @@ -16,50 +16,55 @@ sync::atomic::{AtomicU64, Ordering}, };
-/// A wrapper around a blk-mq `struct request`. This represents an IO request. +/// A wrapper around a blk-mq [`struct request`]. This represents an IO request. /// /// # Implementation details /// /// There are four states for a request that the Rust bindings care about: /// -/// A) Request is owned by block layer (refcount 0) -/// B) Request is owned by driver but with zero `ARef`s in existence -/// (refcount 1) -/// C) Request is owned by driver with exactly one `ARef` in existence -/// (refcount 2) -/// D) Request is owned by driver with more than one `ARef` in existence -/// (refcount > 2) +/// 1. Request is owned by block layer (refcount 0). +/// 2. Request is owned by driver but with zero [`ARef`]s in existence +/// (refcount 1). +/// 3. Request is owned by driver with exactly one [`ARef`] in existence +/// (refcount 2). +/// 4. Request is owned by driver with more than one [`ARef`] in existence +/// (refcount > 2). /// /// -/// We need to track A and B to ensure we fail tag to request conversions for +/// We need to track 1 and 2 to ensure we fail tag to request conversions for /// requests that are not owned by the driver. /// -/// We need to track C and D to ensure that it is safe to end the request and hand +/// We need to track 3 and 4 to ensure that it is safe to end the request and hand /// back ownership to the block layer. /// /// The states are tracked through the private `refcount` field of /// `RequestDataWrapper`. This structure lives in the private data area of the C -/// `struct request`. +/// [`struct request`]. /// /// # Invariants /// -/// * `self.0` is a valid `struct request` created by the C portion of the kernel. +/// * `self.0` is a valid [`struct request`] created by the C portion of the +/// kernel. /// * The private data area associated with this request must be an initialized /// and valid `RequestDataWrapper<T>`. /// * `self` is reference counted by atomic modification of -/// self.wrapper_ref().refcount(). +/// `self.wrapper_ref().refcount()`. +/// +/// [`struct request`]: srctree/include/linux/blk-mq.h /// #[repr(transparent)] pub struct Request<T: Operations>(Opaquebindings::request, PhantomData<T>);
impl<T: Operations> Request<T> { - /// Create an `ARef<Request>` from a `struct request` pointer. + /// Create an [`ARef<Request>`] from a [`struct request`] pointer. /// /// # Safety /// /// * The caller must own a refcount on `ptr` that is transferred to the - /// returned `ARef`. - /// * The type invariants for `Request` must hold for the pointee of `ptr`. + /// returned [`ARef`]. + /// * The type invariants for [`Request`] must hold for the pointee of `ptr`. + /// + /// [`struct request`]: srctree/include/linux/blk-mq.h pub(crate) unsafe fn aref_from_raw(ptr: *mut bindings::request) -> ARef<Self> { // INVARIANT: By the safety requirements of this function, invariants are upheld. // SAFETY: By the safety requirement of this function, we own a @@ -84,12 +89,14 @@ pub(crate) unsafe fn start_unchecked(this: &ARef<Self>) { }
/// Try to take exclusive ownership of `this` by dropping the refcount to 0. - /// This fails if `this` is not the only `ARef` pointing to the underlying - /// `Request`. + /// This fails if `this` is not the only [`ARef`] pointing to the underlying + /// [`Request`]. /// - /// If the operation is successful, `Ok` is returned with a pointer to the - /// C `struct request`. If the operation fails, `this` is returned in the - /// `Err` variant. + /// If the operation is successful, [`Ok`] is returned with a pointer to the + /// C [`struct request`]. If the operation fails, `this` is returned in the + /// [`Err`] variant. + /// + /// [`struct request`]: srctree/include/linux/blk-mq.h fn try_set_end(this: ARef<Self>) -> Result<*mut bindings::request, ARef<Self>> { // We can race with `TagSet::tag_to_rq` if let Err(_old) = this.wrapper_ref().refcount().compare_exchange( @@ -109,7 +116,7 @@ fn try_set_end(this: ARef<Self>) -> Result<*mut bindings::request, ARef<Self>> {
/// Notify the block layer that the request has been completed without errors. /// - /// This function will return `Err` if `this` is not the only `ARef` + /// This function will return [`Err`] if `this` is not the only [`ARef`] /// referencing the request. pub fn end_ok(this: ARef<Self>) -> Result<(), ARef<Self>> { let request_ptr = Self::try_set_end(this)?; @@ -123,13 +130,13 @@ pub fn end_ok(this: ARef<Self>) -> Result<(), ARef<Self>> { Ok(()) }
- /// Return a pointer to the `RequestDataWrapper` stored in the private area + /// Return a pointer to the [`RequestDataWrapper`] stored in the private area /// of the request structure. /// /// # Safety /// /// - `this` must point to a valid allocation of size at least size of - /// `Self` plus size of `RequestDataWrapper`. + /// [`Self`] plus size of [`RequestDataWrapper`]. pub(crate) unsafe fn wrapper_ptr(this: *mut Self) -> NonNull<RequestDataWrapper> { let request_ptr = this.cast::<bindings::request>(); // SAFETY: By safety requirements for this function, `this` is a @@ -141,7 +148,7 @@ pub(crate) unsafe fn wrapper_ptr(this: *mut Self) -> NonNull<RequestDataWrapper> unsafe { NonNull::new_unchecked(wrapper_ptr) } }
- /// Return a reference to the `RequestDataWrapper` stored in the private + /// Return a reference to the [`RequestDataWrapper`] stored in the private /// area of the request structure. pub(crate) fn wrapper_ref(&self) -> &RequestDataWrapper { // SAFETY: By type invariant, `self.0` is a valid allocation. Further, @@ -152,13 +159,15 @@ pub(crate) fn wrapper_ref(&self) -> &RequestDataWrapper { } }
-/// A wrapper around data stored in the private area of the C `struct request`. +/// A wrapper around data stored in the private area of the C [`struct request`]. +/// +/// [`struct request`]: srctree/include/linux/blk-mq.h pub(crate) struct RequestDataWrapper { /// The Rust request refcount has the following states: /// /// - 0: The request is owned by C block layer. - /// - 1: The request is owned by Rust abstractions but there are no ARef references to it. - /// - 2+: There are `ARef` references to the request. + /// - 1: The request is owned by Rust abstractions but there are no [`ARef`] references to it. + /// - 2+: There are [`ARef`] references to the request. refcount: AtomicU64, }
@@ -204,7 +213,7 @@ fn atomic_relaxed_op_return(target: &AtomicU64, op: impl Fn(u64) -> u64) -> u64 }
/// Store the result of `op(target.load)` in `target` if `target.load() != -/// pred`, returning true if the target was updated. +/// pred`, returning [`true`] if the target was updated. fn atomic_relaxed_op_unless(target: &AtomicU64, op: impl Fn(u64) -> u64, pred: u64) -> bool { target .fetch_update(Ordering::Relaxed, Ordering::Relaxed, |x| { diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs index b5f4b3ce6b48..032c9089e686 100644 --- a/rust/kernel/lib.rs +++ b/rust/kernel/lib.rs @@ -83,7 +83,7 @@ pub trait Module: Sized + Sync + Send {
/// Equivalent to `THIS_MODULE` in the C API. /// -/// C header: [`include/linux/export.h`](srctree/include/linux/export.h) +/// C header: [`include/linux/init.h`](srctree/include/linux/init.h) pub struct ThisModule(*mut bindings::module);
// SAFETY: `THIS_MODULE` may be used from all threads within a module. diff --git a/rust/kernel/rbtree.rs b/rust/kernel/rbtree.rs index 25eb36fd1cdc..d03e4aa1f481 100644 --- a/rust/kernel/rbtree.rs +++ b/rust/kernel/rbtree.rs @@ -884,7 +884,8 @@ fn get_neighbor_raw(&self, direction: Direction) -> Option<NonNull<bindings::rb_ NonNull::new(neighbor) }
- /// SAFETY: + /// # Safety + /// /// - `node` must be a valid pointer to a node in an [`RBTree`]. /// - The caller has immutable access to `node` for the duration of 'b. unsafe fn to_key_value<'b>(node: NonNull<bindings::rb_node>) -> (&'b K, &'b V) { @@ -894,7 +895,8 @@ unsafe fn to_key_value<'b>(node: NonNull<bindings::rb_node>) -> (&'b K, &'b V) { (k, unsafe { &*v }) }
- /// SAFETY: + /// # Safety + /// /// - `node` must be a valid pointer to a node in an [`RBTree`]. /// - The caller has mutable access to `node` for the duration of 'b. unsafe fn to_key_value_mut<'b>(node: NonNull<bindings::rb_node>) -> (&'b K, &'b mut V) { @@ -904,7 +906,8 @@ unsafe fn to_key_value_mut<'b>(node: NonNull<bindings::rb_node>) -> (&'b (k, unsafe { &mut *v }) }
- /// SAFETY: + /// # Safety + /// /// - `node` must be a valid pointer to a node in an [`RBTree`]. /// - The caller has immutable access to the key for the duration of 'b. unsafe fn to_key_value_raw<'b>(node: NonNull<bindings::rb_node>) -> (&'b K, *mut V) { diff --git a/rust/macros/lib.rs b/rust/macros/lib.rs index a626b1145e5c..90e2202ba4d5 100644 --- a/rust/macros/lib.rs +++ b/rust/macros/lib.rs @@ -359,7 +359,7 @@ pub fn pinned_drop(args: TokenStream, input: TokenStream) -> TokenStream { /// macro_rules! pub_no_prefix { /// ($prefix:ident, $($newname:ident),+) => { /// kernel::macros::paste! { -/// $(pub(crate) const fn [<$newname:lower:span>]: u32 = [<$prefix $newname:span>];)+ +/// $(pub(crate) const fn [<$newname:lower:span>]() -> u32 { [<$prefix $newname:span>] })+ /// } /// }; /// } diff --git a/samples/bpf/xdp_adjust_tail_kern.c b/samples/bpf/xdp_adjust_tail_kern.c index ffdd548627f0..da67bcad1c63 100644 --- a/samples/bpf/xdp_adjust_tail_kern.c +++ b/samples/bpf/xdp_adjust_tail_kern.c @@ -57,6 +57,7 @@ static __always_inline void swap_mac(void *data, struct ethhdr *orig_eth)
static __always_inline __u16 csum_fold_helper(__u32 csum) { + csum = (csum & 0xffff) + (csum >> 16); return ~((csum & 0xffff) + (csum >> 16)); }
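
The csum_fold_helper() fix above folds the 32-bit sum twice: the first fold can itself produce a 17-bit value, and truncating it to 16 bits before the final complement silently drops the carry. A small standalone program showing a value where one fold and two folds disagree:

#include <stdint.h>
#include <stdio.h>

static uint16_t fold_once(uint32_t csum)
{
    return ~((csum & 0xffff) + (csum >> 16));    /* carry lost on truncation */
}

static uint16_t fold_twice(uint32_t csum)
{
    csum = (csum & 0xffff) + (csum >> 16);       /* may still exceed 0xffff */
    return ~((csum & 0xffff) + (csum >> 16));
}

int main(void)
{
    uint32_t csum = 0x0001ffff;    /* first fold yields 0x10000 */

    printf("one fold:  0x%04x\n", fold_once(csum));   /* 0xffff (wrong) */
    printf("two folds: 0x%04x\n", fold_twice(csum));  /* 0xfffe */
    return 0;
}
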
diff --git a/samples/kfifo/dma-example.c b/samples/kfifo/dma-example.c index 48df719dac8c..8076ac410161 100644 --- a/samples/kfifo/dma-example.c +++ b/samples/kfifo/dma-example.c @@ -9,6 +9,7 @@ #include <linux/kfifo.h> #include <linux/module.h> #include <linux/scatterlist.h> +#include <linux/dma-mapping.h>
/* * This module shows how to handle fifo dma operations. diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl index 4427572b2477..b03d526e4c45 100755 --- a/scripts/checkpatch.pl +++ b/scripts/checkpatch.pl @@ -3209,36 +3209,31 @@ sub process {
# Check Fixes: styles is correct if (!$in_header_lines && - $line =~ /^\s*fixes:?\s*(?:commit\s*)?[0-9a-f]{5,}\b/i) { - my $orig_commit = ""; - my $id = "0123456789ab"; - my $title = "commit title"; - my $tag_case = 1; - my $tag_space = 1; - my $id_length = 1; - my $id_case = 1; + $line =~ /^\s*(fixes:?)\s*(?:commit\s*)?([0-9a-f]{5,40})(?:\s*($balanced_parens))?/i) { + my $tag = $1; + my $orig_commit = $2; + my $title; my $title_has_quotes = 0; $fixes_tag = 1; - - if ($line =~ /(\s*fixes:?)\s+([0-9a-f]{5,})\s+($balanced_parens)/i) { - my $tag = $1; - $orig_commit = $2; - $title = $3; - - $tag_case = 0 if $tag eq "Fixes:"; - $tag_space = 0 if ($line =~ /^fixes:? [0-9a-f]{5,} ($balanced_parens)/i); - - $id_length = 0 if ($orig_commit =~ /^[0-9a-f]{12}$/i); - $id_case = 0 if ($orig_commit !~ /[A-F]/); - + if (defined $3) { # Always strip leading/trailing parens then double quotes if existing - $title = substr($title, 1, -1); + $title = substr($3, 1, -1); if ($title =~ /^".*"$/) { $title = substr($title, 1, -1); $title_has_quotes = 1; } + } else { + $title = "commit title" }
+ + my $tag_case = not ($tag eq "Fixes:"); + my $tag_space = not ($line =~ /^fixes:? [0-9a-f]{5,40} ($balanced_parens)/i); + + my $id_length = not ($orig_commit =~ /^[0-9a-f]{12}$/i); + my $id_case = not ($orig_commit !~ /[A-F]/); + + my $id = "0123456789ab"; my ($cid, $ctitle) = git_commit_info($orig_commit, $id, $title);
diff --git a/scripts/faddr2line b/scripts/faddr2line index fe0cc45f03be..1fa6beef9f97 100755 --- a/scripts/faddr2line +++ b/scripts/faddr2line @@ -252,7 +252,7 @@ __faddr2line() { found=2 break fi - done < <(echo "${ELF_SYMS}" | sed 's/\[.*\]//' | ${AWK} -v sec=$sym_sec '$7 == sec' | sort --key=2 | ${GREP} -A1 --no-group-separator " ${sym_name}$") + done < <(echo "${ELF_SYMS}" | sed 's/\[.*\]//' | ${AWK} -v sec=$sym_sec '$7 == sec' | sort --key=2)
if [[ $found = 0 ]]; then warn "can't find symbol: sym_name: $sym_name sym_sec: $sym_sec sym_addr: $sym_addr sym_elf_size: $sym_elf_size" diff --git a/scripts/kernel-doc b/scripts/kernel-doc index 2791f8195203..320544321ecb 100755 --- a/scripts/kernel-doc +++ b/scripts/kernel-doc @@ -569,6 +569,8 @@ sub output_function_man(%) { my %args = %{$_[0]}; my ($parameter, $section); my $count; + my $func_macro = $args{'func_macro'}; + my $paramcount = $#{$args{'parameterlist'}}; # -1 is empty
print ".TH "$args{'function'}" 9 "$args{'function'}" "$man_date" "Kernel Hacker's Manual" LINUX\n";
@@ -600,7 +602,10 @@ sub output_function_man(%) { $parenth = ""; }
- print ".SH ARGUMENTS\n"; + $paramcount = $#{$args{'parameterlist'}}; # -1 is empty + if ($paramcount >= 0) { + print ".SH ARGUMENTS\n"; + } foreach $parameter (@{$args{'parameterlist'}}) { my $parameter_name = $parameter; $parameter_name =~ s/[.*//; @@ -822,10 +827,16 @@ sub output_function_rst(%) { my $oldprefix = $lineprefix;
my $signature = ""; - if ($args{'functiontype'} ne "") { - $signature = $args{'functiontype'} . " " . $args{'function'} . " ("; - } else { - $signature = $args{'function'} . " ("; + my $func_macro = $args{'func_macro'}; + my $paramcount = $#{$args{'parameterlist'}}; # -1 is empty + + if ($func_macro) { + $signature = $args{'function'}; + } else { + if ($args{'functiontype'}) { + $signature = $args{'functiontype'} . " "; + } + $signature .= $args{'function'} . " ("; }
my $count = 0; @@ -844,7 +855,9 @@ sub output_function_rst(%) { } }
- $signature .= ")"; + if (!$func_macro) { + $signature .= ")"; + }
if ($sphinx_major < 3) { if ($args{'typedef'}) { @@ -888,9 +901,11 @@ sub output_function_rst(%) { # Put our descriptive text into a container (thus an HTML <div>) to help # set the function prototypes apart. # - print ".. container:: kernelindent\n\n"; $lineprefix = " "; - print $lineprefix . "**Parameters**\n\n"; + if ($paramcount >= 0) { + print ".. container:: kernelindent\n\n"; + print $lineprefix . "**Parameters**\n\n"; + } foreach $parameter (@{$args{'parameterlist'}}) { my $parameter_name = $parameter; $parameter_name =~ s/\[.*//; @@ -1704,7 +1719,7 @@ sub check_return_section { sub dump_function($$) { my $prototype = shift; my $file = shift; - my $noret = 0; + my $func_macro = 0;
print_lineno($new_start_line);
@@ -1769,7 +1784,7 @@ sub dump_function($$) { # declaration_name and opening parenthesis (notice the \s+). $return_type = $1; $declaration_name = $2; - $noret = 1; + $func_macro = 1; } elsif ($prototype =~ m/^()($name)\s*$prototype_end/ || $prototype =~ m/^($type1)\s+($name)\s*$prototype_end/ || $prototype =~ m/^($type2+)\s*($name)\s*$prototype_end/) { @@ -1796,7 +1811,7 @@ sub dump_function($$) { # of warnings goes sufficiently down, the check is only performed in # -Wreturn mode. # TODO: always perform the check. - if ($Wreturn && !$noret) { + if ($Wreturn && !$func_macro) { check_return_section($file, $declaration_name, $return_type); }
@@ -1814,7 +1829,8 @@ sub dump_function($$) { 'parametertypes' => %parametertypes, 'sectionlist' => @sectionlist, 'sections' => %sections, - 'purpose' => $declaration_purpose + 'purpose' => $declaration_purpose, + 'func_macro' => $func_macro }); } else { output_declaration($declaration_name, @@ -1827,7 +1843,8 @@ sub dump_function($$) { 'parametertypes' => %parametertypes, 'sectionlist' => @sectionlist, 'sections' => %sections, - 'purpose' => $declaration_purpose + 'purpose' => $declaration_purpose, + 'func_macro' => $func_macro }); } } @@ -2322,7 +2339,6 @@ sub process_inline($$) {
sub process_file($) { my $file; - my $initial_section_counter = $section_counter; my ($orig_file) = @_;
$file = map_filename($orig_file); @@ -2360,8 +2376,7 @@ sub process_file($) { }
# Make sure we got something interesting. - if ($initial_section_counter == $section_counter && $ - output_mode ne "none") { + if (!$section_counter && $output_mode ne "none") { if ($output_selection == OUTPUT_INCLUDE) { emit_warning("${file}:1", "'$_' not found\n") for keys %function_table; diff --git a/scripts/mod/file2alias.c b/scripts/mod/file2alias.c index c4cc11aa558f..634e40748287 100644 --- a/scripts/mod/file2alias.c +++ b/scripts/mod/file2alias.c @@ -809,10 +809,7 @@ static int do_eisa_entry(const char *filename, void *symval, char *alias) { DEF_FIELD_ADDR(symval, eisa_device_id, sig); - if (sig[0]) - sprintf(alias, EISA_DEVICE_MODALIAS_FMT "*", *sig); - else - strcat(alias, "*"); + sprintf(alias, EISA_DEVICE_MODALIAS_FMT "*", *sig); return 1; }
diff --git a/scripts/package/builddeb b/scripts/package/builddeb index 441b0bb66e0d..fb686fd3266f 100755 --- a/scripts/package/builddeb +++ b/scripts/package/builddeb @@ -96,16 +96,18 @@ install_linux_image_dbg () {
# Parse modules.order directly because 'make modules_install' may sign, # compress modules, and then run unneeded depmod. - while read -r mod; do - mod="${mod%.o}.ko" - dbg="${pdir}/usr/lib/debug/lib/modules/${KERNELRELEASE}/kernel/${mod}" - buildid=$("${READELF}" -n "${mod}" | sed -n 's@^.*Build ID: \(..\)\(.*\)@\1/\2@p') - link="${pdir}/usr/lib/debug/.build-id/${buildid}.debug" - - mkdir -p "${dbg%/*}" "${link%/*}" - "${OBJCOPY}" --only-keep-debug "${mod}" "${dbg}" - ln -sf --relative "${dbg}" "${link}" - done < modules.order + if is_enabled CONFIG_MODULES; then + while read -r mod; do + mod="${mod%.o}.ko" + dbg="${pdir}/usr/lib/debug/lib/modules/${KERNELRELEASE}/kernel/${mod}" + buildid=$("${READELF}" -n "${mod}" | sed -n 's@^.*Build ID: \(..\)\(.*\)@\1/\2@p') + link="${pdir}/usr/lib/debug/.build-id/${buildid}.debug" + + mkdir -p "${dbg%/*}" "${link%/*}" + "${OBJCOPY}" --only-keep-debug "${mod}" "${dbg}" + ln -sf --relative "${dbg}" "${link}" + done < modules.order + fi
# Build debug package # Different tools want the image in different locations diff --git a/security/apparmor/capability.c b/security/apparmor/capability.c index 9934df16c843..bf7df6086830 100644 --- a/security/apparmor/capability.c +++ b/security/apparmor/capability.c @@ -96,6 +96,8 @@ static int audit_caps(struct apparmor_audit_data *ad, struct aa_profile *profile return error; } else { aa_put_profile(ent->profile); + if (profile != ent->profile) + cap_clear(ent->caps); ent->profile = aa_get_profile(profile); cap_raise(ent->caps, cap); } diff --git a/security/apparmor/policy_unpack_test.c b/security/apparmor/policy_unpack_test.c index c64733d6c98f..f070902da8fc 100644 --- a/security/apparmor/policy_unpack_test.c +++ b/security/apparmor/policy_unpack_test.c @@ -281,6 +281,8 @@ static void policy_unpack_test_unpack_strdup_with_null_name(struct kunit *test) ((uintptr_t)puf->e->start <= (uintptr_t)string) && ((uintptr_t)string <= (uintptr_t)puf->e->end)); KUNIT_EXPECT_STREQ(test, string, TEST_STRING_DATA); + + kfree(string); }
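
The audit_caps() hunk above clears the cached capability set whenever the cache entry is repointed at a different profile, since bits recorded for the old profile should not carry over to the new one. The rule generalizes to any per-entry cache keyed by an owner pointer; a minimal standalone model (the names are invented for illustration):

#include <stdint.h>
#include <stdio.h>

struct audit_cache {
    const void *owner;    /* profile the cached bits belong to */
    uint64_t bits;        /* capabilities already recorded */
};

/* Record 'bit' for 'owner', resetting stale bits from a previous owner. */
static void cache_record(struct audit_cache *c, const void *owner, int bit)
{
    if (c->owner != owner)
        c->bits = 0;      /* old owner's bits no longer apply */
    c->owner = owner;
    c->bits |= 1ull << bit;
}

int main(void)
{
    struct audit_cache c = { 0 };
    int profile_a, profile_b;    /* addresses stand in for profiles */

    cache_record(&c, &profile_a, 3);
    cache_record(&c, &profile_b, 5);    /* owner changed: bit 3 dropped */

    printf("bits: 0x%llx\n", (unsigned long long)c.bits);    /* 0x20 */
    return 0;
}
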
static void policy_unpack_test_unpack_strdup_with_name(struct kunit *test) @@ -296,6 +298,8 @@ static void policy_unpack_test_unpack_strdup_with_name(struct kunit *test) ((uintptr_t)puf->e->start <= (uintptr_t)string) && ((uintptr_t)string <= (uintptr_t)puf->e->end)); KUNIT_EXPECT_STREQ(test, string, TEST_STRING_DATA); + + kfree(string); }
static void policy_unpack_test_unpack_strdup_out_of_bounds(struct kunit *test) @@ -313,6 +317,8 @@ static void policy_unpack_test_unpack_strdup_out_of_bounds(struct kunit *test) KUNIT_EXPECT_EQ(test, size, 0); KUNIT_EXPECT_NULL(test, string); KUNIT_EXPECT_PTR_EQ(test, puf->e->pos, start); + + kfree(string); }
static void policy_unpack_test_unpack_nameX_with_null_name(struct kunit *test) diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c index b465fb6e1f5f..0790b5fd917e 100644 --- a/sound/core/pcm_native.c +++ b/sound/core/pcm_native.c @@ -3793,9 +3793,11 @@ static vm_fault_t snd_pcm_mmap_data_fault(struct vm_fault *vmf) return VM_FAULT_SIGBUS; if (substream->ops->page) page = substream->ops->page(substream, offset); - else if (!snd_pcm_get_dma_buf(substream)) + else if (!snd_pcm_get_dma_buf(substream)) { + if (WARN_ON_ONCE(!runtime->dma_area)) + return VM_FAULT_SIGBUS; page = virt_to_page(runtime->dma_area + offset); - else + } else page = snd_sgbuf_get_page(snd_pcm_get_dma_buf(substream), offset); if (!page) return VM_FAULT_SIGBUS; diff --git a/sound/core/rawmidi.c b/sound/core/rawmidi.c index 03306be5fa02..348ce1b7725e 100644 --- a/sound/core/rawmidi.c +++ b/sound/core/rawmidi.c @@ -724,8 +724,9 @@ static int resize_runtime_buffer(struct snd_rawmidi_substream *substream, newbuf = kvzalloc(params->buffer_size, GFP_KERNEL); if (!newbuf) return -ENOMEM; - guard(spinlock_irq)(&substream->lock); + spin_lock_irq(&substream->lock); if (runtime->buffer_ref) { + spin_unlock_irq(&substream->lock); kvfree(newbuf); return -EBUSY; } @@ -733,6 +734,7 @@ static int resize_runtime_buffer(struct snd_rawmidi_substream *substream, runtime->buffer = newbuf; runtime->buffer_size = params->buffer_size; __reset_runtime_ptrs(runtime, is_input); + spin_unlock_irq(&substream->lock); kvfree(oldbuf); } runtime->avail_min = params->avail_min; diff --git a/sound/core/sound_kunit.c b/sound/core/sound_kunit.c index bfed1a25fc8f..84e337ecbddd 100644 --- a/sound/core/sound_kunit.c +++ b/sound/core/sound_kunit.c @@ -172,6 +172,7 @@ static void test_format_fill_silence(struct kunit *test) u32 i, j;
buffer = kunit_kzalloc(test, SILENCE_BUFFER_SIZE, GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buffer);
for (i = 0; i < ARRAY_SIZE(buf_samples); i++) { for (j = 0; j < ARRAY_SIZE(valid_fmt); j++) @@ -208,8 +209,12 @@ static void test_playback_avail(struct kunit *test) struct snd_pcm_runtime *r = kunit_kzalloc(test, sizeof(*r), GFP_KERNEL); u32 i;
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, r); + r->status = kunit_kzalloc(test, sizeof(*r->status), GFP_KERNEL); r->control = kunit_kzalloc(test, sizeof(*r->control), GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, r->status); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, r->control);
for (i = 0; i < ARRAY_SIZE(p_avail_data); i++) { r->buffer_size = p_avail_data[i].buffer_size; @@ -232,8 +237,12 @@ static void test_capture_avail(struct kunit *test) struct snd_pcm_runtime *r = kunit_kzalloc(test, sizeof(*r), GFP_KERNEL); u32 i;
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, r); + r->status = kunit_kzalloc(test, sizeof(*r->status), GFP_KERNEL); r->control = kunit_kzalloc(test, sizeof(*r->control), GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, r->status); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, r->control);
for (i = 0; i < ARRAY_SIZE(c_avail_data); i++) { r->buffer_size = c_avail_data[i].buffer_size; @@ -247,6 +256,7 @@ static void test_capture_avail(struct kunit *test) static void test_card_set_id(struct kunit *test) { struct snd_card *card = kunit_kzalloc(test, sizeof(*card), GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, card);
snd_card_set_id(card, VALID_NAME); KUNIT_EXPECT_STREQ(test, card->id, VALID_NAME); @@ -280,6 +290,7 @@ static void test_pcm_format_name(struct kunit *test) static void test_card_add_component(struct kunit *test) { struct snd_card *card = kunit_kzalloc(test, sizeof(*card), GFP_KERNEL); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, card);
snd_component_add(card, TEST_FIRST_COMPONENT); KUNIT_ASSERT_STREQ(test, card->components, TEST_FIRST_COMPONENT); diff --git a/sound/core/ump.c b/sound/core/ump.c index 7d59a0a9b037..8d37f237f83b 100644 --- a/sound/core/ump.c +++ b/sound/core/ump.c @@ -788,7 +788,10 @@ static void fill_fb_info(struct snd_ump_endpoint *ump, info->ui_hint = buf->fb_info.ui_hint; info->first_group = buf->fb_info.first_group; info->num_groups = buf->fb_info.num_groups; - info->flags = buf->fb_info.midi_10; + if (buf->fb_info.midi_10 < 2) + info->flags = buf->fb_info.midi_10; + else + info->flags = SNDRV_UMP_BLOCK_IS_MIDI1 | SNDRV_UMP_BLOCK_IS_LOWSPEED; info->active = buf->fb_info.active; info->midi_ci_version = buf->fb_info.midi_ci_version; info->sysex8_streams = buf->fb_info.sysex8_streams; diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c index 24b4fe99304a..18e6779a83be 100644 --- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -473,6 +473,8 @@ static void alc_fill_eapd_coef(struct hda_codec *codec) break; case 0x10ec0234: case 0x10ec0274: + alc_write_coef_idx(codec, 0x6e, 0x0c25); + fallthrough; case 0x10ec0294: case 0x10ec0700: case 0x10ec0701: @@ -3613,25 +3615,22 @@ static void alc256_init(struct hda_codec *codec)
hp_pin_sense = snd_hda_jack_detect(codec, hp_pin);
- if (hp_pin_sense) + if (hp_pin_sense) { msleep(2); + alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x1); /* Low power */
- alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x1); /* Low power */ - - snd_hda_codec_write(codec, hp_pin, 0, - AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE); - - if (hp_pin_sense || spec->ultra_low_power) - msleep(85); - - snd_hda_codec_write(codec, hp_pin, 0, + snd_hda_codec_write(codec, hp_pin, 0, AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT);
- if (hp_pin_sense || spec->ultra_low_power) - msleep(100); + msleep(75); + + snd_hda_codec_write(codec, hp_pin, 0, + AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE);
+ msleep(75); + alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x4); /* Hight power */ + } alc_update_coef_idx(codec, 0x46, 3 << 12, 0); - alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x4); /* Hight power */ alc_update_coefex_idx(codec, 0x53, 0x02, 0x8000, 1 << 15); /* Clear bit */ alc_update_coefex_idx(codec, 0x53, 0x02, 0x8000, 0 << 15); /* @@ -3655,29 +3654,28 @@ static void alc256_shutup(struct hda_codec *codec) alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x1); /* Low power */ hp_pin_sense = snd_hda_jack_detect(codec, hp_pin);
- if (hp_pin_sense) + if (hp_pin_sense) { msleep(2);
- snd_hda_codec_write(codec, hp_pin, 0, + snd_hda_codec_write(codec, hp_pin, 0, AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
- if (hp_pin_sense || spec->ultra_low_power) - msleep(85); + msleep(75);
/* 3k pull low control for Headset jack. */ /* NOTE: call this before clearing the pin, otherwise codec stalls */ /* If disable 3k pulldown control for alc257, the Mic detection will not work correctly * when booting with headset plugged. So skip setting it for the codec alc257 */ - if (spec->en_3kpull_low) - alc_update_coef_idx(codec, 0x46, 0, 3 << 12); + if (spec->en_3kpull_low) + alc_update_coef_idx(codec, 0x46, 0, 3 << 12);
- if (!spec->no_shutup_pins) - snd_hda_codec_write(codec, hp_pin, 0, + if (!spec->no_shutup_pins) + snd_hda_codec_write(codec, hp_pin, 0, AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
- if (hp_pin_sense || spec->ultra_low_power) - msleep(100); + msleep(75); + }
alc_auto_setup_eapd(codec, false); alc_shutup_pins(codec); @@ -3772,33 +3770,28 @@ static void alc225_init(struct hda_codec *codec) hp1_pin_sense = snd_hda_jack_detect(codec, hp_pin); hp2_pin_sense = snd_hda_jack_detect(codec, 0x16);
- if (hp1_pin_sense || hp2_pin_sense) + if (hp1_pin_sense || hp2_pin_sense) { msleep(2); + alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x1); /* Low power */
- alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x1); /* Low power */ - - if (hp1_pin_sense || spec->ultra_low_power) - snd_hda_codec_write(codec, hp_pin, 0, - AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE); - if (hp2_pin_sense) - snd_hda_codec_write(codec, 0x16, 0, - AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE); - - if (hp1_pin_sense || hp2_pin_sense || spec->ultra_low_power) - msleep(85); - - if (hp1_pin_sense || spec->ultra_low_power) - snd_hda_codec_write(codec, hp_pin, 0, - AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT); - if (hp2_pin_sense) - snd_hda_codec_write(codec, 0x16, 0, - AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT); + if (hp1_pin_sense) + snd_hda_codec_write(codec, hp_pin, 0, + AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT); + if (hp2_pin_sense) + snd_hda_codec_write(codec, 0x16, 0, + AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT); + msleep(75);
- if (hp1_pin_sense || hp2_pin_sense || spec->ultra_low_power) - msleep(100); + if (hp1_pin_sense) + snd_hda_codec_write(codec, hp_pin, 0, + AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE); + if (hp2_pin_sense) + snd_hda_codec_write(codec, 0x16, 0, + AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE);
- alc_update_coef_idx(codec, 0x4a, 3 << 10, 0); - alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x4); /* Hight power */ + msleep(75); + alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x4); /* Hight power */ + } }
static void alc225_shutup(struct hda_codec *codec) @@ -3810,36 +3803,35 @@ static void alc225_shutup(struct hda_codec *codec) if (!hp_pin) hp_pin = 0x21;
- alc_disable_headset_jack_key(codec); - /* 3k pull low control for Headset jack. */ - alc_update_coef_idx(codec, 0x4a, 0, 3 << 10); - hp1_pin_sense = snd_hda_jack_detect(codec, hp_pin); hp2_pin_sense = snd_hda_jack_detect(codec, 0x16);
- if (hp1_pin_sense || hp2_pin_sense) + if (hp1_pin_sense || hp2_pin_sense) { + alc_disable_headset_jack_key(codec); + /* 3k pull low control for Headset jack. */ + alc_update_coef_idx(codec, 0x4a, 0, 3 << 10); msleep(2);
- if (hp1_pin_sense || spec->ultra_low_power) - snd_hda_codec_write(codec, hp_pin, 0, - AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE); - if (hp2_pin_sense) - snd_hda_codec_write(codec, 0x16, 0, - AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE); - - if (hp1_pin_sense || hp2_pin_sense || spec->ultra_low_power) - msleep(85); + if (hp1_pin_sense) + snd_hda_codec_write(codec, hp_pin, 0, + AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE); + if (hp2_pin_sense) + snd_hda_codec_write(codec, 0x16, 0, + AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
- if (hp1_pin_sense || spec->ultra_low_power) - snd_hda_codec_write(codec, hp_pin, 0, - AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0); - if (hp2_pin_sense) - snd_hda_codec_write(codec, 0x16, 0, - AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0); + msleep(75);
- if (hp1_pin_sense || hp2_pin_sense || spec->ultra_low_power) - msleep(100); + if (hp1_pin_sense) + snd_hda_codec_write(codec, hp_pin, 0, + AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0); + if (hp2_pin_sense) + snd_hda_codec_write(codec, 0x16, 0, + AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
+ msleep(75); + alc_update_coef_idx(codec, 0x4a, 3 << 10, 0); + alc_enable_headset_jack_key(codec); + } alc_auto_setup_eapd(codec, false); alc_shutup_pins(codec); if (spec->ultra_low_power) { @@ -3850,9 +3842,6 @@ static void alc225_shutup(struct hda_codec *codec) alc_update_coef_idx(codec, 0x4a, 3<<4, 2<<4); msleep(30); } - - alc_update_coef_idx(codec, 0x4a, 3 << 10, 0); - alc_enable_headset_jack_key(codec); }
static void alc_default_init(struct hda_codec *codec) @@ -7559,6 +7548,7 @@ enum { ALC269_FIXUP_THINKPAD_ACPI, ALC269_FIXUP_DMIC_THINKPAD_ACPI, ALC269VB_FIXUP_INFINIX_ZERO_BOOK_13, + ALC269VC_FIXUP_INFINIX_Y4_MAX, ALC269VB_FIXUP_CHUWI_COREBOOK_XPRO, ALC255_FIXUP_ACER_MIC_NO_PRESENCE, ALC255_FIXUP_ASUS_MIC_NO_PRESENCE, @@ -7786,6 +7776,7 @@ enum { ALC287_FIXUP_LENOVO_SSID_17AA3820, ALC245_FIXUP_CLEVO_NOISY_MIC, ALC269_FIXUP_VAIO_VJFH52_MIC_NO_PRESENCE, + ALC233_FIXUP_MEDION_MTL_SPK, };
/* A special fixup for Lenovo C940 and Yoga Duet 7; @@ -8015,6 +8006,15 @@ static const struct hda_fixup alc269_fixups[] = { .chained = true, .chain_id = ALC269_FIXUP_LIMIT_INT_MIC_BOOST }, + [ALC269VC_FIXUP_INFINIX_Y4_MAX] = { + .type = HDA_FIXUP_PINS, + .v.pins = (const struct hda_pintbl[]) { + { 0x1b, 0x90170150 }, /* use as internal speaker */ + { } + }, + .chained = true, + .chain_id = ALC269_FIXUP_LIMIT_INT_MIC_BOOST + }, [ALC269VB_FIXUP_CHUWI_COREBOOK_XPRO] = { .type = HDA_FIXUP_PINS, .v.pins = (const struct hda_pintbl[]) { @@ -10160,6 +10160,13 @@ static const struct hda_fixup alc269_fixups[] = { .chained = true, .chain_id = ALC269_FIXUP_LIMIT_INT_MIC_BOOST }, + [ALC233_FIXUP_MEDION_MTL_SPK] = { + .type = HDA_FIXUP_PINS, + .v.pins = (const struct hda_pintbl[]) { + { 0x1b, 0x90170110 }, + { } + }, + }, };
static const struct snd_pci_quirk alc269_fixup_tbl[] = { @@ -10585,6 +10592,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { SND_PCI_QUIRK(0x103c, 0x8cdf, "HP SnowWhite", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED), SND_PCI_QUIRK(0x103c, 0x8ce0, "HP SnowWhite", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED), SND_PCI_QUIRK(0x103c, 0x8cf5, "HP ZBook Studio 16", ALC245_FIXUP_CS35L41_SPI_4_HP_GPIO_LED), + SND_PCI_QUIRK(0x103c, 0x8d84, "HP EliteBook X G1i", ALC285_FIXUP_HP_GPIO_LED), SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC), SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300), SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), @@ -11025,7 +11033,10 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { SND_PCI_QUIRK(0x2782, 0x0214, "VAIO VJFE-CL", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), SND_PCI_QUIRK(0x2782, 0x0228, "Infinix ZERO BOOK 13", ALC269VB_FIXUP_INFINIX_ZERO_BOOK_13), SND_PCI_QUIRK(0x2782, 0x0232, "CHUWI CoreBook XPro", ALC269VB_FIXUP_CHUWI_COREBOOK_XPRO), + SND_PCI_QUIRK(0x2782, 0x1701, "Infinix Y4 Max", ALC269VC_FIXUP_INFINIX_Y4_MAX), + SND_PCI_QUIRK(0x2782, 0x1705, "MEDION E15433", ALC269VC_FIXUP_INFINIX_Y4_MAX), SND_PCI_QUIRK(0x2782, 0x1707, "Vaio VJFE-ADL", ALC298_FIXUP_SPK_VOLUME), + SND_PCI_QUIRK(0x2782, 0x4900, "MEDION E15443", ALC233_FIXUP_MEDION_MTL_SPK), SND_PCI_QUIRK(0x8086, 0x2074, "Intel NUC 8", ALC233_FIXUP_INTEL_NUC8_DMIC), SND_PCI_QUIRK(0x8086, 0x2080, "Intel NUC 8 Rugged", ALC256_FIXUP_INTEL_NUC8_RUGGED), SND_PCI_QUIRK(0x8086, 0x2081, "Intel NUC 10", ALC256_FIXUP_INTEL_NUC10), diff --git a/sound/soc/amd/acp/acp-sdw-sof-mach.c b/sound/soc/amd/acp/acp-sdw-sof-mach.c index 306854fb08e3..3be401c72270 100644 --- a/sound/soc/amd/acp/acp-sdw-sof-mach.c +++ b/sound/soc/amd/acp/acp-sdw-sof-mach.c @@ -154,7 +154,7 @@ static int create_sdw_dailink(struct snd_soc_card *card, int num_cpus = hweight32(sof_dai->link_mask[stream]); int num_codecs = sof_dai->num_devs[stream]; int playback, capture; - int i = 0, j = 0; + int j = 0; char *name;
if (!sof_dai->num_devs[stream]) @@ -213,14 +213,14 @@ static int create_sdw_dailink(struct snd_soc_card *card,
int link_num = ffs(sof_end->link_mask) - 1;
- cpus[i].dai_name = devm_kasprintf(dev, GFP_KERNEL, - "SDW%d Pin%d", - link_num, cpu_pin_id); - dev_dbg(dev, "cpu[%d].dai_name:%s\n", i, cpus[i].dai_name); - if (!cpus[i].dai_name) + cpus->dai_name = devm_kasprintf(dev, GFP_KERNEL, + "SDW%d Pin%d", + link_num, cpu_pin_id); + dev_dbg(dev, "cpu->dai_name:%s\n", cpus->dai_name); + if (!cpus->dai_name) return -ENOMEM;
- codec_maps[j].cpu = i; + codec_maps[j].cpu = 0; codec_maps[j].codec = j;
codecs[j].name = sof_end->codec_name; @@ -362,7 +362,7 @@ static int sof_card_dai_links_create(struct snd_soc_card *card) dai_links = devm_kcalloc(dev, num_links, sizeof(*dai_links), GFP_KERNEL); if (!dai_links) { ret = -ENOMEM; - goto err_end; + goto err_end; }
card->codec_conf = codec_conf; diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c index 2436e8deb2be..5153a68d8c07 100644 --- a/sound/soc/amd/yc/acp6x-mach.c +++ b/sound/soc/amd/yc/acp6x-mach.c @@ -241,6 +241,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = { DMI_MATCH(DMI_PRODUCT_NAME, "21M5"), } }, + { + .driver_data = &acp6x_card, + .matches = { + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), + DMI_MATCH(DMI_PRODUCT_NAME, "21ME"), + } + }, { .driver_data = &acp6x_card, .matches = { @@ -537,8 +544,14 @@ static int acp6x_probe(struct platform_device *pdev) struct acp6x_pdm *machine = NULL; struct snd_soc_card *card; struct acpi_device *adev; + acpi_handle handle; + acpi_integer dmic_status; int ret; + bool is_dmic_enable, wov_en;
+ /* IF WOV entry not found, enable dmic based on AcpDmicConnected entry*/ + is_dmic_enable = false; + wov_en = true; /* check the parent device's firmware node has _DSD or not */ adev = ACPI_COMPANION(pdev->dev.parent); if (adev) { @@ -546,9 +559,19 @@ static int acp6x_probe(struct platform_device *pdev)
if (!acpi_dev_get_property(adev, "AcpDmicConnected", ACPI_TYPE_INTEGER, &obj) && obj->integer.value == 1) - platform_set_drvdata(pdev, &acp6x_card); + is_dmic_enable = true; }
+ handle = ACPI_HANDLE(pdev->dev.parent); + ret = acpi_evaluate_integer(handle, "_WOV", NULL, &dmic_status); + if (!ACPI_FAILURE(ret)) + wov_en = dmic_status; + + if (is_dmic_enable && wov_en) + platform_set_drvdata(pdev, &acp6x_card); + else + return 0; + /* check for any DMI overrides */ dmi_id = dmi_first_match(yc_acp_quirk_table); if (dmi_id) diff --git a/sound/soc/codecs/da7213.c b/sound/soc/codecs/da7213.c index f3ef6fb55304..486db60bf2dd 100644 --- a/sound/soc/codecs/da7213.c +++ b/sound/soc/codecs/da7213.c @@ -2136,6 +2136,7 @@ static const struct regmap_config da7213_regmap_config = { .reg_bits = 8, .val_bits = 8,
+ .max_register = DA7213_TONE_GEN_OFF_PER, .reg_defaults = da7213_reg_defaults, .num_reg_defaults = ARRAY_SIZE(da7213_reg_defaults), .volatile_reg = da7213_volatile_register, diff --git a/sound/soc/codecs/da7219.c b/sound/soc/codecs/da7219.c index 311ea7918b31..e2da3e317b5a 100644 --- a/sound/soc/codecs/da7219.c +++ b/sound/soc/codecs/da7219.c @@ -1167,17 +1167,20 @@ static int da7219_set_dai_sysclk(struct snd_soc_dai *codec_dai, struct da7219_priv *da7219 = snd_soc_component_get_drvdata(component); int ret = 0;
- if ((da7219->clk_src == clk_id) && (da7219->mclk_rate == freq)) + mutex_lock(&da7219->pll_lock); + + if ((da7219->clk_src == clk_id) && (da7219->mclk_rate == freq)) { + mutex_unlock(&da7219->pll_lock); return 0; + }
if ((freq < 2000000) || (freq > 54000000)) { + mutex_unlock(&da7219->pll_lock); dev_err(codec_dai->dev, "Unsupported MCLK value %d\n", freq); return -EINVAL; }
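
The da7219_set_dai_sysclk() hunk above takes pll_lock before the "already configured" early return, so the clk_src/mclk_rate check and the later update become one atomic step with respect to concurrent callers. A minimal standalone sketch of that lock-then-check ordering using a pthread mutex (the function and field names are invented):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int cur_src = -1;
static unsigned int cur_rate;

/* Take the lock before the "nothing to do" check so the check and the
 * update cannot interleave with another caller's update. */
static int set_clock(int src, unsigned int rate)
{
    pthread_mutex_lock(&lock);

    if (cur_src == src && cur_rate == rate) {
        pthread_mutex_unlock(&lock);
        return 0;                /* already configured */
    }

    cur_src = src;               /* both fields updated under the lock */
    cur_rate = rate;

    pthread_mutex_unlock(&lock);
    return 1;
}

int main(void)
{
    printf("%d\n", set_clock(1, 12288000));    /* 1: applied */
    printf("%d\n", set_clock(1, 12288000));    /* 0: no-op   */
    return 0;
}
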
- mutex_lock(&da7219->pll_lock); - switch (clk_id) { case DA7219_CLKSRC_MCLK_SQR: snd_soc_component_update_bits(component, DA7219_PLL_CTRL, diff --git a/sound/soc/codecs/rt722-sdca.c b/sound/soc/codecs/rt722-sdca.c index e5bd9ef812de..f9f7512ca360 100644 --- a/sound/soc/codecs/rt722-sdca.c +++ b/sound/soc/codecs/rt722-sdca.c @@ -607,12 +607,8 @@ static int rt722_sdca_dmic_set_gain_get(struct snd_kcontrol *kcontrol,
if (!adc_vol_flag) /* boost gain */ ctl = regvalue / boost_step; - else { /* ADC gain */ - if (adc_vol_flag) - ctl = p->max - (((vol_max - regvalue) & 0xffff) / interval_offset); - else - ctl = p->max - (((0 - regvalue) & 0xffff) / interval_offset); - } + else /* ADC gain */ + ctl = p->max - (((vol_max - regvalue) & 0xffff) / interval_offset);
ucontrol->value.integer.value[i] = ctl; } diff --git a/sound/soc/fsl/fsl-asoc-card.c b/sound/soc/fsl/fsl-asoc-card.c index f6c3aeff0d8e..a0c2ce84c32b 100644 --- a/sound/soc/fsl/fsl-asoc-card.c +++ b/sound/soc/fsl/fsl-asoc-card.c @@ -1033,14 +1033,15 @@ static int fsl_asoc_card_probe(struct platform_device *pdev) }
/* - * Properties "hp-det-gpio" and "mic-det-gpio" are optional, and + * Properties "hp-det-gpios" and "mic-det-gpios" are optional, and * simple_util_init_jack() uses these properties for creating * Headphone Jack and Microphone Jack. * * The notifier is initialized in snd_soc_card_jack_new(), then * snd_soc_jack_notifier_register can be called. */ - if (of_property_read_bool(np, "hp-det-gpio")) { + if (of_property_read_bool(np, "hp-det-gpios") || + of_property_read_bool(np, "hp-det-gpio") /* deprecated */) { ret = simple_util_init_jack(&priv->card, &priv->hp_jack, 1, NULL, "Headphone Jack"); if (ret) @@ -1049,7 +1050,8 @@ static int fsl_asoc_card_probe(struct platform_device *pdev) snd_soc_jack_notifier_register(&priv->hp_jack.jack, &hp_jack_nb); }
- if (of_property_read_bool(np, "mic-det-gpio")) { + if (of_property_read_bool(np, "mic-det-gpios") || + of_property_read_bool(np, "mic-det-gpio") /* deprecated */) { ret = simple_util_init_jack(&priv->card, &priv->mic_jack, 0, NULL, "Mic Jack"); if (ret) diff --git a/sound/soc/fsl/fsl_micfil.c b/sound/soc/fsl/fsl_micfil.c index 0c71a73476df..67c2d4cb0dea 100644 --- a/sound/soc/fsl/fsl_micfil.c +++ b/sound/soc/fsl/fsl_micfil.c @@ -1061,7 +1061,7 @@ static irqreturn_t micfil_isr(int irq, void *devid) regmap_write_bits(micfil->regmap, REG_MICFIL_STAT, MICFIL_STAT_CHXF(i), - 1); + MICFIL_STAT_CHXF(i)); }
for (i = 0; i < MICFIL_FIFO_NUM; i++) { @@ -1096,7 +1096,7 @@ static irqreturn_t micfil_err_isr(int irq, void *devid) if (stat_reg & MICFIL_STAT_LOWFREQF) { dev_dbg(&pdev->dev, "isr: ipg_clk_app is too low\n"); regmap_write_bits(micfil->regmap, REG_MICFIL_STAT, - MICFIL_STAT_LOWFREQF, 1); + MICFIL_STAT_LOWFREQF, MICFIL_STAT_LOWFREQF); }
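
Both fsl_micfil hunks above acknowledge write-1-to-clear status flags through regmap_write_bits(map, reg, mask, val), which writes (val & mask) into the masked bits; passing a literal 1 as val can only ever set bit 0, so flags above bit 0 were never actually cleared. The masked-write arithmetic, modeled on a plain variable rather than a real regmap:

#include <stdint.h>
#include <stdio.h>

/* Masked write: only bits covered by 'mask' take their value from 'val'. */
static uint32_t masked_write(uint32_t reg, uint32_t mask, uint32_t val)
{
    return (reg & ~mask) | (val & mask);
}

int main(void)
{
    uint32_t mask = 1u << 5;    /* e.g. a channel-5 status flag */

    /* val = 1: (1 & BIT(5)) == 0, so the flag bit is written with 0. */
    printf("val = 1:    0x%08x\n", masked_write(0, mask, 1));

    /* val = mask: the flag bit is written with 1, as W1C requires. */
    printf("val = mask: 0x%08x\n", masked_write(0, mask, mask));
    return 0;
}
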
return IRQ_HANDLED; diff --git a/sound/soc/fsl/imx-audmix.c b/sound/soc/fsl/imx-audmix.c index 6fbcf33fd0de..8e7b75cf64db 100644 --- a/sound/soc/fsl/imx-audmix.c +++ b/sound/soc/fsl/imx-audmix.c @@ -275,6 +275,9 @@ static int imx_audmix_probe(struct platform_device *pdev) /* Add AUDMIX Backend */ be_name = devm_kasprintf(&pdev->dev, GFP_KERNEL, "audmix-%d", i); + if (!be_name) + return -ENOMEM; + priv->dai[num_dai + i].cpus = &dlc[1]; priv->dai[num_dai + i].codecs = &snd_soc_dummy_dlc;
diff --git a/sound/soc/mediatek/mt8188/mt8188-mt6359.c b/sound/soc/mediatek/mt8188/mt8188-mt6359.c index 08ae962afeb9..4eed90d13a53 100644 --- a/sound/soc/mediatek/mt8188/mt8188-mt6359.c +++ b/sound/soc/mediatek/mt8188/mt8188-mt6359.c @@ -1279,10 +1279,12 @@ static int mt8188_mt6359_soc_card_probe(struct mtk_soc_card_data *soc_card_data,
for_each_card_prelinks(card, i, dai_link) { if (strcmp(dai_link->name, "DPTX_BE") == 0) { - if (strcmp(dai_link->codecs->dai_name, "snd-soc-dummy-dai")) + if (dai_link->num_codecs && + strcmp(dai_link->codecs->dai_name, "snd-soc-dummy-dai")) dai_link->init = mt8188_dptx_codec_init; } else if (strcmp(dai_link->name, "ETDM3_OUT_BE") == 0) { - if (strcmp(dai_link->codecs->dai_name, "snd-soc-dummy-dai")) + if (dai_link->num_codecs && + strcmp(dai_link->codecs->dai_name, "snd-soc-dummy-dai")) dai_link->init = mt8188_hdmi_codec_init; } else if (strcmp(dai_link->name, "DL_SRC_BE") == 0 || strcmp(dai_link->name, "UL_SRC_BE") == 0) { @@ -1294,6 +1296,9 @@ static int mt8188_mt6359_soc_card_probe(struct mtk_soc_card_data *soc_card_data, strcmp(dai_link->name, "ETDM2_OUT_BE") == 0 || strcmp(dai_link->name, "ETDM1_IN_BE") == 0 || strcmp(dai_link->name, "ETDM2_IN_BE") == 0) { + if (!dai_link->num_codecs) + continue; + if (!strcmp(dai_link->codecs->dai_name, MAX98390_CODEC_DAI)) { /* * The TDM protocol settings with fixed 4 slots are defined in diff --git a/sound/soc/mediatek/mt8192/mt8192-mt6359-rt1015-rt5682.c b/sound/soc/mediatek/mt8192/mt8192-mt6359-rt1015-rt5682.c index db00704e206d..943f81168403 100644 --- a/sound/soc/mediatek/mt8192/mt8192-mt6359-rt1015-rt5682.c +++ b/sound/soc/mediatek/mt8192/mt8192-mt6359-rt1015-rt5682.c @@ -1099,7 +1099,7 @@ static int mt8192_mt6359_legacy_probe(struct mtk_soc_card_data *soc_card_data) dai_link->ignore = 0; }
- if (dai_link->num_codecs && dai_link->codecs[0].dai_name && + if (dai_link->num_codecs && strcmp(dai_link->codecs[0].dai_name, RT1015_CODEC_DAI) == 0) dai_link->ops = &mt8192_rt1015_i2s_ops; } @@ -1127,7 +1127,7 @@ static int mt8192_mt6359_soc_card_probe(struct mtk_soc_card_data *soc_card_data, int i;
for_each_card_prelinks(card, i, dai_link) - if (dai_link->num_codecs && dai_link->codecs[0].dai_name && + if (dai_link->num_codecs && strcmp(dai_link->codecs[0].dai_name, RT1015_CODEC_DAI) == 0) dai_link->ops = &mt8192_rt1015_i2s_ops; } diff --git a/sound/soc/mediatek/mt8195/mt8195-mt6359.c b/sound/soc/mediatek/mt8195/mt8195-mt6359.c index 2832ef78eaed..8ebf6c7502aa 100644 --- a/sound/soc/mediatek/mt8195/mt8195-mt6359.c +++ b/sound/soc/mediatek/mt8195/mt8195-mt6359.c @@ -1380,10 +1380,12 @@ static int mt8195_mt6359_soc_card_probe(struct mtk_soc_card_data *soc_card_data,
for_each_card_prelinks(card, i, dai_link) { if (strcmp(dai_link->name, "DPTX_BE") == 0) { - if (strcmp(dai_link->codecs->dai_name, "snd-soc-dummy-dai")) + if (dai_link->num_codecs && + strcmp(dai_link->codecs->dai_name, "snd-soc-dummy-dai")) dai_link->init = mt8195_dptx_codec_init; } else if (strcmp(dai_link->name, "ETDM3_OUT_BE") == 0) { - if (strcmp(dai_link->codecs->dai_name, "snd-soc-dummy-dai")) + if (dai_link->num_codecs && + strcmp(dai_link->codecs->dai_name, "snd-soc-dummy-dai")) dai_link->init = mt8195_hdmi_codec_init; } else if (strcmp(dai_link->name, "DL_SRC_BE") == 0 || strcmp(dai_link->name, "UL_SRC1_BE") == 0 || @@ -1396,6 +1398,9 @@ static int mt8195_mt6359_soc_card_probe(struct mtk_soc_card_data *soc_card_data, strcmp(dai_link->name, "ETDM2_OUT_BE") == 0 || strcmp(dai_link->name, "ETDM1_IN_BE") == 0 || strcmp(dai_link->name, "ETDM2_IN_BE") == 0) { + if (!dai_link->num_codecs) + continue; + if (!strcmp(dai_link->codecs->dai_name, MAX98390_CODEC_DAI)) { if (!(codec_init & MAX98390_CODEC_INIT)) { dai_link->init = mt8195_max98390_init; diff --git a/sound/usb/6fire/chip.c b/sound/usb/6fire/chip.c index 33e962178c93..d562a30b087f 100644 --- a/sound/usb/6fire/chip.c +++ b/sound/usb/6fire/chip.c @@ -61,8 +61,10 @@ static void usb6fire_chip_abort(struct sfire_chip *chip) } }
-static void usb6fire_chip_destroy(struct sfire_chip *chip) +static void usb6fire_card_free(struct snd_card *card) { + struct sfire_chip *chip = card->private_data; + if (chip) { if (chip->pcm) usb6fire_pcm_destroy(chip); @@ -72,8 +74,6 @@ static void usb6fire_chip_destroy(struct sfire_chip *chip) usb6fire_comm_destroy(chip); if (chip->control) usb6fire_control_destroy(chip); - if (chip->card) - snd_card_free(chip->card); } }
@@ -136,6 +136,7 @@ static int usb6fire_chip_probe(struct usb_interface *intf, chip->regidx = regidx; chip->intf_count = 1; chip->card = card; + card->private_free = usb6fire_card_free;
ret = usb6fire_comm_init(chip); if (ret < 0) @@ -162,7 +163,7 @@ static int usb6fire_chip_probe(struct usb_interface *intf, return 0;
destroy_chip: - usb6fire_chip_destroy(chip); + snd_card_free(card); return ret; }
@@ -181,7 +182,6 @@ static void usb6fire_chip_disconnect(struct usb_interface *intf)
chip->shutdown = true; usb6fire_chip_abort(chip); - usb6fire_chip_destroy(chip); } } } diff --git a/sound/usb/caiaq/audio.c b/sound/usb/caiaq/audio.c index 772c0ecb7077..05f964347ed6 100644 --- a/sound/usb/caiaq/audio.c +++ b/sound/usb/caiaq/audio.c @@ -858,14 +858,20 @@ int snd_usb_caiaq_audio_init(struct snd_usb_caiaqdev *cdev) return 0; }
-void snd_usb_caiaq_audio_free(struct snd_usb_caiaqdev *cdev) +void snd_usb_caiaq_audio_disconnect(struct snd_usb_caiaqdev *cdev) { struct device *dev = caiaqdev_to_dev(cdev);
dev_dbg(dev, "%s(%p)\n", __func__, cdev); stream_stop(cdev); +} + +void snd_usb_caiaq_audio_free(struct snd_usb_caiaqdev *cdev) +{ + struct device *dev = caiaqdev_to_dev(cdev); + + dev_dbg(dev, "%s(%p)\n", __func__, cdev); free_urbs(cdev->data_urbs_in); free_urbs(cdev->data_urbs_out); kfree(cdev->data_cb_info); } - diff --git a/sound/usb/caiaq/audio.h b/sound/usb/caiaq/audio.h index 869bf6264d6a..07f5d064456c 100644 --- a/sound/usb/caiaq/audio.h +++ b/sound/usb/caiaq/audio.h @@ -3,6 +3,7 @@ #define CAIAQ_AUDIO_H
int snd_usb_caiaq_audio_init(struct snd_usb_caiaqdev *cdev); +void snd_usb_caiaq_audio_disconnect(struct snd_usb_caiaqdev *cdev); void snd_usb_caiaq_audio_free(struct snd_usb_caiaqdev *cdev);
#endif /* CAIAQ_AUDIO_H */ diff --git a/sound/usb/caiaq/device.c b/sound/usb/caiaq/device.c index b5cbf1f195c4..dfd820483849 100644 --- a/sound/usb/caiaq/device.c +++ b/sound/usb/caiaq/device.c @@ -376,6 +376,17 @@ static void setup_card(struct snd_usb_caiaqdev *cdev) dev_err(dev, "Unable to set up control system (ret=%d)\n", ret); }
+static void card_free(struct snd_card *card) +{ + struct snd_usb_caiaqdev *cdev = caiaqdev(card); + +#ifdef CONFIG_SND_USB_CAIAQ_INPUT + snd_usb_caiaq_input_free(cdev); +#endif + snd_usb_caiaq_audio_free(cdev); + usb_reset_device(cdev->chip.dev); +} + static int create_card(struct usb_device *usb_dev, struct usb_interface *intf, struct snd_card **cardp) @@ -489,6 +500,7 @@ static int init_card(struct snd_usb_caiaqdev *cdev) cdev->vendor_name, cdev->product_name, usbpath);
setup_card(cdev); + card->private_free = card_free; return 0;
err_kill_urb: @@ -534,15 +546,14 @@ static void snd_disconnect(struct usb_interface *intf) snd_card_disconnect(card);
#ifdef CONFIG_SND_USB_CAIAQ_INPUT - snd_usb_caiaq_input_free(cdev); + snd_usb_caiaq_input_disconnect(cdev); #endif - snd_usb_caiaq_audio_free(cdev); + snd_usb_caiaq_audio_disconnect(cdev);
usb_kill_urb(&cdev->ep1_in_urb); usb_kill_urb(&cdev->midi_out_urb);
- snd_card_free(card); - usb_reset_device(interface_to_usbdev(intf)); + snd_card_free_when_closed(card); }
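The caiaq rework splits teardown in two: a disconnect half that runs at unplug time (stop streaming, unregister the input device) and a free half that moves behind the card's private_free callback, with the disconnect path switching from snd_card_free() to snd_card_free_when_closed() so the memory outlives any still-open PCM or control file handles. A rough sketch of the disconnect side only, with an invented mydev_stop_hardware() standing in for the driver's real stop/unregister helpers:

#include <linux/usb.h>
#include <sound/core.h>

/* assumed helper: kill URBs, stop streams, unregister input devices, ... */
static void mydev_stop_hardware(void *private_data) { }

static void mydev_disconnect(struct usb_interface *intf)
{
	struct snd_card *card = usb_get_intfdata(intf);

	if (!card)
		return;

	snd_card_disconnect(card);		/* new opens/ioctls fail immediately */
	mydev_stop_hardware(card->private_data);

	/*
	 * Do not free yet: another process may still have the PCM or
	 * control node open.  The card, and with it ->private_free,
	 * is released only after the last file handle goes away.
	 */
	snd_card_free_when_closed(card);
}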
diff --git a/sound/usb/caiaq/input.c b/sound/usb/caiaq/input.c index 84f26dce7f5d..a9130891bb69 100644 --- a/sound/usb/caiaq/input.c +++ b/sound/usb/caiaq/input.c @@ -829,15 +829,21 @@ int snd_usb_caiaq_input_init(struct snd_usb_caiaqdev *cdev) return ret; }
-void snd_usb_caiaq_input_free(struct snd_usb_caiaqdev *cdev) +void snd_usb_caiaq_input_disconnect(struct snd_usb_caiaqdev *cdev) { if (!cdev || !cdev->input_dev) return;
usb_kill_urb(cdev->ep4_in_urb); + input_unregister_device(cdev->input_dev); +} + +void snd_usb_caiaq_input_free(struct snd_usb_caiaqdev *cdev) +{ + if (!cdev || !cdev->input_dev) + return; + usb_free_urb(cdev->ep4_in_urb); cdev->ep4_in_urb = NULL; - - input_unregister_device(cdev->input_dev); cdev->input_dev = NULL; } diff --git a/sound/usb/caiaq/input.h b/sound/usb/caiaq/input.h index c42891e7be88..fbe267f85d02 100644 --- a/sound/usb/caiaq/input.h +++ b/sound/usb/caiaq/input.h @@ -4,6 +4,7 @@
void snd_usb_caiaq_input_dispatch(struct snd_usb_caiaqdev *cdev, char *buf, unsigned int len); int snd_usb_caiaq_input_init(struct snd_usb_caiaqdev *cdev); +void snd_usb_caiaq_input_disconnect(struct snd_usb_caiaqdev *cdev); void snd_usb_caiaq_input_free(struct snd_usb_caiaqdev *cdev);
#endif diff --git a/sound/usb/clock.c b/sound/usb/clock.c index 8f85200292f3..842ba5b801ea 100644 --- a/sound/usb/clock.c +++ b/sound/usb/clock.c @@ -36,6 +36,12 @@ union uac23_clock_multiplier_desc { struct uac_clock_multiplier_descriptor v3; };
+/* check whether the descriptor bLength has the minimal length */ +#define DESC_LENGTH_CHECK(p, proto) \ + ((proto) == UAC_VERSION_3 ? \ + ((p)->v3.bLength >= sizeof((p)->v3)) : \ + ((p)->v2.bLength >= sizeof((p)->v2))) + #define GET_VAL(p, proto, field) \ ((proto) == UAC_VERSION_3 ? (p)->v3.field : (p)->v2.field)
@@ -58,6 +64,8 @@ static bool validate_clock_source(void *p, int id, int proto) { union uac23_clock_source_desc *cs = p;
+ if (!DESC_LENGTH_CHECK(cs, proto)) + return false; return GET_VAL(cs, proto, bClockID) == id; }
@@ -65,13 +73,27 @@ static bool validate_clock_selector(void *p, int id, int proto) { union uac23_clock_selector_desc *cs = p;
- return GET_VAL(cs, proto, bClockID) == id; + if (!DESC_LENGTH_CHECK(cs, proto)) + return false; + if (GET_VAL(cs, proto, bClockID) != id) + return false; + /* additional length check for baCSourceID array (in bNrInPins size) + * and two more fields (which sizes depend on the protocol) + */ + if (proto == UAC_VERSION_3) + return cs->v3.bLength >= sizeof(cs->v3) + cs->v3.bNrInPins + + 4 /* bmControls */ + 2 /* wCSelectorDescrStr */; + else + return cs->v2.bLength >= sizeof(cs->v2) + cs->v2.bNrInPins + + 1 /* bmControls */ + 1 /* iClockSelector */; }
static bool validate_clock_multiplier(void *p, int id, int proto) { union uac23_clock_multiplier_desc *cs = p;
+ if (!DESC_LENGTH_CHECK(cs, proto)) + return false; return GET_VAL(cs, proto, bClockID) == id; }
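The selector validator above is the interesting case: the descriptor carries a variable-length baCSourceID[] array sized by bNrInPins, followed by trailing fields whose sizes depend on the protocol version, so comparing bLength against a fixed sizeof() is not enough and the minimum length has to be computed per descriptor. A standalone sketch of that arithmetic for the UAC2 layout (the struct below is a simplified assumption for illustration, not the kernel's definition from <linux/usb/audio-v2.h>):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct clock_selector_hdr {
	uint8_t bLength;
	uint8_t bDescriptorType;
	uint8_t bDescriptorSubtype;
	uint8_t bClockID;
	uint8_t bNrInPins;
	/* followed by: uint8_t baCSourceID[bNrInPins];
	 *              uint8_t bmControls;
	 *              uint8_t iClockSelector;
	 */
};

static bool clock_selector_len_ok(const struct clock_selector_hdr *d)
{
	size_t need = sizeof(*d)
		    + d->bNrInPins	/* variable-length source ID array */
		    + 1			/* bmControls */
		    + 1;		/* iClockSelector */

	return d->bLength >= need;
}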
diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c index c5fd180357d1..8538fdfce353 100644 --- a/sound/usb/quirks.c +++ b/sound/usb/quirks.c @@ -555,6 +555,7 @@ int snd_usb_create_quirk(struct snd_usb_audio *chip, static int snd_usb_extigy_boot_quirk(struct usb_device *dev, struct usb_interface *intf) { struct usb_host_config *config = dev->actconfig; + struct usb_device_descriptor new_device_descriptor; int err;
if (le16_to_cpu(get_cfg_desc(config)->wTotalLength) == EXTIGY_FIRMWARE_SIZE_OLD || @@ -566,10 +567,14 @@ static int snd_usb_extigy_boot_quirk(struct usb_device *dev, struct usb_interfac if (err < 0) dev_dbg(&dev->dev, "error sending boot message: %d\n", err); err = usb_get_descriptor(dev, USB_DT_DEVICE, 0, - &dev->descriptor, sizeof(dev->descriptor)); - config = dev->actconfig; + &new_device_descriptor, sizeof(new_device_descriptor)); if (err < 0) dev_dbg(&dev->dev, "error usb_get_descriptor: %d\n", err); + if (new_device_descriptor.bNumConfigurations > dev->descriptor.bNumConfigurations) + dev_dbg(&dev->dev, "error too large bNumConfigurations: %d\n", + new_device_descriptor.bNumConfigurations); + else + memcpy(&dev->descriptor, &new_device_descriptor, sizeof(dev->descriptor)); err = usb_reset_configuration(dev); if (err < 0) dev_dbg(&dev->dev, "error usb_reset_configuration: %d\n", err); @@ -901,6 +906,7 @@ static void mbox2_setup_48_24_magic(struct usb_device *dev) static int snd_usb_mbox2_boot_quirk(struct usb_device *dev) { struct usb_host_config *config = dev->actconfig; + struct usb_device_descriptor new_device_descriptor; int err; u8 bootresponse[0x12]; int fwsize; @@ -936,10 +942,14 @@ static int snd_usb_mbox2_boot_quirk(struct usb_device *dev) dev_dbg(&dev->dev, "device initialised!\n");
err = usb_get_descriptor(dev, USB_DT_DEVICE, 0, - &dev->descriptor, sizeof(dev->descriptor)); - config = dev->actconfig; + &new_device_descriptor, sizeof(new_device_descriptor)); if (err < 0) dev_dbg(&dev->dev, "error usb_get_descriptor: %d\n", err); + if (new_device_descriptor.bNumConfigurations > dev->descriptor.bNumConfigurations) + dev_dbg(&dev->dev, "error too large bNumConfigurations: %d\n", + new_device_descriptor.bNumConfigurations); + else + memcpy(&dev->descriptor, &new_device_descriptor, sizeof(dev->descriptor));
err = usb_reset_configuration(dev); if (err < 0) @@ -1249,6 +1259,7 @@ static void mbox3_setup_defaults(struct usb_device *dev) static int snd_usb_mbox3_boot_quirk(struct usb_device *dev) { struct usb_host_config *config = dev->actconfig; + struct usb_device_descriptor new_device_descriptor; int err; int descriptor_size;
@@ -1262,10 +1273,14 @@ static int snd_usb_mbox3_boot_quirk(struct usb_device *dev) dev_dbg(&dev->dev, "MBOX3: device initialised!\n");
err = usb_get_descriptor(dev, USB_DT_DEVICE, 0, - &dev->descriptor, sizeof(dev->descriptor)); - config = dev->actconfig; + &new_device_descriptor, sizeof(new_device_descriptor)); if (err < 0) dev_dbg(&dev->dev, "MBOX3: error usb_get_descriptor: %d\n", err); + if (new_device_descriptor.bNumConfigurations > dev->descriptor.bNumConfigurations) + dev_dbg(&dev->dev, "MBOX3: error too large bNumConfigurations: %d\n", + new_device_descriptor.bNumConfigurations); + else + memcpy(&dev->descriptor, &new_device_descriptor, sizeof(dev->descriptor));
err = usb_reset_configuration(dev); if (err < 0) diff --git a/sound/usb/usx2y/us122l.c b/sound/usb/usx2y/us122l.c index 1be0e980feb9..ca5fac03ec79 100644 --- a/sound/usb/usx2y/us122l.c +++ b/sound/usb/usx2y/us122l.c @@ -606,10 +606,7 @@ static void snd_us122l_disconnect(struct usb_interface *intf) usb_put_intf(usb_ifnum_to_if(us122l->dev, 1)); usb_put_dev(us122l->dev);
- while (atomic_read(&us122l->mmap_count)) - msleep(500); - - snd_card_free(card); + snd_card_free_when_closed(card); }
static int snd_us122l_suspend(struct usb_interface *intf, pm_message_t message) diff --git a/sound/usb/usx2y/usbusx2y.c b/sound/usb/usx2y/usbusx2y.c index 2f9cede242b3..5f81c68fd42b 100644 --- a/sound/usb/usx2y/usbusx2y.c +++ b/sound/usb/usx2y/usbusx2y.c @@ -422,7 +422,7 @@ static void snd_usx2y_disconnect(struct usb_interface *intf) } if (usx2y->us428ctls_sharedmem) wake_up(&usx2y->us428ctls_wait_queue_head); - snd_card_free(card); + snd_card_free_when_closed(card); }
static int snd_usx2y_probe(struct usb_interface *intf, diff --git a/tools/bpf/bpftool/jit_disasm.c b/tools/bpf/bpftool/jit_disasm.c index 7b8d9ec89ebd..c032d2c6ab6d 100644 --- a/tools/bpf/bpftool/jit_disasm.c +++ b/tools/bpf/bpftool/jit_disasm.c @@ -80,7 +80,8 @@ symbol_lookup_callback(__maybe_unused void *disasm_info, static int init_context(disasm_ctx_t *ctx, const char *arch, __maybe_unused const char *disassembler_options, - __maybe_unused unsigned char *image, __maybe_unused ssize_t len) + __maybe_unused unsigned char *image, __maybe_unused ssize_t len, + __maybe_unused __u64 func_ksym) { char *triple;
@@ -109,12 +110,13 @@ static void destroy_context(disasm_ctx_t *ctx) }
static int -disassemble_insn(disasm_ctx_t *ctx, unsigned char *image, ssize_t len, int pc) +disassemble_insn(disasm_ctx_t *ctx, unsigned char *image, ssize_t len, int pc, + __u64 func_ksym) { char buf[256]; int count;
- count = LLVMDisasmInstruction(*ctx, image + pc, len - pc, pc, + count = LLVMDisasmInstruction(*ctx, image + pc, len - pc, func_ksym + pc, buf, sizeof(buf)); if (json_output) printf_json(buf); @@ -136,8 +138,21 @@ int disasm_init(void) #ifdef HAVE_LIBBFD_SUPPORT #define DISASM_SPACER "\t"
+struct disasm_info { + struct disassemble_info info; + __u64 func_ksym; +}; + +static void disasm_print_addr(bfd_vma addr, struct disassemble_info *info) +{ + struct disasm_info *dinfo = container_of(info, struct disasm_info, info); + + addr += dinfo->func_ksym; + generic_print_address(addr, info); +} + typedef struct { - struct disassemble_info *info; + struct disasm_info *info; disassembler_ftype disassemble; bfd *bfdf; } disasm_ctx_t; @@ -215,7 +230,7 @@ static int fprintf_json_styled(void *out,
static int init_context(disasm_ctx_t *ctx, const char *arch, const char *disassembler_options, - unsigned char *image, ssize_t len) + unsigned char *image, ssize_t len, __u64 func_ksym) { struct disassemble_info *info; char tpath[PATH_MAX]; @@ -238,12 +253,13 @@ static int init_context(disasm_ctx_t *ctx, const char *arch, } bfdf = ctx->bfdf;
- ctx->info = malloc(sizeof(struct disassemble_info)); + ctx->info = malloc(sizeof(struct disasm_info)); if (!ctx->info) { p_err("mem alloc failed"); goto err_close; } - info = ctx->info; + ctx->info->func_ksym = func_ksym; + info = &ctx->info->info;
if (json_output) init_disassemble_info_compat(info, stdout, @@ -272,6 +288,7 @@ static int init_context(disasm_ctx_t *ctx, const char *arch, info->disassembler_options = disassembler_options; info->buffer = image; info->buffer_length = len; + info->print_address_func = disasm_print_addr;
disassemble_init_for_target(info);
@@ -304,9 +321,10 @@ static void destroy_context(disasm_ctx_t *ctx)
static int disassemble_insn(disasm_ctx_t *ctx, __maybe_unused unsigned char *image, - __maybe_unused ssize_t len, int pc) + __maybe_unused ssize_t len, int pc, + __maybe_unused __u64 func_ksym) { - return ctx->disassemble(pc, ctx->info); + return ctx->disassemble(pc, &ctx->info->info); }
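On the libbfd side, binutils' print_address_func callback has no user-data argument, so the patch embeds struct disassemble_info inside a larger wrapper and recovers the wrapper, and with it the function's kernel address, via container_of() from inside the callback. A self-contained sketch of that embed-and-recover idiom, with invented names and a locally defined container_of():

#include <inttypes.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* the "library-visible" callback context */
struct print_info {
	void (*print_address)(uint64_t addr, struct print_info *info);
};

/* private wrapper: the library only ever sees &ctx->info */
struct disasm_ctx {
	struct print_info info;		/* must be embedded by value */
	uint64_t func_ksym;		/* extra state the callback needs */
};

static void print_kernel_addr(uint64_t addr, struct print_info *info)
{
	struct disasm_ctx *ctx = container_of(info, struct disasm_ctx, info);

	printf("0x%" PRIx64 "\n", ctx->func_ksym + addr);
}

int main(void)
{
	struct disasm_ctx ctx = {
		.info.print_address = print_kernel_addr,
		.func_ksym = 0xffffffffc0a80000ULL,
	};

	/* the "library" calls back with only the embedded struct */
	ctx.info.print_address(0x40, &ctx.info);
	return 0;
}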
int disasm_init(void) @@ -331,7 +349,7 @@ int disasm_print_insn(unsigned char *image, ssize_t len, int opcodes, if (!len) return -1;
- if (init_context(&ctx, arch, disassembler_options, image, len)) + if (init_context(&ctx, arch, disassembler_options, image, len, func_ksym)) return -1;
if (json_output) @@ -360,7 +378,7 @@ int disasm_print_insn(unsigned char *image, ssize_t len, int opcodes, printf("%4x:" DISASM_SPACER, pc); }
- count = disassemble_insn(&ctx, image, len, pc); + count = disassemble_insn(&ctx, image, len, pc, func_ksym);
if (json_output) { /* Operand array, was started in fprintf_json. Before diff --git a/tools/gpio/gpio-sloppy-logic-analyzer.sh b/tools/gpio/gpio-sloppy-logic-analyzer.sh index ed21a110df5e..3ef2278e49f9 100755 --- a/tools/gpio/gpio-sloppy-logic-analyzer.sh +++ b/tools/gpio/gpio-sloppy-logic-analyzer.sh @@ -113,7 +113,7 @@ init_cpu() taskset -p "$newmask" "$p" || continue done 2>/dev/null >/dev/null
- # Big hammer! Working with 'rcu_momentary_dyntick_idle()' for a more fine-grained solution + # Big hammer! Working with 'rcu_momentary_eqs()' for a more fine-grained solution # still printed warnings. Same for re-enabling the stall detector after sampling. echo 1 > /sys/module/rcupdate/parameters/rcu_cpu_stall_suppress
diff --git a/tools/include/nolibc/arch-s390.h b/tools/include/nolibc/arch-s390.h index 2ec13d8b9a2d..f9ab83a219b8 100644 --- a/tools/include/nolibc/arch-s390.h +++ b/tools/include/nolibc/arch-s390.h @@ -10,6 +10,7 @@
#include "compiler.h" #include "crt.h" +#include "std.h"
/* Syscalls for s390: * - registers are 64-bit diff --git a/tools/lib/bpf/Makefile b/tools/lib/bpf/Makefile index 1b22f0f37288..857a5f7b413d 100644 --- a/tools/lib/bpf/Makefile +++ b/tools/lib/bpf/Makefile @@ -61,7 +61,8 @@ ifndef VERBOSE endif
 INCLUDES = -I$(or $(OUTPUT),.) \
-	   -I$(srctree)/tools/include -I$(srctree)/tools/include/uapi
+	   -I$(srctree)/tools/include -I$(srctree)/tools/include/uapi \
+	   -I$(srctree)/tools/arch/$(SRCARCH)/include
export prefix libdir src obj
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c index 219facd0e66e..5ff643e60d09 100644 --- a/tools/lib/bpf/libbpf.c +++ b/tools/lib/bpf/libbpf.c @@ -3985,7 +3985,7 @@ static bool sym_is_subprog(const Elf64_Sym *sym, int text_shndx) return true;
/* global function */ - return bind == STB_GLOBAL && type == STT_FUNC; + return (bind == STB_GLOBAL || bind == STB_WEAK) && type == STT_FUNC; }
static int find_extern_btf_id(const struct btf *btf, const char *ext_name) @@ -4389,7 +4389,7 @@ static int bpf_object__collect_externs(struct bpf_object *obj)
static bool prog_is_subprog(const struct bpf_object *obj, const struct bpf_program *prog) { - return prog->sec_idx == obj->efile.text_shndx && obj->nr_programs > 1; + return prog->sec_idx == obj->efile.text_shndx; }
struct bpf_program * @@ -5094,6 +5094,7 @@ bpf_object__populate_internal_map(struct bpf_object *obj, struct bpf_map *map) enum libbpf_map_type map_type = map->libbpf_type; char *cp, errmsg[STRERR_BUFSIZE]; int err, zero = 0; + size_t mmap_sz;
if (obj->gen_loader) { bpf_gen__map_update_elem(obj->gen_loader, map - obj->maps, @@ -5107,8 +5108,8 @@ bpf_object__populate_internal_map(struct bpf_object *obj, struct bpf_map *map) if (err) { err = -errno; cp = libbpf_strerror_r(err, errmsg, sizeof(errmsg)); - pr_warn("Error setting initial map(%s) contents: %s\n", - map->name, cp); + pr_warn("map '%s': failed to set initial contents: %s\n", + bpf_map__name(map), cp); return err; }
@@ -5118,11 +5119,43 @@ bpf_object__populate_internal_map(struct bpf_object *obj, struct bpf_map *map) if (err) { err = -errno; cp = libbpf_strerror_r(err, errmsg, sizeof(errmsg)); - pr_warn("Error freezing map(%s) as read-only: %s\n", - map->name, cp); + pr_warn("map '%s': failed to freeze as read-only: %s\n", + bpf_map__name(map), cp); return err; } } + + /* Remap anonymous mmap()-ed "map initialization image" as + * a BPF map-backed mmap()-ed memory, but preserving the same + * memory address. This will cause kernel to change process' + * page table to point to a different piece of kernel memory, + * but from userspace point of view memory address (and its + * contents, being identical at this point) will stay the + * same. This mapping will be released by bpf_object__close() + * as per normal clean up procedure. + */ + mmap_sz = bpf_map_mmap_sz(map); + if (map->def.map_flags & BPF_F_MMAPABLE) { + void *mmaped; + int prot; + + if (map->def.map_flags & BPF_F_RDONLY_PROG) + prot = PROT_READ; + else + prot = PROT_READ | PROT_WRITE; + mmaped = mmap(map->mmaped, mmap_sz, prot, MAP_SHARED | MAP_FIXED, map->fd, 0); + if (mmaped == MAP_FAILED) { + err = -errno; + pr_warn("map '%s': failed to re-mmap() contents: %d\n", + bpf_map__name(map), err); + return err; + } + map->mmaped = mmaped; + } else if (map->mmaped) { + munmap(map->mmaped, mmap_sz); + map->mmaped = NULL; + } + return 0; }
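The hunk above re-mmap()s the anonymous page that held a map's initial contents as a map-backed mapping at the same address, so pointers already handed out (for example the skeleton's ->mmaped pointers) stay valid while the pages become the live kernel map. The mechanism is simply mmap() with MAP_SHARED | MAP_FIXED over the existing region. A small userspace sketch of just that mechanism; an ordinary file stands in for the BPF map fd, which is an assumption of the example, and unlike the libbpf case the new backing here does not preserve the old bytes:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);

	/* 1) anonymous "image", address picked by the kernel */
	char *addr = mmap(NULL, page, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (addr == MAP_FAILED)
		return 1;
	memset(addr, 0x42, page);

	/* 2) the new backing object; in libbpf this is the BPF map fd,
	 *    already populated with the very same bytes */
	int fd = open("backing.bin", O_RDWR | O_CREAT | O_TRUNC, 0600);
	if (fd < 0 || ftruncate(fd, page) < 0)
		return 1;

	/* 3) replace the pages in place: same address, different backing */
	char *again = mmap(addr, page, PROT_READ | PROT_WRITE,
			   MAP_SHARED | MAP_FIXED, fd, 0);
	if (again == MAP_FAILED)
		return 1;

	printf("same address: %s\n", again == addr ? "yes" : "no");
	close(fd);
	return 0;
}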
@@ -5439,8 +5472,7 @@ bpf_object__create_maps(struct bpf_object *obj) err = bpf_object__populate_internal_map(obj, map); if (err < 0) goto err_out; - } - if (map->def.type == BPF_MAP_TYPE_ARENA) { + } else if (map->def.type == BPF_MAP_TYPE_ARENA) { map->mmaped = mmap((void *)(long)map->map_extra, bpf_map_mmap_sz(map), PROT_READ | PROT_WRITE, map->map_extra ? MAP_SHARED | MAP_FIXED : MAP_SHARED, @@ -7352,8 +7384,14 @@ static int libbpf_prepare_prog_load(struct bpf_program *prog, opts->prog_flags |= BPF_F_XDP_HAS_FRAGS;
/* special check for usdt to use uprobe_multi link */ - if ((def & SEC_USDT) && kernel_supports(prog->obj, FEAT_UPROBE_MULTI_LINK)) + if ((def & SEC_USDT) && kernel_supports(prog->obj, FEAT_UPROBE_MULTI_LINK)) { + /* for BPF_TRACE_UPROBE_MULTI, user might want to query expected_attach_type + * in prog, and expected_attach_type we set in kernel is from opts, so we + * update both. + */ prog->expected_attach_type = BPF_TRACE_UPROBE_MULTI; + opts->expected_attach_type = BPF_TRACE_UPROBE_MULTI; + }
if ((def & SEC_ATTACH_BTF) && !prog->attach_btf_id) { int btf_obj_fd = 0, btf_type_id = 0, err; @@ -7443,6 +7481,7 @@ static int bpf_object_load_prog(struct bpf_object *obj, struct bpf_program *prog load_attr.attach_btf_id = prog->attach_btf_id; load_attr.kern_version = kern_version; load_attr.prog_ifindex = prog->prog_ifindex; + load_attr.expected_attach_type = prog->expected_attach_type;
/* specify func_info/line_info only if kernel supports them */ if (obj->btf && btf__fd(obj->btf) >= 0 && kernel_supports(obj, FEAT_BTF_FUNC)) { @@ -7474,9 +7513,6 @@ static int bpf_object_load_prog(struct bpf_object *obj, struct bpf_program *prog insns_cnt = prog->insns_cnt; }
- /* allow prog_prepare_load_fn to change expected_attach_type */ - load_attr.expected_attach_type = prog->expected_attach_type; - if (obj->gen_loader) { bpf_gen__prog_load(obj->gen_loader, prog->type, prog->name, license, insns, insns_cnt, &load_attr, @@ -13877,46 +13913,11 @@ int bpf_object__load_skeleton(struct bpf_object_skeleton *s) for (i = 0; i < s->map_cnt; i++) { struct bpf_map_skeleton *map_skel = (void *)s->maps + i * s->map_skel_sz; struct bpf_map *map = *map_skel->map; - size_t mmap_sz = bpf_map_mmap_sz(map); - int prot, map_fd = map->fd; - void **mmaped = map_skel->mmaped; - - if (!mmaped) - continue;
- if (!(map->def.map_flags & BPF_F_MMAPABLE)) { - *mmaped = NULL; + if (!map_skel->mmaped) continue; - } - - if (map->def.type == BPF_MAP_TYPE_ARENA) { - *mmaped = map->mmaped; - continue; - }
- if (map->def.map_flags & BPF_F_RDONLY_PROG) - prot = PROT_READ; - else - prot = PROT_READ | PROT_WRITE; - - /* Remap anonymous mmap()-ed "map initialization image" as - * a BPF map-backed mmap()-ed memory, but preserving the same - * memory address. This will cause kernel to change process' - * page table to point to a different piece of kernel memory, - * but from userspace point of view memory address (and its - * contents, being identical at this point) will stay the - * same. This mapping will be released by bpf_object__close() - * as per normal clean up procedure, so we don't need to worry - * about it from skeleton's clean up perspective. - */ - *mmaped = mmap(map->mmaped, mmap_sz, prot, MAP_SHARED | MAP_FIXED, map_fd, 0); - if (*mmaped == MAP_FAILED) { - err = -errno; - *mmaped = NULL; - pr_warn("failed to re-mmap() map '%s': %d\n", - bpf_map__name(map), err); - return libbpf_err(err); - } + *map_skel->mmaped = map->mmaped; }
return 0; diff --git a/tools/lib/bpf/linker.c b/tools/lib/bpf/linker.c index e0005c6ade88..6985ab0f1ca9 100644 --- a/tools/lib/bpf/linker.c +++ b/tools/lib/bpf/linker.c @@ -396,6 +396,8 @@ static int init_output_elf(struct bpf_linker *linker, const char *file) pr_warn_elf("failed to create SYMTAB data"); return -EINVAL; } + /* Ensure libelf translates byte-order of symbol records */ + sec->data->d_type = ELF_T_SYM;
str_off = strset__add_str(linker->strtab_strs, sec->sec_name); if (str_off < 0) diff --git a/tools/lib/thermal/commands.c b/tools/lib/thermal/commands.c index 73d4d4e8d6ec..27b4442f0e34 100644 --- a/tools/lib/thermal/commands.c +++ b/tools/lib/thermal/commands.c @@ -261,9 +261,25 @@ static struct genl_ops thermal_cmd_ops = { .o_ncmds = ARRAY_SIZE(thermal_cmds), };
-static thermal_error_t thermal_genl_auto(struct thermal_handler *th, int id, int cmd, - int flags, void *arg) +struct cmd_param { + int tz_id; +}; + +typedef int (*cmd_cb_t)(struct nl_msg *, struct cmd_param *); + +static int thermal_genl_tz_id_encode(struct nl_msg *msg, struct cmd_param *p) +{ + if (p->tz_id >= 0 && nla_put_u32(msg, THERMAL_GENL_ATTR_TZ_ID, p->tz_id)) + return -1; + + return 0; +} + +static thermal_error_t thermal_genl_auto(struct thermal_handler *th, cmd_cb_t cmd_cb, + struct cmd_param *param, + int cmd, int flags, void *arg) { + thermal_error_t ret = THERMAL_ERROR; struct nl_msg *msg; void *hdr;
@@ -274,45 +290,55 @@ static thermal_error_t thermal_genl_auto(struct thermal_handler *th, int id, int hdr = genlmsg_put(msg, NL_AUTO_PORT, NL_AUTO_SEQ, thermal_cmd_ops.o_id, 0, flags, cmd, THERMAL_GENL_VERSION); if (!hdr) - return THERMAL_ERROR; + goto out;
- if (id >= 0 && nla_put_u32(msg, THERMAL_GENL_ATTR_TZ_ID, id)) - return THERMAL_ERROR; + if (cmd_cb && cmd_cb(msg, param)) + goto out;
if (nl_send_msg(th->sk_cmd, th->cb_cmd, msg, genl_handle_msg, arg)) - return THERMAL_ERROR; + goto out;
+ ret = THERMAL_SUCCESS; +out: nlmsg_free(msg);
- return THERMAL_SUCCESS; + return ret; }
thermal_error_t thermal_cmd_get_tz(struct thermal_handler *th, struct thermal_zone **tz) { - return thermal_genl_auto(th, -1, THERMAL_GENL_CMD_TZ_GET_ID, + return thermal_genl_auto(th, NULL, NULL, THERMAL_GENL_CMD_TZ_GET_ID, NLM_F_DUMP | NLM_F_ACK, tz); }
thermal_error_t thermal_cmd_get_cdev(struct thermal_handler *th, struct thermal_cdev **tc) { - return thermal_genl_auto(th, -1, THERMAL_GENL_CMD_CDEV_GET, + return thermal_genl_auto(th, NULL, NULL, THERMAL_GENL_CMD_CDEV_GET, NLM_F_DUMP | NLM_F_ACK, tc); }
thermal_error_t thermal_cmd_get_trip(struct thermal_handler *th, struct thermal_zone *tz) { - return thermal_genl_auto(th, tz->id, THERMAL_GENL_CMD_TZ_GET_TRIP, - 0, tz); + struct cmd_param p = { .tz_id = tz->id }; + + return thermal_genl_auto(th, thermal_genl_tz_id_encode, &p, + THERMAL_GENL_CMD_TZ_GET_TRIP, 0, tz); }
thermal_error_t thermal_cmd_get_governor(struct thermal_handler *th, struct thermal_zone *tz) { - return thermal_genl_auto(th, tz->id, THERMAL_GENL_CMD_TZ_GET_GOV, 0, tz); + struct cmd_param p = { .tz_id = tz->id }; + + return thermal_genl_auto(th, thermal_genl_tz_id_encode, &p, + THERMAL_GENL_CMD_TZ_GET_GOV, 0, tz); }
thermal_error_t thermal_cmd_get_temp(struct thermal_handler *th, struct thermal_zone *tz) { - return thermal_genl_auto(th, tz->id, THERMAL_GENL_CMD_TZ_GET_TEMP, 0, tz); + struct cmd_param p = { .tz_id = tz->id }; + + return thermal_genl_auto(th, thermal_genl_tz_id_encode, &p, + THERMAL_GENL_CMD_TZ_GET_TEMP, 0, tz); }
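Two things change in thermal_genl_auto() above: commands that need extra attributes now pass an encoder callback plus a parameter struct instead of a hard-wired thermal-zone id, and the netlink message is freed on every exit path rather than leaking on errors. A toy sketch of the same shape, with a plain string standing in for the netlink message and all names invented:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct cmd_param { int tz_id; };

/* optional per-command encoder; NULL means "no extra attributes" */
typedef int (*encode_cb_t)(char *msg, size_t len, const struct cmd_param *p);

static int encode_tz_id(char *msg, size_t len, const struct cmd_param *p)
{
	size_t used = strlen(msg);

	return snprintf(msg + used, len - used, " tz_id=%d", p->tz_id) < 0 ? -1 : 0;
}

static int send_cmd(const char *name, encode_cb_t cb, const struct cmd_param *p)
{
	int ret = -1;
	char *msg = malloc(128);

	if (!msg)
		return -1;
	snprintf(msg, 128, "cmd=%s", name);

	if (cb && cb(msg, 128, p))
		goto out;		/* every failure path still frees msg */

	printf("sending: %s\n", msg);
	ret = 0;
out:
	free(msg);
	return ret;
}

int main(void)
{
	struct cmd_param p = { .tz_id = 3 };

	send_cmd("TZ_GET_TEMP", encode_tz_id, &p);	/* per-zone command */
	send_cmd("TZ_GET_ID", NULL, NULL);		/* dump command, no attributes */
	return 0;
}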
thermal_error_t thermal_cmd_exit(struct thermal_handler *th) diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config index d4332675babb..2ce71d2e5fae 100644 --- a/tools/perf/Makefile.config +++ b/tools/perf/Makefile.config @@ -1194,7 +1194,7 @@ endif ifneq ($(NO_LIBTRACEEVENT),1) $(call feature_check,libtraceevent) ifeq ($(feature-libtraceevent), 1) - CFLAGS += -DHAVE_LIBTRACEEVENT + CFLAGS += -DHAVE_LIBTRACEEVENT $(shell $(PKG_CONFIG) --cflags libtraceevent) LDFLAGS += $(shell $(PKG_CONFIG) --libs-only-L libtraceevent) EXTLIBS += $(shell $(PKG_CONFIG) --libs-only-l libtraceevent) LIBTRACEEVENT_VERSION := $(shell $(PKG_CONFIG) --modversion libtraceevent).0.0 diff --git a/tools/perf/builtin-ftrace.c b/tools/perf/builtin-ftrace.c index abcdc49b7a98..272d3c70810e 100644 --- a/tools/perf/builtin-ftrace.c +++ b/tools/perf/builtin-ftrace.c @@ -815,7 +815,7 @@ static void display_histogram(int buckets[], bool use_nsec)
bar_len = buckets[0] * bar_total / total; printf(" %4d - %-4d %s | %10d | %.*s%*s |\n", - 0, 1, "us", buckets[0], bar_len, bar, bar_total - bar_len, ""); + 0, 1, use_nsec ? "ns" : "us", buckets[0], bar_len, bar, bar_total - bar_len, "");
for (i = 1; i < NUM_BUCKET - 1; i++) { int start = (1 << (i - 1)); diff --git a/tools/perf/builtin-list.c b/tools/perf/builtin-list.c index 65b8cba324be..c5331721dfee 100644 --- a/tools/perf/builtin-list.c +++ b/tools/perf/builtin-list.c @@ -112,7 +112,7 @@ static void wordwrap(FILE *fp, const char *s, int start, int max, int corr) } }
-static void default_print_event(void *ps, const char *pmu_name, const char *topic, +static void default_print_event(void *ps, const char *topic, const char *pmu_name, const char *event_name, const char *event_alias, const char *scale_unit __maybe_unused, bool deprecated, const char *event_type_desc, @@ -353,7 +353,7 @@ static void fix_escape_fprintf(FILE *fp, struct strbuf *buf, const char *fmt, .. fputs(buf->buf, fp); }
-static void json_print_event(void *ps, const char *pmu_name, const char *topic, +static void json_print_event(void *ps, const char *topic, const char *pmu_name, const char *event_name, const char *event_alias, const char *scale_unit, bool deprecated, const char *event_type_desc, diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c index 689a3d43c258..4933efdfee76 100644 --- a/tools/perf/builtin-stat.c +++ b/tools/perf/builtin-stat.c @@ -716,15 +716,19 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx) }
if (!cpu_map__is_dummy(evsel_list->core.user_requested_cpus)) { - if (affinity__setup(&saved_affinity) < 0) - return -1; + if (affinity__setup(&saved_affinity) < 0) { + err = -1; + goto err_out; + } affinity = &saved_affinity; }
evlist__for_each_entry(evsel_list, counter) { counter->reset_group = false; - if (bpf_counter__load(counter, &target)) - return -1; + if (bpf_counter__load(counter, &target)) { + err = -1; + goto err_out; + } if (!(evsel__is_bperf(counter))) all_counters_use_bpf = false; } @@ -767,7 +771,8 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
switch (stat_handle_error(counter)) { case COUNTER_FATAL: - return -1; + err = -1; + goto err_out; case COUNTER_RETRY: goto try_again; case COUNTER_SKIP: @@ -808,7 +813,8 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
switch (stat_handle_error(counter)) { case COUNTER_FATAL: - return -1; + err = -1; + goto err_out; case COUNTER_RETRY: goto try_again_reset; case COUNTER_SKIP: @@ -821,6 +827,7 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx) } } affinity__cleanup(affinity); + affinity = NULL;
evlist__for_each_entry(evsel_list, counter) { if (!counter->supported) { @@ -833,8 +840,10 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx) stat_config.unit_width = l;
if (evsel__should_store_id(counter) && - evsel__store_ids(counter, evsel_list)) - return -1; + evsel__store_ids(counter, evsel_list)) { + err = -1; + goto err_out; + } }
if (evlist__apply_filters(evsel_list, &counter, &target)) { @@ -855,20 +864,23 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx) }
if (err < 0) - return err; + goto err_out;
err = perf_event__synthesize_stat_events(&stat_config, NULL, evsel_list, process_synthesized_event, is_pipe); if (err < 0) - return err; + goto err_out; + }
if (target.initial_delay) { pr_info(EVLIST_DISABLED_MSG); } else { err = enable_counters(); - if (err) - return -1; + if (err) { + err = -1; + goto err_out; + } }
/* Exec the command, if any */ @@ -878,8 +890,10 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx) if (target.initial_delay > 0) { usleep(target.initial_delay * USEC_PER_MSEC); err = enable_counters(); - if (err) - return -1; + if (err) { + err = -1; + goto err_out; + }
pr_info(EVLIST_ENABLED_MSG); } @@ -899,7 +913,8 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx) if (workload_exec_errno) { const char *emsg = str_error_r(workload_exec_errno, msg, sizeof(msg)); pr_err("Workload failed: %s\n", emsg); - return -1; + err = -1; + goto err_out; }
if (WIFSIGNALED(status)) @@ -946,6 +961,13 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx) evlist__close(evsel_list);
return WEXITSTATUS(status); + +err_out: + if (forks) + evlist__cancel_workload(evsel_list); + + affinity__cleanup(affinity); + return err; }
static int run_perf_stat(int argc, const char **argv, int run_idx) diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c index d3f11b90d025..ffa129527309 100644 --- a/tools/perf/builtin-trace.c +++ b/tools/perf/builtin-trace.c @@ -2702,6 +2702,7 @@ static int trace__fprintf_sys_enter(struct trace *trace, struct evsel *evsel, char msg[1024]; void *args, *augmented_args = NULL; int augmented_args_size; + size_t printed = 0;
if (sc == NULL) return -1; @@ -2717,8 +2718,8 @@ static int trace__fprintf_sys_enter(struct trace *trace, struct evsel *evsel,
args = perf_evsel__sc_tp_ptr(evsel, args, sample); augmented_args = syscall__augmented_args(sc, sample, &augmented_args_size, trace->raw_augmented_syscalls_args_size); - syscall__scnprintf_args(sc, msg, sizeof(msg), args, augmented_args, augmented_args_size, trace, thread); - fprintf(trace->output, "%s", msg); + printed += syscall__scnprintf_args(sc, msg, sizeof(msg), args, augmented_args, augmented_args_size, trace, thread); + fprintf(trace->output, "%.*s", (int)printed, msg); err = 0; out_put: thread__put(thread); @@ -3087,7 +3088,7 @@ static size_t trace__fprintf_tp_fields(struct trace *trace, struct evsel *evsel, printed += syscall_arg_fmt__scnprintf_val(arg, bf + printed, size - printed, &syscall_arg, val); }
- return printed + fprintf(trace->output, "%s", bf); + return printed + fprintf(trace->output, "%.*s", (int)printed, bf); }
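The trace output tweaks capture how many bytes syscall__scnprintf_args() actually produced and print with a "%.*s" precision instead of assuming the whole buffer is relevant. A tiny example of the idiom (snprintf() is used here so the example builds standalone; unlike scnprintf() it returns the untruncated length, which does not matter for this short string):

#include <stdio.h>

int main(void)
{
	char buf[64];

	/* snprintf()/scnprintf() report how much was actually formatted ... */
	int printed = snprintf(buf, sizeof(buf), "openat(dfd: %d, filename: \"%s\")",
			       -100, "/etc/hosts");

	/* ... and "%.*s" prints exactly that many bytes, no more */
	printf("%.*s\n", printed, buf);
	return 0;
}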
static int trace__event_handler(struct trace *trace, struct evsel *evsel, @@ -3096,13 +3097,8 @@ static int trace__event_handler(struct trace *trace, struct evsel *evsel, { struct thread *thread; int callchain_ret = 0; - /* - * Check if we called perf_evsel__disable(evsel) due to, for instance, - * this event's max_events having been hit and this is an entry coming - * from the ring buffer that we should discard, since the max events - * have already been considered/printed. - */ - if (evsel->disabled) + + if (evsel->nr_events_printed >= evsel->max_events) return 0;
thread = machine__findnew_thread(trace->host, sample->pid, sample->tid); @@ -4326,6 +4322,9 @@ static int trace__run(struct trace *trace, int argc, const char **argv) sizeof(__u32), BPF_ANY); } } + + if (trace->skel) + trace->filter_pids.map = trace->skel->maps.pids_filtered; #endif err = trace__set_filter_pids(trace); if (err < 0) @@ -5449,6 +5448,10 @@ int cmd_trace(int argc, const char **argv) if (trace.summary_only) trace.summary = trace.summary_only;
+ /* Keep exited threads, otherwise information might be lost for summary */ + if (trace.summary) + symbol_conf.keep_exited_threads = true; + if (output_name != NULL) { err = trace__open_output(&trace, output_name); if (err < 0) { diff --git a/tools/perf/pmu-events/empty-pmu-events.c b/tools/perf/pmu-events/empty-pmu-events.c index c592079982fb..873e9fb2041f 100644 --- a/tools/perf/pmu-events/empty-pmu-events.c +++ b/tools/perf/pmu-events/empty-pmu-events.c @@ -380,7 +380,7 @@ int pmu_events_table__for_each_event(const struct pmu_events_table *table, continue;
ret = pmu_events_table__for_each_event_pmu(table, table_pmu, fn, data); - if (pmu || ret) + if (ret) return ret; } return 0; diff --git a/tools/perf/pmu-events/jevents.py b/tools/perf/pmu-events/jevents.py index bb0a5d92df4a..d46a22fb5573 100755 --- a/tools/perf/pmu-events/jevents.py +++ b/tools/perf/pmu-events/jevents.py @@ -930,7 +930,7 @@ int pmu_events_table__for_each_event(const struct pmu_events_table *table, continue;
ret = pmu_events_table__for_each_event_pmu(table, table_pmu, fn, data); - if (pmu || ret) + if (ret) return ret; } return 0; diff --git a/tools/perf/tests/attr/test-stat-default b/tools/perf/tests/attr/test-stat-default index a1e2da0a9a6d..e47fb4944679 100644 --- a/tools/perf/tests/attr/test-stat-default +++ b/tools/perf/tests/attr/test-stat-default @@ -88,98 +88,142 @@ enable_on_exec=0 read_format=15 optional=1
-# PERF_TYPE_RAW / topdown-fe-bound (0x8200) +# PERF_TYPE_RAW / topdown-bad-spec (0x8100) [event13:base-stat] fd=13 group_fd=11 type=4 -config=33280 +config=33024 disabled=0 enable_on_exec=0 read_format=15 optional=1
-# PERF_TYPE_RAW / topdown-be-bound (0x8300) +# PERF_TYPE_RAW / topdown-fe-bound (0x8200) [event14:base-stat] fd=14 group_fd=11 type=4 -config=33536 +config=33280 disabled=0 enable_on_exec=0 read_format=15 optional=1
-# PERF_TYPE_RAW / topdown-bad-spec (0x8100) +# PERF_TYPE_RAW / topdown-be-bound (0x8300) [event15:base-stat] fd=15 group_fd=11 type=4 -config=33024 +config=33536 disabled=0 enable_on_exec=0 read_format=15 optional=1
-# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING +# PERF_TYPE_RAW / topdown-heavy-ops (0x8400) [event16:base-stat] fd=16 +group_fd=11 type=4 -config=4109 +config=33792 +disabled=0 +enable_on_exec=0 +read_format=15 optional=1
-# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/ +# PERF_TYPE_RAW / topdown-br-mispredict (0x8500) [event17:base-stat] fd=17 +group_fd=11 type=4 -config=17039629 +config=34048 +disabled=0 +enable_on_exec=0 +read_format=15 optional=1
-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD +# PERF_TYPE_RAW / topdown-fetch-lat (0x8600) [event18:base-stat] fd=18 +group_fd=11 type=4 -config=60 +config=34304 +disabled=0 +enable_on_exec=0 +read_format=15 optional=1
-# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY +# PERF_TYPE_RAW / topdown-mem-bound (0x8700) [event19:base-stat] fd=19 +group_fd=11 type=4 -config=2097421 +config=34560 +disabled=0 +enable_on_exec=0 +read_format=15 optional=1
-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK +# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING [event20:base-stat] fd=20 type=4 -config=316 +config=4109 optional=1
-# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE +# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/ [event21:base-stat] fd=21 type=4 -config=412 +config=17039629 optional=1
-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE +# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD [event22:base-stat] fd=22 type=4 -config=572 +config=60 optional=1
-# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS +# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY [event23:base-stat] fd=23 type=4 -config=706 +config=2097421 optional=1
-# PERF_TYPE_RAW / UOPS_ISSUED.ANY +# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK [event24:base-stat] fd=24 type=4 +config=316 +optional=1 + +# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE +[event25:base-stat] +fd=25 +type=4 +config=412 +optional=1 + +# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE +[event26:base-stat] +fd=26 +type=4 +config=572 +optional=1 + +# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS +[event27:base-stat] +fd=27 +type=4 +config=706 +optional=1 + +# PERF_TYPE_RAW / UOPS_ISSUED.ANY +[event28:base-stat] +fd=28 +type=4 config=270 optional=1 diff --git a/tools/perf/tests/attr/test-stat-detailed-1 b/tools/perf/tests/attr/test-stat-detailed-1 index 1c52cb05c900..3d500d3e0c5c 100644 --- a/tools/perf/tests/attr/test-stat-detailed-1 +++ b/tools/perf/tests/attr/test-stat-detailed-1 @@ -90,99 +90,143 @@ enable_on_exec=0 read_format=15 optional=1
-# PERF_TYPE_RAW / topdown-fe-bound (0x8200) +# PERF_TYPE_RAW / topdown-bad-spec (0x8100) [event13:base-stat] fd=13 group_fd=11 type=4 -config=33280 +config=33024 disabled=0 enable_on_exec=0 read_format=15 optional=1
-# PERF_TYPE_RAW / topdown-be-bound (0x8300) +# PERF_TYPE_RAW / topdown-fe-bound (0x8200) [event14:base-stat] fd=14 group_fd=11 type=4 -config=33536 +config=33280 disabled=0 enable_on_exec=0 read_format=15 optional=1
-# PERF_TYPE_RAW / topdown-bad-spec (0x8100) +# PERF_TYPE_RAW / topdown-be-bound (0x8300) [event15:base-stat] fd=15 group_fd=11 type=4 -config=33024 +config=33536 disabled=0 enable_on_exec=0 read_format=15 optional=1
-# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING +# PERF_TYPE_RAW / topdown-heavy-ops (0x8400) [event16:base-stat] fd=16 +group_fd=11 type=4 -config=4109 +config=33792 +disabled=0 +enable_on_exec=0 +read_format=15 optional=1
-# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/ +# PERF_TYPE_RAW / topdown-br-mispredict (0x8500) [event17:base-stat] fd=17 +group_fd=11 type=4 -config=17039629 +config=34048 +disabled=0 +enable_on_exec=0 +read_format=15 optional=1
-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD +# PERF_TYPE_RAW / topdown-fetch-lat (0x8600) [event18:base-stat] fd=18 +group_fd=11 type=4 -config=60 +config=34304 +disabled=0 +enable_on_exec=0 +read_format=15 optional=1
-# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY +# PERF_TYPE_RAW / topdown-mem-bound (0x8700) [event19:base-stat] fd=19 +group_fd=11 type=4 -config=2097421 +config=34560 +disabled=0 +enable_on_exec=0 +read_format=15 optional=1
-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK +# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING [event20:base-stat] fd=20 type=4 -config=316 +config=4109 optional=1
-# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE +# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/ [event21:base-stat] fd=21 type=4 -config=412 +config=17039629 optional=1
-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE +# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD [event22:base-stat] fd=22 type=4 -config=572 +config=60 optional=1
-# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS +# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY [event23:base-stat] fd=23 type=4 -config=706 +config=2097421 optional=1
-# PERF_TYPE_RAW / UOPS_ISSUED.ANY +# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK [event24:base-stat] fd=24 type=4 +config=316 +optional=1 + +# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE +[event25:base-stat] +fd=25 +type=4 +config=412 +optional=1 + +# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE +[event26:base-stat] +fd=26 +type=4 +config=572 +optional=1 + +# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS +[event27:base-stat] +fd=27 +type=4 +config=706 +optional=1 + +# PERF_TYPE_RAW / UOPS_ISSUED.ANY +[event28:base-stat] +fd=28 +type=4 config=270 optional=1
@@ -190,8 +234,8 @@ optional=1 # PERF_COUNT_HW_CACHE_L1D << 0 | # (PERF_COUNT_HW_CACHE_OP_READ << 8) | # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16) -[event25:base-stat] -fd=25 +[event29:base-stat] +fd=29 type=3 config=0 optional=1 @@ -200,8 +244,8 @@ optional=1 # PERF_COUNT_HW_CACHE_L1D << 0 | # (PERF_COUNT_HW_CACHE_OP_READ << 8) | # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16) -[event26:base-stat] -fd=26 +[event30:base-stat] +fd=30 type=3 config=65536 optional=1 @@ -210,8 +254,8 @@ optional=1 # PERF_COUNT_HW_CACHE_LL << 0 | # (PERF_COUNT_HW_CACHE_OP_READ << 8) | # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16) -[event27:base-stat] -fd=27 +[event31:base-stat] +fd=31 type=3 config=2 optional=1 @@ -220,8 +264,8 @@ optional=1 # PERF_COUNT_HW_CACHE_LL << 0 | # (PERF_COUNT_HW_CACHE_OP_READ << 8) | # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16) -[event28:base-stat] -fd=28 +[event32:base-stat] +fd=32 type=3 config=65538 optional=1 diff --git a/tools/perf/tests/attr/test-stat-detailed-2 b/tools/perf/tests/attr/test-stat-detailed-2 index 7e961d24a885..01777a63752f 100644 --- a/tools/perf/tests/attr/test-stat-detailed-2 +++ b/tools/perf/tests/attr/test-stat-detailed-2 @@ -90,99 +90,143 @@ enable_on_exec=0 read_format=15 optional=1
-# PERF_TYPE_RAW / topdown-fe-bound (0x8200) +# PERF_TYPE_RAW / topdown-bad-spec (0x8100) [event13:base-stat] fd=13 group_fd=11 type=4 -config=33280 +config=33024 disabled=0 enable_on_exec=0 read_format=15 optional=1
-# PERF_TYPE_RAW / topdown-be-bound (0x8300) +# PERF_TYPE_RAW / topdown-fe-bound (0x8200) [event14:base-stat] fd=14 group_fd=11 type=4 -config=33536 +config=33280 disabled=0 enable_on_exec=0 read_format=15 optional=1
-# PERF_TYPE_RAW / topdown-bad-spec (0x8100) +# PERF_TYPE_RAW / topdown-be-bound (0x8300) [event15:base-stat] fd=15 group_fd=11 type=4 -config=33024 +config=33536 disabled=0 enable_on_exec=0 read_format=15 optional=1
-# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING +# PERF_TYPE_RAW / topdown-heavy-ops (0x8400) [event16:base-stat] fd=16 +group_fd=11 type=4 -config=4109 +config=33792 +disabled=0 +enable_on_exec=0 +read_format=15 optional=1
-# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/ +# PERF_TYPE_RAW / topdown-br-mispredict (0x8500) [event17:base-stat] fd=17 +group_fd=11 type=4 -config=17039629 +config=34048 +disabled=0 +enable_on_exec=0 +read_format=15 optional=1
-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD +# PERF_TYPE_RAW / topdown-fetch-lat (0x8600) [event18:base-stat] fd=18 +group_fd=11 type=4 -config=60 +config=34304 +disabled=0 +enable_on_exec=0 +read_format=15 optional=1
-# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY +# PERF_TYPE_RAW / topdown-mem-bound (0x8700) [event19:base-stat] fd=19 +group_fd=11 type=4 -config=2097421 +config=34560 +disabled=0 +enable_on_exec=0 +read_format=15 optional=1
-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK +# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING [event20:base-stat] fd=20 type=4 -config=316 +config=4109 optional=1
-# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE +# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/ [event21:base-stat] fd=21 type=4 -config=412 +config=17039629 optional=1
-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE +# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD [event22:base-stat] fd=22 type=4 -config=572 +config=60 optional=1
-# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS +# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY [event23:base-stat] fd=23 type=4 -config=706 +config=2097421 optional=1
-# PERF_TYPE_RAW / UOPS_ISSUED.ANY +# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK [event24:base-stat] fd=24 type=4 +config=316 +optional=1 + +# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE +[event25:base-stat] +fd=25 +type=4 +config=412 +optional=1 + +# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE +[event26:base-stat] +fd=26 +type=4 +config=572 +optional=1 + +# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS +[event27:base-stat] +fd=27 +type=4 +config=706 +optional=1 + +# PERF_TYPE_RAW / UOPS_ISSUED.ANY +[event28:base-stat] +fd=28 +type=4 config=270 optional=1
@@ -190,8 +234,8 @@ optional=1 # PERF_COUNT_HW_CACHE_L1D << 0 | # (PERF_COUNT_HW_CACHE_OP_READ << 8) | # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16) -[event25:base-stat] -fd=25 +[event29:base-stat] +fd=29 type=3 config=0 optional=1 @@ -200,8 +244,8 @@ optional=1 # PERF_COUNT_HW_CACHE_L1D << 0 | # (PERF_COUNT_HW_CACHE_OP_READ << 8) | # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16) -[event26:base-stat] -fd=26 +[event30:base-stat] +fd=30 type=3 config=65536 optional=1 @@ -210,8 +254,8 @@ optional=1 # PERF_COUNT_HW_CACHE_LL << 0 | # (PERF_COUNT_HW_CACHE_OP_READ << 8) | # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16) -[event27:base-stat] -fd=27 +[event31:base-stat] +fd=31 type=3 config=2 optional=1 @@ -220,8 +264,8 @@ optional=1 # PERF_COUNT_HW_CACHE_LL << 0 | # (PERF_COUNT_HW_CACHE_OP_READ << 8) | # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16) -[event28:base-stat] -fd=28 +[event32:base-stat] +fd=32 type=3 config=65538 optional=1 @@ -230,8 +274,8 @@ optional=1 # PERF_COUNT_HW_CACHE_L1I << 0 | # (PERF_COUNT_HW_CACHE_OP_READ << 8) | # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16) -[event29:base-stat] -fd=29 +[event33:base-stat] +fd=33 type=3 config=1 optional=1 @@ -240,8 +284,8 @@ optional=1 # PERF_COUNT_HW_CACHE_L1I << 0 | # (PERF_COUNT_HW_CACHE_OP_READ << 8) | # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16) -[event30:base-stat] -fd=30 +[event34:base-stat] +fd=34 type=3 config=65537 optional=1 @@ -250,8 +294,8 @@ optional=1 # PERF_COUNT_HW_CACHE_DTLB << 0 | # (PERF_COUNT_HW_CACHE_OP_READ << 8) | # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16) -[event31:base-stat] -fd=31 +[event35:base-stat] +fd=35 type=3 config=3 optional=1 @@ -260,8 +304,8 @@ optional=1 # PERF_COUNT_HW_CACHE_DTLB << 0 | # (PERF_COUNT_HW_CACHE_OP_READ << 8) | # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16) -[event32:base-stat] -fd=32 +[event36:base-stat] +fd=36 type=3 config=65539 optional=1 @@ -270,8 +314,8 @@ optional=1 # PERF_COUNT_HW_CACHE_ITLB << 0 | # (PERF_COUNT_HW_CACHE_OP_READ << 8) | # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16) -[event33:base-stat] -fd=33 +[event37:base-stat] +fd=37 type=3 config=4 optional=1 @@ -280,8 +324,8 @@ optional=1 # PERF_COUNT_HW_CACHE_ITLB << 0 | # (PERF_COUNT_HW_CACHE_OP_READ << 8) | # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16) -[event34:base-stat] -fd=34 +[event38:base-stat] +fd=38 type=3 config=65540 optional=1 diff --git a/tools/perf/tests/attr/test-stat-detailed-3 b/tools/perf/tests/attr/test-stat-detailed-3 index e50535f45977..8400abd7e1e4 100644 --- a/tools/perf/tests/attr/test-stat-detailed-3 +++ b/tools/perf/tests/attr/test-stat-detailed-3 @@ -90,99 +90,143 @@ enable_on_exec=0 read_format=15 optional=1
-# PERF_TYPE_RAW / topdown-fe-bound (0x8200) +# PERF_TYPE_RAW / topdown-bad-spec (0x8100) [event13:base-stat] fd=13 group_fd=11 type=4 -config=33280 +config=33024 disabled=0 enable_on_exec=0 read_format=15 optional=1
-# PERF_TYPE_RAW / topdown-be-bound (0x8300) +# PERF_TYPE_RAW / topdown-fe-bound (0x8200) [event14:base-stat] fd=14 group_fd=11 type=4 -config=33536 +config=33280 disabled=0 enable_on_exec=0 read_format=15 optional=1
-# PERF_TYPE_RAW / topdown-bad-spec (0x8100) +# PERF_TYPE_RAW / topdown-be-bound (0x8300) [event15:base-stat] fd=15 group_fd=11 type=4 -config=33024 +config=33536 disabled=0 enable_on_exec=0 read_format=15 optional=1
-# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING +# PERF_TYPE_RAW / topdown-heavy-ops (0x8400) [event16:base-stat] fd=16 +group_fd=11 type=4 -config=4109 +config=33792 +disabled=0 +enable_on_exec=0 +read_format=15 optional=1
-# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/ +# PERF_TYPE_RAW / topdown-br-mispredict (0x8500) [event17:base-stat] fd=17 +group_fd=11 type=4 -config=17039629 +config=34048 +disabled=0 +enable_on_exec=0 +read_format=15 optional=1
-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD +# PERF_TYPE_RAW / topdown-fetch-lat (0x8600) [event18:base-stat] fd=18 +group_fd=11 type=4 -config=60 +config=34304 +disabled=0 +enable_on_exec=0 +read_format=15 optional=1
-# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY +# PERF_TYPE_RAW / topdown-mem-bound (0x8700) [event19:base-stat] fd=19 +group_fd=11 type=4 -config=2097421 +config=34560 +disabled=0 +enable_on_exec=0 +read_format=15 optional=1
-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK +# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING [event20:base-stat] fd=20 type=4 -config=316 +config=4109 optional=1
-# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE +# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/ [event21:base-stat] fd=21 type=4 -config=412 +config=17039629 optional=1
-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE +# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD [event22:base-stat] fd=22 type=4 -config=572 +config=60 optional=1
-# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS +# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY [event23:base-stat] fd=23 type=4 -config=706 +config=2097421 optional=1
-# PERF_TYPE_RAW / UOPS_ISSUED.ANY +# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK [event24:base-stat] fd=24 type=4 +config=316 +optional=1 + +# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE +[event25:base-stat] +fd=25 +type=4 +config=412 +optional=1 + +# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE +[event26:base-stat] +fd=26 +type=4 +config=572 +optional=1 + +# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS +[event27:base-stat] +fd=27 +type=4 +config=706 +optional=1 + +# PERF_TYPE_RAW / UOPS_ISSUED.ANY +[event28:base-stat] +fd=28 +type=4 config=270 optional=1
@@ -190,8 +234,8 @@ optional=1 # PERF_COUNT_HW_CACHE_L1D << 0 | # (PERF_COUNT_HW_CACHE_OP_READ << 8) | # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16) -[event25:base-stat] -fd=25 +[event29:base-stat] +fd=29 type=3 config=0 optional=1 @@ -200,8 +244,8 @@ optional=1 # PERF_COUNT_HW_CACHE_L1D << 0 | # (PERF_COUNT_HW_CACHE_OP_READ << 8) | # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16) -[event26:base-stat] -fd=26 +[event30:base-stat] +fd=30 type=3 config=65536 optional=1 @@ -210,8 +254,8 @@ optional=1 # PERF_COUNT_HW_CACHE_LL << 0 | # (PERF_COUNT_HW_CACHE_OP_READ << 8) | # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16) -[event27:base-stat] -fd=27 +[event31:base-stat] +fd=31 type=3 config=2 optional=1 @@ -220,8 +264,8 @@ optional=1 # PERF_COUNT_HW_CACHE_LL << 0 | # (PERF_COUNT_HW_CACHE_OP_READ << 8) | # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16) -[event28:base-stat] -fd=28 +[event32:base-stat] +fd=32 type=3 config=65538 optional=1 @@ -230,8 +274,8 @@ optional=1 # PERF_COUNT_HW_CACHE_L1I << 0 | # (PERF_COUNT_HW_CACHE_OP_READ << 8) | # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16) -[event29:base-stat] -fd=29 +[event33:base-stat] +fd=33 type=3 config=1 optional=1 @@ -240,8 +284,8 @@ optional=1 # PERF_COUNT_HW_CACHE_L1I << 0 | # (PERF_COUNT_HW_CACHE_OP_READ << 8) | # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16) -[event30:base-stat] -fd=30 +[event34:base-stat] +fd=34 type=3 config=65537 optional=1 @@ -250,8 +294,8 @@ optional=1 # PERF_COUNT_HW_CACHE_DTLB << 0 | # (PERF_COUNT_HW_CACHE_OP_READ << 8) | # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16) -[event31:base-stat] -fd=31 +[event35:base-stat] +fd=35 type=3 config=3 optional=1 @@ -260,8 +304,8 @@ optional=1 # PERF_COUNT_HW_CACHE_DTLB << 0 | # (PERF_COUNT_HW_CACHE_OP_READ << 8) | # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16) -[event32:base-stat] -fd=32 +[event36:base-stat] +fd=36 type=3 config=65539 optional=1 @@ -270,8 +314,8 @@ optional=1 # PERF_COUNT_HW_CACHE_ITLB << 0 | # (PERF_COUNT_HW_CACHE_OP_READ << 8) | # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16) -[event33:base-stat] -fd=33 +[event37:base-stat] +fd=37 type=3 config=4 optional=1 @@ -280,8 +324,8 @@ optional=1 # PERF_COUNT_HW_CACHE_ITLB << 0 | # (PERF_COUNT_HW_CACHE_OP_READ << 8) | # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16) -[event34:base-stat] -fd=34 +[event38:base-stat] +fd=38 type=3 config=65540 optional=1 @@ -290,8 +334,8 @@ optional=1 # PERF_COUNT_HW_CACHE_L1D << 0 | # (PERF_COUNT_HW_CACHE_OP_PREFETCH << 8) | # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16) -[event35:base-stat] -fd=35 +[event39:base-stat] +fd=39 type=3 config=512 optional=1 @@ -300,8 +344,8 @@ optional=1 # PERF_COUNT_HW_CACHE_L1D << 0 | # (PERF_COUNT_HW_CACHE_OP_PREFETCH << 8) | # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16) -[event36:base-stat] -fd=36 +[event40:base-stat] +fd=40 type=3 config=66048 optional=1 diff --git a/tools/perf/util/bpf-filter.c b/tools/perf/util/bpf-filter.c index e87b6789eb9e..a4fdf6911ec1 100644 --- a/tools/perf/util/bpf-filter.c +++ b/tools/perf/util/bpf-filter.c @@ -375,7 +375,7 @@ static int create_idx_hash(struct evsel *evsel, struct perf_bpf_filter_entry *en pfi = zalloc(sizeof(*pfi)); if (pfi == NULL) { pr_err("Cannot save pinned filter index\n"); - goto err; + return -ENOMEM; }
pfi->evsel = evsel; diff --git a/tools/perf/util/cs-etm.c b/tools/perf/util/cs-etm.c index 40f047baef81..0bf9e5c27b59 100644 --- a/tools/perf/util/cs-etm.c +++ b/tools/perf/util/cs-etm.c @@ -2490,12 +2490,6 @@ static void cs_etm__clear_all_traceid_queues(struct cs_etm_queue *etmq)
/* Ignore return value */ cs_etm__process_traceid_queue(etmq, tidq); - - /* - * Generate an instruction sample with the remaining - * branchstack entries. - */ - cs_etm__flush(etmq, tidq); } }
@@ -2638,7 +2632,7 @@ static int cs_etm__process_timestamped_queues(struct cs_etm_auxtrace *etm)
while (1) { if (!etm->heap.heap_cnt) - goto out; + break;
/* Take the entry at the top of the min heap */ cs_queue_nr = etm->heap.heap_array[0].queue_nr; @@ -2721,6 +2715,23 @@ static int cs_etm__process_timestamped_queues(struct cs_etm_auxtrace *etm) ret = auxtrace_heap__add(&etm->heap, cs_queue_nr, cs_timestamp); }
+ for (i = 0; i < etm->queues.nr_queues; i++) { + struct int_node *inode; + + etmq = etm->queues.queue_array[i].priv; + if (!etmq) + continue; + + intlist__for_each_entry(inode, etmq->traceid_queues_list) { + int idx = (int)(intptr_t)inode->priv; + + /* Flush any remaining branch stack entries */ + tidq = etmq->traceid_queues[idx]; + ret = cs_etm__end_block(etmq, tidq); + if (ret) + return ret; + } + } out: return ret; } diff --git a/tools/perf/util/disasm.c b/tools/perf/util/disasm.c index f05ba7739c1e..648e8d87ef19 100644 --- a/tools/perf/util/disasm.c +++ b/tools/perf/util/disasm.c @@ -1627,12 +1627,12 @@ static int symbol__disassemble_capstone(char *filename, struct symbol *sym, u64 start = map__rip_2objdump(map, sym->start); u64 len; u64 offset; - int i, count; + int i, count, free_count; bool is_64bit = false; bool needs_cs_close = false; u8 *buf = NULL; csh handle; - cs_insn *insn; + cs_insn *insn = NULL; char disasm_buf[512]; struct disasm_line *dl;
@@ -1664,7 +1664,7 @@ static int symbol__disassemble_capstone(char *filename, struct symbol *sym,
needs_cs_close = true;
- count = cs_disasm(handle, buf, len, start, len, &insn); + free_count = count = cs_disasm(handle, buf, len, start, len, &insn); for (i = 0, offset = 0; i < count; i++) { int printed;
@@ -1702,8 +1702,11 @@ static int symbol__disassemble_capstone(char *filename, struct symbol *sym, }
out: - if (needs_cs_close) + if (needs_cs_close) { cs_close(&handle); + if (free_count > 0) + cs_free(insn, free_count); + } free(buf); return count < 0 ? count : 0;
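The capstone fix above plugs a leak: cs_disasm() heap-allocates the cs_insn array and the caller must hand it back with cs_free(); the patch keeps the original count in free_count because count itself is reused later. A minimal standalone example of the allocate/iterate/free contract, assuming capstone is installed and an x86-64 target:

#include <capstone/capstone.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
	const uint8_t code[] = { 0x55, 0x48, 0x89, 0xe5, 0xc3 };	/* push %rbp; mov %rsp,%rbp; ret */
	csh handle;
	cs_insn *insn;
	size_t count;

	if (cs_open(CS_ARCH_X86, CS_MODE_64, &handle) != CS_ERR_OK)
		return 1;

	count = cs_disasm(handle, code, sizeof(code), 0x1000, 0, &insn);
	for (size_t i = 0; i < count; i++)
		printf("0x%" PRIx64 ":\t%s\t%s\n",
		       insn[i].address, insn[i].mnemonic, insn[i].op_str);

	if (count > 0)
		cs_free(insn, count);	/* every successful cs_disasm() needs a matching cs_free() */
	cs_close(&handle);
	return 0;
}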
@@ -1717,7 +1720,7 @@ static int symbol__disassemble_capstone(char *filename, struct symbol *sym, */ list_for_each_entry_safe(dl, tmp, ¬es->src->source, al.node) { list_del(&dl->al.node); - free(dl); + disasm_line__free(dl); } } count = -1; @@ -1782,7 +1785,7 @@ static int symbol__disassemble_raw(char *filename, struct symbol *sym, sprintf(args->line, "%x", line[i]); dl = disasm_line__new(args); if (dl == NULL) - goto err; + break;
annotation_line__add(&dl->al, ¬es->src->source); offset += 4; diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c index f14b7e6ff1dc..a9df84692d4a 100644 --- a/tools/perf/util/evlist.c +++ b/tools/perf/util/evlist.c @@ -48,6 +48,7 @@ #include <sys/mman.h> #include <sys/prctl.h> #include <sys/timerfd.h> +#include <sys/wait.h>
#include <linux/bitops.h> #include <linux/hash.h> @@ -1484,6 +1485,8 @@ int evlist__prepare_workload(struct evlist *evlist, struct target *target, const int child_ready_pipe[2], go_pipe[2]; char bf;
+ evlist->workload.cork_fd = -1; + if (pipe(child_ready_pipe) < 0) { perror("failed to create 'ready' pipe"); return -1; @@ -1536,7 +1539,7 @@ int evlist__prepare_workload(struct evlist *evlist, struct target *target, const * For cancelling the workload without actually running it, * the parent will just close workload.cork_fd, without writing * anything, i.e. read will return zero and we just exit() - * here. + * here (See evlist__cancel_workload()). */ if (ret != 1) { if (ret == -1) @@ -1600,7 +1603,7 @@ int evlist__prepare_workload(struct evlist *evlist, struct target *target, const
int evlist__start_workload(struct evlist *evlist) { - if (evlist->workload.cork_fd > 0) { + if (evlist->workload.cork_fd >= 0) { char bf = 0; int ret; /* @@ -1611,12 +1614,24 @@ int evlist__start_workload(struct evlist *evlist) perror("unable to write to pipe");
close(evlist->workload.cork_fd); + evlist->workload.cork_fd = -1; return ret; }
return 0; }
+void evlist__cancel_workload(struct evlist *evlist) +{ + int status; + + if (evlist->workload.cork_fd >= 0) { + close(evlist->workload.cork_fd); + evlist->workload.cork_fd = -1; + waitpid(evlist->workload.pid, &status, WNOHANG); + } +} + int evlist__parse_sample(struct evlist *evlist, union perf_event *event, struct perf_sample *sample) { struct evsel *evsel = evlist__event2evsel(evlist, event); diff --git a/tools/perf/util/evlist.h b/tools/perf/util/evlist.h index bcc1c6984bb5..888fda751e1a 100644 --- a/tools/perf/util/evlist.h +++ b/tools/perf/util/evlist.h @@ -186,6 +186,7 @@ int evlist__prepare_workload(struct evlist *evlist, struct target *target, const char *argv[], bool pipe_output, void (*exec_error)(int signo, siginfo_t *info, void *ucontext)); int evlist__start_workload(struct evlist *evlist); +void evlist__cancel_workload(struct evlist *evlist);
struct option;
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c index fad227b625d1..4f0ac998b0cc 100644 --- a/tools/perf/util/machine.c +++ b/tools/perf/util/machine.c @@ -1343,7 +1343,7 @@ static int maps__set_module_path(struct maps *maps, const char *path, struct kmo * we need to update the symtab_type if needed. */ if (m->comp && is_kmod_dso(dso)) { - dso__set_symtab_type(dso, dso__symtab_type(dso)); + dso__set_symtab_type(dso, dso__symtab_type(dso)+1); dso__set_comp(dso, m->comp); } map__put(map); diff --git a/tools/perf/util/mem-events.c b/tools/perf/util/mem-events.c index 051feb93ed8d..bf5090f5220b 100644 --- a/tools/perf/util/mem-events.c +++ b/tools/perf/util/mem-events.c @@ -366,6 +366,12 @@ static const char * const mem_lvl[] = { };
static const char * const mem_lvlnum[] = { + [PERF_MEM_LVLNUM_L1] = "L1", + [PERF_MEM_LVLNUM_L2] = "L2", + [PERF_MEM_LVLNUM_L3] = "L3", + [PERF_MEM_LVLNUM_L4] = "L4", + [PERF_MEM_LVLNUM_L2_MHB] = "L2 MHB", + [PERF_MEM_LVLNUM_MSC] = "Memory-side Cache", [PERF_MEM_LVLNUM_UNC] = "Uncached", [PERF_MEM_LVLNUM_CXL] = "CXL", [PERF_MEM_LVLNUM_IO] = "I/O", @@ -448,7 +454,7 @@ int perf_mem__lvl_scnprintf(char *out, size_t sz, const struct mem_info *mem_inf if (mem_lvlnum[lvl]) l += scnprintf(out + l, sz - l, mem_lvlnum[lvl]); else - l += scnprintf(out + l, sz - l, "L%d", lvl); + l += scnprintf(out + l, sz - l, "Unknown level %d", lvl);
l += scnprintf(out + l, sz - l, " %s", hit_miss); return l; diff --git a/tools/perf/util/pfm.c b/tools/perf/util/pfm.c index 5ccfe4b64cdf..0dacc133ed39 100644 --- a/tools/perf/util/pfm.c +++ b/tools/perf/util/pfm.c @@ -233,7 +233,7 @@ print_libpfm_event(const struct print_callbacks *print_cb, void *print_state, }
if (is_libpfm_event_supported(name, cpus, threads)) { - print_cb->print_event(print_state, pinfo->name, topic, + print_cb->print_event(print_state, topic, pinfo->name, name, info->equiv, /*scale_unit=*/NULL, /*deprecated=*/NULL, "PFM event", @@ -267,8 +267,8 @@ print_libpfm_event(const struct print_callbacks *print_cb, void *print_state, continue;
print_cb->print_event(print_state, - pinfo->name, topic, + pinfo->name, name, /*alias=*/NULL, /*scale_unit=*/NULL, /*deprecated=*/NULL, "PFM event", diff --git a/tools/perf/util/pmus.c b/tools/perf/util/pmus.c index 52109af5f2f1..d7d67e09d759 100644 --- a/tools/perf/util/pmus.c +++ b/tools/perf/util/pmus.c @@ -494,8 +494,8 @@ void perf_pmus__print_pmu_events(const struct print_callbacks *print_cb, void *p goto free;
print_cb->print_event(print_state, - aliases[j].pmu_name, aliases[j].topic, + aliases[j].pmu_name, aliases[j].name, aliases[j].alias, aliases[j].scale_unit, diff --git a/tools/perf/util/probe-finder.c b/tools/perf/util/probe-finder.c index 630e16c54ed5..a30f88ed0300 100644 --- a/tools/perf/util/probe-finder.c +++ b/tools/perf/util/probe-finder.c @@ -1379,6 +1379,10 @@ int debuginfo__find_trace_events(struct debuginfo *dbg, if (ret >= 0 && tf.pf.skip_empty_arg) ret = fill_empty_trace_arg(pev, tf.tevs, tf.ntevs);
+#if _ELFUTILS_PREREQ(0, 142) + dwarf_cfi_end(tf.pf.cfi_eh); +#endif + if (ret < 0 || tf.ntevs == 0) { for (i = 0; i < tf.ntevs; i++) clear_probe_trace_event(&tf.tevs[i]); @@ -1583,8 +1587,21 @@ int debuginfo__find_probe_point(struct debuginfo *dbg, u64 addr,
/* Find a corresponding function (name, baseline and baseaddr) */ if (die_find_realfunc(&cudie, (Dwarf_Addr)addr, &spdie)) { - /* Get function entry information */ - func = basefunc = dwarf_diename(&spdie); + /* + * Get function entry information. + * + * As described in the document DWARF Debugging Information + * Format Version 5, section 2.22 Linkage Names, "mangled names, + * are used in various ways, ... to distinguish multiple + * entities that have the same name". + * + * Firstly try to get distinct linkage name, if fail then + * rollback to get associated name in DIE. + */ + func = basefunc = die_get_linkage_name(&spdie); + if (!func) + func = basefunc = dwarf_diename(&spdie); + if (!func || die_entrypc(&spdie, &baseaddr) != 0 || dwarf_decl_line(&spdie, &baseline) != 0) { diff --git a/tools/perf/util/probe-finder.h b/tools/perf/util/probe-finder.h index 3add5ff516e1..724db829b49e 100644 --- a/tools/perf/util/probe-finder.h +++ b/tools/perf/util/probe-finder.h @@ -64,9 +64,9 @@ struct probe_finder {
/* For variable searching */ #if _ELFUTILS_PREREQ(0, 142) - /* Call Frame Information from .eh_frame */ + /* Call Frame Information from .eh_frame. Owned by this struct. */ Dwarf_CFI *cfi_eh; - /* Call Frame Information from .debug_frame */ + /* Call Frame Information from .debug_frame. Not owned. */ Dwarf_CFI *cfi_dbg; #endif Dwarf_Op *fb_ops; /* Frame base attribute */ diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c index 089220aaa5c9..a5ebee8b23bb 100644 --- a/tools/power/x86/turbostat/turbostat.c +++ b/tools/power/x86/turbostat/turbostat.c @@ -5385,6 +5385,9 @@ static int parse_cpu_str(char *cpu_str, cpu_set_t *cpu_set, int cpu_set_size) if (*next == '-') /* no negative cpu numbers */ return 1;
+ if (*next == '\0' || *next == '\n') + break; + start = strtoul(next, &next, 10);
if (start >= CPU_SUBSET_MAXCPUS) @@ -9781,7 +9784,7 @@ void cmdline(int argc, char **argv) * Parse some options early, because they may make other options invalid, * like adding the MSR counter with --add and at the same time using --no-msr. */ - while ((opt = getopt_long_only(argc, argv, "MPn:", long_options, &option_index)) != -1) { + while ((opt = getopt_long_only(argc, argv, "+MPn:", long_options, &option_index)) != -1) { switch (opt) { case 'M': no_msr = 1; diff --git a/tools/testing/selftests/arm64/abi/hwcap.c b/tools/testing/selftests/arm64/abi/hwcap.c index f2d6007a2b98..265654ec48b9 100644 --- a/tools/testing/selftests/arm64/abi/hwcap.c +++ b/tools/testing/selftests/arm64/abi/hwcap.c @@ -361,8 +361,8 @@ static void sveaes_sigill(void)
static void sveb16b16_sigill(void) { - /* BFADD ZA.H[W0, 0], {Z0.H-Z1.H} */ - asm volatile(".inst 0xC1E41C00" : : : ); + /* BFADD Z0.H, Z0.H, Z0.H */ + asm volatile(".inst 0x65000000" : : : ); }
static void svepmull_sigill(void) @@ -490,7 +490,7 @@ static const struct hwcap_data { .name = "F8DP2", .at_hwcap = AT_HWCAP2, .hwcap_bit = HWCAP2_F8DP2, - .cpuinfo = "f8dp4", + .cpuinfo = "f8dp2", .sigill_fn = f8dp2_sigill, }, { diff --git a/tools/testing/selftests/arm64/mte/check_tags_inclusion.c b/tools/testing/selftests/arm64/mte/check_tags_inclusion.c index 2b1425b92b69..a3d1e23fe02a 100644 --- a/tools/testing/selftests/arm64/mte/check_tags_inclusion.c +++ b/tools/testing/selftests/arm64/mte/check_tags_inclusion.c @@ -65,7 +65,7 @@ static int check_single_included_tags(int mem_type, int mode) ptr = mte_insert_tags(ptr, BUFFER_SIZE); /* Check tag value */ if (MT_FETCH_TAG((uintptr_t)ptr) == tag) { - ksft_print_msg("FAIL: wrong tag = 0x%x with include mask=0x%x\n", + ksft_print_msg("FAIL: wrong tag = 0x%lx with include mask=0x%x\n", MT_FETCH_TAG((uintptr_t)ptr), MT_INCLUDE_VALID_TAG(tag)); result = KSFT_FAIL; @@ -97,7 +97,7 @@ static int check_multiple_included_tags(int mem_type, int mode) ptr = mte_insert_tags(ptr, BUFFER_SIZE); /* Check tag value */ if (MT_FETCH_TAG((uintptr_t)ptr) < tag) { - ksft_print_msg("FAIL: wrong tag = 0x%x with include mask=0x%x\n", + ksft_print_msg("FAIL: wrong tag = 0x%lx with include mask=0x%lx\n", MT_FETCH_TAG((uintptr_t)ptr), MT_INCLUDE_VALID_TAGS(excl_mask)); result = KSFT_FAIL; diff --git a/tools/testing/selftests/arm64/mte/mte_common_util.c b/tools/testing/selftests/arm64/mte/mte_common_util.c index 00ffd34c66d3..1120f5aa7655 100644 --- a/tools/testing/selftests/arm64/mte/mte_common_util.c +++ b/tools/testing/selftests/arm64/mte/mte_common_util.c @@ -38,7 +38,7 @@ void mte_default_handler(int signum, siginfo_t *si, void *uc) if (cur_mte_cxt.trig_si_code == si->si_code) cur_mte_cxt.fault_valid = true; else - ksft_print_msg("Got unexpected SEGV_MTEAERR at pc=$lx, fault addr=%lx\n", + ksft_print_msg("Got unexpected SEGV_MTEAERR at pc=%llx, fault addr=%lx\n", ((ucontext_t *)uc)->uc_mcontext.pc, addr); return; @@ -64,7 +64,7 @@ void mte_default_handler(int signum, siginfo_t *si, void *uc) exit(1); } } else if (signum == SIGBUS) { - ksft_print_msg("INFO: SIGBUS signal at pc=%lx, fault addr=%lx, si_code=%lx\n", + ksft_print_msg("INFO: SIGBUS signal at pc=%llx, fault addr=%lx, si_code=%x\n", ((ucontext_t *)uc)->uc_mcontext.pc, addr, si->si_code); if ((cur_mte_cxt.trig_range >= 0 && addr >= MT_CLEAR_TAG(cur_mte_cxt.trig_addr) && diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile index 75016962f795..43a029318478 100644 --- a/tools/testing/selftests/bpf/Makefile +++ b/tools/testing/selftests/bpf/Makefile @@ -10,6 +10,7 @@ TOOLSDIR := $(abspath ../../..) LIBDIR := $(TOOLSDIR)/lib BPFDIR := $(LIBDIR)/bpf TOOLSINCDIR := $(TOOLSDIR)/include +TOOLSARCHINCDIR := $(TOOLSDIR)/arch/$(SRCARCH)/include BPFTOOLDIR := $(TOOLSDIR)/bpf/bpftool APIDIR := $(TOOLSINCDIR)/uapi ifneq ($(O),) @@ -44,7 +45,7 @@ CFLAGS += -g $(OPT_FLAGS) -rdynamic \ -Wall -Werror -fno-omit-frame-pointer \ $(GENFLAGS) $(SAN_CFLAGS) $(LIBELF_CFLAGS) \ -I$(CURDIR) -I$(INCLUDE_DIR) -I$(GENDIR) -I$(LIBDIR) \ - -I$(TOOLSINCDIR) -I$(APIDIR) -I$(OUTPUT) + -I$(TOOLSINCDIR) -I$(TOOLSARCHINCDIR) -I$(APIDIR) -I$(OUTPUT) LDFLAGS += $(SAN_LDFLAGS) LDLIBS += $(LIBELF_LIBS) -lz -lrt -lpthread
diff --git a/tools/testing/selftests/bpf/network_helpers.h b/tools/testing/selftests/bpf/network_helpers.h index c72c16e1aff8..5764155b6d25 100644 --- a/tools/testing/selftests/bpf/network_helpers.h +++ b/tools/testing/selftests/bpf/network_helpers.h @@ -1,6 +1,7 @@ /* SPDX-License-Identifier: GPL-2.0 */ #ifndef __NETWORK_HELPERS_H #define __NETWORK_HELPERS_H +#include <arpa/inet.h> #include <sys/socket.h> #include <sys/types.h> #include <linux/types.h> diff --git a/tools/testing/selftests/bpf/prog_tests/timer_lockup.c b/tools/testing/selftests/bpf/prog_tests/timer_lockup.c index 871d16cb95cf..1a2f99596916 100644 --- a/tools/testing/selftests/bpf/prog_tests/timer_lockup.c +++ b/tools/testing/selftests/bpf/prog_tests/timer_lockup.c @@ -5,6 +5,7 @@ #include <test_progs.h> #include <pthread.h> #include <network_helpers.h> +#include <sys/sysinfo.h>
#include "timer_lockup.skel.h"
@@ -52,6 +53,11 @@ void test_timer_lockup(void) pthread_t thrds[2]; void *ret;
+ if (get_nprocs() < 2) { + test__skip(); + return; + } + skel = timer_lockup__open_and_load(); if (!ASSERT_OK_PTR(skel, "timer_lockup__open_and_load")) return; diff --git a/tools/testing/selftests/bpf/progs/test_spin_lock_fail.c b/tools/testing/selftests/bpf/progs/test_spin_lock_fail.c index 43f40c4fe241..1c8b678e2e9a 100644 --- a/tools/testing/selftests/bpf/progs/test_spin_lock_fail.c +++ b/tools/testing/selftests/bpf/progs/test_spin_lock_fail.c @@ -28,8 +28,8 @@ struct { }, };
-SEC(".data.A") struct bpf_spin_lock lockA; -SEC(".data.B") struct bpf_spin_lock lockB; +static struct bpf_spin_lock lockA SEC(".data.A"); +static struct bpf_spin_lock lockB SEC(".data.B");
SEC("?tc") int lock_id_kptr_preserve(void *ctx) diff --git a/tools/testing/selftests/bpf/progs/test_tp_btf_nullable.c b/tools/testing/selftests/bpf/progs/test_tp_btf_nullable.c index bba3e37f749b..5aaf2b065f86 100644 --- a/tools/testing/selftests/bpf/progs/test_tp_btf_nullable.c +++ b/tools/testing/selftests/bpf/progs/test_tp_btf_nullable.c @@ -7,7 +7,11 @@ #include "bpf_misc.h"
SEC("tp_btf/bpf_testmod_test_nullable_bare") -__failure __msg("R1 invalid mem access 'trusted_ptr_or_null_'") +/* This used to be a failure test, but raw_tp nullable arguments can now + * directly be dereferenced, whether they have nullable annotation or not, + * and don't need to be explicitly checked. + */ +__success int BPF_PROG(handle_tp_btf_nullable_bare1, struct bpf_testmod_test_read_ctx *nullable_ctx) { return nullable_ctx->len; diff --git a/tools/testing/selftests/bpf/test_progs.c b/tools/testing/selftests/bpf/test_progs.c index c7a70e1a1085..fa829a7854f2 100644 --- a/tools/testing/selftests/bpf/test_progs.c +++ b/tools/testing/selftests/bpf/test_progs.c @@ -20,11 +20,13 @@
#include "network_helpers.h"
+/* backtrace() and backtrace_symbols_fd() are glibc specific, + * use header file when glibc is available and provide stub + * implementations when another libc implementation is used. + */ #ifdef __GLIBC__ #include <execinfo.h> /* backtrace */ -#endif - -/* Default backtrace funcs if missing at link */ +#else __weak int backtrace(void **buffer, int size) { return 0; @@ -34,6 +36,7 @@ __weak void backtrace_symbols_fd(void *const *buffer, int size, int fd) { dprintf(fd, "<backtrace not supported>\n"); } +#endif /*__GLIBC__ */
int env_verbosity = 0;
diff --git a/tools/testing/selftests/bpf/test_sockmap.c b/tools/testing/selftests/bpf/test_sockmap.c index 3e02d7267de8..61a747afcd05 100644 --- a/tools/testing/selftests/bpf/test_sockmap.c +++ b/tools/testing/selftests/bpf/test_sockmap.c @@ -56,6 +56,8 @@ static void running_handler(int a); #define BPF_SOCKHASH_FILENAME "test_sockhash_kern.bpf.o" #define CG_PATH "/sockmap"
+#define EDATAINTEGRITY 2001 + /* global sockets */ int s1, s2, c1, c2, p1, p2; int test_cnt; @@ -86,6 +88,10 @@ int ktls; int peek_flag; int skb_use_parser; int txmsg_omit_skb_parser; +int verify_push_start; +int verify_push_len; +int verify_pop_start; +int verify_pop_len;
static const struct option long_options[] = { {"help", no_argument, NULL, 'h' }, @@ -418,16 +424,18 @@ static int msg_loop_sendpage(int fd, int iov_length, int cnt, { bool drop = opt->drop_expected; unsigned char k = 0; + int i, j, fp; FILE *file; - int i, fp;
file = tmpfile(); if (!file) { perror("create file for sendpage"); return 1; } - for (i = 0; i < iov_length * cnt; i++, k++) - fwrite(&k, sizeof(char), 1, file); + for (i = 0; i < cnt; i++, k = 0) { + for (j = 0; j < iov_length; j++, k++) + fwrite(&k, sizeof(char), 1, file); + } fflush(file); fseek(file, 0, SEEK_SET);
@@ -510,42 +518,111 @@ static int msg_alloc_iov(struct msghdr *msg, return -ENOMEM; }
-static int msg_verify_data(struct msghdr *msg, int size, int chunk_sz) +/* In push or pop test, we need to do some calculations for msg_verify_data */ +static void msg_verify_date_prep(void) { - int i, j = 0, bytes_cnt = 0; - unsigned char k = 0; + int push_range_end = txmsg_start_push + txmsg_end_push - 1; + int pop_range_end = txmsg_start_pop + txmsg_pop - 1; + + if (txmsg_end_push && txmsg_pop && + txmsg_start_push <= pop_range_end && txmsg_start_pop <= push_range_end) { + /* The push range and the pop range overlap */ + int overlap_len; + + verify_push_start = txmsg_start_push; + verify_pop_start = txmsg_start_pop; + if (txmsg_start_push < txmsg_start_pop) + overlap_len = min(push_range_end - txmsg_start_pop + 1, txmsg_pop); + else + overlap_len = min(pop_range_end - txmsg_start_push + 1, txmsg_end_push); + verify_push_len = max(txmsg_end_push - overlap_len, 0); + verify_pop_len = max(txmsg_pop - overlap_len, 0); + } else { + /* Otherwise */ + verify_push_start = txmsg_start_push; + verify_pop_start = txmsg_start_pop; + verify_push_len = txmsg_end_push; + verify_pop_len = txmsg_pop; + } +} + +static int msg_verify_data(struct msghdr *msg, int size, int chunk_sz, + unsigned char *k_p, int *bytes_cnt_p, + int *check_cnt_p, int *push_p) +{ + int bytes_cnt = *bytes_cnt_p, check_cnt = *check_cnt_p, push = *push_p; + unsigned char k = *k_p; + int i, j;
- for (i = 0; i < msg->msg_iovlen; i++) { + for (i = 0, j = 0; i < msg->msg_iovlen && size; i++, j = 0) { unsigned char *d = msg->msg_iov[i].iov_base;
/* Special case test for skb ingress + ktls */ if (i == 0 && txmsg_ktls_skb) { if (msg->msg_iov[i].iov_len < 4) - return -EIO; + return -EDATAINTEGRITY; if (memcmp(d, "PASS", 4) != 0) { fprintf(stderr, "detected skb data error with skb ingress update @iov[%i]:%i "%02x %02x %02x %02x" != "PASS"\n", i, 0, d[0], d[1], d[2], d[3]); - return -EIO; + return -EDATAINTEGRITY; } j = 4; /* advance index past PASS header */ }
for (; j < msg->msg_iov[i].iov_len && size; j++) { + if (push > 0 && + check_cnt == verify_push_start + verify_push_len - push) { + int skipped; +revisit_push: + skipped = push; + if (j + push >= msg->msg_iov[i].iov_len) + skipped = msg->msg_iov[i].iov_len - j; + push -= skipped; + size -= skipped; + j += skipped - 1; + check_cnt += skipped; + continue; + } + + if (verify_pop_len > 0 && check_cnt == verify_pop_start) { + bytes_cnt += verify_pop_len; + check_cnt += verify_pop_len; + k += verify_pop_len; + + if (bytes_cnt == chunk_sz) { + k = 0; + bytes_cnt = 0; + check_cnt = 0; + push = verify_push_len; + } + + if (push > 0 && + check_cnt == verify_push_start + verify_push_len - push) + goto revisit_push; + } + if (d[j] != k++) { fprintf(stderr, "detected data corruption @iov[%i]:%i %02x != %02x, %02x ?= %02x\n", i, j, d[j], k - 1, d[j+1], k); - return -EIO; + return -EDATAINTEGRITY; } bytes_cnt++; + check_cnt++; if (bytes_cnt == chunk_sz) { k = 0; bytes_cnt = 0; + check_cnt = 0; + push = verify_push_len; } size--; } } + *k_p = k; + *bytes_cnt_p = bytes_cnt; + *check_cnt_p = check_cnt; + *push_p = push; return 0; }
@@ -598,10 +675,14 @@ static int msg_loop(int fd, int iov_count, int iov_length, int cnt, } clock_gettime(CLOCK_MONOTONIC, &s->end); } else { + float total_bytes, txmsg_pop_total, txmsg_push_total; int slct, recvp = 0, recv, max_fd = fd; - float total_bytes, txmsg_pop_total; int fd_flags = O_NONBLOCK; struct timeval timeout; + unsigned char k = 0; + int bytes_cnt = 0; + int check_cnt = 0; + int push = 0; fd_set w;
fcntl(fd, fd_flags); @@ -615,12 +696,22 @@ static int msg_loop(int fd, int iov_count, int iov_length, int cnt, * This is really only useful for testing edge cases in code * paths. */ - total_bytes = (float)iov_count * (float)iov_length * (float)cnt; - if (txmsg_apply) + total_bytes = (float)iov_length * (float)cnt; + if (!opt->sendpage) + total_bytes *= (float)iov_count; + if (txmsg_apply) { + txmsg_push_total = txmsg_end_push * (total_bytes / txmsg_apply); txmsg_pop_total = txmsg_pop * (total_bytes / txmsg_apply); - else + } else { + txmsg_push_total = txmsg_end_push * cnt; txmsg_pop_total = txmsg_pop * cnt; + } + total_bytes += txmsg_push_total; total_bytes -= txmsg_pop_total; + if (data) { + msg_verify_date_prep(); + push = verify_push_len; + } err = clock_gettime(CLOCK_MONOTONIC, &s->start); if (err < 0) perror("recv start time"); @@ -693,10 +784,11 @@ static int msg_loop(int fd, int iov_count, int iov_length, int cnt,
if (data) { int chunk_sz = opt->sendpage ? - iov_length * cnt : + iov_length : iov_length * iov_count;
- errno = msg_verify_data(&msg, recv, chunk_sz); + errno = msg_verify_data(&msg, recv, chunk_sz, &k, &bytes_cnt, + &check_cnt, &push); if (errno) { perror("data verify msg failed"); goto out_errno; @@ -704,7 +796,11 @@ static int msg_loop(int fd, int iov_count, int iov_length, int cnt, if (recvp) { errno = msg_verify_data(&msg_peek, recvp, - chunk_sz); + chunk_sz, + &k, + &bytes_cnt, + &check_cnt, + &push); if (errno) { perror("data verify msg_peek failed"); goto out_errno; @@ -786,8 +882,6 @@ static int sendmsg_test(struct sockmap_options *opt)
rxpid = fork(); if (rxpid == 0) { - if (txmsg_pop || txmsg_start_pop) - iov_buf -= (txmsg_pop - txmsg_start_pop + 1); if (opt->drop_expected || txmsg_ktls_skb_drop) _exit(0);
@@ -812,7 +906,7 @@ static int sendmsg_test(struct sockmap_options *opt) s.bytes_sent, sent_Bps, sent_Bps/giga, s.bytes_recvd, recvd_Bps, recvd_Bps/giga, peek_flag ? "(peek_msg)" : ""); - if (err && txmsg_cork) + if (err && err != -EDATAINTEGRITY && txmsg_cork) err = 0; exit(err ? 1 : 0); } else if (rxpid == -1) { @@ -1456,8 +1550,8 @@ static void test_send_many(struct sockmap_options *opt, int cgrp)
static void test_send_large(struct sockmap_options *opt, int cgrp) { - opt->iov_length = 256; - opt->iov_count = 1024; + opt->iov_length = 8192; + opt->iov_count = 32; opt->rate = 2; test_exec(cgrp, opt); } @@ -1586,17 +1680,19 @@ static void test_txmsg_cork_hangs(int cgrp, struct sockmap_options *opt) static void test_txmsg_pull(int cgrp, struct sockmap_options *opt) { /* Test basic start/end */ + txmsg_pass = 1; txmsg_start = 1; txmsg_end = 2; test_send(opt, cgrp);
/* Test >4k pull */ + txmsg_pass = 1; txmsg_start = 4096; txmsg_end = 9182; test_send_large(opt, cgrp);
/* Test pull + redirect */ - txmsg_redir = 0; + txmsg_redir = 1; txmsg_start = 1; txmsg_end = 2; test_send(opt, cgrp); @@ -1618,12 +1714,16 @@ static void test_txmsg_pull(int cgrp, struct sockmap_options *opt)
static void test_txmsg_pop(int cgrp, struct sockmap_options *opt) { + bool data = opt->data_test; + /* Test basic pop */ + txmsg_pass = 1; txmsg_start_pop = 1; txmsg_pop = 2; test_send_many(opt, cgrp);
/* Test pop with >4k */ + txmsg_pass = 1; txmsg_start_pop = 4096; txmsg_pop = 4096; test_send_large(opt, cgrp); @@ -1634,6 +1734,12 @@ static void test_txmsg_pop(int cgrp, struct sockmap_options *opt) txmsg_pop = 2; test_send_many(opt, cgrp);
+ /* TODO: Test for pop + cork should be different, + * - It makes the layout of the received data difficult + * - It makes it hard to calculate the total_bytes in the recvmsg + * Temporarily skip the data integrity test for this case now. + */ + opt->data_test = false; /* Test pop + cork */ txmsg_redir = 0; txmsg_cork = 512; @@ -1647,16 +1753,21 @@ static void test_txmsg_pop(int cgrp, struct sockmap_options *opt) txmsg_start_pop = 1; txmsg_pop = 2; test_send_many(opt, cgrp); + opt->data_test = data; }
static void test_txmsg_push(int cgrp, struct sockmap_options *opt) { + bool data = opt->data_test; + /* Test basic push */ + txmsg_pass = 1; txmsg_start_push = 1; txmsg_end_push = 1; test_send(opt, cgrp);
/* Test push 4kB >4k */ + txmsg_pass = 1; txmsg_start_push = 4096; txmsg_end_push = 4096; test_send_large(opt, cgrp); @@ -1667,16 +1778,24 @@ static void test_txmsg_push(int cgrp, struct sockmap_options *opt) txmsg_end_push = 2; test_send_many(opt, cgrp);
+ /* TODO: Test for push + cork should be different, + * - It makes the layout of the received data difficult + * - It makes it hard to calculate the total_bytes in the recvmsg + * Temporarily skip the data integrity test for this case now. + */ + opt->data_test = false; /* Test push + cork */ txmsg_redir = 0; txmsg_cork = 512; txmsg_start_push = 1; txmsg_end_push = 2; test_send_many(opt, cgrp); + opt->data_test = data; }
static void test_txmsg_push_pop(int cgrp, struct sockmap_options *opt) { + txmsg_pass = 1; txmsg_start_push = 1; txmsg_end_push = 10; txmsg_start_pop = 5; diff --git a/tools/testing/selftests/bpf/uprobe_multi.c b/tools/testing/selftests/bpf/uprobe_multi.c index c7828b13e5ff..dd38dc68f635 100644 --- a/tools/testing/selftests/bpf/uprobe_multi.c +++ b/tools/testing/selftests/bpf/uprobe_multi.c @@ -12,6 +12,10 @@ #define MADV_POPULATE_READ 22 #endif
+#ifndef MADV_PAGEOUT +#define MADV_PAGEOUT 21 +#endif + int __attribute__((weak)) uprobe(void) { return 0; diff --git a/tools/testing/selftests/mount_setattr/mount_setattr_test.c b/tools/testing/selftests/mount_setattr/mount_setattr_test.c index 68801e1a9ec2..70f65eb320a7 100644 --- a/tools/testing/selftests/mount_setattr/mount_setattr_test.c +++ b/tools/testing/selftests/mount_setattr/mount_setattr_test.c @@ -1026,7 +1026,7 @@ FIXTURE_SETUP(mount_setattr_idmapped) "size=100000,mode=700"), 0);
ASSERT_EQ(mount("testing", "/mnt", "tmpfs", MS_NOATIME | MS_NODEV, - "size=100000,mode=700"), 0); + "size=2m,mode=700"), 0);
ASSERT_EQ(mkdir("/mnt/A", 0777), 0);
diff --git a/tools/testing/selftests/net/Makefile b/tools/testing/selftests/net/Makefile index 5e86f7a51b43..2c4b6e404a7c 100644 --- a/tools/testing/selftests/net/Makefile +++ b/tools/testing/selftests/net/Makefile @@ -97,6 +97,7 @@ TEST_PROGS += fdb_flush.sh TEST_PROGS += fq_band_pktlimit.sh TEST_PROGS += vlan_hw_filter.sh TEST_PROGS += bpf_offload.py +TEST_PROGS += ipv6_route_update_soft_lockup.sh
# YNL files, must be before "include ..lib.mk" EXTRA_CLEAN += $(OUTPUT)/libynl.a diff --git a/tools/testing/selftests/net/ipv6_route_update_soft_lockup.sh b/tools/testing/selftests/net/ipv6_route_update_soft_lockup.sh new file mode 100755 index 000000000000..a6b2b1f9c641 --- /dev/null +++ b/tools/testing/selftests/net/ipv6_route_update_soft_lockup.sh @@ -0,0 +1,262 @@ +#!/bin/bash +# SPDX-License-Identifier: GPL-2.0 +# +# Testing for potential kernel soft lockup during IPv6 routing table +# refresh under heavy outgoing IPv6 traffic. If a kernel soft lockup +# occurs, a kernel panic will be triggered to prevent associated issues. +# +# +# Test Environment Layout +# +# ┌----------------┐ ┌----------------┐ +# | SOURCE_NS | | SINK_NS | +# | NAMESPACE | | NAMESPACE | +# |(iperf3 clients)| |(iperf3 servers)| +# | | | | +# | | | | +# | ┌-----------| nexthops |---------┐ | +# | |veth_source|<--------------------------------------->|veth_sink|<┐ | +# | └-----------|2001:0DB8:1::0:1/96 2001:0DB8:1::1:1/96 |---------┘ | | +# | | ^ 2001:0DB8:1::1:2/96 | | | +# | | . . | fwd | | +# | ┌---------┐ | . . | | | +# | | IPv6 | | . . | V | +# | | routing | | . 2001:0DB8:1::1:80/96| ┌-----┐ | +# | | table | | . | | lo | | +# | | nexthop | | . └--------┴-----┴-┘ +# | | update | | ............................> 2001:0DB8:2::1:1/128 +# | └-------- ┘ | +# └----------------┘ +# +# The test script sets up two network namespaces, source_ns and sink_ns, +# connected via a veth link. Within source_ns, it continuously updates the +# IPv6 routing table by flushing and inserting IPV6_NEXTHOP_ADDR_COUNT nexthop +# IPs destined for SINK_LOOPBACK_IP_ADDR in sink_ns. This refresh occurs at a +# rate of 1/ROUTING_TABLE_REFRESH_PERIOD per second for TEST_DURATION seconds. +# +# Simultaneously, multiple iperf3 clients within source_ns generate heavy +# outgoing IPv6 traffic. Each client is assigned a unique port number starting +# at 5000 and incrementing sequentially. Each client targets a unique iperf3 +# server running in sink_ns, connected to the SINK_LOOPBACK_IFACE interface +# using the same port number. +# +# The number of iperf3 servers and clients is set to half of the total +# available cores on each machine. +# +# NOTE: We have tested this script on machines with various CPU specifications, +# ranging from lower to higher performance as listed below. The test script +# effectively triggered a kernel soft lockup on machines running an unpatched +# kernel in under a minute: +# +# - 1x Intel Xeon E-2278G 8-Core Processor @ 3.40GHz +# - 1x Intel Xeon E-2378G Processor 8-Core @ 2.80GHz +# - 1x AMD EPYC 7401P 24-Core Processor @ 2.00GHz +# - 1x AMD EPYC 7402P 24-Core Processor @ 2.80GHz +# - 2x Intel Xeon Gold 5120 14-Core Processor @ 2.20GHz +# - 1x Ampere Altra Q80-30 80-Core Processor @ 3.00GHz +# - 2x Intel Xeon Gold 5120 14-Core Processor @ 2.20GHz +# - 2x Intel Xeon Silver 4214 24-Core Processor @ 2.20GHz +# - 1x AMD EPYC 7502P 32-Core @ 2.50GHz +# - 1x Intel Xeon Gold 6314U 32-Core Processor @ 2.30GHz +# - 2x Intel Xeon Gold 6338 32-Core Processor @ 2.00GHz +# +# On less performant machines, you may need to increase the TEST_DURATION +# parameter to enhance the likelihood of encountering a race condition leading +# to a kernel soft lockup and avoid a false negative result. +# +# NOTE: The test may not produce the expected result in virtualized +# environments (e.g., qemu) due to differences in timing and CPU handling, +# which can affect the conditions needed to trigger a soft lockup. 
+ +source lib.sh +source net_helper.sh + +TEST_DURATION=300 +ROUTING_TABLE_REFRESH_PERIOD=0.01 + +IPERF3_BITRATE="300m" + + +IPV6_NEXTHOP_ADDR_COUNT="128" +IPV6_NEXTHOP_ADDR_MASK="96" +IPV6_NEXTHOP_PREFIX="2001:0DB8:1" + + +SOURCE_TEST_IFACE="veth_source" +SOURCE_TEST_IP_ADDR="2001:0DB8:1::0:1/96" + +SINK_TEST_IFACE="veth_sink" +# ${SINK_TEST_IFACE} is populated with the following range of IPv6 addresses: +# 2001:0DB8:1::1:1 to 2001:0DB8:1::1:${IPV6_NEXTHOP_ADDR_COUNT} +SINK_LOOPBACK_IFACE="lo" +SINK_LOOPBACK_IP_MASK="128" +SINK_LOOPBACK_IP_ADDR="2001:0DB8:2::1:1" + +nexthop_ip_list="" +termination_signal="" +kernel_softlokup_panic_prev_val="" + +terminate_ns_processes_by_pattern() { + local ns=$1 + local pattern=$2 + + for pid in $(ip netns pids ${ns}); do + [ -e /proc/$pid/cmdline ] && grep -qe "${pattern}" /proc/$pid/cmdline && kill -9 $pid + done +} + +cleanup() { + echo "info: cleaning up namespaces and terminating all processes within them..." + + + # Terminate iperf3 instances running in the source_ns. To avoid race + # conditions, first iterate over the PIDs and terminate those + # associated with the bash shells running the + # `while true; do iperf3 -c ...; done` loops. In a second iteration, + # terminate the individual `iperf3 -c ...` instances. + terminate_ns_processes_by_pattern ${source_ns} while + terminate_ns_processes_by_pattern ${source_ns} iperf3 + + # Repeat the same process for sink_ns + terminate_ns_processes_by_pattern ${sink_ns} while + terminate_ns_processes_by_pattern ${sink_ns} iperf3 + + # Check if any iperf3 instances are still running. This could happen + # if a core has entered an infinite loop and the timeout for detecting + # the soft lockup has not expired, but either the test interval has + # already elapsed or the test was terminated manually (e.g., with ^C) + for pid in $(ip netns pids ${source_ns}); do + if [ -e /proc/$pid/cmdline ] && grep -qe 'iperf3' /proc/$pid/cmdline; then + echo "FAIL: unable to terminate some iperf3 instances. Soft lockup is underway. A kernel panic is on the way!" + exit ${ksft_fail} + fi + done + + if [ "$termination_signal" == "SIGINT" ]; then + echo "SKIP: Termination due to ^C (SIGINT)" + elif [ "$termination_signal" == "SIGALRM" ]; then + echo "PASS: No kernel soft lockup occurred during this ${TEST_DURATION} second test" + fi + + cleanup_ns ${source_ns} ${sink_ns} + + sysctl -qw kernel.softlockup_panic=${kernel_softlokup_panic_prev_val} +} + +setup_prepare() { + setup_ns source_ns sink_ns + + ip -n ${source_ns} link add name ${SOURCE_TEST_IFACE} type veth peer name ${SINK_TEST_IFACE} netns ${sink_ns} + + # Setting up the Source namespace + ip -n ${source_ns} addr add ${SOURCE_TEST_IP_ADDR} dev ${SOURCE_TEST_IFACE} + ip -n ${source_ns} link set dev ${SOURCE_TEST_IFACE} qlen 10000 + ip -n ${source_ns} link set dev ${SOURCE_TEST_IFACE} up + ip netns exec ${source_ns} sysctl -qw net.ipv6.fib_multipath_hash_policy=1 + + # Setting up the Sink namespace + ip -n ${sink_ns} addr add ${SINK_LOOPBACK_IP_ADDR}/${SINK_LOOPBACK_IP_MASK} dev ${SINK_LOOPBACK_IFACE} + ip -n ${sink_ns} link set dev ${SINK_LOOPBACK_IFACE} up + ip netns exec ${sink_ns} sysctl -qw net.ipv6.conf.${SINK_LOOPBACK_IFACE}.forwarding=1 + + ip -n ${sink_ns} link set ${SINK_TEST_IFACE} up + ip netns exec ${sink_ns} sysctl -qw net.ipv6.conf.${SINK_TEST_IFACE}.forwarding=1 + + + # Populate nexthop IPv6 addresses on the test interface in the sink_ns + echo "info: populating ${IPV6_NEXTHOP_ADDR_COUNT} IPv6 addresses on the ${SINK_TEST_IFACE} interface ..." 
+ for IP in $(seq 1 ${IPV6_NEXTHOP_ADDR_COUNT}); do + ip -n ${sink_ns} addr add ${IPV6_NEXTHOP_PREFIX}::$(printf "1:%x" "${IP}")/${IPV6_NEXTHOP_ADDR_MASK} dev ${SINK_TEST_IFACE}; + done + + # Preparing list of nexthops + for IP in $(seq 1 ${IPV6_NEXTHOP_ADDR_COUNT}); do + nexthop_ip_list=$nexthop_ip_list" nexthop via ${IPV6_NEXTHOP_PREFIX}::$(printf "1:%x" $IP) dev ${SOURCE_TEST_IFACE} weight 1" + done +} + + +test_soft_lockup_during_routing_table_refresh() { + # Start num_of_iperf_servers iperf3 servers in the sink_ns namespace, + # each listening on ports starting at 5001 and incrementing + # sequentially. Since iperf3 instances may terminate unexpectedly, a + # while loop is used to automatically restart them in such cases. + echo "info: starting ${num_of_iperf_servers} iperf3 servers in the sink_ns namespace ..." + for i in $(seq 1 ${num_of_iperf_servers}); do + cmd="iperf3 --bind ${SINK_LOOPBACK_IP_ADDR} -s -p $(printf '5%03d' ${i}) --rcv-timeout 200 &>/dev/null" + ip netns exec ${sink_ns} bash -c "while true; do ${cmd}; done &" &>/dev/null + done + + # Wait for the iperf3 servers to be ready + for i in $(seq ${num_of_iperf_servers}); do + port=$(printf '5%03d' ${i}); + wait_local_port_listen ${sink_ns} ${port} tcp + done + + # Continuously refresh the routing table in the background within + # the source_ns namespace + ip netns exec ${source_ns} bash -c " + while $(ip netns list | grep -q ${source_ns}); do + ip -6 route add ${SINK_LOOPBACK_IP_ADDR}/${SINK_LOOPBACK_IP_MASK} ${nexthop_ip_list}; + sleep ${ROUTING_TABLE_REFRESH_PERIOD}; + ip -6 route delete ${SINK_LOOPBACK_IP_ADDR}/${SINK_LOOPBACK_IP_MASK}; + done &" + + # Start num_of_iperf_servers iperf3 clients in the source_ns namespace, + # each sending TCP traffic on sequential ports starting at 5001. + # Since iperf3 instances may terminate unexpectedly (e.g., if the route + # to the server is deleted in the background during a route refresh), a + # while loop is used to automatically restart them in such cases. + echo "info: starting ${num_of_iperf_servers} iperf3 clients in the source_ns namespace ..." + for i in $(seq 1 ${num_of_iperf_servers}); do + cmd="iperf3 -c ${SINK_LOOPBACK_IP_ADDR} -p $(printf '5%03d' ${i}) --length 64 --bitrate ${IPERF3_BITRATE} -t 0 --connect-timeout 150 &>/dev/null" + ip netns exec ${source_ns} bash -c "while true; do ${cmd}; done &" &>/dev/null + done + + echo "info: IPv6 routing table is being updated at the rate of $(echo "1/${ROUTING_TABLE_REFRESH_PERIOD}" | bc)/s for ${TEST_DURATION} seconds ..." + echo "info: A kernel soft lockup, if detected, results in a kernel panic!" + + wait +} + +# Make sure 'iperf3' is installed, skip the test otherwise +if [ ! -x "$(command -v "iperf3")" ]; then + echo "SKIP: 'iperf3' is not installed. Skipping the test." + exit ${ksft_skip} +fi + +# Determine the number of cores on the machine +num_of_iperf_servers=$(( $(nproc)/2 )) + +# Check if we are running on a multi-core machine, skip the test otherwise +if [ "${num_of_iperf_servers}" -eq 0 ]; then + echo "SKIP: This test is not valid on a single core machine!" 
+ exit ${ksft_skip} +fi + +# Since the kernel soft lockup we're testing causes at least one core to enter +# an infinite loop, destabilizing the host and likely affecting subsequent +# tests, we trigger a kernel panic instead of reporting a failure and +# continuing +kernel_softlokup_panic_prev_val=$(sysctl -n kernel.softlockup_panic) +sysctl -qw kernel.softlockup_panic=1 + +handle_sigint() { + termination_signal="SIGINT" + cleanup + exit ${ksft_skip} +} + +handle_sigalrm() { + termination_signal="SIGALRM" + cleanup + exit ${ksft_pass} +} + +trap handle_sigint SIGINT +trap handle_sigalrm SIGALRM + +(sleep ${TEST_DURATION} && kill -s SIGALRM $$)& + +setup_prepare +test_soft_lockup_during_routing_table_refresh diff --git a/tools/testing/selftests/net/netfilter/conntrack_dump_flush.c b/tools/testing/selftests/net/netfilter/conntrack_dump_flush.c index 254ff03297f0..5f827e10717d 100644 --- a/tools/testing/selftests/net/netfilter/conntrack_dump_flush.c +++ b/tools/testing/selftests/net/netfilter/conntrack_dump_flush.c @@ -43,6 +43,8 @@ static int build_cta_tuple_v4(struct nlmsghdr *nlh, int type, mnl_attr_nest_end(nlh, nest_proto);
mnl_attr_nest_end(nlh, nest); + + return 0; }
static int build_cta_tuple_v6(struct nlmsghdr *nlh, int type, @@ -71,6 +73,8 @@ static int build_cta_tuple_v6(struct nlmsghdr *nlh, int type, mnl_attr_nest_end(nlh, nest_proto);
mnl_attr_nest_end(nlh, nest); + + return 0; }
static int build_cta_proto(struct nlmsghdr *nlh) @@ -90,6 +94,8 @@ static int build_cta_proto(struct nlmsghdr *nlh) mnl_attr_nest_end(nlh, nest_proto);
mnl_attr_nest_end(nlh, nest); + + return 0; }
static int conntrack_data_insert(struct mnl_socket *sock, struct nlmsghdr *nlh, diff --git a/tools/testing/selftests/net/pmtu.sh b/tools/testing/selftests/net/pmtu.sh index 569bce8b6383..6c651c880fe8 100755 --- a/tools/testing/selftests/net/pmtu.sh +++ b/tools/testing/selftests/net/pmtu.sh @@ -2056,7 +2056,7 @@ check_running() { pid=${1} cmd=${2}
- [ "$(cat /proc/${pid}/cmdline 2>/dev/null | tr -d '\0')" = "{cmd}" ] + [ "$(cat /proc/${pid}/cmdline 2>/dev/null | tr -d '\0')" = "${cmd}" ] }
test_cleanup_vxlanX_exception() { diff --git a/tools/testing/selftests/resctrl/fill_buf.c b/tools/testing/selftests/resctrl/fill_buf.c index ae120f1735c0..34e5df721430 100644 --- a/tools/testing/selftests/resctrl/fill_buf.c +++ b/tools/testing/selftests/resctrl/fill_buf.c @@ -127,7 +127,7 @@ unsigned char *alloc_buffer(size_t buf_size, int memflush) { void *buf = NULL; uint64_t *p64; - size_t s64; + ssize_t s64; int ret;
ret = posix_memalign(&buf, PAGE_SIZE, buf_size); diff --git a/tools/testing/selftests/resctrl/mbm_test.c b/tools/testing/selftests/resctrl/mbm_test.c index 6b5a3b52d861..cf08ba5e314e 100644 --- a/tools/testing/selftests/resctrl/mbm_test.c +++ b/tools/testing/selftests/resctrl/mbm_test.c @@ -40,7 +40,8 @@ show_bw_info(unsigned long *bw_imc, unsigned long *bw_resc, size_t span) ksft_print_msg("%s Check MBM diff within %d%%\n", ret ? "Fail:" : "Pass:", MAX_DIFF_PERCENT); ksft_print_msg("avg_diff_per: %d%%\n", avg_diff_per); - ksft_print_msg("Span (MB): %zu\n", span / MB); + if (span) + ksft_print_msg("Span (MB): %zu\n", span / MB); ksft_print_msg("avg_bw_imc: %lu\n", avg_bw_imc); ksft_print_msg("avg_bw_resc: %lu\n", avg_bw_resc);
@@ -138,15 +139,26 @@ static int mbm_run_test(const struct resctrl_test *test, const struct user_param .setup = mbm_setup, .measure = mbm_measure, }; + char *endptr = NULL; + size_t span = 0; int ret;
remove(RESULT_FILE_NAME);
+ if (uparams->benchmark_cmd[0] && strcmp(uparams->benchmark_cmd[0], "fill_buf") == 0) { + if (uparams->benchmark_cmd[1] && *uparams->benchmark_cmd[1] != '\0') { + errno = 0; + span = strtoul(uparams->benchmark_cmd[1], &endptr, 10); + if (errno || *endptr != '\0') + return -EINVAL; + } + } + ret = resctrl_val(test, uparams, uparams->benchmark_cmd, ¶m); if (ret) return ret;
- ret = check_results(DEFAULT_SPAN); + ret = check_results(span); if (ret && (get_vendor() == ARCH_INTEL)) ksft_print_msg("Intel MBM may be inaccurate when Sub-NUMA Clustering is enabled. Check BIOS configuration.\n");
diff --git a/tools/testing/selftests/resctrl/resctrl_val.c b/tools/testing/selftests/resctrl/resctrl_val.c index 8c275f6b4dd7..f118f659e896 100644 --- a/tools/testing/selftests/resctrl/resctrl_val.c +++ b/tools/testing/selftests/resctrl/resctrl_val.c @@ -83,13 +83,12 @@ void get_event_and_umask(char *cas_count_cfg, int count, bool op) char *token[MAX_TOKENS]; int i = 0;
- strcat(cas_count_cfg, ","); token[0] = strtok(cas_count_cfg, "=,");
for (i = 1; i < MAX_TOKENS; i++) token[i] = strtok(NULL, "=,");
- for (i = 0; i < MAX_TOKENS; i++) { + for (i = 0; i < MAX_TOKENS - 1; i++) { if (!token[i]) break; if (strcmp(token[i], "event") == 0) { diff --git a/tools/testing/selftests/vDSO/parse_vdso.c b/tools/testing/selftests/vDSO/parse_vdso.c index 7dd5668ea8a6..28f35620c499 100644 --- a/tools/testing/selftests/vDSO/parse_vdso.c +++ b/tools/testing/selftests/vDSO/parse_vdso.c @@ -222,8 +222,7 @@ void *vdso_sym(const char *version, const char *name) ELF(Sym) *sym = &vdso_info.symtab[chain];
/* Check for a defined global or weak function w/ right name. */ - if (ELF64_ST_TYPE(sym->st_info) != STT_FUNC && - ELF64_ST_TYPE(sym->st_info) != STT_NOTYPE) + if (ELF64_ST_TYPE(sym->st_info) != STT_FUNC) continue; if (ELF64_ST_BIND(sym->st_info) != STB_GLOBAL && ELF64_ST_BIND(sym->st_info) != STB_WEAK) diff --git a/tools/testing/selftests/wireguard/netns.sh b/tools/testing/selftests/wireguard/netns.sh index 405ff262ca93..55500f901fbc 100755 --- a/tools/testing/selftests/wireguard/netns.sh +++ b/tools/testing/selftests/wireguard/netns.sh @@ -332,6 +332,7 @@ waitiface $netns1 vethc waitiface $netns2 veths
n0 bash -c 'printf 1 > /proc/sys/net/ipv4/ip_forward' +[[ -e /proc/sys/net/netfilter/nf_conntrack_udp_timeout ]] || modprobe nf_conntrack n0 bash -c 'printf 2 > /proc/sys/net/netfilter/nf_conntrack_udp_timeout' n0 bash -c 'printf 2 > /proc/sys/net/netfilter/nf_conntrack_udp_timeout_stream' n0 iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -d 10.0.0.0/24 -j SNAT --to 10.0.0.1 diff --git a/tools/tracing/rtla/src/timerlat_hist.c b/tools/tracing/rtla/src/timerlat_hist.c index a3907c390d67..829511a71222 100644 --- a/tools/tracing/rtla/src/timerlat_hist.c +++ b/tools/tracing/rtla/src/timerlat_hist.c @@ -1064,7 +1064,7 @@ timerlat_hist_apply_config(struct osnoise_tool *tool, struct timerlat_hist_param * If the user did not specify a type of thread, try user-threads first. * Fall back to kernel threads otherwise. */ - if (!params->kernel_workload && !params->user_workload) { + if (!params->kernel_workload && !params->user_hist) { retval = tracefs_file_exists(NULL, "osnoise/per_cpu/cpu0/timerlat_fd"); if (retval) { debug_msg("User-space interface detected, setting user-threads\n"); diff --git a/tools/tracing/rtla/src/timerlat_top.c b/tools/tracing/rtla/src/timerlat_top.c index 210b0f533534..3b62519a412f 100644 --- a/tools/tracing/rtla/src/timerlat_top.c +++ b/tools/tracing/rtla/src/timerlat_top.c @@ -830,7 +830,7 @@ timerlat_top_apply_config(struct osnoise_tool *top, struct timerlat_top_params * * If the user did not specify a type of thread, try user-threads first. * Fall back to kernel threads otherwise. */ - if (!params->kernel_workload && !params->user_workload) { + if (!params->kernel_workload && !params->user_top) { retval = tracefs_file_exists(NULL, "osnoise/per_cpu/cpu0/timerlat_fd"); if (retval) { debug_msg("User-space interface detected, setting user-threads\n"); diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 6ca7a1045bbb..279e03029ce1 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -6561,106 +6561,3 @@ void kvm_exit(void) kvm_irqfd_exit(); } EXPORT_SYMBOL_GPL(kvm_exit); - -struct kvm_vm_worker_thread_context { - struct kvm *kvm; - struct task_struct *parent; - struct completion init_done; - kvm_vm_thread_fn_t thread_fn; - uintptr_t data; - int err; -}; - -static int kvm_vm_worker_thread(void *context) -{ - /* - * The init_context is allocated on the stack of the parent thread, so - * we have to locally copy anything that is needed beyond initialization - */ - struct kvm_vm_worker_thread_context *init_context = context; - struct task_struct *parent; - struct kvm *kvm = init_context->kvm; - kvm_vm_thread_fn_t thread_fn = init_context->thread_fn; - uintptr_t data = init_context->data; - int err; - - err = kthread_park(current); - /* kthread_park(current) is never supposed to return an error */ - WARN_ON(err != 0); - if (err) - goto init_complete; - - err = cgroup_attach_task_all(init_context->parent, current); - if (err) { - kvm_err("%s: cgroup_attach_task_all failed with err %d\n", - __func__, err); - goto init_complete; - } - - set_user_nice(current, task_nice(init_context->parent)); - -init_complete: - init_context->err = err; - complete(&init_context->init_done); - init_context = NULL; - - if (err) - goto out; - - /* Wait to be woken up by the spawner before proceeding. */ - kthread_parkme(); - - if (!kthread_should_stop()) - err = thread_fn(kvm, data); - -out: - /* - * Move kthread back to its original cgroup to prevent it lingering in - * the cgroup of the VM process, after the latter finishes its - * execution. 
- * - * kthread_stop() waits on the 'exited' completion condition which is - * set in exit_mm(), via mm_release(), in do_exit(). However, the - * kthread is removed from the cgroup in the cgroup_exit() which is - * called after the exit_mm(). This causes the kthread_stop() to return - * before the kthread actually quits the cgroup. - */ - rcu_read_lock(); - parent = rcu_dereference(current->real_parent); - get_task_struct(parent); - rcu_read_unlock(); - cgroup_attach_task_all(parent, current); - put_task_struct(parent); - - return err; -} - -int kvm_vm_create_worker_thread(struct kvm *kvm, kvm_vm_thread_fn_t thread_fn, - uintptr_t data, const char *name, - struct task_struct **thread_ptr) -{ - struct kvm_vm_worker_thread_context init_context = {}; - struct task_struct *thread; - - *thread_ptr = NULL; - init_context.kvm = kvm; - init_context.parent = current; - init_context.thread_fn = thread_fn; - init_context.data = data; - init_completion(&init_context.init_done); - - thread = kthread_run(kvm_vm_worker_thread, &init_context, - "%s-%d", name, task_pid_nr(current)); - if (IS_ERR(thread)) - return PTR_ERR(thread); - - /* kthread_run is never supposed to return NULL */ - WARN_ON(thread == NULL); - - wait_for_completion(&init_context.init_done); - - if (!init_context.err) - *thread_ptr = thread; - - return init_context.err; -}
On Thu, 2024-12-05 at 14:53 +0100, Greg Kroah-Hartman wrote:
I'm announcing the release of the 6.12.2 kernel.
6.12.1 works fine, but 6.12.2 generates lots of errors during boot. I tried several machines and all exhibit the same issue. This one is an older Lenovo laptop. They do boot and appear to work, but the boot errors are worrisome.
It's been several hours since 6.12.2 was released, so perhaps others are not seeing this problem?
Thought it best to share before I start to work on a bisect (a rough sketch of the bisect commands is below, after the trace). config and dmesg outputs attached.
The beginning of the logs are lost. The first trace I find in dmesg is below.
thanks
gene
[ 311.339926] ------------[ cut here ]------------
[ 311.339931] list_add corruption. prev->next should be next (ffffffff93c5d040), but was 0000000000000000. (prev=ffff922280ce0370).
[ 311.339946] WARNING: CPU: 5 PID: 810 at lib/list_debug.c:32 __list_add_valid_or_report+0x83/0xa0
...
[ 311.340080] CPU: 5 UID: 0 PID: 810 Comm: pool-spawner Tainted: G W 6.12.2-stable-1 #1 cb3f45e00c917cdbcd684527448a31ee00f68a7b
[ 311.340085] Tainted: [W]=WARN
[ 311.340086] Hardware name: LENOVO 20BGCTO1WW/20BGCTO1WW, BIOS GNET94WW (2.42 ) 06/02/2021
[ 311.340088] RIP: 0010:__list_add_valid_or_report+0x83/0xa0
[ 311.340093] Code: eb e9 48 89 c1 48 c7 c7 18 61 53 93 e8 76 26 a0 ff 0f 0b eb d6 48 89 d1 48 89 c6 4c 89 c2 48 c7 c7 68 61 53 93 e8 5d 26 a0 ff <0f> 0b eb bd 48 89 f2 48 89 c1 48 89 fe 48 c7 c7 b8 61 53 93 e8 44
[ 311.340095] RSP: 0018:ffffb91dc17fbc20 EFLAGS: 00010086
[ 311.340098] RAX: 0000000000000000 RBX: ffff9222dc49a9c0 RCX: 0000000000000027
[ 311.340100] RDX: ffff9227e62a18c8 RSI: 0000000000000001 RDI: ffff9227e62a18c0
[ 311.340101] RBP: ffff922280ce0370 R08: 0000000000000000 R09: ffffb91dc17fbaa0
[ 311.340103] R10: ffffffff93c66058 R11: 0000000000000003 R12: ffff9222dc49ad30
[ 311.340104] R13: 0000000000000100 R14: ffffb91dc17fbde0 R15: ffff92228ccc0d80
[ 311.340106] FS: 00007f69f79fe6c0(0000) GS:ffff9227e6280000(0000) knlGS:0000000000000000
[ 311.340108] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 311.340110] CR2: 00007f8d64bda998 CR3: 0000000103516002 CR4: 00000000001726f0
[ 311.340112] Call Trace:
[ 311.340114] <TASK>
[ 311.340115] ? __list_add_valid_or_report+0x83/0xa0
[ 311.340120] ? __warn.cold+0x93/0xf6
[ 311.340125] ? __list_add_valid_or_report+0x83/0xa0
[ 311.340132] ? report_bug+0xff/0x140
[ 311.340137] ? handle_bug+0x58/0x90
[ 311.340141] ? exc_invalid_op+0x17/0x70
[ 311.340144] ? asm_exc_invalid_op+0x1a/0x20
[ 311.340152] ? __list_add_valid_or_report+0x83/0xa0
[ 311.340156] scx_post_fork+0x5b/0x180
[ 311.340161] copy_process+0x2048/0x2710
[ 311.340164] ? __futex_wait+0x159/0x1c0
[ 311.340170] ? __pfx_futex_wake_mark+0x10/0x10
[ 311.340174] kernel_clone+0xbd/0x420
[ 311.340177] __do_sys_clone3+0xe4/0x130
[ 311.340181] do_syscall_64+0x82/0x160
[ 311.340184] ? do_syscall_64+0x8e/0x160
[ 311.340186] ? exc_page_fault+0x7e/0x180
[ 311.340190] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 311.340194] RIP: 0033:0x7f69f931348d
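(For reference, the bisect mentioned above would, assuming a clone of the linux-stable tree with the v6.12.1 and v6.12.2 tags available, look roughly like this; it is only a sketch, not what was actually run:)

  # v6.12.1 boots cleanly, v6.12.2 shows the list_add corruption warning
  git bisect start
  git bisect bad v6.12.2
  git bisect good v6.12.1
  # build and boot each commit git suggests, then mark it:
  #   git bisect good    # if it boots cleanly
  #   git bisect bad     # if the warning appears
  git bisect reset       # when finished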
On 2024-12-05 23:04, Genes Lists wrote:
On Thu, 2024-12-05 at 14:53 +0100, Greg Kroah-Hartman wrote:
I'm announcing the release of the 6.12.2 kernel.
6.12.1 works fine, but 6.12.2 generates lots of errors during boot. I tried several machines and all exhibit the same issue. This one is an older Lenovo laptop. They do boot and appear to work, but the boot errors are worrisome.
It's been several hours since 6.12.2 was released, so perhaps others are not seeing this problem?
As Linus has indicated in: https://lore.kernel.org/stable/CAHk-=whGd0dfaJNiWSR60HH5iwxqhUZPDWgHCQd446gH...
the problem is the missing commit b23decf8ac91 ("sched: Initialize idle tasks only once"). Applying that on top of 6.12.2 fixes the problem.
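For anyone who wants to carry the fix locally until the next stable release, a minimal sketch (assuming a checkout of the linux-stable tree, with mainline added as a remote so that the mainline commit id is reachable) is:

  # fetch mainline so commit b23decf8ac91 is available locally
  git remote add mainline https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
  git fetch mainline
  # start from the affected stable release and apply the missing fix
  git checkout -b 6.12.2-fixed v6.12.2
  git cherry-pick b23decf8ac91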
We just encountered this issue in Gentoo as well when releasing a new kernel, and adding that patch resolved it.
Thought it best to share before I start to work on bisect. config and dmesg outputs attached.
No need to, just add b23decf8ac91 :)
cheers Holger
On Fri, 2024-12-06 at 00:07 +0100, Holger Hoffstätte wrote:
6.12.1 works fine but 6.12.2 generates lots of errors during
...
As Linus has indicated in: https://lore.kernel.org/stable/CAHk-=whGd0dfaJNiWSR60HH5iwxqhUZPDWgHCQd446gH2Wu0yQ@mail.gmail.com/
the problem is the missing commit b23decf8ac91 ("sched: Initialize idle tasks only once"). Applying that on top of 6.12.2 fixes the problem.
We just encountered this issue in Gentoo as well when releasing a new kernel and adding that patch has resolved the issue.
Thought it best to share before I start to work on bisect....
No need to, just add b23decf8ac91 :)
cheers Holger
Excellent - missed that completely. I will stop bisecting now, thank you so much.
On Thu, Dec 05, 2024 at 06:24:36PM -0500, Genes Lists wrote:
On Fri, 2024-12-06 at 00:07 +0100, Holger Hoffstätte wrote:
6.12.1 works fine but 6.12.2 generates lots of errors during
...
As Linus has indicated in: https://lore.kernel.org/stable/CAHk-=whGd0dfaJNiWSR60HH5iwxqhUZPDWgHCQd446gH2Wu0yQ@mail.gmail.com/
the problem is the missing commit b23decf8ac91 ("sched: Initialize idle tasks only once"). Applying that on top of 6.12.2 fixes the problem.
We just encountered this issue in Gentoo as well when releasing a new kernel and adding that patch has resolved the issue.
Thought it best to share before I start to work on bisect....
No need to, just add b23decf8ac91 :)
cheers Holger
Excellent - missed that completely. I will stop bisecting now, thank you so much.
Sorry about this, I'll go push out a new kernel with this fix in it now.
greg k-h
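(For reference, once a follow-up stable release is out, one way to confirm it carries the fix, assuming a linux-stable checkout with the stable tree as "origin", is to search the 6.12.y branch for the backported subject line, since the backport gets its own commit id:)

  git fetch origin linux-6.12.y
  git log --oneline v6.12.2..origin/linux-6.12.y \
      --grep='sched: Initialize idle tasks only once'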