This is the start of the stable review cycle for the 6.1.29 release. There are 239 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 17 May 2023 16:16:37 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at:
	https://www.kernel.org/pub/linux/kernel/v6.x/stable-review/patch-6.1.29-rc1....
or in the git tree and branch at:
	git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-6.1.y
and the diffstat can be found below.
thanks,
greg k-h
-------------
Pseudo-Shortlog of commits:
Greg Kroah-Hartman gregkh@linuxfoundation.org Linux 6.1.29-rc1
Aurabindo Pillai aurabindo.pillai@amd.com drm/amd/display: Fix hang when skipping modeset
Christophe Leroy christophe.leroy@csgroup.eu spi: fsl-cpm: Use 16 bit mode for large transfers with even size
Christophe Leroy christophe.leroy@csgroup.eu spi: fsl-spi: Re-organise transfer bits_per_word adaptation
Linus Torvalds torvalds@linux-foundation.org x86: fix clear_user_rep_good() exception handling annotation
Mario Limonciello mario.limonciello@amd.com x86/amd_nb: Add PCI ID for family 19h model 78h
Chao Yu chao@kernel.org f2fs: inode: fix to do sanity check on extent cache correctly
Chao Yu chao@kernel.org f2fs: fix to do sanity check on extent cache correctly
Jani Nikula jani.nikula@intel.com drm/dsc: fix DP_DSC_MAX_BPP_DELTA_* macro values
Theodore Ts'o tytso@mit.edu ext4: fix invalid free tracking in ext4_xattr_move_to_block()
Theodore Ts'o tytso@mit.edu ext4: remove a BUG_ON in ext4_mb_release_group_pa()
Jan Kara jack@suse.cz ext4: fix lockdep warning when enabling MMP
Theodore Ts'o tytso@mit.edu ext4: bail out of ext4_xattr_ibody_get() fails for any reason
Theodore Ts'o tytso@mit.edu ext4: add bounds checking in get_max_inline_xattr_value_size()
Theodore Ts'o tytso@mit.edu ext4: fix deadlock when converting an inline directory in nojournal mode
Theodore Ts'o tytso@mit.edu ext4: improve error handling from ext4_dirhash()
Theodore Ts'o tytso@mit.edu ext4: improve error recovery code paths in __ext4_remount()
Baokun Li libaokun1@huawei.com ext4: check iomap type only if ext4_iomap_begin() does not fail
Jan Kara jack@suse.cz ext4: fix data races when using cached status extents
Tudor Ambarus tudor.ambarus@linaro.org ext4: avoid a potential slab-out-of-bounds in ext4_group_desc_csum
Ye Bin yebin10@huawei.com ext4: fix WARNING in mb_find_extent
John Stultz jstultz@google.com locking/rwsem: Add __always_inline annotation to __down_read_common() and inlined callers
Jani Nikula jani.nikula@intel.com drm/dsc: fix drm_edp_dsc_sink_output_bpp() DPCD high byte usage
Stanislav Lisovskiy stanislav.lisovskiy@intel.com drm: Add missing DP DSC extended capability definitions.
Namjae Jeon linkinjeon@kernel.org ksmbd: fix racy issue from smb2 close and logoff with multichannel
Namjae Jeon linkinjeon@kernel.org ksmbd: block asynchronous requests when making a delay on session setup
Namjae Jeon linkinjeon@kernel.org ksmbd: destroy expired sessions
Namjae Jeon linkinjeon@kernel.org ksmbd: fix racy issue from session setup and logoff
Dawei Li set_pte_at@outlook.com ksmbd: Implements sess->ksmbd_chann_list as xarray
Leo Chen sancchen@amd.com drm/amd/display: Change default Z8 watermark values
Nicholas Kazlauskas nicholas.kazlauskas@amd.com drm/amd/display: Update Z8 SR exit/enter latencies
Nicholas Kazlauskas nicholas.kazlauskas@amd.com drm/amd/display: Update Z8 watermarks for DCN314
Krzysztof Kozlowski krzysztof.kozlowski@linaro.org ASoC: codecs: wcd938x: fix accessing regmap on unattached devices
Krzysztof Kozlowski krzysztof.kozlowski@linaro.org ASoC: codecs: constify static sdw_slave_ops struct
Shuming Fan shumingf@realtek.com ASoC: rt1318: Add RT1318 SDCA vendor-specific driver
Leo Chen sancchen@amd.com drm/amd/display: Lowering min Z8 residency time
Nicholas Kazlauskas nicholas.kazlauskas@amd.com drm/amd/display: Update minimum stutter residency for DCN314 Z8
Nicholas Kazlauskas nicholas.kazlauskas@amd.com drm/amd/display: Add minimum Z8 residency debug option
Nicholas Kazlauskas nicholas.kazlauskas@amd.com drm/amd/display: Fix Z8 support configurations
Nicholas Kazlauskas nicholas.kazlauskas@amd.com drm/amd/display: Add debug option to skip PSR CRTC disable
Nicholas Kazlauskas nicholas.kazlauskas@amd.com drm/amd/display: Add Z8 allow states to z-state support list
Ian Chen ian.chen@amd.com drm/amd/display: Refactor eDP PSR codes
Ville Syrjälä ville.syrjala@linux.intel.com drm/i915: Check pipe source size when using skl+ scalers
Animesh Manna animesh.manna@intel.com drm/i915/mtl: update scaler source and destination limits for MTL
Sascha Hauer s.hauer@pengutronix.de wifi: rtw88: rtw8821c: Fix rfe_option field width
Jianmin Lv lvjianmin@loongson.cn irqchip/loongson-eiointc: Fix registration of syscore_ops
Jianmin Lv lvjianmin@loongson.cn irqchip/loongson-eiointc: Fix incorrect use of acpi_get_vec_parent
Huacai Chen chenhuacai@kernel.org irqchip/loongarch: Adjust acpi_cascade_irqdomain_init() and sub-routines
Johan Hovold johan+linaro@kernel.org drm/msm: fix missing wq allocation error handling
Rob Clark robdclark@chromium.org drm/msm: Hangcheck progress detection
Rob Clark robdclark@chromium.org drm/msm/adreno: Simplify read64/write64 helpers
Jaegeuk Kim jaegeuk@kernel.org f2fs: factor out victim_entry usage from general rb_tree use
Jaegeuk Kim jaegeuk@kernel.org f2fs: allocate the extent_cache by default
Jaegeuk Kim jaegeuk@kernel.org f2fs: refactor extent_cache to support for read and more
Jaegeuk Kim jaegeuk@kernel.org f2fs: remove unnecessary __init_extent_tree
Jaegeuk Kim jaegeuk@kernel.org f2fs: move internal functions into extent_cache.c
Jaegeuk Kim jaegeuk@kernel.org f2fs: specify extent cache for read explicitly
Konrad Dybcio konrad.dybcio@linaro.org drm/msm/adreno: adreno_gpu: Use suspend() instead of idle() on load error
Konstantin Komarov almaz.alexandrovich@paragon-software.com fs/ntfs3: Refactoring of various minor issues
Ping Cheng pinglinux@gmail.com HID: wacom: insert timestamp to packed Bluetooth (BT) events
Ping Cheng pinglinux@gmail.com HID: wacom: Set a default resolution for older tablets
Mario Limonciello mario.limonciello@amd.com drm/amd: Use `amdgpu_ucode_*` helpers for MES
Mario Limonciello mario.limonciello@amd.com drm/amd: Add a new helper for loading/validating microcode
Mario Limonciello mario.limonciello@amd.com drm/amd: Load MES microcode during early_init
Graham Sider Graham.Sider@amd.com drm/amdgpu: remove deprecated MES version vars
Guchun Chen guchun.chen@amd.com drm/amd/pm: avoid potential UBSAN issue on legacy asics
Guchun Chen guchun.chen@amd.com drm/amdgpu: disable sdma ecc irq only when sdma RAS is enabled in suspend
Guchun Chen guchun.chen@amd.com drm/amd/pm: parse pp_handle under appropriate conditions
Alvin Lee Alvin.Lee2@amd.com drm/amd/display: Enforce 60us prefetch for 200Mhz DCFCLK modes
Lin.Cao lincao12@amd.com drm/amdgpu: Fix vram recover doesn't work after whole GPU reset (v2)
Yifan Zhang yifan1.zhang@amd.com drm/amdgpu: change gfx 11.0.4 external_id range
Saleemkhan Jamadar saleemkhan.jamadar@amd.com drm/amdgpu/jpeg: Remove harvest checking for JPEG3
Guchun Chen guchun.chen@amd.com drm/amdgpu/gfx: disable gfx9 cp_ecc_error_irq only when enabling legacy gfx ras
Horatio Zhang Hongkun.Zhang@amd.com drm/amdgpu: fix amdgpu_irq_put call trace in gmc_v11_0_hw_fini
Hamza Mahfooz hamza.mahfooz@amd.com drm/amdgpu: fix an amdgpu_irq_put() issue in gmc_v9_0_hw_fini()
Horatio Zhang Hongkun.Zhang@amd.com drm/amdgpu: fix amdgpu_irq_put call trace in gmc_v10_0_hw_fini
Hamza Mahfooz hamza.mahfooz@amd.com drm/amd/display: fix flickering caused by S/G mode
Samson Tam Samson.Tam@amd.com drm/amd/display: filter out invalid bits in pipe_fuses
Nicholas Kazlauskas nicholas.kazlauskas@amd.com drm/amd/display: Fix 4to1 MPC black screen with DPP RCO
Nicholas Kazlauskas nicholas.kazlauskas@amd.com drm/amd/display: Add NULL plane_state check for cursor disable logic
James Cowgill james.cowgill@blaize.com drm/panel: otm8009a: Set backlight parent to panel device
Jianmin Lv lvjianmin@loongson.cn irqchip/loongson-eiointc: Fix returned value on parsing MADT
Jianmin Lv lvjianmin@loongson.cn irqchip/loongson-pch-pic: Fix pch_pic_acpi_init calling
Jaegeuk Kim jaegeuk@kernel.org f2fs: fix potential corruption when moving a directory
Jaegeuk Kim jaegeuk@kernel.org f2fs: fix null pointer panic in tracepoint in __replace_atomic_write_block
Hans de Goede hdegoede@redhat.com drm/i915/dsi: Use unconditional msleep() instead of intel_dsi_msleep()
Johan Hovold johan+linaro@kernel.org drm/msm: fix workqueue leak on bind errors
Johan Hovold johan+linaro@kernel.org drm/msm: fix vram leak on bind errors
Johan Hovold johan+linaro@kernel.org drm/msm: fix drm device leak on bind errors
Johan Hovold johan+linaro@kernel.org drm/msm: fix NULL-deref on irq uninstall
Johan Hovold johan+linaro@kernel.org drm/msm: fix NULL-deref on snapshot tear down
Chaitanya Kumar Borah chaitanya.kumar.borah@intel.com drm/i915/color: Fix typo for Plane CSC indexes
Francesco Dolcini francesco.dolcini@toradex.com drm/bridge: lt8912b: Fix DSI Video Mode
Johan Hovold johan+linaro@kernel.org drm/msm/adreno: fix runtime PM imbalance at gpu load
Zev Weiss zev@bewilderbeest.net ARM: dts: aspeed: romed8hm3: Fix GPIO polarity of system-fault LED
Krzysztof Kozlowski krzysztof.kozlowski@linaro.org ARM: dts: s5pv210: correct MIPI CSIS clock name
Krzysztof Kozlowski krzysztof.kozlowski@linaro.org ARM: dts: exynos: fix WM8960 clock name in Itop Elite
Zev Weiss zev@bewilderbeest.net ARM: dts: aspeed: asrock: Correct firmware flash SPI clocks
Luis Chamberlain mcgrof@kernel.org sysctl: clarify register_sysctl_init() base directory order
Mathieu Poirier mathieu.poirier@linaro.org remoteproc: rcar_rproc: Call of_node_put() on iteration error
Mathieu Poirier mathieu.poirier@linaro.org remoteproc: imx_rproc: Call of_node_put() on iteration error
Mathieu Poirier mathieu.poirier@linaro.org remoteproc: imx_dsp_rproc: Call of_node_put() on iteration error
Mathieu Poirier mathieu.poirier@linaro.org remoteproc: st: Call of_node_put() on iteration error
Mathieu Poirier mathieu.poirier@linaro.org remoteproc: stm32: Call of_node_put() on iteration error
Luis Chamberlain mcgrof@kernel.org proc_sysctl: enhance documentation
Luis Chamberlain mcgrof@kernel.org proc_sysctl: update docs for __register_sysctl_table()
Randy Dunlap rdunlap@infradead.org sh: nmi_debug: fix return value of __setup handler
Randy Dunlap rdunlap@infradead.org sh: init: use OF_EARLY_FLATTREE for early init
Randy Dunlap rdunlap@infradead.org sh: mcount.S: fix build error when PRINTK is not enabled
Randy Dunlap rdunlap@infradead.org sh: math-emu: fix macro redefined warning
Steve French stfrench@microsoft.com SMB3: force unmount was failing to close deferred close files
Steve French stfrench@microsoft.com smb3: fix problem remounting a share after shutdown
Jan Kara jack@suse.cz inotify: Avoid reporting event with invalid wd
Mark Pearson mpearson-lenovo@squebb.ca platform/x86: thinkpad_acpi: Add profile force ability
Andrey Avdeev jamesstoun@gmail.com platform/x86: touchscreen_dmi: Add info for the Dexp Ursus KX210i
Mark Pearson mpearson-lenovo@squebb.ca platform/x86: thinkpad_acpi: Fix platform profiles on T490
Hans de Goede hdegoede@redhat.com platform/x86: touchscreen_dmi: Add upside-down quirk for GDIX1002 ts on the Juno Tablet
Srinivas Pandruvada srinivas.pandruvada@linux.intel.com platform/x86/intel-uncore-freq: Return error on write frequency
Steve French stfrench@microsoft.com cifs: release leases for deferred close handles when freezing
Pawel Witek pawel.ireneusz.witek@gmail.com cifs: fix pcchunk length type in smb2_copychunk_range
Naohiro Aota Naohiro.Aota@wdc.com btrfs: zoned: fix full zone super block reading on ZNS
Naohiro Aota Naohiro.Aota@wdc.com btrfs: zoned: zone finish data relocation BG with last IO
Filipe Manana fdmanana@suse.com btrfs: fix space cache inconsistency after error loading it from disk
Anastasia Belova abelova@astralinux.ru btrfs: print-tree: parent bytenr must be aligned to sector size
Qu Wenruo wqu@suse.com btrfs: make clear_cache mount option to rebuild FST without disabling it
Christoph Hellwig hch@lst.de btrfs: zero the buffer before marking it dirty in btrfs_redirty_list_add
Josef Bacik josef@toxicpanda.com btrfs: don't free qgroup space unless specified
Boris Burkov boris@bur.io btrfs: fix encoded write i_size corruption with no-holes
xiaoshoukui xiaoshoukui@gmail.com btrfs: fix assertion of exclop condition when starting balance
Qu Wenruo wqu@suse.com btrfs: properly reject clear_cache and v1 cache for block-group-tree
Naohiro Aota naohiro.aota@wdc.com btrfs: zoned: fix wrong use of bitops API in btrfs_ensure_empty_zones
Filipe Manana fdmanana@suse.com btrfs: fix btrfs_prev_leaf() to not return the same key twice
Borislav Petkov (AMD) bp@alien8.de x86/retbleed: Fix return thunk alignment
Conor Dooley conor.dooley@microchip.com RISC-V: fix taking the text_mutex twice during sifive errata patching
Conor Dooley conor.dooley@microchip.com RISC-V: take text_mutex during alternative patching
Dmitrii Dolgov 9erthalion6@gmail.com perf stat: Separate bperf from bpf_profiler
Yang Jihong yangjihong1@huawei.com perf tracepoint: Fix memory leak in is_valid_tracepoint()
Yang Jihong yangjihong1@huawei.com perf symbols: Fix return incorrect build_id size in elf_read_build_id()
Olivier Bacon olivierb89@gmail.com crypto: engine - fix crypto_queue backlog handling
Herbert Xu herbert@gondor.apana.org.au crypto: engine - Use crypto_request_complete
Herbert Xu herbert@gondor.apana.org.au crypto: api - Add scaffolding to change completion function signature
Christophe JAILLET christophe.jaillet@wanadoo.fr crypto: sun8i-ss - Fix a test in sun8i_ss_setup_ivs()
James Clark james.clark@arm.com perf cs-etm: Fix timeless decode mode detection
Markus Elfring Markus.Elfring@web.de perf map: Delete two variable initialisations before null pointer checks in sort__sym_from_cmp()
Arnaldo Carvalho de Melo acme@redhat.com perf pmu: zfree() expects a pointer to a pointer to zero it after freeing its contents
Kajol Jain kjain@linux.ibm.com perf vendor events power9: Remove UTF-8 characters from JSON files
Yang Jihong yangjihong1@huawei.com perf ftrace: Make system wide the default target for latency subcommand
Patrice Duroux patrice.duroux@gmail.com perf tests record_offcpu.sh: Fix redirection of stderr to stdin
Thomas Richter tmricht@linux.ibm.com perf vendor events s390: Remove UTF-8 characters from JSON file
Roman Lozko lozko.roma@gmail.com perf scripts intel-pt-events.py: Fix IPC output for Python 2
Kan Liang kan.liang@linux.intel.com perf record: Fix "read LOST count failed" msg with sample read
Florian Fainelli f.fainelli@gmail.com net: bcmgenet: Remove phy_stop() from bcmgenet_netif_stop()
Wei Fang wei.fang@nxp.com net: enetc: check the index of the SFI rather than the handle
Wenliang Wang wangwenliang.1995@bytedance.com virtio_net: suppress cpu stall when free_unused_bufs
Michal Swiatkowski michal.swiatkowski@linux.intel.com ice: block LAN in case of VF to VF offload
Arınç ÜNAL arinc.unal@arinc9.com net: dsa: mt7530: fix network connectivity with multiple CPU ports
Daniel Golle daniel@makrotopia.org net: dsa: mt7530: split-off common parts from mt7531_setup
Arınç ÜNAL arinc.unal@arinc9.com net: dsa: mt7530: fix corrupt frames using trgmii on 40 MHz XTAL MT7621
Claudio Imbrenda imbrenda@linux.ibm.com KVM: s390: fix race in gmap_make_secure()
Ruliang Lin u202112092@hust.edu.cn ALSA: caiaq: input: Add error handling for unsupported input methods in `snd_usb_caiaq_input_init`
Chia-I Wu olvaffe@gmail.com drm/amdgpu: add a missing lock for AMDGPU_SCHED
Kuniyuki Iwashima kuniyu@amazon.com af_packet: Don't send zero-byte data in packet_sendmsg_spkt().
Shannon Nelson shannon.nelson@amd.com ionic: catch failure from devlink_alloc
Ido Schimmel idosch@nvidia.com ethtool: Fix uninitialized number of lanes
Shannon Nelson shannon.nelson@amd.com ionic: remove noise from ethtool rxnfc error msg
Subbaraya Sundeep sbhatta@marvell.com octeontx2-vf: Detach LF resources on probe cleanup
Subbaraya Sundeep sbhatta@marvell.com octeontx2-pf: Disable packet I/O for graceful exit
Ratheesh Kannoth rkannoth@marvell.com octeontx2-af: Skip PFs if not enabled
Ratheesh Kannoth rkannoth@marvell.com octeontx2-af: Fix issues with NPC field hash extract
Ratheesh Kannoth rkannoth@marvell.com octeontx2-af: Update/Fix NPC field hash extract feature
Suman Ghosh sumang@marvell.com octeontx2-pf: Add additional checks while configuring ucast/bcast/mcast rules
Suman Ghosh sumang@marvell.com octeontx2-af: Allow mkex profile without DMAC and add L2M/L2B header extraction support
Ratheesh Kannoth rkannoth@marvell.com octeontx2-pf: Increase the size of dmac filter flows
Ratheesh Kannoth rkannoth@marvell.com octeontx2-af: Fix depth of cam and mem table.
Ratheesh Kannoth rkannoth@marvell.com octeontx2-af: Fix start and end bit for scan config
Geetha sowjanya gakula@marvell.com octeontx2-af: Secure APR table update with the lock
Jeremy Sowden jeremy@azazel.net selftests: netfilter: fix libmnl pkg-config usage
Radhakrishna Sripada radhakrishna.sripada@intel.com drm/i915/mtl: Add the missing CPU transcoder mask in intel_device_info
Guo Ren guoren@kernel.org riscv: compat_syscall_table: Fixup compile warning
David Howells dhowells@redhat.com rxrpc: Fix hard call timeout units
Andy Moreton andy.moreton@amd.com sfc: Fix module EEPROM reporting for QSFP modules
Hayes Wang hayeswang@realtek.com r8152: move setting r8153b_rx_agg_chg_indicate()
Hayes Wang hayeswang@realtek.com r8152: fix the poor throughput for 2.5G devices
Hayes Wang hayeswang@realtek.com r8152: fix flow control issue of RTL8156A
Victor Nogueira victor@mojatatu.com net/sched: act_mirred: Add carrier check
Akhil R akhilrajeev@nvidia.com i2c: tegra: Fix PEC support for SMBUS block read
Sia Jee Heng jeeheng.sia@starfivetech.com RISC-V: mm: Enable huge page support to kernel_page_present() function
Christophe JAILLET christophe.jaillet@wanadoo.fr watchdog: dw_wdt: Fix the error handling path of dw_wdt_drv_probe()
Tao Su tao1.su@linux.intel.com block: Skip destroyed blkg when restart in blkg_destroy_all()
Maxim Korotkov korotkov.maxim.s@gmail.com writeback: fix call of incorrect macro
Angelo Dureghello angelo.dureghello@timesys.com net: dsa: mv88e6xxx: add mv88e6321 rsvd2cpu
Antoine Tenart atenart@kernel.org net: ipv6: fix skb hash for some RST packets
Andrea Mayer andrea.mayer@uniroma2.it selftests: srv6: make srv6_end_dt46_l3vpn_test more robust
Cong Wang cong.wang@bytedance.com sit: update dev->needed_headroom in ipip6_tunnel_bind_dev()
Vlad Buslov vladbu@nvidia.com net/sched: cls_api: remove block_cb from driver_list before freeing
Eric Dumazet edumazet@google.com tcp: fix skb_copy_ubufs() vs BIG TCP
Cosmo Chou chou.cosmo@gmail.com net/ncsi: clear Tx enable mode when handling a Config required AEN
Subbaraya Sundeep sbhatta@marvell.com octeontx2-pf: mcs: Do not reset PN while updating secy
Subbaraya Sundeep sbhatta@marvell.com octeontx2-pf: mcs: Fix shared counters logic
Subbaraya Sundeep sbhatta@marvell.com octeontx2-pf: mcs: Clear stats before freeing resource
Subbaraya Sundeep sbhatta@marvell.com octeontx2-pf: mcs: Match macsec ethertype along with DMAC
Subbaraya Sundeep sbhatta@marvell.com octeontx2-pf: mcs: Fix NULL pointer dereferences
Geetha sowjanya gakula@marvell.com octeontx2-af: mcs: Fix MCS block interrupt
Geetha sowjanya gakula@marvell.com octeontx2-af: mcs: Config parser to skip 8B header
Subbaraya Sundeep sbhatta@marvell.com octeontx2-af: mcs: Write TCAM_DATA and TCAM_MASK registers at once
Geetha sowjanya gakula@marvell.com octeonxt2-af: mcs: Fix per port bypass config
John Hickey jjh@daedalian.us ixgbe: Fix panic during XDP_TX with > 64 CPUs
Aurabindo Pillai aurabindo.pillai@amd.com drm/amd/display: Update bounding box values for DCN321
Aurabindo Pillai aurabindo.pillai@amd.com drm/amd/display: Do not clear GPINT register when releasing DMUB from reset
Cruise Hung Cruise.Hung@amd.com drm/amd/display: Reset OUTBOX0 r/w pointer on DMUB reset
Aurabindo Pillai aurabindo.pillai@amd.com drm/amd/display: Fixes for dcn32_clk_mgr implementation
Hersen Wu hersenxs.wu@amd.com drm/amd/display: Return error code on DSC atomic check failure
Rodrigo Siqueira Rodrigo.Siqueira@amd.com drm/amd/display: Add missing WA and MCLK validation
Rodrigo Siqueira Rodrigo.Siqueira@amd.com drm/amd/display: Remove FPU guards from the DML folder
Zheng Wang zyytlz.wz@163.com scsi: qedi: Fix use after free bug in qedi_remove()
Hans de Goede hdegoede@redhat.com ASoC: Intel: soc-acpi-byt: Fix "WM510205" match no longer working
Sean Christopherson seanjc@google.com KVM: x86/mmu: Refresh CR0.WP prior to checking for emulated permission faults
Mathias Krause minipli@grsecurity.net KVM: VMX: Make CR0.WP a guest owned bit
Mathias Krause minipli@grsecurity.net KVM: x86: Make use of kvm_read_cr*_bits() when testing bits
Mathias Krause minipli@grsecurity.net KVM: x86: Do not unload MMU roots when only toggling CR0.WP with TDP enabled
Paolo Bonzini pbonzini@redhat.com KVM: x86/mmu: Avoid indirect call for get_cr3
Ryan Lin tsung-hua.lin@amd.com drm/amd/display: Ext displays with dock can't recognized after resume
ZhangPeng zhangpeng362@huawei.com fs/ntfs3: Fix null-ptr-deref on inode->i_op in ntfs_lookup()
Takahiro Kuwano Takahiro.Kuwano@infineon.com mtd: spi-nor: spansion: Enable JFFS2 write buffer for Infineon s25hx SEMPER flash
Tanmay Shah tanmay.shah@amd.com mailbox: zynqmp: Fix counts of child nodes
Christophe JAILLET christophe.jaillet@wanadoo.fr mailbox: zynq: Switch to flexible array to simplify code
Manivannan Sadhasivam mani@kernel.org soc: qcom: llcc: Do not create EDAC platform device on SDM845
Manivannan Sadhasivam mani@kernel.org qcom: llcc/edac: Support polling mode for ECC handling
Takahiro Kuwano Takahiro.Kuwano@infineon.com mtd: spi-nor: spansion: Enable JFFS2 write buffer for Infineon s28hx SEMPER flash
Miquel Raynal miquel.raynal@bootlin.com mtd: spi-nor: Add a RWW flag
Sudip Mukherjee sudip.mukherjee@sifive.com mtd: spi-nor: add SFDP fixups for Quad Page Program
Takahiro Kuwano Takahiro.Kuwano@infineon.com mtd: spi-nor: spansion: Remove NO_SFDP_FLAGS from s28hs512t info
Sean Christopherson seanjc@google.com KVM: x86/pmu: Disallow legacy LBRs if architectural LBRs are available
Sean Christopherson seanjc@google.com KVM: x86: Track supported PERF_CAPABILITIES in kvm_caps
Sean Christopherson seanjc@google.com perf/x86/core: Zero @lbr instead of returning -1 in x86_perf_get_lbr() stub
Jeremi Piotrowski jpiotrowski@linux.microsoft.com crypto: ccp - Clear PSP interrupt status register before calling handler
Martin Krastev krastevm@vmware.com drm/vmwgfx: Fix Legacy Display Unit atomic drm support
Zack Rusin zackr@vmware.com drm/vmwgfx: Remove explicit and broken vblank handling
Wesley Cheng quic_wcheng@quicinc.com usb: dwc3: gadget: Execute gadget stop after halting the controller
Johan Hovold johan+linaro@kernel.org USB: dwc3: gadget: drop dead hibernation code
-------------
Diffstat:
Makefile | 4 +- arch/arm/boot/dts/aspeed-bmc-asrock-e3c246d4i.dts | 2 +- arch/arm/boot/dts/aspeed-bmc-asrock-romed8hm3.dts | 4 +- arch/arm/boot/dts/exynos4412-itop-elite.dts | 2 +- arch/arm/boot/dts/s5pv210.dtsi | 2 +- arch/riscv/errata/sifive/errata.c | 3 + arch/riscv/errata/thead/errata.c | 8 +- arch/riscv/kernel/Makefile | 1 + arch/riscv/kernel/cpufeature.c | 6 +- arch/riscv/mm/pageattr.c | 8 + arch/s390/kernel/uv.c | 32 +- arch/sh/Kconfig.debug | 2 +- arch/sh/kernel/head_32.S | 6 +- arch/sh/kernel/nmi_debug.c | 4 +- arch/sh/kernel/setup.c | 4 +- arch/sh/math-emu/sfp-util.h | 4 - arch/x86/events/intel/lbr.c | 6 +- arch/x86/include/asm/perf_event.h | 6 +- arch/x86/kernel/amd_nb.c | 2 + arch/x86/kvm/kvm_cache_regs.h | 2 +- arch/x86/kvm/mmu.h | 26 +- arch/x86/kvm/mmu/mmu.c | 46 +- arch/x86/kvm/mmu/paging_tmpl.h | 2 +- arch/x86/kvm/pmu.c | 4 +- arch/x86/kvm/svm/svm.c | 2 + arch/x86/kvm/vmx/capabilities.h | 24 - arch/x86/kvm/vmx/nested.c | 4 +- arch/x86/kvm/vmx/pmu_intel.c | 2 +- arch/x86/kvm/vmx/vmx.c | 42 +- arch/x86/kvm/vmx/vmx.h | 18 + arch/x86/kvm/x86.c | 12 + arch/x86/kvm/x86.h | 1 + arch/x86/lib/clear_page_64.S | 2 +- arch/x86/lib/retpoline.S | 4 +- block/blk-cgroup.c | 3 + crypto/algapi.c | 3 + crypto/crypto_engine.c | 10 +- .../crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c | 2 +- drivers/crypto/ccp/psp-dev.c | 6 +- drivers/edac/qcom_edac.c | 52 +- drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 6 +- drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c | 59 ++ drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h | 3 +- drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c | 6 +- drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c | 36 + drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h | 3 + drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 3 +- drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c | 1 - drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c | 1 - drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 1 - drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c | 1 + drivers/gpu/drm/amd/amdgpu/mes_v10_1.c | 111 +-- drivers/gpu/drm/amd/amdgpu/mes_v11_0.c | 102 +- drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c | 8 +- drivers/gpu/drm/amd/amdgpu/soc21.c | 2 +- drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 28 +- .../amd/display/amdgpu_dm/amdgpu_dm_mst_types.c | 1 + .../drm/amd/display/dc/clk_mgr/dcn31/dcn31_smu.c | 4 +- .../drm/amd/display/dc/clk_mgr/dcn314/dcn314_smu.c | 12 +- .../amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c | 5 + drivers/gpu/drm/amd/display/dc/core/dc_link.c | 2 +- drivers/gpu/drm/amd/display/dc/core/dc_resource.c | 3 + drivers/gpu/drm/amd/display/dc/dc.h | 5 +- drivers/gpu/drm/amd/display/dc/dc_link.h | 14 +- .../drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c | 16 +- drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c | 8 +- .../gpu/drm/amd/display/dc/dcn21/dcn21_resource.c | 5 +- .../gpu/drm/amd/display/dc/dcn30/dcn30_resource.c | 15 +- .../drm/amd/display/dc/dcn302/dcn302_resource.c | 14 +- .../drm/amd/display/dc/dcn303/dcn303_resource.c | 13 +- drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.c | 13 +- .../gpu/drm/amd/display/dc/dcn31/dcn31_resource.c | 4 + .../gpu/drm/amd/display/dc/dcn314/dcn314_dccg.c | 23 + .../gpu/drm/amd/display/dc/dcn314/dcn314_hwseq.c | 10 + .../gpu/drm/amd/display/dc/dcn314/dcn314_hwseq.h | 2 + .../gpu/drm/amd/display/dc/dcn314/dcn314_init.c | 1 + .../drm/amd/display/dc/dcn314/dcn314_resource.c | 6 + .../drm/amd/display/dc/dcn315/dcn315_resource.c | 4 + .../drm/amd/display/dc/dcn316/dcn316_resource.c | 4 + drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c | 1 + .../gpu/drm/amd/display/dc/dcn32/dcn32_resource.c | 12 +- 
.../drm/amd/display/dc/dcn321/dcn321_resource.c | 10 +- .../gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c | 21 +- .../gpu/drm/amd/display/dc/dml/dcn30/dcn30_fpu.c | 20 +- .../gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c | 4 +- .../gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c | 17 +- .../amd/display/dc/dml/dcn32/display_mode_vba_32.c | 5 +- .../amd/display/dc/dml/dcn32/display_mode_vba_32.h | 1 + .../gpu/drm/amd/display/dc/dml/dcn321/dcn321_fpu.c | 24 +- drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h | 23 +- .../drm/amd/display/dc/inc/hw_sequencer_private.h | 4 + drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c | 3 +- drivers/gpu/drm/amd/pm/amdgpu_dpm.c | 25 +- drivers/gpu/drm/bridge/lontium-lt8912b.c | 1 - drivers/gpu/drm/i915/display/icl_dsi.c | 2 +- drivers/gpu/drm/i915/display/intel_dsi_vbt.c | 11 - drivers/gpu/drm/i915/display/intel_dsi_vbt.h | 1 - drivers/gpu/drm/i915/display/skl_scaler.c | 57 +- drivers/gpu/drm/i915/display/vlv_dsi.c | 22 +- drivers/gpu/drm/i915/i915_pci.c | 2 + drivers/gpu/drm/i915/i915_reg.h | 4 +- drivers/gpu/drm/msm/adreno/a4xx_gpu.c | 3 +- drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 27 +- drivers/gpu/drm/msm/adreno/a5xx_preempt.c | 4 +- drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 58 +- drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c | 3 +- drivers/gpu/drm/msm/adreno/adreno_device.c | 16 +- drivers/gpu/drm/msm/msm_drv.c | 53 +- drivers/gpu/drm/msm/msm_drv.h | 8 +- drivers/gpu/drm/msm/msm_gpu.c | 31 +- drivers/gpu/drm/msm/msm_gpu.h | 22 +- drivers/gpu/drm/msm/msm_ringbuffer.h | 28 + drivers/gpu/drm/panel/panel-orisetech-otm8009a.c | 2 +- drivers/gpu/drm/vmwgfx/vmwgfx_drv.h | 3 - drivers/gpu/drm/vmwgfx/vmwgfx_kms.c | 96 +- drivers/gpu/drm/vmwgfx/vmwgfx_kms.h | 5 - drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c | 53 +- drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c | 31 +- drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c | 26 - drivers/hid/wacom_wac.c | 38 +- drivers/hid/wacom_wac.h | 1 + drivers/i2c/busses/i2c-tegra.c | 40 +- drivers/irqchip/irq-loongarch-cpu.c | 30 +- drivers/irqchip/irq-loongson-eiointc.c | 60 +- drivers/irqchip/irq-loongson-pch-pic.c | 18 +- drivers/mailbox/zynqmp-ipi-mailbox.c | 13 +- drivers/mtd/spi-nor/core.c | 12 + drivers/mtd/spi-nor/core.h | 6 + drivers/mtd/spi-nor/debugfs.c | 2 + drivers/mtd/spi-nor/issi.c | 1 + drivers/mtd/spi-nor/spansion.c | 35 +- drivers/net/dsa/mt7530.c | 113 ++- drivers/net/dsa/mv88e6xxx/chip.c | 1 + drivers/net/ethernet/broadcom/genet/bcmgenet.c | 1 - drivers/net/ethernet/freescale/enetc/enetc_qos.c | 2 +- drivers/net/ethernet/intel/ice/ice_tc_lib.c | 3 +- drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c | 3 - drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 6 +- drivers/net/ethernet/marvell/octeontx2/af/mbox.c | 5 +- drivers/net/ethernet/marvell/octeontx2/af/mbox.h | 33 +- drivers/net/ethernet/marvell/octeontx2/af/mcs.c | 110 +-- drivers/net/ethernet/marvell/octeontx2/af/mcs.h | 26 +- .../ethernet/marvell/octeontx2/af/mcs_cnf10kb.c | 63 ++ .../net/ethernet/marvell/octeontx2/af/mcs_reg.h | 6 +- .../net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c | 37 + drivers/net/ethernet/marvell/octeontx2/af/npc.h | 1 + drivers/net/ethernet/marvell/octeontx2/af/rvu.c | 49 +- drivers/net/ethernet/marvell/octeontx2/af/rvu.h | 2 + .../net/ethernet/marvell/octeontx2/af/rvu_cgx.c | 2 + .../net/ethernet/marvell/octeontx2/af/rvu_cn10k.c | 13 +- .../ethernet/marvell/octeontx2/af/rvu_debugfs.c | 11 +- .../net/ethernet/marvell/octeontx2/af/rvu_npc.c | 22 + .../net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c | 169 +++- .../net/ethernet/marvell/octeontx2/af/rvu_npc_fs.h | 4 + 
.../ethernet/marvell/octeontx2/af/rvu_npc_hash.c | 123 ++- .../ethernet/marvell/octeontx2/af/rvu_npc_hash.h | 10 +- .../ethernet/marvell/octeontx2/nic/cn10k_macsec.c | 48 +- .../ethernet/marvell/octeontx2/nic/otx2_common.h | 6 +- .../ethernet/marvell/octeontx2/nic/otx2_flows.c | 27 +- .../net/ethernet/marvell/octeontx2/nic/otx2_pf.c | 14 +- .../net/ethernet/marvell/octeontx2/nic/otx2_vf.c | 2 +- .../net/ethernet/pensando/ionic/ionic_devlink.c | 2 + .../net/ethernet/pensando/ionic/ionic_ethtool.c | 2 +- drivers/net/ethernet/sfc/mcdi_port_common.c | 11 +- drivers/net/usb/r8152.c | 84 +- drivers/net/virtio_net.c | 2 + drivers/net/wireless/realtek/rtw88/rtw8821c.c | 2 +- .../uncore-frequency/uncore-frequency-common.c | 6 +- drivers/platform/x86/thinkpad_acpi.c | 24 +- drivers/platform/x86/touchscreen_dmi.c | 41 + drivers/remoteproc/imx_dsp_rproc.c | 12 +- drivers/remoteproc/imx_rproc.c | 7 +- drivers/remoteproc/rcar_rproc.c | 9 +- drivers/remoteproc/st_remoteproc.c | 5 +- drivers/remoteproc/stm32_rproc.c | 6 +- drivers/scsi/qedi/qedi_main.c | 3 + drivers/soc/qcom/llcc-qcom.c | 15 +- drivers/spi/spi-fsl-cpm.c | 23 + drivers/spi/spi-fsl-spi.c | 49 +- drivers/usb/dwc3/gadget.c | 59 +- drivers/watchdog/dw_wdt.c | 7 +- fs/btrfs/block-rsv.c | 3 +- fs/btrfs/ctree.c | 32 +- fs/btrfs/disk-io.c | 25 +- fs/btrfs/file-item.c | 5 +- fs/btrfs/free-space-cache.c | 7 +- fs/btrfs/free-space-tree.c | 50 +- fs/btrfs/free-space-tree.h | 3 +- fs/btrfs/inode.c | 3 + fs/btrfs/ioctl.c | 4 +- fs/btrfs/print-tree.c | 6 +- fs/btrfs/super.c | 6 +- fs/btrfs/zoned.c | 17 +- fs/cifs/cifsfs.c | 16 + fs/cifs/connect.c | 7 + fs/cifs/smb2ops.c | 2 +- fs/ext4/balloc.c | 25 + fs/ext4/extents_status.c | 30 +- fs/ext4/hash.c | 6 +- fs/ext4/inline.c | 17 +- fs/ext4/inode.c | 2 +- fs/ext4/mballoc.c | 6 +- fs/ext4/mmp.c | 30 +- fs/ext4/namei.c | 53 +- fs/ext4/super.c | 19 +- fs/ext4/xattr.c | 5 +- fs/f2fs/data.c | 20 +- fs/f2fs/debug.c | 65 +- fs/f2fs/extent_cache.c | 585 ++++++----- fs/f2fs/f2fs.h | 215 ++-- fs/f2fs/file.c | 8 +- fs/f2fs/gc.c | 143 +-- fs/f2fs/gc.h | 14 +- fs/f2fs/inode.c | 44 +- fs/f2fs/namei.c | 20 +- fs/f2fs/node.c | 10 +- fs/f2fs/node.h | 2 +- fs/f2fs/segment.c | 11 +- fs/f2fs/shrinker.c | 19 +- fs/f2fs/super.c | 12 +- fs/fs-writeback.c | 2 +- fs/ksmbd/connection.c | 68 +- fs/ksmbd/connection.h | 58 +- fs/ksmbd/mgmt/tree_connect.c | 3 + fs/ksmbd/mgmt/user_session.c | 138 +-- fs/ksmbd/mgmt/user_session.h | 5 +- fs/ksmbd/server.c | 3 +- fs/ksmbd/smb2pdu.c | 122 +-- fs/ksmbd/smb2pdu.h | 2 + fs/ksmbd/transport_tcp.c | 2 +- fs/notify/inotify/inotify_fsnotify.c | 11 +- fs/ntfs3/bitmap.c | 3 +- fs/ntfs3/namei.c | 10 + fs/ntfs3/ntfs.h | 3 - fs/proc/proc_sysctl.c | 42 +- include/crypto/algapi.h | 7 + include/drm/display/drm_dp.h | 10 +- include/drm/display/drm_dp_helper.h | 5 +- include/linux/crypto.h | 6 + include/linux/pci_ids.h | 1 + include/trace/events/f2fs.h | 62 +- kernel/locking/rwsem.c | 8 +- net/core/skbuff.c | 20 +- net/ethtool/ioctl.c | 2 +- net/ipv6/sit.c | 8 +- net/ipv6/tcp_ipv6.c | 2 +- net/ncsi/ncsi-aen.c | 1 + net/packet/af_packet.c | 2 +- net/rxrpc/sendmsg.c | 2 +- net/sched/act_mirred.c | 2 +- net/sched/cls_api.c | 1 + sound/soc/codecs/Kconfig | 6 + sound/soc/codecs/Makefile | 2 + sound/soc/codecs/rt1316-sdw.c | 2 +- sound/soc/codecs/rt1318-sdw.c | 884 +++++++++++++++++ sound/soc/codecs/rt1318-sdw.h | 101 ++ sound/soc/codecs/rt711-sdca-sdw.c | 2 +- sound/soc/codecs/rt715-sdca-sdw.c | 2 +- sound/soc/codecs/wcd938x-sdw.c | 1039 +++++++++++++++++++- sound/soc/codecs/wcd938x.c | 1003 +------------------ 
sound/soc/codecs/wcd938x.h | 1 + sound/soc/codecs/wsa881x.c | 2 +- sound/soc/codecs/wsa883x.c | 2 +- sound/soc/intel/common/soc-acpi-intel-byt-match.c | 2 +- sound/usb/caiaq/input.c | 1 + tools/perf/builtin-ftrace.c | 6 +- tools/perf/builtin-record.c | 2 +- tools/perf/builtin-stat.c | 4 +- .../perf/pmu-events/arch/powerpc/power9/other.json | 4 +- .../pmu-events/arch/powerpc/power9/pipeline.json | 2 +- .../perf/pmu-events/arch/s390/cf_z16/extended.json | 10 +- tools/perf/scripts/python/intel-pt-events.py | 2 +- tools/perf/tests/shell/record_offcpu.sh | 2 +- tools/perf/util/cs-etm.c | 34 +- tools/perf/util/evsel.h | 5 + tools/perf/util/pmu.c | 2 +- tools/perf/util/sort.c | 3 +- tools/perf/util/symbol-elf.c | 2 +- tools/perf/util/tracepoint.c | 1 + .../selftests/net/srv6_end_dt46_l3vpn_test.sh | 10 +- tools/testing/selftests/netfilter/Makefile | 7 +- 281 files changed, 5468 insertions(+), 3039 deletions(-)
From: Johan Hovold johan+linaro@kernel.org
[ Upstream commit bdb19d01026a5cccfa437be8adcf2df472c5889e ]
The hibernation code is broken, has never been enabled in mainline, and should thus be dropped.
Remove the hibernation bits from the gadget code, which effectively reverts commits e1dadd3b0f27 ("usb: dwc3: workaround: bogus hibernation events") and 7b2a0368bbc9 ("usb: dwc3: gadget: set KEEP_CONNECT in case of hibernation") except for the spurious interrupt warning.
Acked-by: Thinh Nguyen Thinh.Nguyen@synopsys.com
Signed-off-by: Johan Hovold johan+linaro@kernel.org
Link: https://lore.kernel.org/r/20230404072524.19014-5-johan+linaro@kernel.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
Stable-dep-of: 39674be56fba ("usb: dwc3: gadget: Execute gadget stop after halting the controller")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/usb/dwc3/gadget.c | 46 +++++----------------------------------
 1 file changed, 6 insertions(+), 40 deletions(-)
diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c index d2622378ce040..2a6a5ffa54836 100644 --- a/drivers/usb/dwc3/gadget.c +++ b/drivers/usb/dwc3/gadget.c @@ -2468,7 +2468,7 @@ static void __dwc3_gadget_set_speed(struct dwc3 *dwc) dwc3_writel(dwc->regs, DWC3_DCFG, reg); }
-static int dwc3_gadget_run_stop(struct dwc3 *dwc, int is_on, int suspend) +static int dwc3_gadget_run_stop(struct dwc3 *dwc, int is_on) { u32 reg; u32 timeout = 2000; @@ -2487,17 +2487,11 @@ static int dwc3_gadget_run_stop(struct dwc3 *dwc, int is_on, int suspend) reg &= ~DWC3_DCTL_KEEP_CONNECT; reg |= DWC3_DCTL_RUN_STOP;
- if (dwc->has_hibernation) - reg |= DWC3_DCTL_KEEP_CONNECT; - __dwc3_gadget_set_speed(dwc); dwc->pullups_connected = true; } else { reg &= ~DWC3_DCTL_RUN_STOP;
- if (dwc->has_hibernation && !suspend) - reg &= ~DWC3_DCTL_KEEP_CONNECT; - dwc->pullups_connected = false; }
@@ -2579,7 +2573,7 @@ static int dwc3_gadget_soft_disconnect(struct dwc3 *dwc) * remaining event generated by the controller while polling for * DSTS.DEVCTLHLT. */ - return dwc3_gadget_run_stop(dwc, false, false); + return dwc3_gadget_run_stop(dwc, false); }
static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on) @@ -2633,7 +2627,7 @@ static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on)
dwc3_event_buffers_setup(dwc); __dwc3_gadget_start(dwc); - ret = dwc3_gadget_run_stop(dwc, true, false); + ret = dwc3_gadget_run_stop(dwc, true); }
pm_runtime_put(dwc->dev); @@ -4200,30 +4194,6 @@ static void dwc3_gadget_suspend_interrupt(struct dwc3 *dwc, dwc->link_state = next; }
-static void dwc3_gadget_hibernation_interrupt(struct dwc3 *dwc, - unsigned int evtinfo) -{ - unsigned int is_ss = evtinfo & BIT(4); - - /* - * WORKAROUND: DWC3 revision 2.20a with hibernation support - * have a known issue which can cause USB CV TD.9.23 to fail - * randomly. - * - * Because of this issue, core could generate bogus hibernation - * events which SW needs to ignore. - * - * Refers to: - * - * STAR#9000546576: Device Mode Hibernation: Issue in USB 2.0 - * Device Fallback from SuperSpeed - */ - if (is_ss ^ (dwc->speed == USB_SPEED_SUPER)) - return; - - /* enter hibernation here */ -} - static void dwc3_gadget_interrupt(struct dwc3 *dwc, const struct dwc3_event_devt *event) { @@ -4241,11 +4211,7 @@ static void dwc3_gadget_interrupt(struct dwc3 *dwc, dwc3_gadget_wakeup_interrupt(dwc); break; case DWC3_DEVICE_EVENT_HIBER_REQ: - if (dev_WARN_ONCE(dwc->dev, !dwc->has_hibernation, - "unexpected hibernation event\n")) - break; - - dwc3_gadget_hibernation_interrupt(dwc, event->event_info); + dev_WARN_ONCE(dwc->dev, true, "unexpected hibernation event\n"); break; case DWC3_DEVICE_EVENT_LINK_STATUS_CHANGE: dwc3_gadget_linksts_change_interrupt(dwc, event->event_info); @@ -4582,7 +4548,7 @@ int dwc3_gadget_suspend(struct dwc3 *dwc) if (!dwc->gadget_driver) return 0;
- dwc3_gadget_run_stop(dwc, false, false); + dwc3_gadget_run_stop(dwc, false);
spin_lock_irqsave(&dwc->lock, flags); dwc3_disconnect_gadget(dwc); @@ -4603,7 +4569,7 @@ int dwc3_gadget_resume(struct dwc3 *dwc) if (ret < 0) goto err0;
- ret = dwc3_gadget_run_stop(dwc, true, false); + ret = dwc3_gadget_run_stop(dwc, true); if (ret < 0) goto err1;
From: Wesley Cheng quic_wcheng@quicinc.com
[ Upstream commit 39674be56fba1cd3a03bf4617f523a35f85fd2c1 ]
Do not call gadget stop until the poll for controller halt has completed. DEVTEN is cleared as part of gadget stop, so the intent of letting ep0 events continue while waiting for the controller to halt was not being met.
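As a rough sketch of the ordering this establishes in dwc3_gadget_soft_disconnect() (helper names as in the message and hunks below; illustrative, not the literal patched function):

static int dwc3_gadget_soft_disconnect(struct dwc3 *dwc)
{
	unsigned long flags;
	int ret;

	spin_lock_irqsave(&dwc->lock, flags);
	dwc3_stop_active_transfers(dwc);
	spin_unlock_irqrestore(&dwc->lock, flags);

	/* Poll for DSTS.DEVCTLHLT; ep0 events may still be delivered here. */
	ret = dwc3_gadget_run_stop(dwc, false);

	/*
	 * Only stop the gadget (which clears DEVTEN) after the controller
	 * has halted, so EP0 state updates can still occur above.
	 */
	spin_lock_irqsave(&dwc->lock, flags);
	__dwc3_gadget_stop(dwc);
	spin_unlock_irqrestore(&dwc->lock, flags);

	return ret;
}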
Fixes: c96683798e27 ("usb: dwc3: ep0: Don't prepare beyond Setup stage")
Cc: stable@vger.kernel.org
Acked-by: Thinh Nguyen Thinh.Nguyen@synopsys.com
Signed-off-by: Wesley Cheng quic_wcheng@quicinc.com
Link: https://lore.kernel.org/r/20230420212759.29429-2-quic_wcheng@quicinc.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/usb/dwc3/gadget.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c index 2a6a5ffa54836..daa7673833557 100644 --- a/drivers/usb/dwc3/gadget.c +++ b/drivers/usb/dwc3/gadget.c @@ -2536,7 +2536,6 @@ static int dwc3_gadget_soft_disconnect(struct dwc3 *dwc) * bit. */ dwc3_stop_active_transfers(dwc); - __dwc3_gadget_stop(dwc); spin_unlock_irqrestore(&dwc->lock, flags);
/* @@ -2573,7 +2572,19 @@ static int dwc3_gadget_soft_disconnect(struct dwc3 *dwc) * remaining event generated by the controller while polling for * DSTS.DEVCTLHLT. */ - return dwc3_gadget_run_stop(dwc, false); + ret = dwc3_gadget_run_stop(dwc, false); + + /* + * Stop the gadget after controller is halted, so that if needed, the + * events to update EP0 state can still occur while the run/stop + * routine polls for the halted state. DEVTEN is cleared as part of + * gadget stop. + */ + spin_lock_irqsave(&dwc->lock, flags); + __dwc3_gadget_stop(dwc); + spin_unlock_irqrestore(&dwc->lock, flags); + + return ret; }
static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on)
From: Zack Rusin zackr@vmware.com
[ Upstream commit 2e10cdc6e85de5998b0b140deff01765ceb92f64 ]
The explicit vblank handling was never finished. The driver never had a full vblank implementation, and what it did provide is already emulated by DRM when a driver does not pretend to implement it itself.
Let DRM handle the vblank emulation and stop pretending the driver is doing anything special with vblank. In the future it would make sense to implement helpers for full vblank handling, since vkms and amdgpu_vkms already have that code; exporting it to common helpers and having all three drivers share it would largely just allow more of igt to run.
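For context, the stubs being removed (copied from the removal hunk below) never implemented real vblank support:

u32 vmw_get_vblank_counter(struct drm_crtc *crtc)
{
	return 0;
}

int vmw_enable_vblank(struct drm_crtc *crtc)
{
	return -EINVAL;
}

void vmw_disable_vblank(struct drm_crtc *crtc)
{
}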
Signed-off-by: Zack Rusin zackr@vmware.com
Reviewed-by: Maaz Mombasawala mombasawalam@vmware.com
Reviewed-by: Martin Krastev krastevm@vmware.com
Reviewed-by: Michael Banack banackm@vmware.com
Link: https://patchwork.freedesktop.org/patch/msgid/20221022040236.616490-15-zack@...
Stable-dep-of: a37a512db3fa ("drm/vmwgfx: Fix Legacy Display Unit atomic drm support")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/gpu/drm/vmwgfx/vmwgfx_drv.h  |  3 ---
 drivers/gpu/drm/vmwgfx/vmwgfx_kms.c  | 34 ----------------------------
 drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c  |  8 -------
 drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c | 31 +------------------------
 drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c | 26 ---------------------
 5 files changed, 1 insertion(+), 101 deletions(-)
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h index 0bc1ebc43002b..1ec9c53a7bf43 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h @@ -1221,9 +1221,6 @@ int vmw_kms_write_svga(struct vmw_private *vmw_priv, bool vmw_kms_validate_mode_vram(struct vmw_private *dev_priv, uint32_t pitch, uint32_t height); -u32 vmw_get_vblank_counter(struct drm_crtc *crtc); -int vmw_enable_vblank(struct drm_crtc *crtc); -void vmw_disable_vblank(struct drm_crtc *crtc); int vmw_kms_present(struct vmw_private *dev_priv, struct drm_file *file_priv, struct vmw_framebuffer *vfb, diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c index 13721bcf047c0..d5c4da2b05b1a 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c @@ -31,7 +31,6 @@ #include <drm/drm_fourcc.h> #include <drm/drm_rect.h> #include <drm/drm_sysfs.h> -#include <drm/drm_vblank.h>
#include "vmwgfx_kms.h"
@@ -832,15 +831,6 @@ void vmw_du_crtc_atomic_begin(struct drm_crtc *crtc, void vmw_du_crtc_atomic_flush(struct drm_crtc *crtc, struct drm_atomic_state *state) { - struct drm_pending_vblank_event *event = crtc->state->event; - - if (event) { - crtc->state->event = NULL; - - spin_lock_irq(&crtc->dev->event_lock); - drm_crtc_send_vblank_event(crtc, event); - spin_unlock_irq(&crtc->dev->event_lock); - } }
@@ -2158,30 +2148,6 @@ bool vmw_kms_validate_mode_vram(struct vmw_private *dev_priv, dev_priv->max_primary_mem : dev_priv->vram_size); }
- -/* - * Function called by DRM code called with vbl_lock held. - */ -u32 vmw_get_vblank_counter(struct drm_crtc *crtc) -{ - return 0; -} - -/* - * Function called by DRM code called with vbl_lock held. - */ -int vmw_enable_vblank(struct drm_crtc *crtc) -{ - return -EINVAL; -} - -/* - * Function called by DRM code called with vbl_lock held. - */ -void vmw_disable_vblank(struct drm_crtc *crtc) -{ -} - /** * vmw_du_update_layout - Update the display unit with topology from resolution * plugin and generate DRM uevent diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c index b8761f16dd785..a56e5d0ca3c65 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c @@ -28,7 +28,6 @@ #include <drm/drm_atomic.h> #include <drm/drm_atomic_helper.h> #include <drm/drm_fourcc.h> -#include <drm/drm_vblank.h>
#include "vmwgfx_kms.h"
@@ -235,9 +234,6 @@ static const struct drm_crtc_funcs vmw_legacy_crtc_funcs = { .atomic_duplicate_state = vmw_du_crtc_duplicate_state, .atomic_destroy_state = vmw_du_crtc_destroy_state, .set_config = drm_atomic_helper_set_config, - .get_vblank_counter = vmw_get_vblank_counter, - .enable_vblank = vmw_enable_vblank, - .disable_vblank = vmw_disable_vblank, };
@@ -507,10 +503,6 @@ int vmw_kms_ldu_init_display(struct vmw_private *dev_priv) dev_priv->ldu_priv->last_num_active = 0; dev_priv->ldu_priv->fb = NULL;
- ret = drm_vblank_init(dev, num_display_units); - if (ret != 0) - goto err_free; - vmw_kms_create_implicit_placement_property(dev_priv);
for (i = 0; i < num_display_units; ++i) { diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c b/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c index 9c79873f62f06..e1f36a09c59c1 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c @@ -29,7 +29,6 @@ #include <drm/drm_atomic_helper.h> #include <drm/drm_damage_helper.h> #include <drm/drm_fourcc.h> -#include <drm/drm_vblank.h>
#include "vmwgfx_kms.h"
@@ -320,9 +319,6 @@ static const struct drm_crtc_funcs vmw_screen_object_crtc_funcs = { .atomic_destroy_state = vmw_du_crtc_destroy_state, .set_config = drm_atomic_helper_set_config, .page_flip = drm_atomic_helper_page_flip, - .get_vblank_counter = vmw_get_vblank_counter, - .enable_vblank = vmw_enable_vblank, - .disable_vblank = vmw_disable_vblank, };
/* @@ -730,7 +726,6 @@ vmw_sou_primary_plane_atomic_update(struct drm_plane *plane, struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state, plane); struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, plane); struct drm_crtc *crtc = new_state->crtc; - struct drm_pending_vblank_event *event = NULL; struct vmw_fence_obj *fence = NULL; int ret;
@@ -754,24 +749,6 @@ vmw_sou_primary_plane_atomic_update(struct drm_plane *plane, return; }
- /* For error case vblank event is send from vmw_du_crtc_atomic_flush */ - event = crtc->state->event; - if (event && fence) { - struct drm_file *file_priv = event->base.file_priv; - - ret = vmw_event_fence_action_queue(file_priv, - fence, - &event->base, - &event->event.vbl.tv_sec, - &event->event.vbl.tv_usec, - true); - - if (unlikely(ret != 0)) - DRM_ERROR("Failed to queue event on fence.\n"); - else - crtc->state->event = NULL; - } - if (fence) vmw_fence_obj_unreference(&fence); } @@ -947,7 +924,7 @@ static int vmw_sou_init(struct vmw_private *dev_priv, unsigned unit) int vmw_kms_sou_init_display(struct vmw_private *dev_priv) { struct drm_device *dev = &dev_priv->drm; - int i, ret; + int i;
/* Screen objects won't work if GMR's aren't available */ if (!dev_priv->has_gmr) @@ -957,12 +934,6 @@ int vmw_kms_sou_init_display(struct vmw_private *dev_priv) return -ENOSYS; }
- ret = -ENOMEM; - - ret = drm_vblank_init(dev, VMWGFX_NUM_DISPLAY_UNITS); - if (unlikely(ret != 0)) - return ret; - for (i = 0; i < VMWGFX_NUM_DISPLAY_UNITS; ++i) vmw_sou_init(dev_priv, i);
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c b/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c index 8650c3aea8f0a..0090abe892548 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c @@ -29,7 +29,6 @@ #include <drm/drm_atomic_helper.h> #include <drm/drm_damage_helper.h> #include <drm/drm_fourcc.h> -#include <drm/drm_vblank.h>
#include "vmwgfx_kms.h" #include "vmw_surface_cache.h" @@ -925,9 +924,6 @@ static const struct drm_crtc_funcs vmw_stdu_crtc_funcs = { .atomic_destroy_state = vmw_du_crtc_destroy_state, .set_config = drm_atomic_helper_set_config, .page_flip = drm_atomic_helper_page_flip, - .get_vblank_counter = vmw_get_vblank_counter, - .enable_vblank = vmw_enable_vblank, - .disable_vblank = vmw_disable_vblank, };
@@ -1591,7 +1587,6 @@ vmw_stdu_primary_plane_atomic_update(struct drm_plane *plane, struct vmw_plane_state *vps = vmw_plane_state_to_vps(new_state); struct drm_crtc *crtc = new_state->crtc; struct vmw_screen_target_display_unit *stdu; - struct drm_pending_vblank_event *event; struct vmw_fence_obj *fence = NULL; struct vmw_private *dev_priv; int ret; @@ -1640,23 +1635,6 @@ vmw_stdu_primary_plane_atomic_update(struct drm_plane *plane, return; }
- /* In case of error, vblank event is send in vmw_du_crtc_atomic_flush */ - event = crtc->state->event; - if (event && fence) { - struct drm_file *file_priv = event->base.file_priv; - - ret = vmw_event_fence_action_queue(file_priv, - fence, - &event->base, - &event->event.vbl.tv_sec, - &event->event.vbl.tv_usec, - true); - if (ret) - DRM_ERROR("Failed to queue event on fence.\n"); - else - crtc->state->event = NULL; - } - if (fence) vmw_fence_obj_unreference(&fence); } @@ -1883,10 +1861,6 @@ int vmw_kms_stdu_init_display(struct vmw_private *dev_priv) if (!(dev_priv->capabilities & SVGA_CAP_GBOBJECTS)) return -ENOSYS;
- ret = drm_vblank_init(dev, VMWGFX_NUM_DISPLAY_UNITS); - if (unlikely(ret != 0)) - return ret; - dev_priv->active_display_unit = vmw_du_screen_target;
for (i = 0; i < VMWGFX_NUM_DISPLAY_UNITS; ++i) {
From: Martin Krastev krastevm@vmware.com
[ Upstream commit a37a512db3fa1b65fe9087003e5b2072cefb3667 ]
Legacy Display Unit (LDU) fb dirty support used a custom fb dirty callback. The latter handled only the DIRTYFB IOCTL presentation path, but not the ADDFB2/PAGE_FLIP/RMFB IOCTL path commonly used by Wayland compositors.
Get rid of the custom callback in favor of drm_atomic_helper_dirtyfb and unify the handling of the presentation paths inside of vmw_ldu_primary_plane_atomic_update. This also homogenizes the fb dirty callbacks across all DUs: LDU, SOU and STDU.
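The net effect on the buffer-object framebuffer funcs is simply pointing .dirty at the common helper (sketch matching the hunk below):

static const struct drm_framebuffer_funcs vmw_framebuffer_bo_funcs = {
	.create_handle = vmw_framebuffer_bo_create_handle,
	.destroy       = vmw_framebuffer_bo_destroy,
	/* One dirty path for LDU, SOU and STDU. */
	.dirty         = drm_atomic_helper_dirtyfb,
};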
Signed-off-by: Martin Krastev krastevm@vmware.com
Reviewed-by: Maaz Mombasawala mombasawalam@vmware.com
Fixes: 2f5544ff0300 ("drm/vmwgfx: Use atomic helper function for dirty fb IOCTL")
Cc: stable@vger.kernel.org # v5.0+
Signed-off-by: Zack Rusin zackr@vmware.com
Link: https://patchwork.freedesktop.org/patch/msgid/20230321020949.335012-3-zack@k...
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/gpu/drm/vmwgfx/vmwgfx_kms.c | 62 +----------------------------
 drivers/gpu/drm/vmwgfx/vmwgfx_kms.h |  5 ---
 drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c | 45 +++++++++++++++++----
 3 files changed, 38 insertions(+), 74 deletions(-)
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c index d5c4da2b05b1a..aab6389cb4aab 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c @@ -1264,70 +1264,10 @@ static void vmw_framebuffer_bo_destroy(struct drm_framebuffer *framebuffer) kfree(vfbd); }
-static int vmw_framebuffer_bo_dirty(struct drm_framebuffer *framebuffer, - struct drm_file *file_priv, - unsigned int flags, unsigned int color, - struct drm_clip_rect *clips, - unsigned int num_clips) -{ - struct vmw_private *dev_priv = vmw_priv(framebuffer->dev); - struct vmw_framebuffer_bo *vfbd = - vmw_framebuffer_to_vfbd(framebuffer); - struct drm_clip_rect norect; - int ret, increment = 1; - - drm_modeset_lock_all(&dev_priv->drm); - - if (!num_clips) { - num_clips = 1; - clips = &norect; - norect.x1 = norect.y1 = 0; - norect.x2 = framebuffer->width; - norect.y2 = framebuffer->height; - } else if (flags & DRM_MODE_FB_DIRTY_ANNOTATE_COPY) { - num_clips /= 2; - increment = 2; - } - - switch (dev_priv->active_display_unit) { - case vmw_du_legacy: - ret = vmw_kms_ldu_do_bo_dirty(dev_priv, &vfbd->base, 0, 0, - clips, num_clips, increment); - break; - default: - ret = -EINVAL; - WARN_ONCE(true, "Dirty called with invalid display system.\n"); - break; - } - - vmw_cmd_flush(dev_priv, false); - - drm_modeset_unlock_all(&dev_priv->drm); - - return ret; -} - -static int vmw_framebuffer_bo_dirty_ext(struct drm_framebuffer *framebuffer, - struct drm_file *file_priv, - unsigned int flags, unsigned int color, - struct drm_clip_rect *clips, - unsigned int num_clips) -{ - struct vmw_private *dev_priv = vmw_priv(framebuffer->dev); - - if (dev_priv->active_display_unit == vmw_du_legacy && - vmw_cmd_supported(dev_priv)) - return vmw_framebuffer_bo_dirty(framebuffer, file_priv, flags, - color, clips, num_clips); - - return drm_atomic_helper_dirtyfb(framebuffer, file_priv, flags, color, - clips, num_clips); -} - static const struct drm_framebuffer_funcs vmw_framebuffer_bo_funcs = { .create_handle = vmw_framebuffer_bo_create_handle, .destroy = vmw_framebuffer_bo_destroy, - .dirty = vmw_framebuffer_bo_dirty_ext, + .dirty = drm_atomic_helper_dirtyfb, };
/* diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.h b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.h index 85f86faa32439..b02d2793659f9 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.h +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.h @@ -517,11 +517,6 @@ void vmw_du_connector_destroy_state(struct drm_connector *connector, */ int vmw_kms_ldu_init_display(struct vmw_private *dev_priv); int vmw_kms_ldu_close_display(struct vmw_private *dev_priv); -int vmw_kms_ldu_do_bo_dirty(struct vmw_private *dev_priv, - struct vmw_framebuffer *framebuffer, - unsigned int flags, unsigned int color, - struct drm_clip_rect *clips, - unsigned int num_clips, int increment); int vmw_kms_update_proxy(struct vmw_resource *res, const struct drm_clip_rect *clips, unsigned num_clips, diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c index a56e5d0ca3c65..ac72c20715f32 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c @@ -234,6 +234,7 @@ static const struct drm_crtc_funcs vmw_legacy_crtc_funcs = { .atomic_duplicate_state = vmw_du_crtc_duplicate_state, .atomic_destroy_state = vmw_du_crtc_destroy_state, .set_config = drm_atomic_helper_set_config, + .page_flip = drm_atomic_helper_page_flip, };
@@ -273,6 +274,12 @@ static const struct drm_connector_helper_funcs vmw_ldu_connector_helper_funcs = { };
+static int vmw_kms_ldu_do_bo_dirty(struct vmw_private *dev_priv, + struct vmw_framebuffer *framebuffer, + unsigned int flags, unsigned int color, + struct drm_mode_rect *clips, + unsigned int num_clips); + /* * Legacy Display Plane Functions */ @@ -291,7 +298,6 @@ vmw_ldu_primary_plane_atomic_update(struct drm_plane *plane, struct drm_framebuffer *fb; struct drm_crtc *crtc = new_state->crtc ?: old_state->crtc;
- ldu = vmw_crtc_to_ldu(crtc); dev_priv = vmw_priv(plane->dev); fb = new_state->fb; @@ -304,8 +310,31 @@ vmw_ldu_primary_plane_atomic_update(struct drm_plane *plane, vmw_ldu_del_active(dev_priv, ldu);
vmw_ldu_commit_list(dev_priv); -}
+ if (vfb && vmw_cmd_supported(dev_priv)) { + struct drm_mode_rect fb_rect = { + .x1 = 0, + .y1 = 0, + .x2 = vfb->base.width, + .y2 = vfb->base.height + }; + struct drm_mode_rect *damage_rects = drm_plane_get_damage_clips(new_state); + u32 rect_count = drm_plane_get_damage_clips_count(new_state); + int ret; + + if (!damage_rects) { + damage_rects = &fb_rect; + rect_count = 1; + } + + ret = vmw_kms_ldu_do_bo_dirty(dev_priv, vfb, 0, 0, damage_rects, rect_count); + + drm_WARN_ONCE(plane->dev, ret, + "vmw_kms_ldu_do_bo_dirty failed with: ret=%d\n", ret); + + vmw_cmd_flush(dev_priv, false); + } +}
static const struct drm_plane_funcs vmw_ldu_plane_funcs = { .update_plane = drm_atomic_helper_update_plane, @@ -536,11 +565,11 @@ int vmw_kms_ldu_close_display(struct vmw_private *dev_priv) }
-int vmw_kms_ldu_do_bo_dirty(struct vmw_private *dev_priv, - struct vmw_framebuffer *framebuffer, - unsigned int flags, unsigned int color, - struct drm_clip_rect *clips, - unsigned int num_clips, int increment) +static int vmw_kms_ldu_do_bo_dirty(struct vmw_private *dev_priv, + struct vmw_framebuffer *framebuffer, + unsigned int flags, unsigned int color, + struct drm_mode_rect *clips, + unsigned int num_clips) { size_t fifo_size; int i; @@ -556,7 +585,7 @@ int vmw_kms_ldu_do_bo_dirty(struct vmw_private *dev_priv, return -ENOMEM;
memset(cmd, 0, fifo_size); - for (i = 0; i < num_clips; i++, clips += increment) { + for (i = 0; i < num_clips; i++, clips++) { cmd[i].header = SVGA_CMD_UPDATE; cmd[i].body.x = clips->x1; cmd[i].body.y = clips->y1;
From: Jeremi Piotrowski jpiotrowski@linux.microsoft.com
[ Upstream commit 45121ad4a1750ca47ce3f32bd434bdb0cdbf0043 ]
The PSP IRQ is edge-triggered (MSI or MSI-X) in all cases supported by the psp module, so clear the interrupt status register early in the handler to prevent missed interrupts. sev_irq_handler() calls wake_up() on a wait queue, which can result in a new command being submitted from a different CPU. That submission then races with the clearing of the interrupt status register and can result in missed interrupts. A missed interrupt leaves a command waiting until it times out, which in turn causes the psp to be declared dead.
This is unlikely on bare metal, but has been observed when running virtualized. In the cases where this is observed, sev->cmdresp_reg has PSP_CMDRESP_RESP set, which indicates that the command was processed correctly but no interrupt was asserted.
The full sequence of events looks like this:
CPU 1: submits SEV cmd #1
CPU 1: calls wait_event_timeout()
CPU 0: enters psp_irq_handler()
CPU 0: calls sev_handler()->wake_up()
CPU 1: wakes up; finishes processing cmd #1
CPU 1: submits SEV cmd #2
CPU 1: calls wait_event_timeout()
PSP: finishes processing cmd #2; interrupt status is still set; no interrupt
CPU 0: clears intsts
CPU 0: exits psp_irq_handler()
CPU 1: wait_event_timeout() times out; psp_dead=true
Fixes: 200664d5237f ("crypto: ccp: Add Secure Encrypted Virtualization (SEV) command support") Cc: stable@vger.kernel.org Signed-off-by: Jeremi Piotrowski jpiotrowski@linux.microsoft.com Acked-by: Tom Lendacky thomas.lendacky@amd.com Signed-off-by: Herbert Xu herbert@gondor.apana.org.au Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/crypto/ccp/psp-dev.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/crypto/ccp/psp-dev.c b/drivers/crypto/ccp/psp-dev.c index c9c741ac84421..949a3fa0b94a9 100644 --- a/drivers/crypto/ccp/psp-dev.c +++ b/drivers/crypto/ccp/psp-dev.c @@ -42,6 +42,9 @@ static irqreturn_t psp_irq_handler(int irq, void *data) /* Read the interrupt status: */ status = ioread32(psp->io_regs + psp->vdata->intsts_reg);
+ /* Clear the interrupt status by writing the same value we read. */ + iowrite32(status, psp->io_regs + psp->vdata->intsts_reg); + /* invoke subdevice interrupt handlers */ if (status) { if (psp->sev_irq_handler) @@ -51,9 +54,6 @@ static irqreturn_t psp_irq_handler(int irq, void *data) psp->tee_irq_handler(irq, psp->tee_irq_data, status); }
- /* Clear the interrupt status by writing the same value we read. */ - iowrite32(status, psp->io_regs + psp->vdata->intsts_reg); - return IRQ_HANDLED; }
From: Sean Christopherson seanjc@google.com
[ Upstream commit 0b9ca98b722969660ad98b39f766a561ccb39f5f ]
Drop the return value from x86_perf_get_lbr() and have the stub zero out the @lbr structure instead of returning -1 to indicate "no LBR support". KVM doesn't actually check the return value, and instead subtly relies on zeroing the number of LBRs in intel_pmu_init().
Formalize "nr=0 means unsupported" so that KVM doesn't need to add a pointless check on the return value to fix KVM's benign bug.
Note, the stub is necessary even though KVM x86 selects PERF_EVENTS and the caller exists only when CONFIG_KVM_INTEL=y. Despite the name, KVM_INTEL doesn't strictly require CPU_SUP_INTEL, it can be built with any of INTEL || CENTAUR || ZHAOXIN CPUs.
Signed-off-by: Sean Christopherson seanjc@google.com Message-Id: 20221006000314.73240-2-seanjc@google.com Signed-off-by: Paolo Bonzini pbonzini@redhat.com Stable-dep-of: 098f4c061ea1 ("KVM: x86/pmu: Disallow legacy LBRs if architectural LBRs are available") Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/events/intel/lbr.c | 6 +----- arch/x86/include/asm/perf_event.h | 6 +++--- arch/x86/kvm/vmx/capabilities.h | 3 ++- 3 files changed, 6 insertions(+), 9 deletions(-)
diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c index 8259d725054d0..4dbde69c423ba 100644 --- a/arch/x86/events/intel/lbr.c +++ b/arch/x86/events/intel/lbr.c @@ -1603,10 +1603,8 @@ void __init intel_pmu_arch_lbr_init(void) * x86_perf_get_lbr - get the LBR records information * * @lbr: the caller's memory to store the LBR records information - * - * Returns: 0 indicates the LBR info has been successfully obtained */ -int x86_perf_get_lbr(struct x86_pmu_lbr *lbr) +void x86_perf_get_lbr(struct x86_pmu_lbr *lbr) { int lbr_fmt = x86_pmu.intel_cap.lbr_format;
@@ -1614,8 +1612,6 @@ int x86_perf_get_lbr(struct x86_pmu_lbr *lbr) lbr->from = x86_pmu.lbr_from; lbr->to = x86_pmu.lbr_to; lbr->info = (lbr_fmt == LBR_FORMAT_INFO) ? x86_pmu.lbr_info : 0; - - return 0; } EXPORT_SYMBOL_GPL(x86_perf_get_lbr);
diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h index 9ac46dbe57d48..5d0f6891ae611 100644 --- a/arch/x86/include/asm/perf_event.h +++ b/arch/x86/include/asm/perf_event.h @@ -543,12 +543,12 @@ static inline void perf_check_microcode(void) { }
#if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_CPU_SUP_INTEL) extern struct perf_guest_switch_msr *perf_guest_get_msrs(int *nr, void *data); -extern int x86_perf_get_lbr(struct x86_pmu_lbr *lbr); +extern void x86_perf_get_lbr(struct x86_pmu_lbr *lbr); #else struct perf_guest_switch_msr *perf_guest_get_msrs(int *nr, void *data); -static inline int x86_perf_get_lbr(struct x86_pmu_lbr *lbr) +static inline void x86_perf_get_lbr(struct x86_pmu_lbr *lbr) { - return -1; + memset(lbr, 0, sizeof(*lbr)); } #endif
diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h index 07254314f3dd5..479124e49bbda 100644 --- a/arch/x86/kvm/vmx/capabilities.h +++ b/arch/x86/kvm/vmx/capabilities.h @@ -407,7 +407,8 @@ static inline u64 vmx_get_perf_capabilities(void) if (boot_cpu_has(X86_FEATURE_PDCM)) rdmsrl(MSR_IA32_PERF_CAPABILITIES, host_perf_cap);
- if (x86_perf_get_lbr(&lbr) >= 0 && lbr.nr) + x86_perf_get_lbr(&lbr); + if (lbr.nr) perf_cap |= host_perf_cap & PMU_CAP_LBR_FMT;
if (vmx_pebs_supported()) {
From: Sean Christopherson seanjc@google.com
[ Upstream commit bec46859fb9d797a21c983100b1f425bebe89747 ]
Track KVM's supported PERF_CAPABILITIES in kvm_caps instead of computing the supported capabilities on the fly every time. Using kvm_caps will also allow for future cleanups as the kvm_caps values can be used directly in common x86 code.
Signed-off-by: Sean Christopherson seanjc@google.com Acked-by: Like Xu likexu@tencent.com Message-Id: 20221006000314.73240-6-seanjc@google.com Signed-off-by: Paolo Bonzini pbonzini@redhat.com Stable-dep-of: 098f4c061ea1 ("KVM: x86/pmu: Disallow legacy LBRs if architectural LBRs are available") Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/kvm/svm/svm.c | 2 ++ arch/x86/kvm/vmx/capabilities.h | 25 ------------------------ arch/x86/kvm/vmx/pmu_intel.c | 2 +- arch/x86/kvm/vmx/vmx.c | 34 +++++++++++++++++++++++++++++---- arch/x86/kvm/x86.h | 1 + 5 files changed, 34 insertions(+), 30 deletions(-)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 9599931c7d572..fc1649b5931a4 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -2709,6 +2709,7 @@ static int svm_get_msr_feature(struct kvm_msr_entry *msr) msr->data |= MSR_AMD64_DE_CFG_LFENCE_SERIALIZE; break; case MSR_IA32_PERF_CAPABILITIES: + msr->data = kvm_caps.supported_perf_cap; return 0; default: return KVM_MSR_RET_INVALID; @@ -4888,6 +4889,7 @@ static __init void svm_set_cpu_caps(void) { kvm_set_cpu_caps();
+ kvm_caps.supported_perf_cap = 0; kvm_caps.supported_xss = 0;
/* CPUID 0x80000001 and 0x8000000A (SVM features) */ diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h index 479124e49bbda..cd2ac9536c998 100644 --- a/arch/x86/kvm/vmx/capabilities.h +++ b/arch/x86/kvm/vmx/capabilities.h @@ -395,31 +395,6 @@ static inline bool vmx_pebs_supported(void) return boot_cpu_has(X86_FEATURE_PEBS) && kvm_pmu_cap.pebs_ept; }
-static inline u64 vmx_get_perf_capabilities(void) -{ - u64 perf_cap = PMU_CAP_FW_WRITES; - struct x86_pmu_lbr lbr; - u64 host_perf_cap = 0; - - if (!enable_pmu) - return 0; - - if (boot_cpu_has(X86_FEATURE_PDCM)) - rdmsrl(MSR_IA32_PERF_CAPABILITIES, host_perf_cap); - - x86_perf_get_lbr(&lbr); - if (lbr.nr) - perf_cap |= host_perf_cap & PMU_CAP_LBR_FMT; - - if (vmx_pebs_supported()) { - perf_cap |= host_perf_cap & PERF_CAP_PEBS_MASK; - if ((perf_cap & PERF_CAP_PEBS_FORMAT) < 4) - perf_cap &= ~PERF_CAP_PEBS_BASELINE; - } - - return perf_cap; -} - static inline bool cpu_has_notify_vmexit(void) { return vmcs_config.cpu_based_2nd_exec_ctrl & diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c index 10b33da9bd058..9fabfe71fd879 100644 --- a/arch/x86/kvm/vmx/pmu_intel.c +++ b/arch/x86/kvm/vmx/pmu_intel.c @@ -631,7 +631,7 @@ static void intel_pmu_init(struct kvm_vcpu *vcpu) pmu->fixed_counters[i].current_config = 0; }
- vcpu->arch.perf_capabilities = vmx_get_perf_capabilities(); + vcpu->arch.perf_capabilities = kvm_caps.supported_perf_cap; lbr_desc->records.nr = 0; lbr_desc->event = NULL; lbr_desc->msr_passthrough = false; diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 4c9116d223df5..8ad5992f61340 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -1879,7 +1879,7 @@ static int vmx_get_msr_feature(struct kvm_msr_entry *msr) return 1; return vmx_get_vmx_msr(&vmcs_config.nested, msr->index, &msr->data); case MSR_IA32_PERF_CAPABILITIES: - msr->data = vmx_get_perf_capabilities(); + msr->data = kvm_caps.supported_perf_cap; return 0; default: return KVM_MSR_RET_INVALID; @@ -2058,7 +2058,7 @@ static u64 vmx_get_supported_debugctl(struct kvm_vcpu *vcpu, bool host_initiated (host_initiated || guest_cpuid_has(vcpu, X86_FEATURE_BUS_LOCK_DETECT))) debugctl |= DEBUGCTLMSR_BUS_LOCK_DETECT;
- if ((vmx_get_perf_capabilities() & PMU_CAP_LBR_FMT) && + if ((kvm_caps.supported_perf_cap & PMU_CAP_LBR_FMT) && (host_initiated || intel_pmu_lbr_is_enabled(vcpu))) debugctl |= DEBUGCTLMSR_LBR | DEBUGCTLMSR_FREEZE_LBRS_ON_PMI;
@@ -2371,14 +2371,14 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) return 1; if (data & PMU_CAP_LBR_FMT) { if ((data & PMU_CAP_LBR_FMT) != - (vmx_get_perf_capabilities() & PMU_CAP_LBR_FMT)) + (kvm_caps.supported_perf_cap & PMU_CAP_LBR_FMT)) return 1; if (!cpuid_model_is_consistent(vcpu)) return 1; } if (data & PERF_CAP_PEBS_FORMAT) { if ((data & PERF_CAP_PEBS_MASK) != - (vmx_get_perf_capabilities() & PERF_CAP_PEBS_MASK)) + (kvm_caps.supported_perf_cap & PERF_CAP_PEBS_MASK)) return 1; if (!guest_cpuid_has(vcpu, X86_FEATURE_DS)) return 1; @@ -7702,6 +7702,31 @@ static void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu) vmx_update_exception_bitmap(vcpu); }
+static u64 vmx_get_perf_capabilities(void) +{ + u64 perf_cap = PMU_CAP_FW_WRITES; + struct x86_pmu_lbr lbr; + u64 host_perf_cap = 0; + + if (!enable_pmu) + return 0; + + if (boot_cpu_has(X86_FEATURE_PDCM)) + rdmsrl(MSR_IA32_PERF_CAPABILITIES, host_perf_cap); + + x86_perf_get_lbr(&lbr); + if (lbr.nr) + perf_cap |= host_perf_cap & PMU_CAP_LBR_FMT; + + if (vmx_pebs_supported()) { + perf_cap |= host_perf_cap & PERF_CAP_PEBS_MASK; + if ((perf_cap & PERF_CAP_PEBS_FORMAT) < 4) + perf_cap &= ~PERF_CAP_PEBS_BASELINE; + } + + return perf_cap; +} + static __init void vmx_set_cpu_caps(void) { kvm_set_cpu_caps(); @@ -7724,6 +7749,7 @@ static __init void vmx_set_cpu_caps(void)
if (!enable_pmu) kvm_cpu_cap_clear(X86_FEATURE_PDCM); + kvm_caps.supported_perf_cap = vmx_get_perf_capabilities();
if (!enable_sgx) { kvm_cpu_cap_clear(X86_FEATURE_SGX); diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h index 829d3134c1eb0..9de72586f4065 100644 --- a/arch/x86/kvm/x86.h +++ b/arch/x86/kvm/x86.h @@ -27,6 +27,7 @@ struct kvm_caps { u64 supported_mce_cap; u64 supported_xcr0; u64 supported_xss; + u64 supported_perf_cap; };
void kvm_spurious_fault(void);
From: Sean Christopherson seanjc@google.com
[ Upstream commit 098f4c061ea10b777033b71c10bd9fd706820ee9 ]
Disallow enabling LBR support if the CPU supports architectural LBRs. Traditional LBR support is absent on CPU models that have architectural LBRs, and KVM doesn't yet support arch LBRs, i.e. KVM will pass through non-existent MSRs if userspace enables LBRs for the guest.
Cc: stable@vger.kernel.org Cc: Yang Weijiang weijiang.yang@intel.com Cc: Like Xu like.xu.linux@gmail.com Reported-by: Paolo Bonzini pbonzini@redhat.com Fixes: be635e34c284 ("KVM: vmx/pmu: Expose LBR_FMT in the MSR_IA32_PERF_CAPABILITIES") Tested-by: Like Xu likexu@tencent.com Link: https://lore.kernel.org/r/20230128001427.2548858-1-seanjc@google.com Signed-off-by: Sean Christopherson seanjc@google.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/kvm/vmx/vmx.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 8ad5992f61340..5db21d9ef6710 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -7714,9 +7714,11 @@ static u64 vmx_get_perf_capabilities(void) if (boot_cpu_has(X86_FEATURE_PDCM)) rdmsrl(MSR_IA32_PERF_CAPABILITIES, host_perf_cap);
- x86_perf_get_lbr(&lbr); - if (lbr.nr) - perf_cap |= host_perf_cap & PMU_CAP_LBR_FMT; + if (!cpu_feature_enabled(X86_FEATURE_ARCH_LBR)) { + x86_perf_get_lbr(&lbr); + if (lbr.nr) + perf_cap |= host_perf_cap & PMU_CAP_LBR_FMT; + }
if (vmx_pebs_supported()) { perf_cap |= host_perf_cap & PERF_CAP_PEBS_MASK;
From: Takahiro Kuwano Takahiro.Kuwano@infineon.com
[ Upstream commit db391efe765cc6cfc0ffc8d8ef146dc8e6816a7e ]
Read, Page Program, and Sector Erase settings are done in SFDP so we can remove NO_SFDP_FLAGS from s28hs512t info. Since the default_init() is no longer called after removing NO_SFDP_FLAGS, the initialization in the default_init() is moved to late_init().
Signed-off-by: Takahiro Kuwano Takahiro.Kuwano@infineon.com Signed-off-by: Tudor Ambarus tudor.ambarus@microchip.com Link: https://lore.kernel.org/r/12e468992f5d0cbd474abff3203100cc8163d4e5.166191556... Stable-dep-of: 9fd0945fe6fa ("mtd: spi-nor: spansion: Enable JFFS2 write buffer for Infineon s28hx SEMPER flash") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/mtd/spi-nor/spansion.c | 17 ++++++++--------- 1 file changed, 8 insertions(+), 9 deletions(-)
diff --git a/drivers/mtd/spi-nor/spansion.c b/drivers/mtd/spi-nor/spansion.c index 7ac2ad1a8d576..6bbbfc9c215b8 100644 --- a/drivers/mtd/spi-nor/spansion.c +++ b/drivers/mtd/spi-nor/spansion.c @@ -280,12 +280,6 @@ static int cypress_nor_octal_dtr_enable(struct spi_nor *nor, bool enable) cypress_nor_octal_dtr_dis(nor); }
-static void s28hs512t_default_init(struct spi_nor *nor) -{ - nor->params->octal_dtr_enable = cypress_nor_octal_dtr_enable; - nor->params->writesize = 16; -} - static void s28hs512t_post_sfdp_fixup(struct spi_nor *nor) { /* @@ -321,10 +315,16 @@ static int s28hs512t_post_bfpt_fixup(struct spi_nor *nor, return cypress_nor_set_page_size(nor); }
+static void s28hs512t_late_init(struct spi_nor *nor) +{ + nor->params->octal_dtr_enable = cypress_nor_octal_dtr_enable; + nor->params->writesize = 16; +} + static const struct spi_nor_fixups s28hs512t_fixups = { - .default_init = s28hs512t_default_init, .post_sfdp = s28hs512t_post_sfdp_fixup, .post_bfpt = s28hs512t_post_bfpt_fixup, + .late_init = s28hs512t_late_init, };
static int @@ -459,8 +459,7 @@ static const struct flash_info spansion_nor_parts[] = { { "cy15x104q", INFO6(0x042cc2, 0x7f7f7f, 512 * 1024, 1) FLAGS(SPI_NOR_NO_ERASE) }, { "s28hs512t", INFO(0x345b1a, 0, 256 * 1024, 256) - NO_SFDP_FLAGS(SECT_4K | SPI_NOR_OCTAL_DTR_READ | - SPI_NOR_OCTAL_DTR_PP) + PARSE_SFDP .fixups = &s28hs512t_fixups, }, };
From: Sudip Mukherjee sudip.mukherjee@sifive.com
[ Upstream commit 1799cd8540b67b88514c82f5fae1c75b986bcbd8 ]
The SFDP table of some flash chips does not advertise support for Quad Input Page Program even though the chip supports it. Use flags and add a hardware cap for these chips.
Signed-off-by: Sudip Mukherjee sudip.mukherjee@sifive.com [tudor.ambarus@microchip.com: move pp setting in spi_nor_init_default_params] Signed-off-by: Tudor Ambarus tudor.ambarus@microchip.com Link: https://lore.kernel.org/r/20220920184808.44876-2-sudip.mukherjee@sifive.com Stable-dep-of: 9fd0945fe6fa ("mtd: spi-nor: spansion: Enable JFFS2 write buffer for Infineon s28hx SEMPER flash") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/mtd/spi-nor/core.c | 6 ++++++ drivers/mtd/spi-nor/core.h | 2 ++ drivers/mtd/spi-nor/issi.c | 1 + 3 files changed, 9 insertions(+)
diff --git a/drivers/mtd/spi-nor/core.c b/drivers/mtd/spi-nor/core.c index 9a7bea365acb7..88da4a125c743 100644 --- a/drivers/mtd/spi-nor/core.c +++ b/drivers/mtd/spi-nor/core.c @@ -2578,6 +2578,12 @@ static void spi_nor_init_default_params(struct spi_nor *nor) params->hwcaps.mask |= SNOR_HWCAPS_PP; spi_nor_set_pp_settings(¶ms->page_programs[SNOR_CMD_PP], SPINOR_OP_PP, SNOR_PROTO_1_1_1); + + if (info->flags & SPI_NOR_QUAD_PP) { + params->hwcaps.mask |= SNOR_HWCAPS_PP_1_1_4; + spi_nor_set_pp_settings(¶ms->page_programs[SNOR_CMD_PP_1_1_4], + SPINOR_OP_PP_1_1_4, SNOR_PROTO_1_1_4); + } }
/** diff --git a/drivers/mtd/spi-nor/core.h b/drivers/mtd/spi-nor/core.h index 00bf0d0e955a0..8a846ad86d298 100644 --- a/drivers/mtd/spi-nor/core.h +++ b/drivers/mtd/spi-nor/core.h @@ -458,6 +458,7 @@ struct spi_nor_fixups { * SPI_NOR_NO_ERASE: no erase command needed. * NO_CHIP_ERASE: chip does not support chip erase. * SPI_NOR_NO_FR: can't do fastread. + * SPI_NOR_QUAD_PP: flash supports Quad Input Page Program. * * @no_sfdp_flags: flags that indicate support that can be discovered via SFDP. * Used when SFDP tables are not defined in the flash. These @@ -507,6 +508,7 @@ struct flash_info { #define SPI_NOR_NO_ERASE BIT(6) #define NO_CHIP_ERASE BIT(7) #define SPI_NOR_NO_FR BIT(8) +#define SPI_NOR_QUAD_PP BIT(9)
u8 no_sfdp_flags; #define SPI_NOR_SKIP_SFDP BIT(0) diff --git a/drivers/mtd/spi-nor/issi.c b/drivers/mtd/spi-nor/issi.c index 89a66a19d754f..7c8eee808dda6 100644 --- a/drivers/mtd/spi-nor/issi.c +++ b/drivers/mtd/spi-nor/issi.c @@ -73,6 +73,7 @@ static const struct flash_info issi_nor_parts[] = { { "is25wp256", INFO(0x9d7019, 0, 64 * 1024, 512) NO_SFDP_FLAGS(SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) FIXUP_FLAGS(SPI_NOR_4B_OPCODES) + FLAGS(SPI_NOR_QUAD_PP) .fixups = &is25lp256_fixups },
/* PMC */
From: Miquel Raynal miquel.raynal@bootlin.com
[ Upstream commit 4eddee70140b3ae183398b246a609756546c51f1 ]
Introduce a new (no SFDP) flag for the feature that we are about to support: Read While Write. This means, if the chip has several banks and supports RWW, once a page of data to write has been transferred into the chip's internal SRAM, another read operation happening on a different bank can be performed during the tPROG delay.
Signed-off-by: Miquel Raynal miquel.raynal@bootlin.com Link: https://lore.kernel.org/r/20230328154105.448540-7-miquel.raynal@bootlin.com Signed-off-by: Tudor Ambarus tudor.ambarus@linaro.org Stable-dep-of: 9fd0945fe6fa ("mtd: spi-nor: spansion: Enable JFFS2 write buffer for Infineon s28hx SEMPER flash") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/mtd/spi-nor/core.c | 3 +++ drivers/mtd/spi-nor/core.h | 3 +++ drivers/mtd/spi-nor/debugfs.c | 1 + 3 files changed, 7 insertions(+)
diff --git a/drivers/mtd/spi-nor/core.c b/drivers/mtd/spi-nor/core.c index 88da4a125c743..621e9ad4bcc39 100644 --- a/drivers/mtd/spi-nor/core.c +++ b/drivers/mtd/spi-nor/core.c @@ -2440,6 +2440,9 @@ static void spi_nor_init_flags(struct spi_nor *nor)
if (flags & NO_CHIP_ERASE) nor->flags |= SNOR_F_NO_OP_CHIP_ERASE; + + if (flags & SPI_NOR_RWW) + nor->flags |= SNOR_F_RWW; }
/** diff --git a/drivers/mtd/spi-nor/core.h b/drivers/mtd/spi-nor/core.h index 8a846ad86d298..f23d1e77199e5 100644 --- a/drivers/mtd/spi-nor/core.h +++ b/drivers/mtd/spi-nor/core.h @@ -130,6 +130,7 @@ enum spi_nor_option_flags { SNOR_F_IO_MODE_EN_VOLATILE = BIT(11), SNOR_F_SOFT_RESET = BIT(12), SNOR_F_SWP_IS_VOLATILE = BIT(13), + SNOR_F_RWW = BIT(14), };
struct spi_nor_read_command { @@ -459,6 +460,7 @@ struct spi_nor_fixups { * NO_CHIP_ERASE: chip does not support chip erase. * SPI_NOR_NO_FR: can't do fastread. * SPI_NOR_QUAD_PP: flash supports Quad Input Page Program. + * SPI_NOR_RWW: flash supports reads while write. * * @no_sfdp_flags: flags that indicate support that can be discovered via SFDP. * Used when SFDP tables are not defined in the flash. These @@ -509,6 +511,7 @@ struct flash_info { #define NO_CHIP_ERASE BIT(7) #define SPI_NOR_NO_FR BIT(8) #define SPI_NOR_QUAD_PP BIT(9) +#define SPI_NOR_RWW BIT(10)
u8 no_sfdp_flags; #define SPI_NOR_SKIP_SFDP BIT(0) diff --git a/drivers/mtd/spi-nor/debugfs.c b/drivers/mtd/spi-nor/debugfs.c index 5f56b23205d8b..8b4922a1aafb9 100644 --- a/drivers/mtd/spi-nor/debugfs.c +++ b/drivers/mtd/spi-nor/debugfs.c @@ -25,6 +25,7 @@ static const char *const snor_f_names[] = { SNOR_F_NAME(IO_MODE_EN_VOLATILE), SNOR_F_NAME(SOFT_RESET), SNOR_F_NAME(SWP_IS_VOLATILE), + SNOR_F_NAME(RWW), }; #undef SNOR_F_NAME
From: Takahiro Kuwano Takahiro.Kuwano@infineon.com
[ Upstream commit 9fd0945fe6fadfb6b54a9cd73be101c02b3e8134 ]
Infineon(Cypress) SEMPER NOR flash family has on-die ECC and its program granularity is 16-byte ECC data unit size. JFFS2 supports write buffer mode for ECC'd NOR flash. Provide a way to clear the MTD_BIT_WRITEABLE flag in order to enable JFFS2 write buffer mode support.
A new SNOR_F_ECC flag is introduced to determine if the part has on-die ECC and if it has, MTD_BIT_WRITEABLE is unset.
In vendor specific driver, a common cypress_nor_ecc_init() helper is added. This helper takes care for ECC related initialization for SEMPER flash family by setting up params->writesize and SNOR_F_ECC.
Fixes: c3266af101f2 ("mtd: spi-nor: spansion: add support for Cypress Semper flash") Suggested-by: Tudor Ambarus tudor.ambarus@linaro.org Signed-off-by: Takahiro Kuwano Takahiro.Kuwano@infineon.com Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/d586723f6f12aaff44fbcd7b51e674b47ed554ed.168076074... Signed-off-by: Tudor Ambarus tudor.ambarus@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/mtd/spi-nor/core.c | 3 +++ drivers/mtd/spi-nor/core.h | 1 + drivers/mtd/spi-nor/debugfs.c | 1 + drivers/mtd/spi-nor/spansion.c | 13 ++++++++++++- 4 files changed, 17 insertions(+), 1 deletion(-)
diff --git a/drivers/mtd/spi-nor/core.c b/drivers/mtd/spi-nor/core.c index 621e9ad4bcc39..dc4d86ceee447 100644 --- a/drivers/mtd/spi-nor/core.c +++ b/drivers/mtd/spi-nor/core.c @@ -2942,6 +2942,9 @@ static void spi_nor_set_mtd_info(struct spi_nor *nor) mtd->name = dev_name(dev); mtd->type = MTD_NORFLASH; mtd->flags = MTD_CAP_NORFLASH; + /* Unset BIT_WRITEABLE to enable JFFS2 write buffer for ECC'd NOR */ + if (nor->flags & SNOR_F_ECC) + mtd->flags &= ~MTD_BIT_WRITEABLE; if (nor->info->flags & SPI_NOR_NO_ERASE) mtd->flags |= MTD_NO_ERASE; else diff --git a/drivers/mtd/spi-nor/core.h b/drivers/mtd/spi-nor/core.h index f23d1e77199e5..290613fd63ae7 100644 --- a/drivers/mtd/spi-nor/core.h +++ b/drivers/mtd/spi-nor/core.h @@ -131,6 +131,7 @@ enum spi_nor_option_flags { SNOR_F_SOFT_RESET = BIT(12), SNOR_F_SWP_IS_VOLATILE = BIT(13), SNOR_F_RWW = BIT(14), + SNOR_F_ECC = BIT(15), };
struct spi_nor_read_command { diff --git a/drivers/mtd/spi-nor/debugfs.c b/drivers/mtd/spi-nor/debugfs.c index 8b4922a1aafb9..6d6bd559db8fd 100644 --- a/drivers/mtd/spi-nor/debugfs.c +++ b/drivers/mtd/spi-nor/debugfs.c @@ -26,6 +26,7 @@ static const char *const snor_f_names[] = { SNOR_F_NAME(SOFT_RESET), SNOR_F_NAME(SWP_IS_VOLATILE), SNOR_F_NAME(RWW), + SNOR_F_NAME(ECC), }; #undef SNOR_F_NAME
diff --git a/drivers/mtd/spi-nor/spansion.c b/drivers/mtd/spi-nor/spansion.c index 6bbbfc9c215b8..581f209c27347 100644 --- a/drivers/mtd/spi-nor/spansion.c +++ b/drivers/mtd/spi-nor/spansion.c @@ -212,6 +212,17 @@ static int cypress_nor_set_page_size(struct spi_nor *nor) return 0; }
+static void cypress_nor_ecc_init(struct spi_nor *nor) +{ + /* + * Programming is supported only in 16-byte ECC data unit granularity. + * Byte-programming, bit-walking, or multiple program operations to the + * same ECC data unit without an erase are not allowed. + */ + nor->params->writesize = 16; + nor->flags |= SNOR_F_ECC; +} + static int s25hx_t_post_bfpt_fixup(struct spi_nor *nor, const struct sfdp_parameter_header *bfpt_header, @@ -318,7 +329,7 @@ static int s28hs512t_post_bfpt_fixup(struct spi_nor *nor, static void s28hs512t_late_init(struct spi_nor *nor) { nor->params->octal_dtr_enable = cypress_nor_octal_dtr_enable; - nor->params->writesize = 16; + cypress_nor_ecc_init(nor); }
static const struct spi_nor_fixups s28hs512t_fixups = {
From: Manivannan Sadhasivam manivannan.sadhasivam@linaro.org
[ Upstream commit 721d3e91bfc93975c5e1a76c7d588dd8df5d82da ]
Not all Qcom platforms support IRQ mode for ECC handling. For those platforms, the current EDAC driver will not be probed due to missing ECC IRQ in devicetree.
So add support for polling mode so that the EDAC driver can be used on all Qcom platforms supporting LLCC.
The polling delay of 5000ms is chosen based on Qcom downstream/vendor driver.
Reported-by: Luca Weiss luca.weiss@fairphone.com Tested-by: Luca Weiss luca.weiss@fairphone.com Tested-by: Steev Klimaszewski steev@kali.org # Thinkpad X13s Tested-by: Andrew Halaney ahalaney@redhat.com # sa8540p-ride Reviewed-by: Borislav Petkov (AMD) bp@alien8.de Signed-off-by: Manivannan Sadhasivam manivannan.sadhasivam@linaro.org Signed-off-by: Bjorn Andersson andersson@kernel.org Link: https://lore.kernel.org/r/20230314080443.64635-14-manivannan.sadhasivam@lina... Stable-dep-of: cca94f1dd6d0 ("soc: qcom: llcc: Do not create EDAC platform device on SDM845") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/edac/qcom_edac.c | 50 +++++++++++++++++++++--------------- drivers/soc/qcom/llcc-qcom.c | 13 +++++----- 2 files changed, 35 insertions(+), 28 deletions(-)
diff --git a/drivers/edac/qcom_edac.c b/drivers/edac/qcom_edac.c index c45519f59dc11..2c91ceff8a9ca 100644 --- a/drivers/edac/qcom_edac.c +++ b/drivers/edac/qcom_edac.c @@ -76,6 +76,8 @@ #define DRP0_INTERRUPT_ENABLE BIT(6) #define SB_DB_DRP_INTERRUPT_ENABLE 0x3
+#define ECC_POLL_MSEC 5000 + enum { LLCC_DRAM_CE = 0, LLCC_DRAM_UE, @@ -285,8 +287,7 @@ dump_syn_reg(struct edac_device_ctl_info *edev_ctl, int err_type, u32 bank) return ret; }
-static irqreturn_t -llcc_ecc_irq_handler(int irq, void *edev_ctl) +static irqreturn_t llcc_ecc_irq_handler(int irq, void *edev_ctl) { struct edac_device_ctl_info *edac_dev_ctl = edev_ctl; struct llcc_drv_data *drv = edac_dev_ctl->dev->platform_data; @@ -332,6 +333,11 @@ llcc_ecc_irq_handler(int irq, void *edev_ctl) return irq_rc; }
+static void llcc_ecc_check(struct edac_device_ctl_info *edev_ctl) +{ + llcc_ecc_irq_handler(0, edev_ctl); +} + static int qcom_llcc_edac_probe(struct platform_device *pdev) { struct llcc_drv_data *llcc_driv_data = pdev->dev.platform_data; @@ -359,29 +365,31 @@ static int qcom_llcc_edac_probe(struct platform_device *pdev) edev_ctl->ctl_name = "llcc"; edev_ctl->panic_on_ue = LLCC_ERP_PANIC_ON_UE;
- rc = edac_device_add_device(edev_ctl); - if (rc) - goto out_mem; - - platform_set_drvdata(pdev, edev_ctl); - - /* Request for ecc irq */ + /* Check if LLCC driver has passed ECC IRQ */ ecc_irq = llcc_driv_data->ecc_irq; - if (ecc_irq < 0) { - rc = -ENODEV; - goto out_dev; - } - rc = devm_request_irq(dev, ecc_irq, llcc_ecc_irq_handler, + if (ecc_irq > 0) { + /* Use interrupt mode if IRQ is available */ + rc = devm_request_irq(dev, ecc_irq, llcc_ecc_irq_handler, IRQF_TRIGGER_HIGH, "llcc_ecc", edev_ctl); - if (rc) - goto out_dev; + if (!rc) { + edac_op_state = EDAC_OPSTATE_INT; + goto irq_done; + } + }
- return rc; + /* Fall back to polling mode otherwise */ + edev_ctl->poll_msec = ECC_POLL_MSEC; + edev_ctl->edac_check = llcc_ecc_check; + edac_op_state = EDAC_OPSTATE_POLL;
-out_dev: - edac_device_del_device(edev_ctl->dev); -out_mem: - edac_device_free_ctl_info(edev_ctl); +irq_done: + rc = edac_device_add_device(edev_ctl); + if (rc) { + edac_device_free_ctl_info(edev_ctl); + return rc; + } + + platform_set_drvdata(pdev, edev_ctl);
return rc; } diff --git a/drivers/soc/qcom/llcc-qcom.c b/drivers/soc/qcom/llcc-qcom.c index 9c6cf2f5d77ce..63a08635fc159 100644 --- a/drivers/soc/qcom/llcc-qcom.c +++ b/drivers/soc/qcom/llcc-qcom.c @@ -850,13 +850,12 @@ static int qcom_llcc_probe(struct platform_device *pdev) goto err;
drv_data->ecc_irq = platform_get_irq_optional(pdev, 0); - if (drv_data->ecc_irq >= 0) { - llcc_edac = platform_device_register_data(&pdev->dev, - "qcom_llcc_edac", -1, drv_data, - sizeof(*drv_data)); - if (IS_ERR(llcc_edac)) - dev_err(dev, "Failed to register llcc edac driver\n"); - } + + llcc_edac = platform_device_register_data(&pdev->dev, + "qcom_llcc_edac", -1, drv_data, + sizeof(*drv_data)); + if (IS_ERR(llcc_edac)) + dev_err(dev, "Failed to register llcc edac driver\n");
return 0; err:
From: Manivannan Sadhasivam manivannan.sadhasivam@linaro.org
[ Upstream commit cca94f1dd6d0a4c7e5c8190672f5747e3c00ddde ]
Platforms based on the SDM845 SoC lock access to the EDAC registers in the bootloader, so probing the EDAC driver results in a crash. Hence, disable the creation of the EDAC platform device on all SDM845 devices.
The issue has been observed on Lenovo Yoga C630 and DB845c.
While at it, also sort the members of `struct qcom_llcc_config` to avoid any holes in-between.
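The "holes" mentioned here are alignment padding; a minimal standalone sketch (userspace C with hypothetical member names, not the actual qcom_llcc_config layout) shows why sorting members by size avoids them:

#include <stdio.h>

struct holey {
	void *a;	/* offset 0 */
	char  b;	/* offset 8, followed by a 7-byte hole */
	void *c;	/* offset 16, pointers need 8-byte alignment */
	char  d;	/* offset 24, plus 7 bytes of tail padding */
};

struct sorted {
	void *a;	/* offset 0 */
	void *c;	/* offset 8 */
	char  b;	/* offset 16 */
	char  d;	/* offset 17, only 6 bytes of tail padding */
};

int main(void)
{
	/* Typically prints 32 vs 24 bytes on an LP64 target. */
	printf("holey : %zu bytes\n", sizeof(struct holey));
	printf("sorted: %zu bytes\n", sizeof(struct sorted));
	return 0;
}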
Cc: stable@vger.kernel.org # 5.10 Reported-by: Steev Klimaszewski steev@kali.org Signed-off-by: Manivannan Sadhasivam manivannan.sadhasivam@linaro.org Signed-off-by: Bjorn Andersson andersson@kernel.org Link: https://lore.kernel.org/r/20230314080443.64635-15-manivannan.sadhasivam@lina... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/soc/qcom/llcc-qcom.c | 24 +++++++++++++++++------- 1 file changed, 17 insertions(+), 7 deletions(-)
diff --git a/drivers/soc/qcom/llcc-qcom.c b/drivers/soc/qcom/llcc-qcom.c index 63a08635fc159..d4cba3b3c56c4 100644 --- a/drivers/soc/qcom/llcc-qcom.c +++ b/drivers/soc/qcom/llcc-qcom.c @@ -101,10 +101,11 @@ struct llcc_slice_config {
struct qcom_llcc_config { const struct llcc_slice_config *sct_data; - int size; - bool need_llcc_cfg; const u32 *reg_offset; const struct llcc_edac_reg_offset *edac_reg_offset; + int size; + bool need_llcc_cfg; + bool no_edac; };
enum llcc_reg_offset { @@ -401,6 +402,7 @@ static const struct qcom_llcc_config sdm845_cfg = { .need_llcc_cfg = false, .reg_offset = llcc_v1_reg_offset, .edac_reg_offset = &llcc_v1_edac_reg_offset, + .no_edac = true, };
static const struct qcom_llcc_config sm6350_cfg = { @@ -851,11 +853,19 @@ static int qcom_llcc_probe(struct platform_device *pdev)
drv_data->ecc_irq = platform_get_irq_optional(pdev, 0);
- llcc_edac = platform_device_register_data(&pdev->dev, - "qcom_llcc_edac", -1, drv_data, - sizeof(*drv_data)); - if (IS_ERR(llcc_edac)) - dev_err(dev, "Failed to register llcc edac driver\n"); + /* + * On some platforms, the access to EDAC registers will be locked by + * the bootloader. So probing the EDAC driver will result in a crash. + * Hence, disable the creation of EDAC platform device for the + * problematic platforms. + */ + if (!cfg->no_edac) { + llcc_edac = platform_device_register_data(&pdev->dev, + "qcom_llcc_edac", -1, drv_data, + sizeof(*drv_data)); + if (IS_ERR(llcc_edac)) + dev_err(dev, "Failed to register llcc edac driver\n"); + }
return 0; err:
From: Christophe JAILLET christophe.jaillet@wanadoo.fr
[ Upstream commit 043f85ce81cb1714e14d31c322c5646513dde3fb ]
Using a flexible array is more straightforward. It:
  - saves 1 pointer in the 'zynqmp_ipi_pdata' structure
  - saves an indirection when using this array
  - saves some LoC and avoids some always spurious pointer arithmetic
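A minimal standalone sketch of the flexible-array pattern being adopted here (userspace C with hypothetical names; the kernel code uses devm_kzalloc() with struct_size() rather than the open-coded size below):

#include <stdio.h>
#include <stdlib.h>

struct mbox {
	int id;
};

struct pdata {
	int num_mboxes;
	struct mbox ipi_mboxes[];	/* flexible array member, must be last */
};

int main(void)
{
	int num_mboxes = 4;
	/* One allocation covers the header and the trailing array; no extra
	 * pointer member and no manual pointer arithmetic are needed. */
	struct pdata *pdata = calloc(1, sizeof(*pdata) +
				     num_mboxes * sizeof(pdata->ipi_mboxes[0]));
	if (!pdata)
		return 1;

	pdata->num_mboxes = num_mboxes;
	for (int i = 0; i < pdata->num_mboxes; i++)
		pdata->ipi_mboxes[i].id = i;	/* direct indexing, no casts */

	printf("last mbox id: %d\n", pdata->ipi_mboxes[pdata->num_mboxes - 1].id);
	free(pdata);
	return 0;
}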
Signed-off-by: Christophe JAILLET christophe.jaillet@wanadoo.fr Signed-off-by: Jassi Brar jaswinder.singh@linaro.org Stable-dep-of: f72f805e7288 ("mailbox: zynqmp: Fix counts of child nodes") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/mailbox/zynqmp-ipi-mailbox.c | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/drivers/mailbox/zynqmp-ipi-mailbox.c b/drivers/mailbox/zynqmp-ipi-mailbox.c index e02a4a18e8c29..29f09ded6e739 100644 --- a/drivers/mailbox/zynqmp-ipi-mailbox.c +++ b/drivers/mailbox/zynqmp-ipi-mailbox.c @@ -110,7 +110,7 @@ struct zynqmp_ipi_pdata { unsigned int method; u32 local_id; int num_mboxes; - struct zynqmp_ipi_mbox *ipi_mboxes; + struct zynqmp_ipi_mbox ipi_mboxes[]; };
static struct device_driver zynqmp_ipi_mbox_driver = { @@ -635,7 +635,7 @@ static int zynqmp_ipi_probe(struct platform_device *pdev) int num_mboxes, ret = -EINVAL;
num_mboxes = of_get_child_count(np); - pdata = devm_kzalloc(dev, sizeof(*pdata) + (num_mboxes * sizeof(*mbox)), + pdata = devm_kzalloc(dev, struct_size(pdata, ipi_mboxes, num_mboxes), GFP_KERNEL); if (!pdata) return -ENOMEM; @@ -649,8 +649,6 @@ static int zynqmp_ipi_probe(struct platform_device *pdev) }
pdata->num_mboxes = num_mboxes; - pdata->ipi_mboxes = (struct zynqmp_ipi_mbox *) - ((char *)pdata + sizeof(*pdata));
mbox = pdata->ipi_mboxes; for_each_available_child_of_node(np, nc) {
From: Tanmay Shah tanmay.shah@amd.com
[ Upstream commit f72f805e72882c361e2a612c64a6e549f3da7152 ]
If a child mailbox node's status is disabled, it causes a crash in the interrupt handler. Fix this by assigning only available child nodes during driver probe.
Fixes: 4981b82ba2ff ("mailbox: ZynqMP IPI mailbox controller") Signed-off-by: Tanmay Shah tanmay.shah@amd.com Acked-by: Michal Simek michal.simek@amd.com Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20230311012407.1292118-2-tanmay.shah@amd.com Signed-off-by: Mathieu Poirier mathieu.poirier@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/mailbox/zynqmp-ipi-mailbox.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/drivers/mailbox/zynqmp-ipi-mailbox.c b/drivers/mailbox/zynqmp-ipi-mailbox.c index 29f09ded6e739..d097f45b0e5f5 100644 --- a/drivers/mailbox/zynqmp-ipi-mailbox.c +++ b/drivers/mailbox/zynqmp-ipi-mailbox.c @@ -634,7 +634,12 @@ static int zynqmp_ipi_probe(struct platform_device *pdev) struct zynqmp_ipi_mbox *mbox; int num_mboxes, ret = -EINVAL;
- num_mboxes = of_get_child_count(np); + num_mboxes = of_get_available_child_count(np); + if (num_mboxes == 0) { + dev_err(dev, "mailbox nodes not available\n"); + return -EINVAL; + } + pdata = devm_kzalloc(dev, struct_size(pdata, ipi_mboxes, num_mboxes), GFP_KERNEL); if (!pdata)
From: Takahiro Kuwano Takahiro.Kuwano@infineon.com
[ Upstream commit 4199c1719e24e73be0acc8b0146fc31ad8af9771 ]
Infineon(Cypress) SEMPER NOR flash family has on-die ECC and its program granularity is 16-byte ECC data unit size. JFFS2 supports write buffer mode for ECC'd NOR flash. Provide a way to clear the MTD_BIT_WRITEABLE flag in order to enable JFFS2 write buffer mode support.
Fixes: b6b23833fc42 ("mtd: spi-nor: spansion: Add s25hl-t/s25hs-t IDs and fixups") Suggested-by: Tudor Ambarus tudor.ambarus@linaro.org Signed-off-by: Takahiro Kuwano Takahiro.Kuwano@infineon.com Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/a1cc128e094db4ec141f85bd380127598dfef17e.168076074... Signed-off-by: Tudor Ambarus tudor.ambarus@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/mtd/spi-nor/spansion.c | 7 ++----- 1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/drivers/mtd/spi-nor/spansion.c b/drivers/mtd/spi-nor/spansion.c index 581f209c27347..7e7c68fc7776d 100644 --- a/drivers/mtd/spi-nor/spansion.c +++ b/drivers/mtd/spi-nor/spansion.c @@ -260,13 +260,10 @@ static void s25hx_t_post_sfdp_fixup(struct spi_nor *nor)
static void s25hx_t_late_init(struct spi_nor *nor) { - struct spi_nor_flash_parameter *params = nor->params; - /* Fast Read 4B requires mode cycles */ - params->reads[SNOR_CMD_READ_FAST].num_mode_clocks = 8; + nor->params->reads[SNOR_CMD_READ_FAST].num_mode_clocks = 8;
- /* The writesize should be ECC data unit size */ - params->writesize = 16; + cypress_nor_ecc_init(nor); }
static struct spi_nor_fixups s25hx_t_fixups = {
From: ZhangPeng zhangpeng362@huawei.com
[ Upstream commit 254e69f284d7270e0abdc023ee53b71401c3ba0c ]
Syzbot reported a null-ptr-deref bug:
ntfs3: loop0: Different NTFS' sector size (1024) and media sector size (512)
ntfs3: loop0: Mark volume as dirty due to NTFS errors
general protection fault, probably for non-canonical address 0xdffffc0000000001: 0000 [#1] PREEMPT SMP KASAN
KASAN: null-ptr-deref in range [0x0000000000000008-0x000000000000000f]
RIP: 0010:d_flags_for_inode fs/dcache.c:1980 [inline]
RIP: 0010:__d_add+0x5ce/0x800 fs/dcache.c:2796
Call Trace:
 <TASK>
 d_splice_alias+0x122/0x3b0 fs/dcache.c:3191
 lookup_open fs/namei.c:3391 [inline]
 open_last_lookups fs/namei.c:3481 [inline]
 path_openat+0x10e6/0x2df0 fs/namei.c:3688
 do_filp_open+0x264/0x4f0 fs/namei.c:3718
 do_sys_openat2+0x124/0x4e0 fs/open.c:1310
 do_sys_open fs/open.c:1326 [inline]
 __do_sys_open fs/open.c:1334 [inline]
 __se_sys_open fs/open.c:1330 [inline]
 __x64_sys_open+0x221/0x270 fs/open.c:1330
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
If the MFT record of ntfs inode is not a base record, inode->i_op can be NULL. And a null-ptr-deref may happen:
ntfs_lookup()
      dir_search_u() # inode->i_op is set to NULL
      d_splice_alias()
            __d_add()
                  d_flags_for_inode() # inode->i_op->get_link null-ptr-deref
Fix this by adding a check on inode->i_op before calling the d_splice_alias() function.
Fixes: 4342306f0f0d ("fs/ntfs3: Add file operations and implementation") Reported-by: syzbot+a8f26a403c169b7593fe@syzkaller.appspotmail.com Signed-off-by: ZhangPeng zhangpeng362@huawei.com Signed-off-by: Konstantin Komarov almaz.alexandrovich@paragon-software.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/ntfs3/namei.c | 10 ++++++++++ 1 file changed, 10 insertions(+)
diff --git a/fs/ntfs3/namei.c b/fs/ntfs3/namei.c index bc22cc321a74b..7760aedc06728 100644 --- a/fs/ntfs3/namei.c +++ b/fs/ntfs3/namei.c @@ -86,6 +86,16 @@ static struct dentry *ntfs_lookup(struct inode *dir, struct dentry *dentry, __putname(uni); }
+ /* + * Check for a null pointer + * If the MFT record of ntfs inode is not a base record, inode->i_op can be NULL. + * This causes null pointer dereference in d_splice_alias(). + */ + if (!IS_ERR(inode) && inode->i_op == NULL) { + iput(inode); + inode = ERR_PTR(-EINVAL); + } + return d_splice_alias(inode, dentry); }
From: Ryan Lin tsung-hua.lin@amd.com
[ Upstream commit 1e5d4d8eb8c0f15d90c50e7abd686c980e54e42e ]
[Why] The default value of the LTTPR timeout needs to be restored after resume.
[How] Set the default (3.2ms) timeout when resuming if the sink supports LTTPR.
Reviewed-by: Jerry Zuo Jerry.Zuo@amd.com Acked-by: Qingqing Zhuo qingqing.zhuo@amd.com Signed-off-by: Ryan Lin tsung-hua.lin@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 9 +++++++++ 1 file changed, 9 insertions(+)
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c index 99b99f0b42c06..9f637e360755d 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c @@ -39,6 +39,7 @@ #include "dc/dc_edid_parser.h" #include "dc/dc_stat.h" #include "amdgpu_dm_trace.h" +#include "dc/inc/dc_link_ddc.h"
#include "vid.h" #include "amdgpu.h" @@ -2254,6 +2255,14 @@ static void s3_handle_mst(struct drm_device *dev, bool suspend) if (suspend) { drm_dp_mst_topology_mgr_suspend(mgr); } else { + /* if extended timeout is supported in hardware, + * default to LTTPR timeout (3.2ms) first as a W/A for DP link layer + * CTS 4.2.1.1 regression introduced by CTS specs requirement update. + */ + dc_link_aux_try_to_configure_timeout(aconnector->dc_link->ddc, LINK_AUX_DEFAULT_LTTPR_TIMEOUT_PERIOD); + if (!dp_is_lttpr_present(aconnector->dc_link)) + dc_link_aux_try_to_configure_timeout(aconnector->dc_link->ddc, LINK_AUX_DEFAULT_TIMEOUT_PERIOD); + ret = drm_dp_mst_topology_mgr_resume(mgr, true); if (ret < 0) { dm_helpers_dp_mst_stop_top_mgr(aconnector->dc_link->ctx,
From: Paolo Bonzini pbonzini@redhat.com
[ Upstream commit 2fdcc1b324189b5fb20655baebd40cd82e2bdf0c ]
Most of the time, calls to get_guest_pgd result in calling kvm_read_cr3 (the exception is only nested TDP). Hardcode the default instead of using the get_cr3 function, avoiding a retpoline if they are enabled.
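A generic standalone sketch of this pattern (hypothetical names, not the KVM code itself): compare the function pointer against the known default and call it directly, so the common path avoids the indirect branch that a retpoline would otherwise slow down:

#include <stdio.h>

struct ctx {
	unsigned long (*get_pgd)(void);
};

static unsigned long default_get_pgd(void)
{
	return 0x1000;	/* stand-in for the common case */
}

static unsigned long nested_get_pgd(void)
{
	return 0x2000;	/* stand-in for the rare nested case */
}

static unsigned long ctx_get_pgd(struct ctx *c)
{
	if (c->get_pgd == default_get_pgd)
		return default_get_pgd();	/* direct call on the hot path */
	return c->get_pgd();			/* rare indirect call */
}

int main(void)
{
	struct ctx a = { .get_pgd = default_get_pgd };
	struct ctx b = { .get_pgd = nested_get_pgd };

	printf("%lx %lx\n", ctx_get_pgd(&a), ctx_get_pgd(&b));
	return 0;
}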
Signed-off-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Mathias Krause minipli@grsecurity.net Link: https://lore.kernel.org/r/20230322013731.102955-2-minipli@grsecurity.net Signed-off-by: Sean Christopherson seanjc@google.com Signed-off-by: Mathias Krause minipli@grsecurity.net # backport to v6.1.x Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/kvm/mmu/mmu.c | 31 ++++++++++++++++++++----------- arch/x86/kvm/mmu/paging_tmpl.h | 2 +- 2 files changed, 21 insertions(+), 12 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index b6f96d47e596d..f2a10c7d13697 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -232,6 +232,20 @@ static struct kvm_mmu_role_regs vcpu_to_role_regs(struct kvm_vcpu *vcpu) return regs; }
+static unsigned long get_guest_cr3(struct kvm_vcpu *vcpu) +{ + return kvm_read_cr3(vcpu); +} + +static inline unsigned long kvm_mmu_get_guest_pgd(struct kvm_vcpu *vcpu, + struct kvm_mmu *mmu) +{ + if (IS_ENABLED(CONFIG_RETPOLINE) && mmu->get_guest_pgd == get_guest_cr3) + return kvm_read_cr3(vcpu); + + return mmu->get_guest_pgd(vcpu); +} + static inline bool kvm_available_flush_tlb_with_range(void) { return kvm_x86_ops.tlb_remote_flush_with_range; @@ -3661,7 +3675,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu) int quadrant, i, r; hpa_t root;
- root_pgd = mmu->get_guest_pgd(vcpu); + root_pgd = kvm_mmu_get_guest_pgd(vcpu, mmu); root_gfn = root_pgd >> PAGE_SHIFT;
if (mmu_check_root(vcpu, root_gfn)) @@ -4112,7 +4126,7 @@ static bool kvm_arch_setup_async_pf(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, arch.token = alloc_apf_token(vcpu); arch.gfn = gfn; arch.direct_map = vcpu->arch.mmu->root_role.direct; - arch.cr3 = vcpu->arch.mmu->get_guest_pgd(vcpu); + arch.cr3 = kvm_mmu_get_guest_pgd(vcpu, vcpu->arch.mmu);
return kvm_setup_async_pf(vcpu, cr2_or_gpa, kvm_vcpu_gfn_to_hva(vcpu, gfn), &arch); @@ -4131,7 +4145,7 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work) return;
if (!vcpu->arch.mmu->root_role.direct && - work->arch.cr3 != vcpu->arch.mmu->get_guest_pgd(vcpu)) + work->arch.cr3 != kvm_mmu_get_guest_pgd(vcpu, vcpu->arch.mmu)) return;
kvm_mmu_do_page_fault(vcpu, work->cr2_or_gpa, 0, true); @@ -4488,11 +4502,6 @@ void kvm_mmu_new_pgd(struct kvm_vcpu *vcpu, gpa_t new_pgd) } EXPORT_SYMBOL_GPL(kvm_mmu_new_pgd);
-static unsigned long get_cr3(struct kvm_vcpu *vcpu) -{ - return kvm_read_cr3(vcpu); -} - static bool sync_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, gfn_t gfn, unsigned int access) { @@ -5043,7 +5052,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu, context->page_fault = kvm_tdp_page_fault; context->sync_page = nonpaging_sync_page; context->invlpg = NULL; - context->get_guest_pgd = get_cr3; + context->get_guest_pgd = get_guest_cr3; context->get_pdptr = kvm_pdptr_read; context->inject_page_fault = kvm_inject_page_fault;
@@ -5193,7 +5202,7 @@ static void init_kvm_softmmu(struct kvm_vcpu *vcpu,
kvm_init_shadow_mmu(vcpu, cpu_role);
- context->get_guest_pgd = get_cr3; + context->get_guest_pgd = get_guest_cr3; context->get_pdptr = kvm_pdptr_read; context->inject_page_fault = kvm_inject_page_fault; } @@ -5207,7 +5216,7 @@ static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu, return;
g_context->cpu_role.as_u64 = new_mode.as_u64; - g_context->get_guest_pgd = get_cr3; + g_context->get_guest_pgd = get_guest_cr3; g_context->get_pdptr = kvm_pdptr_read; g_context->inject_page_fault = kvm_inject_page_fault;
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h index 5ab5f94dcb6fd..1f4f5e703f136 100644 --- a/arch/x86/kvm/mmu/paging_tmpl.h +++ b/arch/x86/kvm/mmu/paging_tmpl.h @@ -324,7 +324,7 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker, trace_kvm_mmu_pagetable_walk(addr, access); retry_walk: walker->level = mmu->cpu_role.base.level; - pte = mmu->get_guest_pgd(vcpu); + pte = kvm_mmu_get_guest_pgd(vcpu, mmu); have_ad = PT_HAVE_ACCESSED_DIRTY(mmu);
#if PTTYPE == 64
From: Mathias Krause minipli@grsecurity.net
[ Upstream commit 01b31714bd90be2784f7145bf93b7f78f3d081e1 ]
There is no need to unload the MMU roots with TDP enabled when only CR0.WP has changed -- the paging structures are still valid, only the permission bitmap needs to be updated.
One heavy user of toggling CR0.WP is grsecurity's KERNEXEC feature to implement kernel W^X.
The optimization brings a huge performance gain for this case as the following micro-benchmark running 'ssdd 10 50000' from rt-tests[1] on a grsecurity L1 VM shows (runtime in seconds, lower is better):
                       legacy     TDP    shadow
  kvm-x86/next@d8708b   8.43s    9.45s    70.3s
  +patch                5.39s    5.63s    70.2s
For the legacy MMU this is ~36% faster, for the TDP MMU even ~40% faster. TDP and legacy MMU now both have a similar runtime, which eliminates the need to disable the TDP MMU for grsecurity.
Shadow MMU sees no measurable difference and is still slow, as expected.
[1] https://git.kernel.org/pub/scm/utils/rt-tests/rt-tests.git
Signed-off-by: Mathias Krause minipli@grsecurity.net Link: https://lore.kernel.org/r/20230322013731.102955-3-minipli@grsecurity.net Co-developed-by: Sean Christopherson seanjc@google.com Signed-off-by: Sean Christopherson seanjc@google.com Signed-off-by: Mathias Krause minipli@grsecurity.net Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/kvm/x86.c | 12 ++++++++++++ 1 file changed, 12 insertions(+)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 3463ef7f30196..d7af225b63d89 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -910,6 +910,18 @@ EXPORT_SYMBOL_GPL(load_pdptrs);
void kvm_post_set_cr0(struct kvm_vcpu *vcpu, unsigned long old_cr0, unsigned long cr0) { + /* + * CR0.WP is incorporated into the MMU role, but only for non-nested, + * indirect shadow MMUs. If TDP is enabled, the MMU's metadata needs + * to be updated, e.g. so that emulating guest translations does the + * right thing, but there's no need to unload the root as CR0.WP + * doesn't affect SPTEs. + */ + if (tdp_enabled && (cr0 ^ old_cr0) == X86_CR0_WP) { + kvm_init_mmu(vcpu); + return; + } + if ((cr0 ^ old_cr0) & X86_CR0_PG) { kvm_clear_async_pf_completion_queue(vcpu); kvm_async_pf_hash_reset(vcpu);
From: Mathias Krause minipli@grsecurity.net
[ Upstream commit 74cdc836919bf34684ef66f995273f35e2189daf ]
Make use of the kvm_read_cr{0,4}_bits() helper functions when we only want to know the state of certain bits instead of the whole register.
This not only makes the intent cleaner, it also avoids a potential VMREAD in case the tested bits aren't guest owned.
Signed-off-by: Mathias Krause minipli@grsecurity.net Link: https://lore.kernel.org/r/20230322013731.102955-5-minipli@grsecurity.net Signed-off-by: Sean Christopherson seanjc@google.com Signed-off-by: Mathias Krause minipli@grsecurity.net Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/kvm/pmu.c | 4 ++-- arch/x86/kvm/vmx/vmx.c | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c index de1fd73697365..20cd746cf4678 100644 --- a/arch/x86/kvm/pmu.c +++ b/arch/x86/kvm/pmu.c @@ -418,9 +418,9 @@ int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data) if (!pmc) return 1;
- if (!(kvm_read_cr4(vcpu) & X86_CR4_PCE) && + if (!(kvm_read_cr4_bits(vcpu, X86_CR4_PCE)) && (static_call(kvm_x86_get_cpl)(vcpu) != 0) && - (kvm_read_cr0(vcpu) & X86_CR0_PE)) + (kvm_read_cr0_bits(vcpu, X86_CR0_PE))) return 1;
*data = pmc_read_counter(pmc) & mask; diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 5db21d9ef6710..4984357c5d441 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -5417,7 +5417,7 @@ static int handle_cr(struct kvm_vcpu *vcpu) break; case 3: /* lmsw */ val = (exit_qualification >> LMSW_SOURCE_DATA_SHIFT) & 0x0f; - trace_kvm_cr_write(0, (kvm_read_cr0(vcpu) & ~0xful) | val); + trace_kvm_cr_write(0, (kvm_read_cr0_bits(vcpu, ~0xful) | val)); kvm_lmsw(vcpu, val);
return kvm_skip_emulated_instruction(vcpu); @@ -7496,7 +7496,7 @@ static u8 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio) if (!kvm_arch_has_noncoherent_dma(vcpu->kvm)) return (MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT) | VMX_EPT_IPAT_BIT;
- if (kvm_read_cr0(vcpu) & X86_CR0_CD) { + if (kvm_read_cr0_bits(vcpu, X86_CR0_CD)) { if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED)) cache = MTRR_TYPE_WRBACK; else
From: Mathias Krause minipli@grsecurity.net
[ Upstream commit fb509f76acc8d42bed11bca308404f81c2be856a ]
Guests like grsecurity that make heavy use of CR0.WP to implement kernel level W^X will suffer from the implied VMEXITs.
With EPT there is no need to intercept a guest change of CR0.WP, so simply make it a guest owned bit if we can do so.
This implies that a read of a guest's CR0.WP bit might need a VMREAD. However, the only potentially affected user seems to be kvm_init_mmu() which is a heavy operation to begin with. But also most callers already cache the full value of CR0 anyway, so no additional VMREAD is needed. The only exception is nested_vmx_load_cr3().
This change is VMX-specific, as SVM has no such fine grained control register intercept control.
Suggested-by: Sean Christopherson seanjc@google.com Signed-off-by: Mathias Krause minipli@grsecurity.net Link: https://lore.kernel.org/r/20230322013731.102955-7-minipli@grsecurity.net Co-developed-by: Sean Christopherson seanjc@google.com Signed-off-by: Sean Christopherson seanjc@google.com Signed-off-by: Mathias Krause minipli@grsecurity.net # backport to v6.1.x Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/kvm/kvm_cache_regs.h | 2 +- arch/x86/kvm/vmx/nested.c | 4 ++-- arch/x86/kvm/vmx/vmx.c | 2 +- arch/x86/kvm/vmx/vmx.h | 18 ++++++++++++++++++ 4 files changed, 22 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h index 3febc342360cc..896cc73949442 100644 --- a/arch/x86/kvm/kvm_cache_regs.h +++ b/arch/x86/kvm/kvm_cache_regs.h @@ -4,7 +4,7 @@
#include <linux/kvm_host.h>
-#define KVM_POSSIBLE_CR0_GUEST_BITS X86_CR0_TS +#define KVM_POSSIBLE_CR0_GUEST_BITS (X86_CR0_TS | X86_CR0_WP) #define KVM_POSSIBLE_CR4_GUEST_BITS \ (X86_CR4_PVI | X86_CR4_DE | X86_CR4_PCE | X86_CR4_OSFXSR \ | X86_CR4_OSXMMEXCPT | X86_CR4_PGE | X86_CR4_TSD | X86_CR4_FSGSBASE) diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index 8e56ec6e72e9d..9d683b6067c7b 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -4460,7 +4460,7 @@ static void load_vmcs12_host_state(struct kvm_vcpu *vcpu, * CR0_GUEST_HOST_MASK is already set in the original vmcs01 * (KVM doesn't change it); */ - vcpu->arch.cr0_guest_owned_bits = KVM_POSSIBLE_CR0_GUEST_BITS; + vcpu->arch.cr0_guest_owned_bits = vmx_l1_guest_owned_cr0_bits(); vmx_set_cr0(vcpu, vmcs12->host_cr0);
/* Same as above - no reason to call set_cr4_guest_host_mask(). */ @@ -4611,7 +4611,7 @@ static void nested_vmx_restore_host_state(struct kvm_vcpu *vcpu) */ vmx_set_efer(vcpu, nested_vmx_get_vmcs01_guest_efer(vmx));
- vcpu->arch.cr0_guest_owned_bits = KVM_POSSIBLE_CR0_GUEST_BITS; + vcpu->arch.cr0_guest_owned_bits = vmx_l1_guest_owned_cr0_bits(); vmx_set_cr0(vcpu, vmcs_readl(CR0_READ_SHADOW));
vcpu->arch.cr4_guest_owned_bits = ~vmcs_readl(CR4_GUEST_HOST_MASK); diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 4984357c5d441..07aab85922441 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -4695,7 +4695,7 @@ static void init_vmcs(struct vcpu_vmx *vmx) /* 22.2.1, 20.8.1 */ vm_entry_controls_set(vmx, vmx_vmentry_ctrl());
- vmx->vcpu.arch.cr0_guest_owned_bits = KVM_POSSIBLE_CR0_GUEST_BITS; + vmx->vcpu.arch.cr0_guest_owned_bits = vmx_l1_guest_owned_cr0_bits(); vmcs_writel(CR0_GUEST_HOST_MASK, ~vmx->vcpu.arch.cr0_guest_owned_bits);
set_cr4_guest_host_mask(vmx); diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h index a3da84f4ea456..e2b04f4c0fef3 100644 --- a/arch/x86/kvm/vmx/vmx.h +++ b/arch/x86/kvm/vmx/vmx.h @@ -640,6 +640,24 @@ BUILD_CONTROLS_SHADOW(tertiary_exec, TERTIARY_VM_EXEC_CONTROL, 64) (1 << VCPU_EXREG_EXIT_INFO_1) | \ (1 << VCPU_EXREG_EXIT_INFO_2))
+static inline unsigned long vmx_l1_guest_owned_cr0_bits(void) +{ + unsigned long bits = KVM_POSSIBLE_CR0_GUEST_BITS; + + /* + * CR0.WP needs to be intercepted when KVM is shadowing legacy paging + * in order to construct shadow PTEs with the correct protections. + * Note! CR0.WP technically can be passed through to the guest if + * paging is disabled, but checking CR0.PG would generate a cyclical + * dependency of sorts due to forcing the caller to ensure CR0 holds + * the correct value prior to determining which CR0 bits can be owned + * by L1. Keep it simple and limit the optimization to EPT. + */ + if (!enable_ept) + bits &= ~X86_CR0_WP; + return bits; +} + static inline struct kvm_vmx *to_kvm_vmx(struct kvm *kvm) { return container_of(kvm, struct kvm_vmx, kvm);
From: Sean Christopherson seanjc@google.com
[ Upstream commit cf9f4c0eb1699d306e348b1fd0225af7b2c282d3 ]
Refresh the MMU's snapshot of the vCPU's CR0.WP prior to checking for permission faults when emulating a guest memory access and CR0.WP may be guest owned. If the guest toggles only CR0.WP and triggers emulation of a supervisor write, e.g. when KVM is emulating UMIP, KVM may consume a stale CR0.WP, i.e. use stale protection bits metadata.
Note, KVM passes through CR0.WP if and only if EPT is enabled as CR0.WP is part of the MMU role for legacy shadow paging, and SVM (NPT) doesn't support per-bit interception controls for CR0. Don't bother checking for EPT vs. NPT as the "old == new" check will always be true under NPT, i.e. the only cost is the read of vcpu->arch.cr4 (SVM unconditionally grabs CR0 from the VMCB on VM-Exit).
Reported-by: Mathias Krause minipli@grsecurity.net Link: https://lkml.kernel.org/r/677169b4-051f-fcae-756b-9a3e1bb9f8fe%40grsecurity.... Fixes: fb509f76acc8 ("KVM: VMX: Make CR0.WP a guest owned bit") Tested-by: Mathias Krause minipli@grsecurity.net Link: https://lore.kernel.org/r/20230405002608.418442-1-seanjc@google.com Signed-off-by: Sean Christopherson seanjc@google.com Signed-off-by: Mathias Krause minipli@grsecurity.net # backport to v6.1.x Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/kvm/mmu.h | 26 +++++++++++++++++++++++++- arch/x86/kvm/mmu/mmu.c | 15 +++++++++++++++ 2 files changed, 40 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h index 6bdaacb6faa07..59804be91b5b0 100644 --- a/arch/x86/kvm/mmu.h +++ b/arch/x86/kvm/mmu.h @@ -113,6 +113,8 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly, bool kvm_can_do_async_pf(struct kvm_vcpu *vcpu); int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code, u64 fault_address, char *insn, int insn_len); +void __kvm_mmu_refresh_passthrough_bits(struct kvm_vcpu *vcpu, + struct kvm_mmu *mmu);
int kvm_mmu_load(struct kvm_vcpu *vcpu); void kvm_mmu_unload(struct kvm_vcpu *vcpu); @@ -153,6 +155,24 @@ static inline void kvm_mmu_load_pgd(struct kvm_vcpu *vcpu) vcpu->arch.mmu->root_role.level); }
+static inline void kvm_mmu_refresh_passthrough_bits(struct kvm_vcpu *vcpu, + struct kvm_mmu *mmu) +{ + /* + * When EPT is enabled, KVM may passthrough CR0.WP to the guest, i.e. + * @mmu's snapshot of CR0.WP and thus all related paging metadata may + * be stale. Refresh CR0.WP and the metadata on-demand when checking + * for permission faults. Exempt nested MMUs, i.e. MMUs for shadowing + * nEPT and nNPT, as CR0.WP is ignored in both cases. Note, KVM does + * need to refresh nested_mmu, a.k.a. the walker used to translate L2 + * GVAs to GPAs, as that "MMU" needs to honor L2's CR0.WP. + */ + if (!tdp_enabled || mmu == &vcpu->arch.guest_mmu) + return; + + __kvm_mmu_refresh_passthrough_bits(vcpu, mmu); +} + /* * Check if a given access (described through the I/D, W/R and U/S bits of a * page fault error code pfec) causes a permission fault with the given PTE @@ -184,8 +204,12 @@ static inline u8 permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, u64 implicit_access = access & PFERR_IMPLICIT_ACCESS; bool not_smap = ((rflags & X86_EFLAGS_AC) | implicit_access) == X86_EFLAGS_AC; int index = (pfec + (not_smap << PFERR_RSVD_BIT)) >> 1; - bool fault = (mmu->permissions[index] >> pte_access) & 1; u32 errcode = PFERR_PRESENT_MASK; + bool fault; + + kvm_mmu_refresh_passthrough_bits(vcpu, mmu); + + fault = (mmu->permissions[index] >> pte_access) & 1;
WARN_ON(pfec & (PFERR_PK_MASK | PFERR_RSVD_MASK)); if (unlikely(mmu->pkru_mask)) { diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index f2a10c7d13697..230108a90cf39 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -5005,6 +5005,21 @@ kvm_calc_cpu_role(struct kvm_vcpu *vcpu, const struct kvm_mmu_role_regs *regs) return role; }
+void __kvm_mmu_refresh_passthrough_bits(struct kvm_vcpu *vcpu, + struct kvm_mmu *mmu) +{ + const bool cr0_wp = !!kvm_read_cr0_bits(vcpu, X86_CR0_WP); + + BUILD_BUG_ON((KVM_MMU_CR0_ROLE_BITS & KVM_POSSIBLE_CR0_GUEST_BITS) != X86_CR0_WP); + BUILD_BUG_ON((KVM_MMU_CR4_ROLE_BITS & KVM_POSSIBLE_CR4_GUEST_BITS)); + + if (is_cr0_wp(mmu) == cr0_wp) + return; + + mmu->cpu_role.base.cr0_wp = cr0_wp; + reset_guest_paging_metadata(vcpu, mmu); +} + static inline int kvm_mmu_get_tdp_level(struct kvm_vcpu *vcpu) { /* tdp_root_level is architecture forced level, use it if nonzero */
From: Hans de Goede hdegoede@redhat.com
[ Upstream commit c963e2ec095cb3f855890be53f56f5a6c6fbe371 ]
Commit 7e1d728a94ca ("ASoC: Intel: soc-acpi-byt: Add new WM5102 ACPI HID") added an extra HID to wm5102_comp_ids.codecs, but it forgot to bump wm5102_comp_ids.num_codecs, causing the last codec HID in the codecs list to no longer work.
Bump wm5102_comp_ids.num_codecs to fix this.
Fixes: 7e1d728a94ca ("ASoC: Intel: soc-acpi-byt: Add new WM5102 ACPI HID") Signed-off-by: Hans de Goede hdegoede@redhat.com Acked-by: Pierre-Louis Bossart pierre-louis.bossart@linux.intel.com Link: https://lore.kernel.org/r/20230421183714.35186-1-hdegoede@redhat.com Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- sound/soc/intel/common/soc-acpi-intel-byt-match.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sound/soc/intel/common/soc-acpi-intel-byt-match.c b/sound/soc/intel/common/soc-acpi-intel-byt-match.c index db5a92b9875a8..87c44f284971a 100644 --- a/sound/soc/intel/common/soc-acpi-intel-byt-match.c +++ b/sound/soc/intel/common/soc-acpi-intel-byt-match.c @@ -124,7 +124,7 @@ static const struct snd_soc_acpi_codecs rt5640_comp_ids = { };
static const struct snd_soc_acpi_codecs wm5102_comp_ids = { - .num_codecs = 2, + .num_codecs = 3, .codecs = { "10WM5102", "WM510204", "WM510205"}, };
From: Zheng Wang zyytlz.wz@163.com
[ Upstream commit c5749639f2d0a1f6cbe187d05f70c2e7c544d748 ]
In qedi_probe() we call __qedi_probe() which initializes &qedi->recovery_work with qedi_recovery_handler() and &qedi->board_disable_work with qedi_board_disable_work().
When qedi_schedule_recovery_handler() is called, schedule_delayed_work() will finally start the work.
In qedi_remove(), which is called to remove the driver, the following sequence may be observed:
CPU0                        CPU1
                            |qedi_recovery_handler
qedi_remove                 |
  __qedi_remove             |
    iscsi_host_free         |
      scsi_host_put         |
        //free shost        |
                            |iscsi_host_for_each_session
                            |//use qedi->shost

Fix this by finishing the work before cleanup in qedi_remove(): cancel recovery_work and board_disable_work in __qedi_remove().
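As a rough illustration of the ordering (a sketch only; it reuses the driver's existing field names but is not the literal __qedi_remove() code):

	/*
	 * Flush any queued or running recovery/board-disable work before the
	 * iSCSI host is torn down, so the handlers can no longer dereference
	 * qedi->shost after it has been freed.
	 */
	cancel_delayed_work_sync(&qedi->recovery_work);
	cancel_delayed_work_sync(&qedi->board_disable_work);

	/* Only now is it safe to release the host the handlers look at. */
	iscsi_host_remove(qedi->shost);
	iscsi_host_free(qedi->shost);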
Fixes: 4b1068f5d74b ("scsi: qedi: Add MFW error recovery process") Signed-off-by: Zheng Wang zyytlz.wz@163.com Link: https://lore.kernel.org/r/20230413033422.28003-1-zyytlz.wz@163.com Acked-by: Manish Rangankar mrangankar@marvell.com Reviewed-by: Mike Christie michael.christie@oracle.com Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/scsi/qedi/qedi_main.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c index df2fe7bd26d1b..f530bb0364939 100644 --- a/drivers/scsi/qedi/qedi_main.c +++ b/drivers/scsi/qedi/qedi_main.c @@ -2450,6 +2450,9 @@ static void __qedi_remove(struct pci_dev *pdev, int mode) qedi_ops->ll2->stop(qedi->cdev); }
+ cancel_delayed_work_sync(&qedi->recovery_work); + cancel_delayed_work_sync(&qedi->board_disable_work); + qedi_free_iscsi_pf_param(qedi);
rval = qedi_ops->common->update_drv_state(qedi->cdev, false);
From: Rodrigo Siqueira Rodrigo.Siqueira@amd.com
[ Upstream commit bbfbf09d193ac831c40db50ef4b31d11548a9eef ]
As part of the programming expectation for using DML functions, DC requires that any DML function invoked outside DML uses:
	DC_FP_START();
	... dml function ...
	DC_FP_END();
Additionally, all the DML functions that can be invoked outside the DML folder call dc_assert_fp_enabled(), which triggers a warning if the DML function was not guarded by DC_FP_START/END. For this reason, calling DC_FP_START/END inside DML is wrong, and this commit removes all of those references.
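A minimal sketch of the intended split, with a hypothetical DML entry point standing in for any real DML function:

	/* Inside the DML folder: no DC_FP_START/END, only the assertion that
	 * the caller already holds the FPU guard. */
	void some_dml_function(struct dc *dc, struct dc_state *context)
	{
		dc_assert_fp_enabled();
		/* floating-point DML math goes here */
	}

	/* Outside the DML folder: the caller owns the guard around the call. */
	static void some_dc_caller(struct dc *dc, struct dc_state *context)
	{
		DC_FP_START();
		some_dml_function(dc, context);
		DC_FP_END();
	}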
Tested-by: Mark Broadworth mark.broadworth@amd.com Reviewed-by: Nevenko Stupar Nevenko.Stupar@amd.com Reviewed-by: Jun Lei Jun.Lei@amd.com Acked-by: Rodrigo Siqueira Rodrigo.Siqueira@amd.com Signed-off-by: Rodrigo Siqueira Rodrigo.Siqueira@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Stable-dep-of: 822b84ecfc64 ("drm/amd/display: Add missing WA and MCLK validation") Signed-off-by: Sasha Levin sashal@kernel.org --- .../drm/amd/display/dc/dml/dcn30/dcn30_fpu.c | 2 -- .../drm/amd/display/dc/dml/dcn32/dcn32_fpu.c | 17 +---------------- 2 files changed, 1 insertion(+), 18 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn30/dcn30_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn30/dcn30_fpu.c index 990dbd736e2ce..4fa6363647937 100644 --- a/drivers/gpu/drm/amd/display/dc/dml/dcn30/dcn30_fpu.c +++ b/drivers/gpu/drm/amd/display/dc/dml/dcn30/dcn30_fpu.c @@ -520,9 +520,7 @@ void dcn30_fpu_calculate_wm_and_dlg( pipe_idx++; }
- DC_FP_START(); dcn20_calculate_dlg_params(dc, context, pipes, pipe_cnt, vlevel); - DC_FP_END();
if (!pstate_en) /* Restore full p-state latency */ diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c index e22b4b3880af9..d2b184fdd7e02 100644 --- a/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c +++ b/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c @@ -1200,9 +1200,7 @@ static void dcn32_full_validate_bw_helper(struct dc *dc, } } else { // Most populate phantom DLG params before programming hardware / timing for phantom pipe - DC_FP_START(); dcn32_helper_populate_phantom_dlg_params(dc, context, pipes, *pipe_cnt); - DC_FP_END();
/* Call validate_apply_pipe_split flags after calling DML getters for * phantom dlg params, or some of the VBA params indicating pipe split @@ -1503,11 +1501,8 @@ bool dcn32_internal_validate_bw(struct dc *dc,
dml_log_pipe_params(&context->bw_ctx.dml, pipes, pipe_cnt);
- if (!fast_validate) { - DC_FP_START(); + if (!fast_validate) dcn32_full_validate_bw_helper(dc, context, pipes, &vlevel, split, merge, &pipe_cnt); - DC_FP_END(); - }
if (fast_validate || (dc->debug.dml_disallow_alternate_prefetch_modes && @@ -2172,9 +2167,7 @@ static int build_synthetic_soc_states(struct clk_bw_params *bw_params, entry.fabricclk_mhz = 0; entry.dram_speed_mts = 0;
- DC_FP_START(); insert_entry_into_table_sorted(table, num_entries, &entry); - DC_FP_END(); }
// Insert the max DCFCLK @@ -2182,9 +2175,7 @@ static int build_synthetic_soc_states(struct clk_bw_params *bw_params, entry.fabricclk_mhz = 0; entry.dram_speed_mts = 0;
- DC_FP_START(); insert_entry_into_table_sorted(table, num_entries, &entry); - DC_FP_END();
// Insert the UCLK DPMS for (i = 0; i < num_uclk_dpms; i++) { @@ -2192,9 +2183,7 @@ static int build_synthetic_soc_states(struct clk_bw_params *bw_params, entry.fabricclk_mhz = 0; entry.dram_speed_mts = bw_params->clk_table.entries[i].memclk_mhz * 16;
- DC_FP_START(); insert_entry_into_table_sorted(table, num_entries, &entry); - DC_FP_END(); }
// If FCLK is coarse grained, insert individual DPMs. @@ -2204,9 +2193,7 @@ static int build_synthetic_soc_states(struct clk_bw_params *bw_params, entry.fabricclk_mhz = bw_params->clk_table.entries[i].fclk_mhz; entry.dram_speed_mts = 0;
- DC_FP_START(); insert_entry_into_table_sorted(table, num_entries, &entry); - DC_FP_END(); } } // If FCLK fine grained, only insert max @@ -2215,9 +2202,7 @@ static int build_synthetic_soc_states(struct clk_bw_params *bw_params, entry.fabricclk_mhz = max_fclk_mhz; entry.dram_speed_mts = 0;
- DC_FP_START(); insert_entry_into_table_sorted(table, num_entries, &entry); - DC_FP_END(); }
// At this point, the table contains all "points of interest" based on
From: Rodrigo Siqueira Rodrigo.Siqueira@amd.com
[ Upstream commit 822b84ecfc646da0f87fd947fa00dc3be5e45ecc ]
When the commit fff7eb56b376 ("drm/amd/display: Don't set dram clock change requirement for SubVP") was merged, we missed some parts associated with the MCLK switch. This commit adds all the missing parts.
Fixes: fff7eb56b376 ("drm/amd/display: Don't set dram clock change requirement for SubVP") Reviewed-by: Aurabindo Pillai aurabindo.pillai@amd.com Signed-off-by: Rodrigo Siqueira Rodrigo.Siqueira@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- .../gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c | 1 + .../drm/amd/display/dc/dcn32/dcn32_resource.c | 2 +- .../drm/amd/display/dc/dml/dcn30/dcn30_fpu.c | 18 +++++++++++++++++- 3 files changed, 19 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c index 1a85509c12f23..e9188bce62e0b 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c +++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c @@ -984,6 +984,7 @@ void dcn32_init_hw(struct dc *dc) if (dc->ctx->dmub_srv) { dc_dmub_srv_query_caps_cmd(dc->ctx->dmub_srv->dmub); dc->caps.dmub_caps.psr = dc->ctx->dmub_srv->dmub->feature_caps.psr; + dc->caps.dmub_caps.mclk_sw = dc->ctx->dmub_srv->dmub->feature_caps.fw_assisted_mclk_switch; }
/* Enable support for ODM and windowed MPO if policy flag is set */ diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c index a942e2812183a..903f80a8c200c 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c +++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c @@ -1984,7 +1984,7 @@ int dcn32_populate_dml_pipes_from_context( // In general cases we want to keep the dram clock change requirement // (prefer configs that support MCLK switch). Only override to false // for SubVP - if (subvp_in_use) + if (context->bw_ctx.bw.dcn.clk.fw_based_mclk_switching || subvp_in_use) context->bw_ctx.dml.soc.dram_clock_change_requirement_final = false; else context->bw_ctx.dml.soc.dram_clock_change_requirement_final = true; diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn30/dcn30_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn30/dcn30_fpu.c index 4fa6363647937..fdfb19337ea6e 100644 --- a/drivers/gpu/drm/amd/display/dc/dml/dcn30/dcn30_fpu.c +++ b/drivers/gpu/drm/amd/display/dc/dml/dcn30/dcn30_fpu.c @@ -368,7 +368,9 @@ void dcn30_fpu_update_soc_for_wm_a(struct dc *dc, struct dc_state *context) dc_assert_fp_enabled();
if (dc->clk_mgr->bw_params->wm_table.nv_entries[WM_A].valid) { - context->bw_ctx.dml.soc.dram_clock_change_latency_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_A].dml_input.pstate_latency_us; + if (!context->bw_ctx.bw.dcn.clk.fw_based_mclk_switching || + context->bw_ctx.dml.soc.dram_clock_change_latency_us == 0) + context->bw_ctx.dml.soc.dram_clock_change_latency_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_A].dml_input.pstate_latency_us; context->bw_ctx.dml.soc.sr_enter_plus_exit_time_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_A].dml_input.sr_enter_plus_exit_time_us; context->bw_ctx.dml.soc.sr_exit_time_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_A].dml_input.sr_exit_time_us; } @@ -520,6 +522,20 @@ void dcn30_fpu_calculate_wm_and_dlg( pipe_idx++; }
+ // WA: restrict FPO to use first non-strobe mode (NV24 BW issue) + if (context->bw_ctx.bw.dcn.clk.fw_based_mclk_switching && + dc->dml.soc.num_chans <= 4 && + context->bw_ctx.dml.vba.DRAMSpeed <= 1700 && + context->bw_ctx.dml.vba.DRAMSpeed >= 1500) { + + for (i = 0; i < dc->dml.soc.num_states; i++) { + if (dc->dml.soc.clock_limits[i].dram_speed_mts > 1700) { + context->bw_ctx.dml.vba.DRAMSpeed = dc->dml.soc.clock_limits[i].dram_speed_mts; + break; + } + } + } + dcn20_calculate_dlg_params(dc, context, pipes, pipe_cnt, vlevel);
if (!pstate_en)
From: Hersen Wu hersenxs.wu@amd.com
[ Upstream commit dd24662d9dfbad281bbf030f06d68c7938fa0c66 ]
[Why&How] We were not returning -EINVAL on DSC atomic check fail. Add it.
Fixes: 71be4b16d39a ("drm/amd/display: dsc validate fail not pass to atomic check") Reviewed-by: Aurabindo Pillai aurabindo.pillai@amd.com Signed-off-by: Hersen Wu hersenxs.wu@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 1 + drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c | 1 + 2 files changed, 2 insertions(+)
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c index 9f637e360755d..8584049158510 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c @@ -9746,6 +9746,7 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev, ret = compute_mst_dsc_configs_for_state(state, dm_state->context, vars); if (ret) { DRM_DEBUG_DRIVER("compute_mst_dsc_configs_for_state() failed\n"); + ret = -EINVAL; goto fail; }
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c index df74bc88e4600..e2f9141d6d938 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c @@ -1368,6 +1368,7 @@ int pre_validate_dsc(struct drm_atomic_state *state, ret = pre_compute_mst_dsc_configs_for_state(state, local_dc_state, vars); if (ret != 0) { DRM_INFO_ONCE("pre_compute_mst_dsc_configs_for_state() failed\n"); + ret = -EINVAL; goto clean_exit; }
From: Aurabindo Pillai aurabindo.pillai@amd.com
[ Upstream commit d1c5c3e252b8a911a524e6ee33b82aca81397745 ]
[Why&How] Fix CLK MGR early initialization and add logging.
Fixes: 265280b99822 ("drm/amd/display: add CLKMGR changes for DCN32/321") Reviewed-by: Leo Li sunpeng.li@amd.com Reviewed-by: Qingqing Zhuo qingqing.zhuo@amd.com Signed-off-by: Aurabindo Pillai aurabindo.pillai@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c | 5 +++++ 1 file changed, 5 insertions(+)
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c index 9eb9fe5b8d2c5..1d84a04acb3f0 100644 --- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c +++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c @@ -756,6 +756,8 @@ void dcn32_clk_mgr_construct( struct pp_smu_funcs *pp_smu, struct dccg *dccg) { + struct clk_log_info log_info = {0}; + clk_mgr->base.ctx = ctx; clk_mgr->base.funcs = &dcn32_funcs; if (ASICREV_IS_GC_11_0_2(clk_mgr->base.ctx->asic_id.hw_internal_rev)) { @@ -789,6 +791,7 @@ void dcn32_clk_mgr_construct( clk_mgr->base.clks.ref_dtbclk_khz = 268750; }
+ /* integer part is now VCO frequency in kHz */ clk_mgr->base.dentist_vco_freq_khz = dcn32_get_vco_frequency_from_reg(clk_mgr);
@@ -796,6 +799,8 @@ void dcn32_clk_mgr_construct( if (clk_mgr->base.dentist_vco_freq_khz == 0) clk_mgr->base.dentist_vco_freq_khz = 4300000; /* Updated as per HW docs */
+ dcn32_dump_clk_registers(&clk_mgr->base.boot_snapshot, &clk_mgr->base, &log_info); + if (ctx->dc->debug.disable_dtb_ref_clk_switch && clk_mgr->base.clks.ref_dtbclk_khz != clk_mgr->base.boot_snapshot.dtbclk) { clk_mgr->base.clks.ref_dtbclk_khz = clk_mgr->base.boot_snapshot.dtbclk;
From: Cruise Hung Cruise.Hung@amd.com
[ Upstream commit 425afa0ac99a05b39e6cd00704fa0e3e925cee2b ]
[Why & How] We missed resetting OUTBOX0 mailbox r/w pointer on DMUB reset. Fix it.
Fixes: 6ecf9773a503 ("drm/amd/display: Fix DMUB outbox trace in S4 (#4465)") Signed-off-by: Cruise Hung Cruise.Hung@amd.com Acked-by: Aurabindo Pillai aurabindo.pillai@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c index a76da0131addd..b0adbf783aae9 100644 --- a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c +++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c @@ -130,6 +130,8 @@ void dmub_dcn32_reset(struct dmub_srv *dmub) REG_WRITE(DMCUB_INBOX1_WPTR, 0); REG_WRITE(DMCUB_OUTBOX1_RPTR, 0); REG_WRITE(DMCUB_OUTBOX1_WPTR, 0); + REG_WRITE(DMCUB_OUTBOX0_RPTR, 0); + REG_WRITE(DMCUB_OUTBOX0_WPTR, 0); REG_WRITE(DMCUB_SCRATCH0, 0); }
From: Aurabindo Pillai aurabindo.pillai@amd.com
[ Upstream commit 99d92eaca5d915763b240aae24669f5bf3227ecf ]
[Why & How] There's no need to clear GPINT register for DMUB when releasing it from reset. Fix that.
Fixes: ac2e555e0a7f ("drm/amd/display: Add DMCUB source files and changes for DCN32/321") Reviewed-by: Leo Li sunpeng.li@amd.com Signed-off-by: Aurabindo Pillai aurabindo.pillai@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c | 1 - 1 file changed, 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c index b0adbf783aae9..9c20516be066c 100644 --- a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c +++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c @@ -137,7 +137,6 @@ void dmub_dcn32_reset(struct dmub_srv *dmub)
void dmub_dcn32_reset_release(struct dmub_srv *dmub) { - REG_WRITE(DMCUB_GPINT_DATAIN1, 0); REG_UPDATE(MMHUBBUB_SOFT_RESET, DMUIF_SOFT_RESET, 0); REG_WRITE(DMCUB_SCRATCH15, dmub->psp_version & 0x001100FF); REG_UPDATE_2(DMCUB_CNTL, DMCUB_ENABLE, 1, DMCUB_TRACEPORT_EN, 1);
From: Aurabindo Pillai aurabindo.pillai@amd.com
[ Upstream commit 989cd3e76a4aab76fe7dd50090ac3fa501c537f6 ]
[Why&how]
Update bounding box values as per hardware spec
Fixes: 197485c69543 ("drm/amd/display: Create dcn321_fpu file") Acked-by: Leo Li sunpeng.li@amd.com Signed-off-by: Aurabindo Pillai aurabindo.pillai@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- .../amd/display/dc/dml/dcn321/dcn321_fpu.c | 24 +++++++++---------- 1 file changed, 12 insertions(+), 12 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn321/dcn321_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn321/dcn321_fpu.c index b80cef70fa60f..383a409a3f54c 100644 --- a/drivers/gpu/drm/amd/display/dc/dml/dcn321/dcn321_fpu.c +++ b/drivers/gpu/drm/amd/display/dc/dml/dcn321/dcn321_fpu.c @@ -106,16 +106,16 @@ struct _vcs_dpi_soc_bounding_box_st dcn3_21_soc = { .clock_limits = { { .state = 0, - .dcfclk_mhz = 1564.0, - .fabricclk_mhz = 400.0, - .dispclk_mhz = 2150.0, - .dppclk_mhz = 2150.0, + .dcfclk_mhz = 1434.0, + .fabricclk_mhz = 2250.0, + .dispclk_mhz = 1720.0, + .dppclk_mhz = 1720.0, .phyclk_mhz = 810.0, .phyclk_d18_mhz = 667.0, - .phyclk_d32_mhz = 625.0, + .phyclk_d32_mhz = 313.0, .socclk_mhz = 1200.0, - .dscclk_mhz = 716.667, - .dram_speed_mts = 1600.0, + .dscclk_mhz = 573.333, + .dram_speed_mts = 16000.0, .dtbclk_mhz = 1564.0, }, }, @@ -125,14 +125,14 @@ struct _vcs_dpi_soc_bounding_box_st dcn3_21_soc = { .sr_exit_z8_time_us = 285.0, .sr_enter_plus_exit_z8_time_us = 320, .writeback_latency_us = 12.0, - .round_trip_ping_latency_dcfclk_cycles = 263, + .round_trip_ping_latency_dcfclk_cycles = 207, .urgent_latency_pixel_data_only_us = 4, .urgent_latency_pixel_mixed_with_vm_data_us = 4, .urgent_latency_vm_data_only_us = 4, - .fclk_change_latency_us = 20, - .usr_retraining_latency_us = 2, - .smn_latency_us = 2, - .mall_allocated_for_dcn_mbytes = 64, + .fclk_change_latency_us = 7, + .usr_retraining_latency_us = 0, + .smn_latency_us = 0, + .mall_allocated_for_dcn_mbytes = 32, .urgent_out_of_order_return_per_channel_pixel_only_bytes = 4096, .urgent_out_of_order_return_per_channel_pixel_and_vm_bytes = 4096, .urgent_out_of_order_return_per_channel_vm_only_bytes = 4096,
From: John Hickey jjh@daedalian.us
[ Upstream commit c23ae5091a8b3e50fe755257df020907e7c029bb ]
Commit 4fe815850bdc ("ixgbe: let the xdpdrv work with more than 64 cpus") adds support to allow XDP programs to run on systems with more than 64 CPUs by locking the XDP TX rings and indexing them using cpu % 64 (IXGBE_MAX_XDP_QS).
Upon trying out this patch on a system with more than 64 cores, the kernel panicked with an array-index-out-of-bounds at the return in ixgbe_determine_xdp_ring in ixgbe.h, which means ixgbe_determine_xdp_q_idx was just returning the cpu instead of cpu % IXGBE_MAX_XDP_QS. An example splat:
========================================================================== UBSAN: array-index-out-of-bounds in /var/lib/dkms/ixgbe/5.18.6+focal-1/build/src/ixgbe.h:1147:26 index 65 is out of range for type 'ixgbe_ring *[64]' ========================================================================== BUG: kernel NULL pointer dereference, address: 0000000000000058 #PF: supervisor read access in kernel mode #PF: error_code(0x0000) - not-present page PGD 0 P4D 0 Oops: 0000 [#1] SMP NOPTI CPU: 65 PID: 408 Comm: ksoftirqd/65 Tainted: G IOE 5.15.0-48-generic #54~20.04.1-Ubuntu Hardware name: Dell Inc. PowerEdge R640/0W23H8, BIOS 2.5.4 01/13/2020 RIP: 0010:ixgbe_xmit_xdp_ring+0x1b/0x1c0 [ixgbe] Code: 3b 52 d4 cf e9 42 f2 ff ff 66 0f 1f 44 00 00 0f 1f 44 00 00 55 b9 00 00 00 00 48 89 e5 41 57 41 56 41 55 41 54 53 48 83 ec 08 <44> 0f b7 47 58 0f b7 47 5a 0f b7 57 54 44 0f b7 76 08 66 41 39 c0 RSP: 0018:ffffbc3fcd88fcb0 EFLAGS: 00010282 RAX: ffff92a253260980 RBX: ffffbc3fe68b00a0 RCX: 0000000000000000 RDX: ffff928b5f659000 RSI: ffff928b5f659000 RDI: 0000000000000000 RBP: ffffbc3fcd88fce0 R08: ffff92b9dfc20580 R09: 0000000000000001 R10: 3d3d3d3d3d3d3d3d R11: 3d3d3d3d3d3d3d3d R12: 0000000000000000 R13: ffff928b2f0fa8c0 R14: ffff928b9be20050 R15: 000000000000003c FS: 0000000000000000(0000) GS:ffff92b9dfc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000058 CR3: 000000011dd6a002 CR4: 00000000007706e0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 PKRU: 55555554 Call Trace: <TASK> ixgbe_poll+0x103e/0x1280 [ixgbe] ? sched_clock_cpu+0x12/0xe0 __napi_poll+0x30/0x160 net_rx_action+0x11c/0x270 __do_softirq+0xda/0x2ee run_ksoftirqd+0x2f/0x50 smpboot_thread_fn+0xb7/0x150 ? sort_range+0x30/0x30 kthread+0x127/0x150 ? set_kthread_struct+0x50/0x50 ret_from_fork+0x1f/0x30 </TASK>
I think this is how it happens:
Upon loading the first XDP program on a system with more than 64 CPUs, ixgbe_xdp_locking_key is incremented in ixgbe_xdp_setup. However, immediately after this, the rings are reconfigured by ixgbe_setup_tc. ixgbe_setup_tc calls ixgbe_clear_interrupt_scheme, which calls ixgbe_free_q_vectors, which calls ixgbe_free_q_vector in a loop. ixgbe_free_q_vector decrements ixgbe_xdp_locking_key once per call if it is non-zero. Commenting out the decrement in ixgbe_free_q_vector stopped my system from panicking.
I suspect that, to make the original patch work, I would need to load an XDP program and then replace it in order to get ixgbe_xdp_locking_key back above 0, since ixgbe_setup_tc is only called when transitioning between XDP and non-XDP ring configurations, while ixgbe_xdp_locking_key is incremented every time ixgbe_xdp_setup is called.
Also, ixgbe_setup_tc can be called via ethtool --set-channels, so this becomes another path to decrement ixgbe_xdp_locking_key to 0 on systems with more than 64 CPUs.
Since ixgbe_xdp_locking_key only protects the XDP_TX path and is tied to the number of CPUs present, there is no reason to disable it upon unloading an XDP program. To avoid confusion, I have moved enabling ixgbe_xdp_locking_key into ixgbe_sw_init, which is part of the probe path.
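A sketch of the resulting lifetime and lookup (simplified; the index helper mirrors the idea of ixgbe_determine_xdp_q_idx() but is not the literal driver code):

	/* Probe path (ixgbe_sw_init in the fix): enable the key once, purely
	 * from the CPU count, and never decrement it on XDP detach. */
	static void sketch_enable_xdp_locking(void)
	{
		if (nr_cpu_ids > IXGBE_MAX_XDP_QS)
			static_branch_enable(&ixgbe_xdp_locking_key);
	}

	/* XDP_TX path: with the key enabled, several CPUs share a ring
	 * (cpu % IXGBE_MAX_XDP_QS) and the per-ring lock serializes them. */
	static int sketch_xdp_q_idx(int cpu)
	{
		if (static_key_enabled(&ixgbe_xdp_locking_key))
			return cpu % IXGBE_MAX_XDP_QS;
		return cpu;
	}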
Fixes: 4fe815850bdc ("ixgbe: let the xdpdrv work with more than 64 cpus") Signed-off-by: John Hickey jjh@daedalian.us Reviewed-by: Maciej Fijalkowski maciej.fijalkowski@intel.com Tested-by: Chandan Kumar Rout chandanx.rout@intel.com (A Contingent Worker at Intel) Signed-off-by: Tony Nguyen anthony.l.nguyen@intel.com Link: https://lore.kernel.org/r/20230425170308.2522429-1-anthony.l.nguyen@intel.co... Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c | 3 --- drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 6 ++++-- 2 files changed, 4 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c index f8156fe4b1dc4..0ee943db3dc92 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c @@ -1035,9 +1035,6 @@ static void ixgbe_free_q_vector(struct ixgbe_adapter *adapter, int v_idx) adapter->q_vector[v_idx] = NULL; __netif_napi_del(&q_vector->napi);
- if (static_key_enabled(&ixgbe_xdp_locking_key)) - static_branch_dec(&ixgbe_xdp_locking_key); - /* * after a call to __netif_napi_del() napi may still be used and * ixgbe_get_stats64() might access the rings on this vector, diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c index faf3a094ac540..9b8848daeb430 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c @@ -6495,6 +6495,10 @@ static int ixgbe_sw_init(struct ixgbe_adapter *adapter, set_bit(0, adapter->fwd_bitmask); set_bit(__IXGBE_DOWN, &adapter->state);
+ /* enable locking for XDP_TX if we have more CPUs than queues */ + if (nr_cpu_ids > IXGBE_MAX_XDP_QS) + static_branch_enable(&ixgbe_xdp_locking_key); + return 0; }
@@ -10288,8 +10292,6 @@ static int ixgbe_xdp_setup(struct net_device *dev, struct bpf_prog *prog) */ if (nr_cpu_ids > IXGBE_MAX_XDP_QS * 2) return -ENOMEM; - else if (nr_cpu_ids > IXGBE_MAX_XDP_QS) - static_branch_inc(&ixgbe_xdp_locking_key);
old_prog = xchg(&adapter->xdp_prog, prog); need_reset = (!!prog != !!old_prog);
From: Geetha sowjanya gakula@marvell.com
[ Upstream commit c222b292a3568754828ffd30338d2909b14ed160 ]
For each LMAC port, MCS has two MCS_TOP_SLAVE_CHANNEL_CONFIGX registers. On CN10KB, both registers need to be configured for port-level MCS bypass to work. This patch also sets the bitmap of the flowid/secy entries reserved for default bypass so that these entries can be shown in debugfs.
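A sketch of the per-LMAC programming on CN10KB (register macro names from the driver; illustration only, roughly what mcs_set_lmac_mode() does after the fix):

	/* Each LMAC owns two MCS_TOP_SLAVE_CHANNEL_CONFIGX registers; both must
	 * carry the same mode for port-level bypass to take effect. */
	int id = lmac_id * 2;

	mcs_reg_write(mcs, MCSX_MCS_TOP_SLAVE_CHANNEL_CFG(id), (u64)mode);
	mcs_reg_write(mcs, MCSX_MCS_TOP_SLAVE_CHANNEL_CFG(id + 1), (u64)mode);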
Fixes: bd69476e86fc ("octeontx2-af: cn10k: mcs: Install a default TCAM for normal traffic") Signed-off-by: Geetha sowjanya gakula@marvell.com Signed-off-by: Sunil Goutham sgoutham@marvell.com Reviewed-by: Leon Romanovsky leonro@nvidia.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/marvell/octeontx2/af/mcs.c | 11 ++++++++++- .../net/ethernet/marvell/octeontx2/af/rvu_debugfs.c | 5 +++-- 2 files changed, 13 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs.c b/drivers/net/ethernet/marvell/octeontx2/af/mcs.c index f68a6a0e3aa41..492baa0b594ce 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mcs.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs.c @@ -494,6 +494,9 @@ int mcs_install_flowid_bypass_entry(struct mcs *mcs)
/* Flow entry */ flow_id = mcs->hw->tcam_entries - MCS_RSRC_RSVD_CNT; + __set_bit(flow_id, mcs->rx.flow_ids.bmap); + __set_bit(flow_id, mcs->tx.flow_ids.bmap); + for (reg_id = 0; reg_id < 4; reg_id++) { reg = MCSX_CPM_RX_SLAVE_FLOWID_TCAM_MASKX(reg_id, flow_id); mcs_reg_write(mcs, reg, GENMASK_ULL(63, 0)); @@ -504,6 +507,8 @@ int mcs_install_flowid_bypass_entry(struct mcs *mcs) } /* secy */ secy_id = mcs->hw->secy_entries - MCS_RSRC_RSVD_CNT; + __set_bit(secy_id, mcs->rx.secy.bmap); + __set_bit(secy_id, mcs->tx.secy.bmap);
/* Set validate frames to NULL and enable control port */ plcy = 0x7ull; @@ -528,6 +533,7 @@ int mcs_install_flowid_bypass_entry(struct mcs *mcs) /* Enable Flowid entry */ mcs_ena_dis_flowid_entry(mcs, flow_id, MCS_RX, true); mcs_ena_dis_flowid_entry(mcs, flow_id, MCS_TX, true); + return 0; }
@@ -1325,8 +1331,11 @@ void mcs_reset_port(struct mcs *mcs, u8 port_id, u8 reset) void mcs_set_lmac_mode(struct mcs *mcs, int lmac_id, u8 mode) { u64 reg; + int id = lmac_id * 2;
- reg = MCSX_MCS_TOP_SLAVE_CHANNEL_CFG(lmac_id * 2); + reg = MCSX_MCS_TOP_SLAVE_CHANNEL_CFG(id); + mcs_reg_write(mcs, reg, (u64)mode); + reg = MCSX_MCS_TOP_SLAVE_CHANNEL_CFG((id + 1)); mcs_reg_write(mcs, reg, (u64)mode); }
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c index abef0fd4259a3..0cab27448399c 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c @@ -497,8 +497,9 @@ static int rvu_dbg_mcs_rx_secy_stats_display(struct seq_file *filp, void *unused stats.octet_validated_cnt); seq_printf(filp, "secy%d: Pkts on disable port: %lld\n", secy_id, stats.pkt_port_disabled_cnt); - seq_printf(filp, "secy%d: Octets validated: %lld\n", secy_id, stats.pkt_badtag_cnt); - seq_printf(filp, "secy%d: Octets validated: %lld\n", secy_id, stats.pkt_nosa_cnt); + seq_printf(filp, "secy%d: Pkts with badtag: %lld\n", secy_id, stats.pkt_badtag_cnt); + seq_printf(filp, "secy%d: Pkts with no SA(sectag.tci.c=0): %lld\n", secy_id, + stats.pkt_nosa_cnt); seq_printf(filp, "secy%d: Pkts with nosaerror: %lld\n", secy_id, stats.pkt_nosaerror_cnt); seq_printf(filp, "secy%d: Tagged ctrl pkts: %lld\n", secy_id,
From: Subbaraya Sundeep sbhatta@marvell.com
[ Upstream commit b51612198603fce33d6cf57b4864e3018a1cd9b8 ]
As per a hardware erratum on CN10KB, all four TCAM_DATA and TCAM_MASK registers have to be written at once; otherwise, writes to the individual registers will fail. Hence write to all TCAM_DATA registers first and then to all TCAM_MASK registers.
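A sketch of the errata-safe write order for one RX flow entry (register macro names taken from the driver; this is an illustration, not the full function):

	/* Program all four TCAM_DATA words first... */
	for (reg_id = 0; reg_id < 4; reg_id++)
		mcs_reg_write(mcs, MCSX_CPM_RX_SLAVE_FLOWID_TCAM_DATAX(reg_id, flow_id),
			      data[reg_id]);

	/* ...and only then all four TCAM_MASK words, instead of interleaving
	 * one DATA/MASK pair per iteration. */
	for (reg_id = 0; reg_id < 4; reg_id++)
		mcs_reg_write(mcs, MCSX_CPM_RX_SLAVE_FLOWID_TCAM_MASKX(reg_id, flow_id),
			      mask[reg_id]);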
Fixes: cfc14181d497 ("octeontx2-af: cn10k: mcs: Manage the MCS block hardware resources") Signed-off-by: Subbaraya Sundeep sbhatta@marvell.com Signed-off-by: Sunil Kovvuri Goutham sgoutham@marvell.com Signed-off-by: Geetha sowjanya gakula@marvell.com Reviewed-by: Leon Romanovsky leonro@nvidia.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/marvell/octeontx2/af/mcs.c | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs.c b/drivers/net/ethernet/marvell/octeontx2/af/mcs.c index 492baa0b594ce..148417d633a56 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mcs.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs.c @@ -473,6 +473,8 @@ void mcs_flowid_entry_write(struct mcs *mcs, u64 *data, u64 *mask, int flow_id, for (reg_id = 0; reg_id < 4; reg_id++) { reg = MCSX_CPM_RX_SLAVE_FLOWID_TCAM_DATAX(reg_id, flow_id); mcs_reg_write(mcs, reg, data[reg_id]); + } + for (reg_id = 0; reg_id < 4; reg_id++) { reg = MCSX_CPM_RX_SLAVE_FLOWID_TCAM_MASKX(reg_id, flow_id); mcs_reg_write(mcs, reg, mask[reg_id]); } @@ -480,6 +482,8 @@ void mcs_flowid_entry_write(struct mcs *mcs, u64 *data, u64 *mask, int flow_id, for (reg_id = 0; reg_id < 4; reg_id++) { reg = MCSX_CPM_TX_SLAVE_FLOWID_TCAM_DATAX(reg_id, flow_id); mcs_reg_write(mcs, reg, data[reg_id]); + } + for (reg_id = 0; reg_id < 4; reg_id++) { reg = MCSX_CPM_TX_SLAVE_FLOWID_TCAM_MASKX(reg_id, flow_id); mcs_reg_write(mcs, reg, mask[reg_id]); }
From: Geetha sowjanya gakula@marvell.com
[ Upstream commit 65cdc2b637a5749c7dec0ce14fe2c48f1f91f671 ]
When the PTP timestamp is enabled in RPM, RPM appends an 8-byte timestamp header to all RX traffic. MCS needs to skip these 8 bytes while parsing the packet header so that the correct TCAM key is created for lookup. This patch fixes the MCS parser configuration to skip this 8-byte header for PTP packets.
Fixes: ca7f49ff8846 ("octeontx2-af: cn10k: Introduce driver for macsec block.") Signed-off-by: Sunil Goutham sgoutham@marvell.com Signed-off-by: Geetha sowjanya gakula@marvell.com Reviewed-by: Leon Romanovsky leonro@nvidia.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- .../ethernet/marvell/octeontx2/af/mcs_reg.h | 1 + .../marvell/octeontx2/af/mcs_rvu_if.c | 37 +++++++++++++++++++ .../net/ethernet/marvell/octeontx2/af/rvu.h | 1 + .../ethernet/marvell/octeontx2/af/rvu_cgx.c | 2 + 4 files changed, 41 insertions(+)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h index c95a8b8f5eaf7..7427e3b1490f4 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h @@ -97,6 +97,7 @@ #define MCSX_PEX_TX_SLAVE_VLAN_CFGX(a) (0x46f8ull + (a) * 0x8ull) #define MCSX_PEX_TX_SLAVE_CUSTOM_TAG_REL_MODE_SEL(a) (0x788ull + (a) * 0x8ull) #define MCSX_PEX_TX_SLAVE_PORT_CONFIG(a) (0x4738ull + (a) * 0x8ull) +#define MCSX_PEX_RX_SLAVE_PORT_CFGX(a) (0x3b98ull + (a) * 0x8ull) #define MCSX_PEX_RX_SLAVE_RULE_ETYPE_CFGX(a) ({ \ u64 offset; \ \ diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c b/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c index eb25e458266ca..dfd23580e3b8e 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c @@ -11,6 +11,7 @@
#include "mcs.h" #include "rvu.h" +#include "mcs_reg.h" #include "lmac_common.h"
#define M(_name, _id, _fn_name, _req_type, _rsp_type) \ @@ -32,6 +33,42 @@ static struct _req_type __maybe_unused \ MBOX_UP_MCS_MESSAGES #undef M
+void rvu_mcs_ptp_cfg(struct rvu *rvu, u8 rpm_id, u8 lmac_id, bool ena) +{ + struct mcs *mcs; + u64 cfg; + u8 port; + + if (!rvu->mcs_blk_cnt) + return; + + /* When ptp is enabled, RPM appends 8B header for all + * RX packets. MCS PEX need to configure to skip 8B + * during packet parsing. + */ + + /* CNF10K-B */ + if (rvu->mcs_blk_cnt > 1) { + mcs = mcs_get_pdata(rpm_id); + cfg = mcs_reg_read(mcs, MCSX_PEX_RX_SLAVE_PEX_CONFIGURATION); + if (ena) + cfg |= BIT_ULL(lmac_id); + else + cfg &= ~BIT_ULL(lmac_id); + mcs_reg_write(mcs, MCSX_PEX_RX_SLAVE_PEX_CONFIGURATION, cfg); + return; + } + /* CN10KB */ + mcs = mcs_get_pdata(0); + port = (rpm_id * rvu->hw->lmac_per_cgx) + lmac_id; + cfg = mcs_reg_read(mcs, MCSX_PEX_RX_SLAVE_PORT_CFGX(port)); + if (ena) + cfg |= BIT_ULL(0); + else + cfg &= ~BIT_ULL(0); + mcs_reg_write(mcs, MCSX_PEX_RX_SLAVE_PORT_CFGX(port), cfg); +} + int rvu_mbox_handler_mcs_set_lmac_mode(struct rvu *rvu, struct mcs_set_lmac_mode *req, struct msg_rsp *rsp) diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h index b07c6f51b461b..3ca8d2e5f979b 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h @@ -892,6 +892,7 @@ int rvu_get_hwvf(struct rvu *rvu, int pcifunc); /* CN10K MCS */ int rvu_mcs_init(struct rvu *rvu); int rvu_mcs_flr_handler(struct rvu *rvu, u16 pcifunc); +void rvu_mcs_ptp_cfg(struct rvu *rvu, u8 rpm_id, u8 lmac_id, bool ena); void rvu_mcs_exit(struct rvu *rvu);
#endif /* RVU_H */ diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c index addc69f4b65c6..9eca38547b783 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c @@ -761,6 +761,8 @@ static int rvu_cgx_ptp_rx_cfg(struct rvu *rvu, u16 pcifunc, bool enable) /* This flag is required to clean up CGX conf if app gets killed */ pfvf->hw_rx_tstamp_en = enable;
+ /* Inform MCS about 8B RX header */ + rvu_mcs_ptp_cfg(rvu, cgx_id, lmac_id, enable); return 0; }
From: Geetha sowjanya gakula@marvell.com
[ Upstream commit b8aebeaaf9ffb1e99c642eb3751e28981f9be475 ]
On CN10KB, the MCS IP vector number and the BBE and PAB interrupt masks changed to support more block-level interrupts. To address these changes, this patch fixes the BBE and PAB interrupt handlers.
Fixes: 6c635f78c474 ("octeontx2-af: cn10k: mcs: Handle MCS block interrupts") Signed-off-by: Sunil Goutham sgoutham@marvell.com Signed-off-by: Geetha sowjanya gakula@marvell.com Reviewed-by: Leon Romanovsky leonro@nvidia.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- .../net/ethernet/marvell/octeontx2/af/mcs.c | 95 ++++++++----------- .../net/ethernet/marvell/octeontx2/af/mcs.h | 26 +++-- .../marvell/octeontx2/af/mcs_cnf10kb.c | 63 ++++++++++++ .../ethernet/marvell/octeontx2/af/mcs_reg.h | 5 +- 4 files changed, 119 insertions(+), 70 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs.c b/drivers/net/ethernet/marvell/octeontx2/af/mcs.c index 148417d633a56..c43f19dfbd744 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mcs.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs.c @@ -936,60 +936,42 @@ static void mcs_tx_misc_intr_handler(struct mcs *mcs, u64 intr) mcs_add_intr_wq_entry(mcs, &event); }
-static void mcs_bbe_intr_handler(struct mcs *mcs, u64 intr, enum mcs_direction dir) +void cn10kb_mcs_bbe_intr_handler(struct mcs *mcs, u64 intr, + enum mcs_direction dir) { - struct mcs_intr_event event = { 0 }; - int i; + u64 val, reg; + int lmac;
- if (!(intr & MCS_BBE_INT_MASK)) + if (!(intr & 0x6ULL)) return;
- event.mcs_id = mcs->mcs_id; - event.pcifunc = mcs->pf_map[0]; + if (intr & BIT_ULL(1)) + reg = (dir == MCS_RX) ? MCSX_BBE_RX_SLAVE_DFIFO_OVERFLOW_0 : + MCSX_BBE_TX_SLAVE_DFIFO_OVERFLOW_0; + else + reg = (dir == MCS_RX) ? MCSX_BBE_RX_SLAVE_PLFIFO_OVERFLOW_0 : + MCSX_BBE_TX_SLAVE_PLFIFO_OVERFLOW_0; + val = mcs_reg_read(mcs, reg);
- for (i = 0; i < MCS_MAX_BBE_INT; i++) { - if (!(intr & BIT_ULL(i))) + /* policy/data over flow occurred */ + for (lmac = 0; lmac < mcs->hw->lmac_cnt; lmac++) { + if (!(val & BIT_ULL(lmac))) continue; - - /* Lower nibble denotes data fifo overflow interrupts and - * upper nibble indicates policy fifo overflow interrupts. - */ - if (intr & 0xFULL) - event.intr_mask = (dir == MCS_RX) ? - MCS_BBE_RX_DFIFO_OVERFLOW_INT : - MCS_BBE_TX_DFIFO_OVERFLOW_INT; - else - event.intr_mask = (dir == MCS_RX) ? - MCS_BBE_RX_PLFIFO_OVERFLOW_INT : - MCS_BBE_TX_PLFIFO_OVERFLOW_INT; - - /* Notify the lmac_id info which ran into BBE fatal error */ - event.lmac_id = i & 0x3ULL; - mcs_add_intr_wq_entry(mcs, &event); + dev_warn(mcs->dev, "BEE:Policy or data overflow occurred on lmac:%d\n", lmac); } }
-static void mcs_pab_intr_handler(struct mcs *mcs, u64 intr, enum mcs_direction dir) +void cn10kb_mcs_pab_intr_handler(struct mcs *mcs, u64 intr, + enum mcs_direction dir) { - struct mcs_intr_event event = { 0 }; - int i; + int lmac;
- if (!(intr & MCS_PAB_INT_MASK)) + if (!(intr & 0xFFFFFULL)) return;
- event.mcs_id = mcs->mcs_id; - event.pcifunc = mcs->pf_map[0]; - - for (i = 0; i < MCS_MAX_PAB_INT; i++) { - if (!(intr & BIT_ULL(i))) - continue; - - event.intr_mask = (dir == MCS_RX) ? MCS_PAB_RX_CHAN_OVERFLOW_INT : - MCS_PAB_TX_CHAN_OVERFLOW_INT; - - /* Notify the lmac_id info which ran into PAB fatal error */ - event.lmac_id = i; - mcs_add_intr_wq_entry(mcs, &event); + for (lmac = 0; lmac < mcs->hw->lmac_cnt; lmac++) { + if (intr & BIT_ULL(lmac)) + dev_warn(mcs->dev, "PAB: overflow occurred on lmac:%d\n", lmac); } }
@@ -998,9 +980,8 @@ static irqreturn_t mcs_ip_intr_handler(int irq, void *mcs_irq) struct mcs *mcs = (struct mcs *)mcs_irq; u64 intr, cpm_intr, bbe_intr, pab_intr;
- /* Disable and clear the interrupt */ + /* Disable the interrupt */ mcs_reg_write(mcs, MCSX_IP_INT_ENA_W1C, BIT_ULL(0)); - mcs_reg_write(mcs, MCSX_IP_INT, BIT_ULL(0));
/* Check which block has interrupt*/ intr = mcs_reg_read(mcs, MCSX_TOP_SLAVE_INT_SUM); @@ -1047,7 +1028,7 @@ static irqreturn_t mcs_ip_intr_handler(int irq, void *mcs_irq) /* BBE RX */ if (intr & MCS_BBE_RX_INT_ENA) { bbe_intr = mcs_reg_read(mcs, MCSX_BBE_RX_SLAVE_BBE_INT); - mcs_bbe_intr_handler(mcs, bbe_intr, MCS_RX); + mcs->mcs_ops->mcs_bbe_intr_handler(mcs, bbe_intr, MCS_RX);
/* Clear the interrupt */ mcs_reg_write(mcs, MCSX_BBE_RX_SLAVE_BBE_INT_INTR_RW, 0); @@ -1057,7 +1038,7 @@ static irqreturn_t mcs_ip_intr_handler(int irq, void *mcs_irq) /* BBE TX */ if (intr & MCS_BBE_TX_INT_ENA) { bbe_intr = mcs_reg_read(mcs, MCSX_BBE_TX_SLAVE_BBE_INT); - mcs_bbe_intr_handler(mcs, bbe_intr, MCS_TX); + mcs->mcs_ops->mcs_bbe_intr_handler(mcs, bbe_intr, MCS_TX);
/* Clear the interrupt */ mcs_reg_write(mcs, MCSX_BBE_TX_SLAVE_BBE_INT_INTR_RW, 0); @@ -1067,7 +1048,7 @@ static irqreturn_t mcs_ip_intr_handler(int irq, void *mcs_irq) /* PAB RX */ if (intr & MCS_PAB_RX_INT_ENA) { pab_intr = mcs_reg_read(mcs, MCSX_PAB_RX_SLAVE_PAB_INT); - mcs_pab_intr_handler(mcs, pab_intr, MCS_RX); + mcs->mcs_ops->mcs_pab_intr_handler(mcs, pab_intr, MCS_RX);
/* Clear the interrupt */ mcs_reg_write(mcs, MCSX_PAB_RX_SLAVE_PAB_INT_INTR_RW, 0); @@ -1077,14 +1058,15 @@ static irqreturn_t mcs_ip_intr_handler(int irq, void *mcs_irq) /* PAB TX */ if (intr & MCS_PAB_TX_INT_ENA) { pab_intr = mcs_reg_read(mcs, MCSX_PAB_TX_SLAVE_PAB_INT); - mcs_pab_intr_handler(mcs, pab_intr, MCS_TX); + mcs->mcs_ops->mcs_pab_intr_handler(mcs, pab_intr, MCS_TX);
/* Clear the interrupt */ mcs_reg_write(mcs, MCSX_PAB_TX_SLAVE_PAB_INT_INTR_RW, 0); mcs_reg_write(mcs, MCSX_PAB_TX_SLAVE_PAB_INT, pab_intr); }
- /* Enable the interrupt */ + /* Clear and enable the interrupt */ + mcs_reg_write(mcs, MCSX_IP_INT, BIT_ULL(0)); mcs_reg_write(mcs, MCSX_IP_INT_ENA_W1S, BIT_ULL(0));
return IRQ_HANDLED; @@ -1166,7 +1148,7 @@ static int mcs_register_interrupts(struct mcs *mcs) return ret; }
- ret = request_irq(pci_irq_vector(mcs->pdev, MCS_INT_VEC_IP), + ret = request_irq(pci_irq_vector(mcs->pdev, mcs->hw->ip_vec), mcs_ip_intr_handler, 0, "MCS_IP", mcs); if (ret) { dev_err(mcs->dev, "MCS IP irq registration failed\n"); @@ -1185,11 +1167,11 @@ static int mcs_register_interrupts(struct mcs *mcs) mcs_reg_write(mcs, MCSX_CPM_TX_SLAVE_TX_INT_ENB, 0x7ULL); mcs_reg_write(mcs, MCSX_CPM_RX_SLAVE_RX_INT_ENB, 0x7FULL);
- mcs_reg_write(mcs, MCSX_BBE_RX_SLAVE_BBE_INT_ENB, 0xff); - mcs_reg_write(mcs, MCSX_BBE_TX_SLAVE_BBE_INT_ENB, 0xff); + mcs_reg_write(mcs, MCSX_BBE_RX_SLAVE_BBE_INT_ENB, 0xFFULL); + mcs_reg_write(mcs, MCSX_BBE_TX_SLAVE_BBE_INT_ENB, 0xFFULL);
- mcs_reg_write(mcs, MCSX_PAB_RX_SLAVE_PAB_INT_ENB, 0xff); - mcs_reg_write(mcs, MCSX_PAB_TX_SLAVE_PAB_INT_ENB, 0xff); + mcs_reg_write(mcs, MCSX_PAB_RX_SLAVE_PAB_INT_ENB, 0xFFFFFULL); + mcs_reg_write(mcs, MCSX_PAB_TX_SLAVE_PAB_INT_ENB, 0xFFFFFULL);
mcs->tx_sa_active = alloc_mem(mcs, mcs->hw->sc_entries); if (!mcs->tx_sa_active) { @@ -1200,7 +1182,7 @@ static int mcs_register_interrupts(struct mcs *mcs) return ret;
free_irq: - free_irq(pci_irq_vector(mcs->pdev, MCS_INT_VEC_IP), mcs); + free_irq(pci_irq_vector(mcs->pdev, mcs->hw->ip_vec), mcs); exit: pci_free_irq_vectors(mcs->pdev); mcs->num_vec = 0; @@ -1497,6 +1479,7 @@ void cn10kb_mcs_set_hw_capabilities(struct mcs *mcs) hw->lmac_cnt = 20; /* lmacs/ports per mcs block */ hw->mcs_x2p_intf = 5; /* x2p clabration intf */ hw->mcs_blks = 1; /* MCS blocks */ + hw->ip_vec = MCS_CN10KB_INT_VEC_IP; /* IP vector */ }
static struct mcs_ops cn10kb_mcs_ops = { @@ -1505,6 +1488,8 @@ static struct mcs_ops cn10kb_mcs_ops = { .mcs_tx_sa_mem_map_write = cn10kb_mcs_tx_sa_mem_map_write, .mcs_rx_sa_mem_map_write = cn10kb_mcs_rx_sa_mem_map_write, .mcs_flowid_secy_map = cn10kb_mcs_flowid_secy_map, + .mcs_bbe_intr_handler = cn10kb_mcs_bbe_intr_handler, + .mcs_pab_intr_handler = cn10kb_mcs_pab_intr_handler, };
static int mcs_probe(struct pci_dev *pdev, const struct pci_device_id *id) @@ -1605,7 +1590,7 @@ static void mcs_remove(struct pci_dev *pdev)
/* Set MCS to external bypass */ mcs_set_external_bypass(mcs, true); - free_irq(pci_irq_vector(pdev, MCS_INT_VEC_IP), mcs); + free_irq(pci_irq_vector(pdev, mcs->hw->ip_vec), mcs); pci_free_irq_vectors(pdev); pci_release_regions(pdev); pci_disable_device(pdev); diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs.h b/drivers/net/ethernet/marvell/octeontx2/af/mcs.h index 64dc2b80e15dd..0f89dcb764654 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mcs.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs.h @@ -43,24 +43,15 @@ /* Reserved resources for default bypass entry */ #define MCS_RSRC_RSVD_CNT 1
-/* MCS Interrupt Vector Enumeration */ -enum mcs_int_vec_e { - MCS_INT_VEC_MIL_RX_GBL = 0x0, - MCS_INT_VEC_MIL_RX_LMACX = 0x1, - MCS_INT_VEC_MIL_TX_LMACX = 0x5, - MCS_INT_VEC_HIL_RX_GBL = 0x9, - MCS_INT_VEC_HIL_RX_LMACX = 0xa, - MCS_INT_VEC_HIL_TX_GBL = 0xe, - MCS_INT_VEC_HIL_TX_LMACX = 0xf, - MCS_INT_VEC_IP = 0x13, - MCS_INT_VEC_CNT = 0x14, -}; +/* MCS Interrupt Vector */ +#define MCS_CNF10KB_INT_VEC_IP 0x13 +#define MCS_CN10KB_INT_VEC_IP 0x53
#define MCS_MAX_BBE_INT 8ULL #define MCS_BBE_INT_MASK 0xFFULL
-#define MCS_MAX_PAB_INT 4ULL -#define MCS_PAB_INT_MASK 0xFULL +#define MCS_MAX_PAB_INT 8ULL +#define MCS_PAB_INT_MASK 0xFULL
#define MCS_BBE_RX_INT_ENA BIT_ULL(0) #define MCS_BBE_TX_INT_ENA BIT_ULL(1) @@ -137,6 +128,7 @@ struct hwinfo { u8 lmac_cnt; u8 mcs_blks; unsigned long lmac_bmap; /* bitmap of enabled mcs lmac */ + u16 ip_vec; };
struct mcs { @@ -165,6 +157,8 @@ struct mcs_ops { void (*mcs_tx_sa_mem_map_write)(struct mcs *mcs, struct mcs_tx_sc_sa_map *map); void (*mcs_rx_sa_mem_map_write)(struct mcs *mcs, struct mcs_rx_sc_sa_map *map); void (*mcs_flowid_secy_map)(struct mcs *mcs, struct secy_mem_map *map, int dir); + void (*mcs_bbe_intr_handler)(struct mcs *mcs, u64 intr, enum mcs_direction dir); + void (*mcs_pab_intr_handler)(struct mcs *mcs, u64 intr, enum mcs_direction dir); };
extern struct pci_driver mcs_driver; @@ -219,6 +213,8 @@ void cn10kb_mcs_tx_sa_mem_map_write(struct mcs *mcs, struct mcs_tx_sc_sa_map *ma void cn10kb_mcs_flowid_secy_map(struct mcs *mcs, struct secy_mem_map *map, int dir); void cn10kb_mcs_rx_sa_mem_map_write(struct mcs *mcs, struct mcs_rx_sc_sa_map *map); void cn10kb_mcs_parser_cfg(struct mcs *mcs); +void cn10kb_mcs_pab_intr_handler(struct mcs *mcs, u64 intr, enum mcs_direction dir); +void cn10kb_mcs_bbe_intr_handler(struct mcs *mcs, u64 intr, enum mcs_direction dir);
/* CNF10K-B APIs */ struct mcs_ops *cnf10kb_get_mac_ops(void); @@ -229,6 +225,8 @@ void cnf10kb_mcs_rx_sa_mem_map_write(struct mcs *mcs, struct mcs_rx_sc_sa_map *m void cnf10kb_mcs_parser_cfg(struct mcs *mcs); void cnf10kb_mcs_tx_pn_thresh_reached_handler(struct mcs *mcs); void cnf10kb_mcs_tx_pn_wrapped_handler(struct mcs *mcs); +void cnf10kb_mcs_bbe_intr_handler(struct mcs *mcs, u64 intr, enum mcs_direction dir); +void cnf10kb_mcs_pab_intr_handler(struct mcs *mcs, u64 intr, enum mcs_direction dir);
/* Stats APIs */ void mcs_get_sc_stats(struct mcs *mcs, struct mcs_sc_stats *stats, int id, int dir); diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs_cnf10kb.c b/drivers/net/ethernet/marvell/octeontx2/af/mcs_cnf10kb.c index 7b62054144286..9f9b904ab2cd0 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mcs_cnf10kb.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs_cnf10kb.c @@ -13,6 +13,8 @@ static struct mcs_ops cnf10kb_mcs_ops = { .mcs_tx_sa_mem_map_write = cnf10kb_mcs_tx_sa_mem_map_write, .mcs_rx_sa_mem_map_write = cnf10kb_mcs_rx_sa_mem_map_write, .mcs_flowid_secy_map = cnf10kb_mcs_flowid_secy_map, + .mcs_bbe_intr_handler = cnf10kb_mcs_bbe_intr_handler, + .mcs_pab_intr_handler = cnf10kb_mcs_pab_intr_handler, };
struct mcs_ops *cnf10kb_get_mac_ops(void) @@ -31,6 +33,7 @@ void cnf10kb_mcs_set_hw_capabilities(struct mcs *mcs) hw->lmac_cnt = 4; /* lmacs/ports per mcs block */ hw->mcs_x2p_intf = 1; /* x2p clabration intf */ hw->mcs_blks = 7; /* MCS blocks */ + hw->ip_vec = MCS_CNF10KB_INT_VEC_IP; /* IP vector */ }
void cnf10kb_mcs_parser_cfg(struct mcs *mcs) @@ -212,3 +215,63 @@ void cnf10kb_mcs_tx_pn_wrapped_handler(struct mcs *mcs) mcs_add_intr_wq_entry(mcs, &event); } } + +void cnf10kb_mcs_bbe_intr_handler(struct mcs *mcs, u64 intr, + enum mcs_direction dir) +{ + struct mcs_intr_event event = { 0 }; + int i; + + if (!(intr & MCS_BBE_INT_MASK)) + return; + + event.mcs_id = mcs->mcs_id; + event.pcifunc = mcs->pf_map[0]; + + for (i = 0; i < MCS_MAX_BBE_INT; i++) { + if (!(intr & BIT_ULL(i))) + continue; + + /* Lower nibble denotes data fifo overflow interrupts and + * upper nibble indicates policy fifo overflow interrupts. + */ + if (intr & 0xFULL) + event.intr_mask = (dir == MCS_RX) ? + MCS_BBE_RX_DFIFO_OVERFLOW_INT : + MCS_BBE_TX_DFIFO_OVERFLOW_INT; + else + event.intr_mask = (dir == MCS_RX) ? + MCS_BBE_RX_PLFIFO_OVERFLOW_INT : + MCS_BBE_TX_PLFIFO_OVERFLOW_INT; + + /* Notify the lmac_id info which ran into BBE fatal error */ + event.lmac_id = i & 0x3ULL; + mcs_add_intr_wq_entry(mcs, &event); + } +} + +void cnf10kb_mcs_pab_intr_handler(struct mcs *mcs, u64 intr, + enum mcs_direction dir) +{ + struct mcs_intr_event event = { 0 }; + int i; + + if (!(intr & MCS_PAB_INT_MASK)) + return; + + event.mcs_id = mcs->mcs_id; + event.pcifunc = mcs->pf_map[0]; + + for (i = 0; i < MCS_MAX_PAB_INT; i++) { + if (!(intr & BIT_ULL(i))) + continue; + + event.intr_mask = (dir == MCS_RX) ? + MCS_PAB_RX_CHAN_OVERFLOW_INT : + MCS_PAB_TX_CHAN_OVERFLOW_INT; + + /* Notify the lmac_id info which ran into PAB fatal error */ + event.lmac_id = i; + mcs_add_intr_wq_entry(mcs, &event); + } +} diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h index 7427e3b1490f4..f3ab01fc363c8 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h @@ -276,7 +276,10 @@ #define MCSX_BBE_RX_SLAVE_CAL_ENTRY 0x180ull #define MCSX_BBE_RX_SLAVE_CAL_LEN 0x188ull #define MCSX_PAB_RX_SLAVE_FIFO_SKID_CFGX(a) (0x290ull + (a) * 0x40ull) - +#define MCSX_BBE_RX_SLAVE_DFIFO_OVERFLOW_0 0xe20 +#define MCSX_BBE_TX_SLAVE_DFIFO_OVERFLOW_0 0x1298 +#define MCSX_BBE_RX_SLAVE_PLFIFO_OVERFLOW_0 0xe40 +#define MCSX_BBE_TX_SLAVE_PLFIFO_OVERFLOW_0 0x12b8 #define MCSX_BBE_RX_SLAVE_BBE_INT ({ \ u64 offset; \ \
From: Subbaraya Sundeep sbhatta@marvell.com
[ Upstream commit 699af748c61574125d269db260dabbe20436d74e ]
When the system is rebooted after creating a macsec interface, the NULL pointer dereference crashes below occurred. This patch fixes those crashes by using the correct teardown order.
[ 3324.406942] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000000 [ 3324.415726] Mem abort info: [ 3324.418510] ESR = 0x96000006 [ 3324.421557] EC = 0x25: DABT (current EL), IL = 32 bits [ 3324.426865] SET = 0, FnV = 0 [ 3324.429913] EA = 0, S1PTW = 0 [ 3324.433047] Data abort info: [ 3324.435921] ISV = 0, ISS = 0x00000006 [ 3324.439748] CM = 0, WnR = 0 .... [ 3324.575915] Call trace: [ 3324.578353] cn10k_mdo_del_secy+0x24/0x180 [ 3324.582440] macsec_common_dellink+0xec/0x120 [ 3324.586788] macsec_notify+0x17c/0x1c0 [ 3324.590529] raw_notifier_call_chain+0x50/0x70 [ 3324.594965] call_netdevice_notifiers_info+0x34/0x7c [ 3324.599921] rollback_registered_many+0x354/0x5bc [ 3324.604616] unregister_netdevice_queue+0x88/0x10c [ 3324.609399] unregister_netdev+0x20/0x30 [ 3324.613313] otx2_remove+0x8c/0x310 [ 3324.616794] pci_device_shutdown+0x30/0x70 [ 3324.620882] device_shutdown+0x11c/0x204
[ 966.664930] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000000 [ 966.673712] Mem abort info: [ 966.676497] ESR = 0x96000006 [ 966.679543] EC = 0x25: DABT (current EL), IL = 32 bits [ 966.684848] SET = 0, FnV = 0 [ 966.687895] EA = 0, S1PTW = 0 [ 966.691028] Data abort info: [ 966.693900] ISV = 0, ISS = 0x00000006 [ 966.697729] CM = 0, WnR = 0 [ 966.833467] Call trace: [ 966.835904] cn10k_mdo_stop+0x20/0xa0 [ 966.839557] macsec_dev_stop+0xe8/0x11c [ 966.843384] __dev_close_many+0xbc/0x140 [ 966.847298] dev_close_many+0x84/0x120 [ 966.851039] rollback_registered_many+0x114/0x5bc [ 966.855735] unregister_netdevice_many.part.0+0x14/0xa0 [ 966.860952] unregister_netdevice_many+0x18/0x24 [ 966.865560] macsec_notify+0x1ac/0x1c0 [ 966.869303] raw_notifier_call_chain+0x50/0x70 [ 966.873738] call_netdevice_notifiers_info+0x34/0x7c [ 966.878694] rollback_registered_many+0x354/0x5bc [ 966.883390] unregister_netdevice_queue+0x88/0x10c [ 966.888173] unregister_netdev+0x20/0x30 [ 966.892090] otx2_remove+0x8c/0x310 [ 966.895571] pci_device_shutdown+0x30/0x70 [ 966.899660] device_shutdown+0x11c/0x204 [ 966.903574] __do_sys_reboot+0x208/0x290 [ 966.907487] __arm64_sys_reboot+0x20/0x30 [ 966.911489] el0_svc_handler+0x80/0x1c0 [ 966.915316] el0_svc+0x8/0x180 [ 966.918362] Code: f9400000 f9400a64 91220014 f94b3403 (f9400060) [ 966.924448] ---[ end trace 341778e799c3d8d7 ]---
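A sketch of the corrected removal order (simplified from otx2_remove(); only the relative ordering matters):

	/* unregister_netdev() runs the macsec notifier chain, which can still
	 * call back into the cn10k_mdo_* offload ops, so the MCS state must
	 * stay alive across it and be freed only afterwards. */
	unregister_netdev(netdev);
	cn10k_mcs_free(pf);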
Fixes: c54ffc73601c ("octeontx2-pf: mcs: Introduce MACSEC hardware offloading") Signed-off-by: Subbaraya Sundeep sbhatta@marvell.com Signed-off-by: Sunil Goutham sgoutham@marvell.com Signed-off-by: Geetha sowjanya gakula@marvell.com Reviewed-by: Leon Romanovsky leonro@nvidia.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c index 303930499a4c0..2221220abcaae 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c @@ -3069,8 +3069,6 @@ static void otx2_remove(struct pci_dev *pdev) otx2_config_pause_frm(pf); }
- cn10k_mcs_free(pf); - #ifdef CONFIG_DCB /* Disable PFC config */ if (pf->pfc_en) { @@ -3084,6 +3082,7 @@ static void otx2_remove(struct pci_dev *pdev)
otx2_unregister_dl(pf); unregister_netdev(netdev); + cn10k_mcs_free(pf); otx2_sriov_disable(pf->pdev); otx2_sriov_vfcfg_cleanup(pf); if (pf->otx2_wq)
From: Subbaraya Sundeep sbhatta@marvell.com
[ Upstream commit 57d00d4364f314485092667d2a48718985515deb ]
On CN10KB silicon, a single hardware macsec block is present and offloads macsec operations for all the ethernet LMACs. A TCAM match on the macsec ethertype 0x88e5 alone on the RX side is not sufficient to distinguish all the macsec interfaces created on top of netdevs. Hence append the DMAC of the macsec interface too; otherwise, only the first created macsec interface receives all the macsec traffic.
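A sketch of the resulting RX flowid TCAM key (assuming, as the driver code does, that a set mask bit means "ignore this bit"; field macros come from the driver):

	u64 mac_da = ether_addr_to_u64(secy->netdev->dev_addr);

	/* word 0: match the 48-bit DMAC of this macsec interface */
	req->data[0] = FIELD_PREP(MCS_TCAM0_MAC_DA_MASK, mac_da);
	req->mask[0] = ~MCS_TCAM0_MAC_DA_MASK;

	/* word 1: additionally match the MACsec ethertype 0x88e5 */
	req->data[1] = FIELD_PREP(MCS_TCAM1_ETYPE_MASK, ETH_P_MACSEC);
	req->mask[1] = ~MCS_TCAM1_ETYPE_MASK;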
Fixes: c54ffc73601c ("octeontx2-pf: mcs: Introduce MACSEC hardware offloading") Signed-off-by: Subbaraya Sundeep sbhatta@marvell.com Signed-off-by: Sunil Goutham sgoutham@marvell.com Signed-off-by: Geetha sowjanya gakula@marvell.com Reviewed-by: Leon Romanovsky leonro@nvidia.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- .../net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c index 9ec5f38d38a84..f699209978fef 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c @@ -9,6 +9,7 @@ #include <net/macsec.h> #include "otx2_common.h"
+#define MCS_TCAM0_MAC_DA_MASK GENMASK_ULL(47, 0) #define MCS_TCAM0_MAC_SA_MASK GENMASK_ULL(63, 48) #define MCS_TCAM1_MAC_SA_MASK GENMASK_ULL(31, 0) #define MCS_TCAM1_ETYPE_MASK GENMASK_ULL(47, 32) @@ -237,8 +238,10 @@ static int cn10k_mcs_write_rx_flowid(struct otx2_nic *pfvf, struct cn10k_mcs_rxsc *rxsc, u8 hw_secy_id) { struct macsec_rx_sc *sw_rx_sc = rxsc->sw_rxsc; + struct macsec_secy *secy = rxsc->sw_secy; struct mcs_flowid_entry_write_req *req; struct mbox *mbox = &pfvf->mbox; + u64 mac_da; int ret;
mutex_lock(&mbox->lock); @@ -249,11 +252,16 @@ static int cn10k_mcs_write_rx_flowid(struct otx2_nic *pfvf, goto fail; }
+ mac_da = ether_addr_to_u64(secy->netdev->dev_addr); + + req->data[0] = FIELD_PREP(MCS_TCAM0_MAC_DA_MASK, mac_da); + req->mask[0] = ~0ULL; + req->mask[0] = ~MCS_TCAM0_MAC_DA_MASK; + req->data[1] = FIELD_PREP(MCS_TCAM1_ETYPE_MASK, ETH_P_MACSEC); req->mask[1] = ~0ULL; req->mask[1] &= ~MCS_TCAM1_ETYPE_MASK;
- req->mask[0] = ~0ULL; req->mask[2] = ~0ULL; req->mask[3] = ~0ULL;
From: Subbaraya Sundeep sbhatta@marvell.com
[ Upstream commit 815debbbf7b52026462c37eea3be70d6377a7a9a ]
When freeing MCS hardware resources like SecY, SC and SA, the corresponding stats need to be cleared. Otherwise, the previous stats are shown on newly created macsec interfaces.
Fixes: c54ffc73601c ("octeontx2-pf: mcs: Introduce MACSEC hardware offloading") Signed-off-by: Subbaraya Sundeep sbhatta@marvell.com Signed-off-by: Sunil Goutham sgoutham@marvell.com Signed-off-by: Geetha sowjanya gakula@marvell.com Reviewed-by: Leon Romanovsky leonro@nvidia.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- .../net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c | 9 +++++++++ 1 file changed, 9 insertions(+)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c index f699209978fef..13faca9add9f4 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c @@ -150,11 +150,20 @@ static void cn10k_mcs_free_rsrc(struct otx2_nic *pfvf, enum mcs_direction dir, enum mcs_rsrc_type type, u16 hw_rsrc_id, bool all) { + struct mcs_clear_stats *clear_req; struct mbox *mbox = &pfvf->mbox; struct mcs_free_rsrc_req *req;
mutex_lock(&mbox->lock);
+ clear_req = otx2_mbox_alloc_msg_mcs_clear_stats(mbox); + if (!clear_req) + goto fail; + + clear_req->id = hw_rsrc_id; + clear_req->type = type; + clear_req->dir = dir; + req = otx2_mbox_alloc_msg_mcs_free_resources(mbox); if (!req) goto fail;
From: Subbaraya Sundeep sbhatta@marvell.com
[ Upstream commit 9bdfe61054fb2b989eb58df20bf99c0cf67e3038 ]
Macsec stats like InPktsLate and InPktsDelayed share the same counter in hardware. If the SecY's replay_protect is true the counter represents InPktsLate, otherwise InPktsDelayed. This mode change was mistakenly tracked based on protect_frames instead of replay_protect. Similarly, InPktsUnchecked and InPktsOk share the same counter, and the mode change was tracked based on validate_check instead of validate_disabled. This patch fixes those problems.
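A minimal sketch of the attribution logic the fix implements (field and helper names are simplified stand-ins, not the driver's types): the shared hardware "late" counter is credited to InPktsLate only when replay protection is enabled, and the shared "unchecked" counter is credited to InPktsUnchecked only when validation is disabled.

#include <stdbool.h>
#include <stdint.h>

enum validation { VALIDATE_DISABLED, VALIDATE_CHECK, VALIDATE_STRICT };

struct rx_sc_stats {
    uint64_t in_pkts_late, in_pkts_delayed, in_pkts_unchecked, in_pkts_ok;
};

/* hw_late_cnt and hw_unchecked_cnt are the two shared hardware counters. */
static void account_shared_counters(struct rx_sc_stats *s, bool replay_protect,
                                    enum validation validate,
                                    uint64_t hw_late_cnt, uint64_t hw_unchecked_cnt)
{
    if (replay_protect)                 /* keyed on replay_protect, not protect_frames */
        s->in_pkts_late += hw_late_cnt;
    else
        s->in_pkts_delayed += hw_late_cnt;

    if (validate == VALIDATE_DISABLED)  /* keyed on disabled, not check */
        s->in_pkts_unchecked += hw_unchecked_cnt;
    else
        s->in_pkts_ok += hw_unchecked_cnt;
}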
Fixes: c54ffc73601c ("octeontx2-pf: mcs: Introduce MACSEC hardware offloading") Signed-off-by: Subbaraya Sundeep sbhatta@marvell.com Signed-off-by: Sunil Goutham sgoutham@marvell.com Signed-off-by: Geetha sowjanya gakula@marvell.com Reviewed-by: Leon Romanovsky leonro@nvidia.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- .../ethernet/marvell/octeontx2/nic/cn10k_macsec.c | 14 +++++++------- .../ethernet/marvell/octeontx2/nic/otx2_common.h | 2 +- 2 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c index 13faca9add9f4..3ad8d7ef20be6 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c @@ -1014,7 +1014,7 @@ static void cn10k_mcs_sync_stats(struct otx2_nic *pfvf, struct macsec_secy *secy
/* Check if sync is really needed */ if (secy->validate_frames == txsc->last_validate_frames && - secy->protect_frames == txsc->last_protect_frames) + secy->replay_protect == txsc->last_replay_protect) return;
cn10k_mcs_secy_stats(pfvf, txsc->hw_secy_id_rx, &rx_rsp, MCS_RX, true); @@ -1036,19 +1036,19 @@ static void cn10k_mcs_sync_stats(struct otx2_nic *pfvf, struct macsec_secy *secy rxsc->stats.InPktsInvalid += sc_rsp.pkt_invalid_cnt; rxsc->stats.InPktsNotValid += sc_rsp.pkt_notvalid_cnt;
- if (txsc->last_protect_frames) + if (txsc->last_replay_protect) rxsc->stats.InPktsLate += sc_rsp.pkt_late_cnt; else rxsc->stats.InPktsDelayed += sc_rsp.pkt_late_cnt;
- if (txsc->last_validate_frames == MACSEC_VALIDATE_CHECK) + if (txsc->last_validate_frames == MACSEC_VALIDATE_DISABLED) rxsc->stats.InPktsUnchecked += sc_rsp.pkt_unchecked_cnt; else rxsc->stats.InPktsOK += sc_rsp.pkt_unchecked_cnt; }
txsc->last_validate_frames = secy->validate_frames; - txsc->last_protect_frames = secy->protect_frames; + txsc->last_replay_protect = secy->replay_protect; }
static int cn10k_mdo_open(struct macsec_context *ctx) @@ -1117,7 +1117,7 @@ static int cn10k_mdo_add_secy(struct macsec_context *ctx) txsc->sw_secy = secy; txsc->encoding_sa = secy->tx_sc.encoding_sa; txsc->last_validate_frames = secy->validate_frames; - txsc->last_protect_frames = secy->protect_frames; + txsc->last_replay_protect = secy->replay_protect;
list_add(&txsc->entry, &cfg->txsc_list);
@@ -1538,12 +1538,12 @@ static int cn10k_mdo_get_rx_sc_stats(struct macsec_context *ctx) rxsc->stats.InPktsInvalid += rsp.pkt_invalid_cnt; rxsc->stats.InPktsNotValid += rsp.pkt_notvalid_cnt;
- if (secy->protect_frames) + if (secy->replay_protect) rxsc->stats.InPktsLate += rsp.pkt_late_cnt; else rxsc->stats.InPktsDelayed += rsp.pkt_late_cnt;
- if (secy->validate_frames == MACSEC_VALIDATE_CHECK) + if (secy->validate_frames == MACSEC_VALIDATE_DISABLED) rxsc->stats.InPktsUnchecked += rsp.pkt_unchecked_cnt; else rxsc->stats.InPktsOK += rsp.pkt_unchecked_cnt; diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h index 712715a49d201..634ca4140346b 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h @@ -386,7 +386,7 @@ struct cn10k_mcs_txsc { struct cn10k_txsc_stats stats; struct list_head entry; enum macsec_validation_type last_validate_frames; - bool last_protect_frames; + bool last_replay_protect; u16 hw_secy_id_tx; u16 hw_secy_id_rx; u16 hw_flow_id;
From: Subbaraya Sundeep sbhatta@marvell.com
[ Upstream commit 3c99bace4ad08ad0264285ba8ad73117560992c2 ]
After creating SecYs, SCs and SAs, a SecY can be modified to change attributes like the validation mode, frame protection mode, etc. During such a SecY update the packet number was mistakenly being reset to the initial user-given value. Hence do not reset the PN when updating SecY parameters.
Fixes: c54ffc73601c ("octeontx2-pf: mcs: Introduce MACSEC hardware offloading") Signed-off-by: Subbaraya Sundeep sbhatta@marvell.com Signed-off-by: Sunil Goutham sgoutham@marvell.com Signed-off-by: Geetha sowjanya gakula@marvell.com Reviewed-by: Leon Romanovsky leonro@nvidia.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- .../ethernet/marvell/octeontx2/nic/cn10k_macsec.c | 15 ++++++++++----- 1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c index 3ad8d7ef20be6..a487a98eac88c 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c @@ -1134,6 +1134,7 @@ static int cn10k_mdo_upd_secy(struct macsec_context *ctx) struct macsec_secy *secy = ctx->secy; struct macsec_tx_sa *sw_tx_sa; struct cn10k_mcs_txsc *txsc; + bool active; u8 sa_num; int err;
@@ -1141,15 +1142,19 @@ static int cn10k_mdo_upd_secy(struct macsec_context *ctx) if (!txsc) return -ENOENT;
- txsc->encoding_sa = secy->tx_sc.encoding_sa; - - sa_num = txsc->encoding_sa; - sw_tx_sa = rcu_dereference_bh(secy->tx_sc.sa[sa_num]); + /* Encoding SA got changed */ + if (txsc->encoding_sa != secy->tx_sc.encoding_sa) { + txsc->encoding_sa = secy->tx_sc.encoding_sa; + sa_num = txsc->encoding_sa; + sw_tx_sa = rcu_dereference_bh(secy->tx_sc.sa[sa_num]); + active = sw_tx_sa ? sw_tx_sa->active : false; + cn10k_mcs_link_tx_sa2sc(pfvf, secy, txsc, sa_num, active); + }
if (netif_running(secy->netdev)) { cn10k_mcs_sync_stats(pfvf, secy, txsc);
- err = cn10k_mcs_secy_tx_cfg(pfvf, secy, txsc, sw_tx_sa, sa_num); + err = cn10k_mcs_secy_tx_cfg(pfvf, secy, txsc, NULL, 0); if (err) return err; }
From: Cosmo Chou chou.cosmo@gmail.com
[ Upstream commit 6f75cd166a5a3c0bc50441faa8b8304f60522fdd ]
ncsi_channel_is_tx() determines whether a given channel should be used for Tx or not. However, when reconfiguring the channel while handling a Configuration Required AEN, the channel is wrongly judged to already have Tx enabled, which results in the Enable Channel Network Tx command not being sent.
Clear the channel Tx enable flag before reconfiguring the channel to avoid the misjudgment.
Fixes: 8d951a75d022 ("net/ncsi: Configure multi-package, multi-channel modes with failover") Signed-off-by: Cosmo Chou chou.cosmo@gmail.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/ncsi/ncsi-aen.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/net/ncsi/ncsi-aen.c b/net/ncsi/ncsi-aen.c index b635c194f0a85..62fb1031763d1 100644 --- a/net/ncsi/ncsi-aen.c +++ b/net/ncsi/ncsi-aen.c @@ -165,6 +165,7 @@ static int ncsi_aen_handler_cr(struct ncsi_dev_priv *ndp, nc->state = NCSI_CHANNEL_INACTIVE; list_add_tail_rcu(&nc->link, &ndp->channel_queue); spin_unlock_irqrestore(&ndp->lock, flags); + nc->modes[NCSI_MODE_TX_ENABLE].enable = 0;
return ncsi_process_next_channel(ndp); }
From: Eric Dumazet edumazet@google.com
[ Upstream commit 7e692df3933628d974acb9f5b334d2b3e885e2a6 ]
David Ahern reported crashes in skb_copy_ubufs() caused by TCP tx zerocopy using hugepages, and skb length bigger than ~68 KB.
skb_copy_ubufs() assumed it could copy all payload using up to MAX_SKB_FRAGS order-0 pages.
This assumption broke when BIG TCP was able to put up to 512 KB per skb.
We did not hit this bug at Google because we use CONFIG_MAX_SKB_FRAGS=45 and limit gso_max_size to 180000.
A solution is to use higher order pages if needed.
v2: add missing __GFP_COMP, or we leak memory.
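A runnable sketch of that order computation (userspace; 4 KiB pages and the default 17 frags are assumed purely for illustration): pick the smallest order so that MAX_SKB_FRAGS pages of that order cover the paged data, then derive the fragment count the same way the patch does.

#include <stdio.h>

#define PAGE_SHIFT    12
#define PAGE_SIZE     (1UL << PAGE_SHIFT)
#define MAX_SKB_FRAGS 17UL    /* common default; Google builds use 45 */

int main(void)
{
    unsigned long pagelen = 512 * 1024;  /* e.g. paged data of a BIG TCP skb */
    unsigned int order = 0;
    unsigned long psize, new_frags;

    /* Smallest order such that MAX_SKB_FRAGS pages cover the payload. */
    while ((PAGE_SIZE << order) * MAX_SKB_FRAGS < pagelen)
        order++;
    psize = PAGE_SIZE << order;

    new_frags = (pagelen + psize - 1) >> (PAGE_SHIFT + order);

    printf("order=%u psize=%lu new_frags=%lu\n", order, psize, new_frags);
    return 0;
}

For 512 KB this yields order-3 pages, so the copy fits within the fragment limit where order-0 pages could not.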
Fixes: 7c4e983c4f3c ("net: allow gso_max_size to exceed 65536") Reported-by: David Ahern dsahern@kernel.org Link: https://lore.kernel.org/netdev/c70000f6-baa4-4a05-46d0-4b3e0dc1ccc8@gmail.co... Signed-off-by: Eric Dumazet edumazet@google.com Cc: Xin Long lucien.xin@gmail.com Cc: Willem de Bruijn willemb@google.com Cc: Coco Li lixiaoyan@google.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/core/skbuff.c | 20 ++++++++++++++------ 1 file changed, 14 insertions(+), 6 deletions(-)
diff --git a/net/core/skbuff.c b/net/core/skbuff.c index 597c1f17d3889..ccfd9053754a9 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -1544,7 +1544,7 @@ int skb_copy_ubufs(struct sk_buff *skb, gfp_t gfp_mask) { int num_frags = skb_shinfo(skb)->nr_frags; struct page *page, *head = NULL; - int i, new_frags; + int i, order, psize, new_frags; u32 d_off;
if (skb_shared(skb) || skb_unclone(skb, gfp_mask)) @@ -1553,9 +1553,17 @@ int skb_copy_ubufs(struct sk_buff *skb, gfp_t gfp_mask) if (!num_frags) goto release;
- new_frags = (__skb_pagelen(skb) + PAGE_SIZE - 1) >> PAGE_SHIFT; + /* We might have to allocate high order pages, so compute what minimum + * page order is needed. + */ + order = 0; + while ((PAGE_SIZE << order) * MAX_SKB_FRAGS < __skb_pagelen(skb)) + order++; + psize = (PAGE_SIZE << order); + + new_frags = (__skb_pagelen(skb) + psize - 1) >> (PAGE_SHIFT + order); for (i = 0; i < new_frags; i++) { - page = alloc_page(gfp_mask); + page = alloc_pages(gfp_mask | __GFP_COMP, order); if (!page) { while (head) { struct page *next = (struct page *)page_private(head); @@ -1582,11 +1590,11 @@ int skb_copy_ubufs(struct sk_buff *skb, gfp_t gfp_mask) vaddr = kmap_atomic(p);
while (done < p_len) { - if (d_off == PAGE_SIZE) { + if (d_off == psize) { d_off = 0; page = (struct page *)page_private(page); } - copy = min_t(u32, PAGE_SIZE - d_off, p_len - done); + copy = min_t(u32, psize - d_off, p_len - done); memcpy(page_address(page) + d_off, vaddr + p_off + done, copy); done += copy; @@ -1602,7 +1610,7 @@ int skb_copy_ubufs(struct sk_buff *skb, gfp_t gfp_mask)
/* skb frags point to kernel buffers */ for (i = 0; i < new_frags - 1; i++) { - __skb_fill_page_desc(skb, i, head, 0, PAGE_SIZE); + __skb_fill_page_desc(skb, i, head, 0, psize); head = (struct page *)page_private(head); } __skb_fill_page_desc(skb, new_frags - 1, head, 0, d_off);
From: Vlad Buslov vladbu@nvidia.com
[ Upstream commit da94a7781fc3c92e7df7832bc2746f4d39bc624e ]
The error handler of tcf_block_bind() frees the whole bo->cb_list on error. However, by that time the flow_block_cb instances are already on the driver list, because the driver's ndo_setup_tc() callback has already been called further up the call chain in tcf_block_offload_cmd(). This leaves dangling pointers to freed objects in the list and causes a use-after-free[0]. Fix it by also removing the flow_block_cb instances from driver_list before deallocating them.
[0]: [ 279.868433] ================================================================== [ 279.869964] BUG: KASAN: slab-use-after-free in flow_block_cb_setup_simple+0x631/0x7c0 [ 279.871527] Read of size 8 at addr ffff888147e2bf20 by task tc/2963
[ 279.873151] CPU: 6 PID: 2963 Comm: tc Not tainted 6.3.0-rc6+ #4 [ 279.874273] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014 [ 279.876295] Call Trace: [ 279.876882] <TASK> [ 279.877413] dump_stack_lvl+0x33/0x50 [ 279.878198] print_report+0xc2/0x610 [ 279.878987] ? flow_block_cb_setup_simple+0x631/0x7c0 [ 279.879994] kasan_report+0xae/0xe0 [ 279.880750] ? flow_block_cb_setup_simple+0x631/0x7c0 [ 279.881744] ? mlx5e_tc_reoffload_flows_work+0x240/0x240 [mlx5_core] [ 279.883047] flow_block_cb_setup_simple+0x631/0x7c0 [ 279.884027] tcf_block_offload_cmd.isra.0+0x189/0x2d0 [ 279.885037] ? tcf_block_setup+0x6b0/0x6b0 [ 279.885901] ? mutex_lock+0x7d/0xd0 [ 279.886669] ? __mutex_unlock_slowpath.constprop.0+0x2d0/0x2d0 [ 279.887844] ? ingress_init+0x1c0/0x1c0 [sch_ingress] [ 279.888846] tcf_block_get_ext+0x61c/0x1200 [ 279.889711] ingress_init+0x112/0x1c0 [sch_ingress] [ 279.890682] ? clsact_init+0x2b0/0x2b0 [sch_ingress] [ 279.891701] qdisc_create+0x401/0xea0 [ 279.892485] ? qdisc_tree_reduce_backlog+0x470/0x470 [ 279.893473] tc_modify_qdisc+0x6f7/0x16d0 [ 279.894344] ? tc_get_qdisc+0xac0/0xac0 [ 279.895213] ? mutex_lock+0x7d/0xd0 [ 279.896005] ? __mutex_lock_slowpath+0x10/0x10 [ 279.896910] rtnetlink_rcv_msg+0x5fe/0x9d0 [ 279.897770] ? rtnl_calcit.isra.0+0x2b0/0x2b0 [ 279.898672] ? __sys_sendmsg+0xb5/0x140 [ 279.899494] ? do_syscall_64+0x3d/0x90 [ 279.900302] ? entry_SYSCALL_64_after_hwframe+0x46/0xb0 [ 279.901337] ? kasan_save_stack+0x2e/0x40 [ 279.902177] ? kasan_save_stack+0x1e/0x40 [ 279.903058] ? kasan_set_track+0x21/0x30 [ 279.903913] ? kasan_save_free_info+0x2a/0x40 [ 279.904836] ? ____kasan_slab_free+0x11a/0x1b0 [ 279.905741] ? kmem_cache_free+0x179/0x400 [ 279.906599] netlink_rcv_skb+0x12c/0x360 [ 279.907450] ? rtnl_calcit.isra.0+0x2b0/0x2b0 [ 279.908360] ? netlink_ack+0x1550/0x1550 [ 279.909192] ? rhashtable_walk_peek+0x170/0x170 [ 279.910135] ? kmem_cache_alloc_node+0x1af/0x390 [ 279.911086] ? _copy_from_iter+0x3d6/0xc70 [ 279.912031] netlink_unicast+0x553/0x790 [ 279.912864] ? netlink_attachskb+0x6a0/0x6a0 [ 279.913763] ? netlink_recvmsg+0x416/0xb50 [ 279.914627] netlink_sendmsg+0x7a1/0xcb0 [ 279.915473] ? netlink_unicast+0x790/0x790 [ 279.916334] ? iovec_from_user.part.0+0x4d/0x220 [ 279.917293] ? netlink_unicast+0x790/0x790 [ 279.918159] sock_sendmsg+0xc5/0x190 [ 279.918938] ____sys_sendmsg+0x535/0x6b0 [ 279.919813] ? import_iovec+0x7/0x10 [ 279.920601] ? kernel_sendmsg+0x30/0x30 [ 279.921423] ? __copy_msghdr+0x3c0/0x3c0 [ 279.922254] ? import_iovec+0x7/0x10 [ 279.923041] ___sys_sendmsg+0xeb/0x170 [ 279.923854] ? copy_msghdr_from_user+0x110/0x110 [ 279.924797] ? ___sys_recvmsg+0xd9/0x130 [ 279.925630] ? __perf_event_task_sched_in+0x183/0x470 [ 279.926656] ? ___sys_sendmsg+0x170/0x170 [ 279.927529] ? ctx_sched_in+0x530/0x530 [ 279.928369] ? update_curr+0x283/0x4f0 [ 279.929185] ? perf_event_update_userpage+0x570/0x570 [ 279.930201] ? __fget_light+0x57/0x520 [ 279.931023] ? __switch_to+0x53d/0xe70 [ 279.931846] ? sockfd_lookup_light+0x1a/0x140 [ 279.932761] __sys_sendmsg+0xb5/0x140 [ 279.933560] ? __sys_sendmsg_sock+0x20/0x20 [ 279.934436] ? 
fpregs_assert_state_consistent+0x1d/0xa0 [ 279.935490] do_syscall_64+0x3d/0x90 [ 279.936300] entry_SYSCALL_64_after_hwframe+0x46/0xb0 [ 279.937311] RIP: 0033:0x7f21c814f887 [ 279.938085] Code: 0a 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b9 0f 1f 00 f3 0f 1e fa 64 8b 04 25 18 00 00 00 85 c0 75 10 b8 2e 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 51 c3 48 83 ec 28 89 54 24 1c 48 89 74 24 10 [ 279.941448] RSP: 002b:00007fff11efd478 EFLAGS: 00000246 ORIG_RAX: 000000000000002e [ 279.942964] RAX: ffffffffffffffda RBX: 0000000064401979 RCX: 00007f21c814f887 [ 279.944337] RDX: 0000000000000000 RSI: 00007fff11efd4e0 RDI: 0000000000000003 [ 279.945660] RBP: 0000000000000000 R08: 0000000000000001 R09: 0000000000000000 [ 279.947003] R10: 00007f21c8008708 R11: 0000000000000246 R12: 0000000000000001 [ 279.948345] R13: 0000000000409980 R14: 000000000047e538 R15: 0000000000485400 [ 279.949690] </TASK>
[ 279.950706] Allocated by task 2960: [ 279.951471] kasan_save_stack+0x1e/0x40 [ 279.952338] kasan_set_track+0x21/0x30 [ 279.953165] __kasan_kmalloc+0x77/0x90 [ 279.954006] flow_block_cb_setup_simple+0x3dd/0x7c0 [ 279.955001] tcf_block_offload_cmd.isra.0+0x189/0x2d0 [ 279.956020] tcf_block_get_ext+0x61c/0x1200 [ 279.956881] ingress_init+0x112/0x1c0 [sch_ingress] [ 279.957873] qdisc_create+0x401/0xea0 [ 279.958656] tc_modify_qdisc+0x6f7/0x16d0 [ 279.959506] rtnetlink_rcv_msg+0x5fe/0x9d0 [ 279.960392] netlink_rcv_skb+0x12c/0x360 [ 279.961216] netlink_unicast+0x553/0x790 [ 279.962044] netlink_sendmsg+0x7a1/0xcb0 [ 279.962906] sock_sendmsg+0xc5/0x190 [ 279.963702] ____sys_sendmsg+0x535/0x6b0 [ 279.964534] ___sys_sendmsg+0xeb/0x170 [ 279.965343] __sys_sendmsg+0xb5/0x140 [ 279.966132] do_syscall_64+0x3d/0x90 [ 279.966908] entry_SYSCALL_64_after_hwframe+0x46/0xb0
[ 279.968407] Freed by task 2960: [ 279.969114] kasan_save_stack+0x1e/0x40 [ 279.969929] kasan_set_track+0x21/0x30 [ 279.970729] kasan_save_free_info+0x2a/0x40 [ 279.971603] ____kasan_slab_free+0x11a/0x1b0 [ 279.972483] __kmem_cache_free+0x14d/0x280 [ 279.973337] tcf_block_setup+0x29d/0x6b0 [ 279.974173] tcf_block_offload_cmd.isra.0+0x226/0x2d0 [ 279.975186] tcf_block_get_ext+0x61c/0x1200 [ 279.976080] ingress_init+0x112/0x1c0 [sch_ingress] [ 279.977065] qdisc_create+0x401/0xea0 [ 279.977857] tc_modify_qdisc+0x6f7/0x16d0 [ 279.978695] rtnetlink_rcv_msg+0x5fe/0x9d0 [ 279.979562] netlink_rcv_skb+0x12c/0x360 [ 279.980388] netlink_unicast+0x553/0x790 [ 279.981214] netlink_sendmsg+0x7a1/0xcb0 [ 279.982043] sock_sendmsg+0xc5/0x190 [ 279.982827] ____sys_sendmsg+0x535/0x6b0 [ 279.983703] ___sys_sendmsg+0xeb/0x170 [ 279.984510] __sys_sendmsg+0xb5/0x140 [ 279.985298] do_syscall_64+0x3d/0x90 [ 279.986076] entry_SYSCALL_64_after_hwframe+0x46/0xb0
[ 279.987532] The buggy address belongs to the object at ffff888147e2bf00 which belongs to the cache kmalloc-192 of size 192 [ 279.989747] The buggy address is located 32 bytes inside of freed 192-byte region [ffff888147e2bf00, ffff888147e2bfc0)
[ 279.992367] The buggy address belongs to the physical page: [ 279.993430] page:00000000550f405c refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x147e2a [ 279.995182] head:00000000550f405c order:1 entire_mapcount:0 nr_pages_mapped:0 pincount:0 [ 279.996713] anon flags: 0x200000000010200(slab|head|node=0|zone=2) [ 279.997878] raw: 0200000000010200 ffff888100042a00 0000000000000000 dead000000000001 [ 279.999384] raw: 0000000000000000 0000000000200020 00000001ffffffff 0000000000000000 [ 280.000894] page dumped because: kasan: bad access detected
[ 280.002386] Memory state around the buggy address: [ 280.003338] ffff888147e2be00: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb [ 280.004781] ffff888147e2be80: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc [ 280.006224] >ffff888147e2bf00: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb [ 280.007700] ^ [ 280.008592] ffff888147e2bf80: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc [ 280.010035] ffff888147e2c000: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb [ 280.011564] ==================================================================
Fixes: 59094b1e5094 ("net: sched: use flow block API") Signed-off-by: Vlad Buslov vladbu@nvidia.com Reviewed-by: Simon Horman simon.horman@corigine.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/sched/cls_api.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c index 50566db45949b..7b2aa04a7cdfd 100644 --- a/net/sched/cls_api.c +++ b/net/sched/cls_api.c @@ -1483,6 +1483,7 @@ static int tcf_block_bind(struct tcf_block *block,
err_unroll: list_for_each_entry_safe(block_cb, next, &bo->cb_list, list) { + list_del(&block_cb->driver_list); if (i-- > 0) { list_del(&block_cb->list); tcf_block_playback_offloads(block, block_cb->cb,
From: Cong Wang cong.wang@bytedance.com
[ Upstream commit c88f8d5cd95fd039cff95d682b8e71100c001df0 ]
When a tunnel device is bound to the underlying device, its dev->needed_headroom needs to be updated properly. IPv4 tunnels already do the same in ip_tunnel_bind_dev(). Otherwise we may not have enough header room for the skb, especially after commit b17f709a2401 ("gue: TX support for using remote checksum offload option").
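A minimal sketch of the bookkeeping the patch adds, with simplified types (struct dev and the LL_MAX_HEADER value below are illustrative, not the kernel definitions): the tunnel's needed headroom is the outer IPv4 tunnel header plus whatever the lower device needs for its own headers.

#include <stdio.h>

#define LL_MAX_HEADER 128   /* illustrative; the real value is config-dependent */

struct dev { unsigned int hard_header_len, needed_headroom; };

/* Sketch of the calculation ipip6_tunnel_bind_dev() performs after the fix. */
static unsigned int tunnel_needed_headroom(unsigned int t_hlen, const struct dev *tdev)
{
    unsigned int hlen = LL_MAX_HEADER;

    if (tdev)
        hlen = tdev->hard_header_len + tdev->needed_headroom;
    return t_hlen + hlen;
}

int main(void)
{
    struct dev eth = { .hard_header_len = 14, .needed_headroom = 0 };

    /* 20-byte outer IPv4 header over an Ethernet lower device */
    printf("needed_headroom = %u\n", tunnel_needed_headroom(20, &eth));
    return 0;
}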
Fixes: 32b8a8e59c9c ("sit: add IPv4 over IPv4 support") Reported-by: Palash Oswal oswalpalash@gmail.com Link: https://lore.kernel.org/netdev/CAGyP=7fDcSPKu6nttbGwt7RXzE3uyYxLjCSE97J64pRx... Cc: Kuniyuki Iwashima kuniyu@amazon.com Cc: Eric Dumazet edumazet@google.com Signed-off-by: Cong Wang cong.wang@bytedance.com Reviewed-by: Eric Dumazet edumazet@google.com Reviewed-by: Kuniyuki Iwashima kuniyu@amazon.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv6/sit.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c index 70d81bba50939..3ffb6a5b1f82a 100644 --- a/net/ipv6/sit.c +++ b/net/ipv6/sit.c @@ -1095,12 +1095,13 @@ static netdev_tx_t sit_tunnel_xmit(struct sk_buff *skb,
static void ipip6_tunnel_bind_dev(struct net_device *dev) { + struct ip_tunnel *tunnel = netdev_priv(dev); + int t_hlen = tunnel->hlen + sizeof(struct iphdr); struct net_device *tdev = NULL; - struct ip_tunnel *tunnel; + int hlen = LL_MAX_HEADER; const struct iphdr *iph; struct flowi4 fl4;
- tunnel = netdev_priv(dev); iph = &tunnel->parms.iph;
if (iph->daddr) { @@ -1123,14 +1124,15 @@ static void ipip6_tunnel_bind_dev(struct net_device *dev) tdev = __dev_get_by_index(tunnel->net, tunnel->parms.link);
if (tdev && !netif_is_l3_master(tdev)) { - int t_hlen = tunnel->hlen + sizeof(struct iphdr); int mtu;
mtu = tdev->mtu - t_hlen; if (mtu < IPV6_MIN_MTU) mtu = IPV6_MIN_MTU; WRITE_ONCE(dev->mtu, mtu); + hlen = tdev->hard_header_len + tdev->needed_headroom; } + dev->needed_headroom = t_hlen + hlen; }
static void ipip6_tunnel_update(struct ip_tunnel *t, struct ip_tunnel_parm *p,
From: Andrea Mayer andrea.mayer@uniroma2.it
[ Upstream commit 46ef24c60f8ee70662968ac55325297ed4624d61 ]
On some distributions, the rp_filter is automatically set (=1) by default on a netdev basis (also on VRFs). In an SRv6 End.DT46 behavior, decapsulated IPv4 packets are routed using the table associated with the VRF bound to that tunnel. During lookup operations, the rp_filter can lead to packet loss when activated on the VRF. Therefore, we chose to make this selftest more robust by explicitly disabling the rp_filter during tests (as it is automatically set by some Linux distributions).
Fixes: 03a0b567a03d ("selftests: seg6: add selftest for SRv6 End.DT46 Behavior") Reported-by: Hangbin Liu liuhangbin@gmail.com Signed-off-by: Andrea Mayer andrea.mayer@uniroma2.it Tested-by: Hangbin Liu liuhangbin@gmail.com Reviewed-by: David Ahern dsahern@kernel.org Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- .../testing/selftests/net/srv6_end_dt46_l3vpn_test.sh | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/tools/testing/selftests/net/srv6_end_dt46_l3vpn_test.sh b/tools/testing/selftests/net/srv6_end_dt46_l3vpn_test.sh index aebaab8ce44cb..441eededa0312 100755 --- a/tools/testing/selftests/net/srv6_end_dt46_l3vpn_test.sh +++ b/tools/testing/selftests/net/srv6_end_dt46_l3vpn_test.sh @@ -292,6 +292,11 @@ setup_hs() ip netns exec ${hsname} sysctl -wq net.ipv6.conf.all.accept_dad=0 ip netns exec ${hsname} sysctl -wq net.ipv6.conf.default.accept_dad=0
+ # disable the rp_filter otherwise the kernel gets confused about how + # to route decap ipv4 packets. + ip netns exec ${rtname} sysctl -wq net.ipv4.conf.all.rp_filter=0 + ip netns exec ${rtname} sysctl -wq net.ipv4.conf.default.rp_filter=0 + ip -netns ${hsname} link add veth0 type veth peer name ${rtveth} ip -netns ${hsname} link set ${rtveth} netns ${rtname} ip -netns ${hsname} addr add ${IPv6_HS_NETWORK}::${hs}/64 dev veth0 nodad @@ -316,11 +321,6 @@ setup_hs() ip netns exec ${rtname} sysctl -wq net.ipv6.conf.${rtveth}.proxy_ndp=1 ip netns exec ${rtname} sysctl -wq net.ipv4.conf.${rtveth}.proxy_arp=1
- # disable the rp_filter otherwise the kernel gets confused about how - # to route decap ipv4 packets. - ip netns exec ${rtname} sysctl -wq net.ipv4.conf.all.rp_filter=0 - ip netns exec ${rtname} sysctl -wq net.ipv4.conf.${rtveth}.rp_filter=0 - ip netns exec ${rtname} sh -c "echo 1 > /proc/sys/net/vrf/strict_mode" }
From: Antoine Tenart atenart@kernel.org
[ Upstream commit dc6456e938e938d64ffb6383a286b2ac9790a37f ]
The skb hash comes from sk->sk_txhash when using TCP, except for some IPv6 RST packets. This is because in tcp_v6_send_reset(), when the socket is not in TIME_WAIT, the hash is taken from sk->sk_hash, while it should come from sk->sk_txhash; those two hashes are not computed the same way.
Packetdrill script to test the above,
0 socket(..., SOCK_STREAM, IPPROTO_TCP) = 3
+0 fcntl(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0
+0 connect(3, ..., ...) = -1 EINPROGRESS (Operation now in progress)
+0 > (flowlabel 0x1) S 0:0(0) <...>
// Wrong ack seq, trigger a rst. +0 < S. 0:0(0) ack 0 win 4000
// Check the flowlabel matches prior one from SYN. +0 > (flowlabel 0x1) R 0:0(0) <...>
Fixes: 9258b8b1be2e ("ipv6: tcp: send consistent autoflowlabel in RST packets") Signed-off-by: Antoine Tenart atenart@kernel.org Reviewed-by: Eric Dumazet edumazet@google.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv6/tcp_ipv6.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c index c563a84d67b46..8d61efeab9c99 100644 --- a/net/ipv6/tcp_ipv6.c +++ b/net/ipv6/tcp_ipv6.c @@ -1065,7 +1065,7 @@ static void tcp_v6_send_reset(const struct sock *sk, struct sk_buff *skb) if (np->repflow) label = ip6_flowlabel(ipv6h); priority = sk->sk_priority; - txhash = sk->sk_hash; + txhash = sk->sk_txhash; } if (sk->sk_state == TCP_TIME_WAIT) { label = cpu_to_be32(inet_twsk(sk)->tw_flowlabel);
From: Angelo Dureghello angelo.dureghello@timesys.com
[ Upstream commit 6686317855c6997671982d4489ccdd946f644957 ]
Add the rsvd2cpu capability for the mv88e6321 model, to allow proper BPDU processing.
Signed-off-by: Angelo Dureghello angelo.dureghello@timesys.com Fixes: 51c901a775621 ("net: dsa: mv88e6xxx: distinguish Global 2 Rsvd2CPU") Reviewed-by: Andrew Lunn andrew@lunn.ch Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/dsa/mv88e6xxx/chip.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c index f1d9ee2a78b0f..12175195d3968 100644 --- a/drivers/net/dsa/mv88e6xxx/chip.c +++ b/drivers/net/dsa/mv88e6xxx/chip.c @@ -5109,6 +5109,7 @@ static const struct mv88e6xxx_ops mv88e6321_ops = { .set_cpu_port = mv88e6095_g1_set_cpu_port, .set_egress_port = mv88e6095_g1_set_egress_port, .watchdog_ops = &mv88e6390_watchdog_ops, + .mgmt_rsvd2cpu = mv88e6352_g2_mgmt_rsvd2cpu, .reset = mv88e6352_g1_reset, .vtu_getnext = mv88e6185_g1_vtu_getnext, .vtu_loadpurge = mv88e6185_g1_vtu_loadpurge,
From: Maxim Korotkov korotkov.maxim.s@gmail.com
[ Upstream commit 3e46c89c74f2c38e5337d2cf44b0b551adff1cb4 ]
The variable 'history' is of type u16, so it looks like an error that the hweight32 macro was used on it; the hweight16 macro should be used instead.
Found by Linux Verification Center (linuxtesting.org) with SVACE.
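A runnable illustration using compiler builtins as stand-ins for the kernel macros: because the u16 is zero-extended, both variants return the same count, so the change is about matching the macro to the variable's width rather than altering behaviour.

#include <stdio.h>
#include <stdint.h>

/* Userspace stand-ins for the kernel's hweight16()/hweight32(). */
#define hweight16(x) __builtin_popcount((uint16_t)(x))
#define hweight32(x) __builtin_popcount((uint32_t)(x))

int main(void)
{
    uint16_t history = 0xA5F0;  /* example foreign-inode history bitmap */

    /* Both agree here, but hweight16() matches the u16 variable's width. */
    printf("hweight16=%d hweight32=%d\n", hweight16(history), hweight32(history));
    return 0;
}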
Fixes: 2a81490811d0 ("writeback: implement foreign cgroup inode detection") Signed-off-by: Maxim Korotkov korotkov.maxim.s@gmail.com Reviewed-by: Jan Kara jack@suse.cz Link: https://lore.kernel.org/r/20230119104443.3002-1-korotkov.maxim.s@gmail.com Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Sasha Levin sashal@kernel.org --- fs/fs-writeback.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c index aa33c39be1829..d387708977a50 100644 --- a/fs/fs-writeback.c +++ b/fs/fs-writeback.c @@ -827,7 +827,7 @@ void wbc_detach_inode(struct writeback_control *wbc) * is okay. The main goal is avoiding keeping an inode on * the wrong wb for an extended period of time. */ - if (hweight32(history) > WB_FRN_HIST_THR_SLOTS) + if (hweight16(history) > WB_FRN_HIST_THR_SLOTS) inode_switch_wbs(inode, max_id); }
From: Tao Su tao1.su@linux.intel.com
[ Upstream commit 8176080d59e6d4ff9fc97ae534063073b4f7a715 ]
The kernel hangs in blkg_destroy_all() when the total number of blkgs is greater than BLKG_DESTROY_BATCH_SIZE, because destroyed blkgs are not removed from blkg_list. The size of blkg_list is therefore unchanged after destroying a batch of blkgs, and the 'restart' repeats forever.
Since a blkg should stay on the queue list until blkg_free_workfn(), skip already-destroyed blkgs when restarting a new round. This solves the kernel hang while preserving the original intent of restarting.
Reported-by: Xiangfei Ma xiangfeix.ma@intel.com Tested-by: Xiangfei Ma xiangfeix.ma@intel.com Tested-by: Farrah Chen farrah.chen@intel.com Signed-off-by: Tao Su tao1.su@linux.intel.com Fixes: f1c006f1c685 ("blk-cgroup: synchronize pd_free_fn() from blkg_free_workfn() and blkcg_deactivate_policy()") Suggested-and-reviewed-by: Yu Kuai yukuai3@huawei.com Link: https://lore.kernel.org/r/20230428045149.1310073-1-tao1.su@linux.intel.com Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Sasha Levin sashal@kernel.org --- block/blk-cgroup.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c index 7c91d9195da8d..60f366f98fa2b 100644 --- a/block/blk-cgroup.c +++ b/block/blk-cgroup.c @@ -468,6 +468,9 @@ static void blkg_destroy_all(struct gendisk *disk) list_for_each_entry_safe(blkg, n, &q->blkg_list, q_node) { struct blkcg *blkcg = blkg->blkcg;
+ if (hlist_unhashed(&blkg->blkcg_node)) + continue; + spin_lock(&blkcg->lock); blkg_destroy(blkg); spin_unlock(&blkcg->lock);
From: Christophe JAILLET christophe.jaillet@wanadoo.fr
[ Upstream commit 7f5390750645756bd5da2b24fac285f2654dd922 ]
The commit in Fixes has only updated the remove function and missed the error handling path of the probe.
Add the missing reset_control_assert() call.
Fixes: 65a3b6935d92 ("watchdog: dw_wdt: get reset lines from dt") Signed-off-by: Christophe JAILLET christophe.jaillet@wanadoo.fr Reviewed-by: Philipp Zabel p.zabel@pengutronix.de Reviewed-by: Guenter Roeck linux@roeck-us.net Link: https://lore.kernel.org/r/fbb650650bbb33a8fa2fd028c23157bedeed50e1.168249186... Signed-off-by: Guenter Roeck linux@roeck-us.net Signed-off-by: Wim Van Sebroeck wim@linux-watchdog.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/watchdog/dw_wdt.c | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/drivers/watchdog/dw_wdt.c b/drivers/watchdog/dw_wdt.c index 52962e8d11a6f..61af5d1332ac6 100644 --- a/drivers/watchdog/dw_wdt.c +++ b/drivers/watchdog/dw_wdt.c @@ -635,7 +635,7 @@ static int dw_wdt_drv_probe(struct platform_device *pdev)
ret = dw_wdt_init_timeouts(dw_wdt, dev); if (ret) - goto out_disable_clk; + goto out_assert_rst;
wdd = &dw_wdt->wdd; wdd->ops = &dw_wdt_ops; @@ -666,12 +666,15 @@ static int dw_wdt_drv_probe(struct platform_device *pdev)
ret = watchdog_register_device(wdd); if (ret) - goto out_disable_pclk; + goto out_assert_rst;
dw_wdt_dbgfs_init(dw_wdt);
return 0;
+out_assert_rst: + reset_control_assert(dw_wdt->rst); + out_disable_pclk: clk_disable_unprepare(dw_wdt->pclk);
From: Sia Jee Heng jeeheng.sia@starfivetech.com
[ Upstream commit a15c90b67a662c75f469822a7f95c7aaa049e28f ]
Currently the kernel_page_present() function doesn't support huge page detection, which causes it to mistakenly return false to the hibernation core.
Add huge page detection to the function to solve the problem.
Fixes: 9e953cda5cdf ("riscv: Introduce huge page support for 32/64bit kernel") Signed-off-by: Sia Jee Heng jeeheng.sia@starfivetech.com Reviewed-by: Ley Foon Tan leyfoon.tan@starfivetech.com Reviewed-by: Mason Huo mason.huo@starfivetech.com Reviewed-by: Andrew Jones ajones@ventanamicro.com Reviewed-by: Alexandre Ghiti alexghiti@rivosinc.com Link: https://lore.kernel.org/r/20230330064321.1008373-4-jeeheng.sia@starfivetech.... Signed-off-by: Palmer Dabbelt palmer@rivosinc.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/riscv/mm/pageattr.c | 8 ++++++++ 1 file changed, 8 insertions(+)
diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c index 86c56616e5dea..ea3d61de065b3 100644 --- a/arch/riscv/mm/pageattr.c +++ b/arch/riscv/mm/pageattr.c @@ -217,18 +217,26 @@ bool kernel_page_present(struct page *page) pgd = pgd_offset_k(addr); if (!pgd_present(*pgd)) return false; + if (pgd_leaf(*pgd)) + return true;
p4d = p4d_offset(pgd, addr); if (!p4d_present(*p4d)) return false; + if (p4d_leaf(*p4d)) + return true;
pud = pud_offset(p4d, addr); if (!pud_present(*pud)) return false; + if (pud_leaf(*pud)) + return true;
pmd = pmd_offset(pud, addr); if (!pmd_present(*pmd)) return false; + if (pmd_leaf(*pmd)) + return true;
pte = pte_offset_kernel(pmd, addr); return pte_present(*pte);
From: Akhil R akhilrajeev@nvidia.com
[ Upstream commit 9f855779a3874eee70e9f6be57b5f7774f14e510 ]
Update the msg->len value correctly for an SMBus block read. The discrepancy went unnoticed because msg->len is only used in SMBus transfers when a PEC byte is added.
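A tiny sketch of the length arithmetic (hypothetical helper, not the driver API): in an SMBus block read the first received byte is the data count N, so after the initial 1-byte transfer the total message length becomes 1 + N, which is why the fix adds buf[0] to msgs[i].len instead of overwriting it.

#include <stdio.h>
#include <stdint.h>

/* Sketch only: final message length of an SMBus block read once the count
 * byte has been received. len_so_far is 1 (the count byte itself). */
static unsigned int smbus_block_read_total_len(const uint8_t *buf, unsigned int len_so_far)
{
    return len_so_far + buf[0];   /* count byte + N data bytes */
}

int main(void)
{
    uint8_t buf[34] = { 4 };      /* slave reports 4 data bytes to follow */

    printf("total msg len = %u\n", smbus_block_read_total_len(buf, 1));  /* 5 */
    return 0;
}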
Fixes: d7583c8a5748 ("i2c: tegra: Add SMBus block read function") Signed-off-by: Akhil R akhilrajeev@nvidia.com Acked-by: Thierry Reding treding@nvidia.com Signed-off-by: Wolfram Sang wsa@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/i2c/busses/i2c-tegra.c | 40 +++++++++++++++++++++++----------- 1 file changed, 27 insertions(+), 13 deletions(-)
diff --git a/drivers/i2c/busses/i2c-tegra.c b/drivers/i2c/busses/i2c-tegra.c index 3869c258a5296..2bc40f957e509 100644 --- a/drivers/i2c/busses/i2c-tegra.c +++ b/drivers/i2c/busses/i2c-tegra.c @@ -242,9 +242,10 @@ struct tegra_i2c_hw_feature { * @is_dvc: identifies the DVC I2C controller, has a different register layout * @is_vi: identifies the VI I2C controller, has a different register layout * @msg_complete: transfer completion notifier + * @msg_buf_remaining: size of unsent data in the message buffer + * @msg_len: length of message in current transfer * @msg_err: error code for completed message * @msg_buf: pointer to current message data - * @msg_buf_remaining: size of unsent data in the message buffer * @msg_read: indicates that the transfer is a read access * @timings: i2c timings information like bus frequency * @multimaster_mode: indicates that I2C controller is in multi-master mode @@ -277,6 +278,7 @@ struct tegra_i2c_dev {
struct completion msg_complete; size_t msg_buf_remaining; + unsigned int msg_len; int msg_err; u8 *msg_buf;
@@ -1169,7 +1171,7 @@ static void tegra_i2c_push_packet_header(struct tegra_i2c_dev *i2c_dev, else i2c_writel(i2c_dev, packet_header, I2C_TX_FIFO);
- packet_header = msg->len - 1; + packet_header = i2c_dev->msg_len - 1;
if (i2c_dev->dma_mode && !i2c_dev->msg_read) *dma_buf++ = packet_header; @@ -1242,20 +1244,32 @@ static int tegra_i2c_xfer_msg(struct tegra_i2c_dev *i2c_dev, return err;
i2c_dev->msg_buf = msg->buf; + i2c_dev->msg_len = msg->len;
- /* The condition true implies smbus block read and len is already read */ - if (msg->flags & I2C_M_RECV_LEN && end_state != MSG_END_CONTINUE) - i2c_dev->msg_buf = msg->buf + 1; - - i2c_dev->msg_buf_remaining = msg->len; i2c_dev->msg_err = I2C_ERR_NONE; i2c_dev->msg_read = !!(msg->flags & I2C_M_RD); reinit_completion(&i2c_dev->msg_complete);
+ /* + * For SMBUS block read command, read only 1 byte in the first transfer. + * Adjust that 1 byte for the next transfer in the msg buffer and msg + * length. + */ + if (msg->flags & I2C_M_RECV_LEN) { + if (end_state == MSG_END_CONTINUE) { + i2c_dev->msg_len = 1; + } else { + i2c_dev->msg_buf += 1; + i2c_dev->msg_len -= 1; + } + } + + i2c_dev->msg_buf_remaining = i2c_dev->msg_len; + if (i2c_dev->msg_read) - xfer_size = msg->len; + xfer_size = i2c_dev->msg_len; else - xfer_size = msg->len + I2C_PACKET_HEADER_SIZE; + xfer_size = i2c_dev->msg_len + I2C_PACKET_HEADER_SIZE;
xfer_size = ALIGN(xfer_size, BYTES_PER_FIFO_WORD);
@@ -1295,7 +1309,7 @@ static int tegra_i2c_xfer_msg(struct tegra_i2c_dev *i2c_dev, if (!i2c_dev->msg_read) { if (i2c_dev->dma_mode) { memcpy(i2c_dev->dma_buf + I2C_PACKET_HEADER_SIZE, - msg->buf, msg->len); + msg->buf, i2c_dev->msg_len);
dma_sync_single_for_device(i2c_dev->dma_dev, i2c_dev->dma_phys, @@ -1352,7 +1366,7 @@ static int tegra_i2c_xfer_msg(struct tegra_i2c_dev *i2c_dev, i2c_dev->dma_phys, xfer_size, DMA_FROM_DEVICE);
- memcpy(i2c_dev->msg_buf, i2c_dev->dma_buf, msg->len); + memcpy(i2c_dev->msg_buf, i2c_dev->dma_buf, i2c_dev->msg_len); } }
@@ -1408,8 +1422,8 @@ static int tegra_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msgs[], ret = tegra_i2c_xfer_msg(i2c_dev, &msgs[i], MSG_END_CONTINUE); if (ret) break; - /* Set the read byte as msg len */ - msgs[i].len = msgs[i].buf[0]; + /* Set the msg length from first byte */ + msgs[i].len += msgs[i].buf[0]; dev_dbg(i2c_dev->dev, "reading %d bytes\n", msgs[i].len); } ret = tegra_i2c_xfer_msg(i2c_dev, &msgs[i], end_type);
From: Victor Nogueira victor@mojatatu.com
[ Upstream commit 526f28bd0fbdc699cda31426928802650c1528e5 ]
There are cases where the device is administratively UP, but operationally down. For example, we have a physical device (Nvidia ConnectX-6 Dx, 25Gbps) whose cable was pulled out; here is its ip link output:
5: ens2f1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000 link/ether b8:ce:f6:4b:68:35 brd ff:ff:ff:ff:ff:ff altname enp179s0f1np1
As you can see, it's administratively UP but operationally down. In this case, sending a packet to this port caused a nasty kernel hang (so nasty that we were unable to capture it). Aborting a transmit based on operational status (in addition to administrative status) fixes the issue.
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Acked-by: Jamal Hadi Salim jhs@mojatatu.com Signed-off-by: Victor Nogueira victor@mojatatu.com v1->v2: Add fixes tag v2->v3: Remove blank line between tags + add change log, suggested by Leon Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/sched/act_mirred.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/sched/act_mirred.c b/net/sched/act_mirred.c index baeae5e5c8f0c..36395e5db3b40 100644 --- a/net/sched/act_mirred.c +++ b/net/sched/act_mirred.c @@ -262,7 +262,7 @@ static int tcf_mirred_act(struct sk_buff *skb, const struct tc_action *a, goto out; }
- if (unlikely(!(dev->flags & IFF_UP))) { + if (unlikely(!(dev->flags & IFF_UP)) || !netif_carrier_ok(dev)) { net_notice_ratelimited("tc mirred to Houston: device %s is down\n", dev->name); goto out;
From: Hayes Wang hayeswang@realtek.com
[ Upstream commit 8ceda6d5a1e5402fd852e6cc59a286ce3dc545ee ]
Flow control becomes abnormal if the device sends a pause frame and tx/rx is disabled before a release frame is sent. This causes packet loss.
Set PLA_RX_FIFO_FULL and PLA_RX_FIFO_EMPTY to zero before disabling tx/rx, and toggle FC_PATCH_TASK before enabling tx/rx to reset the flow control patch and timer. The hardware can then clear its state, and flow control becomes normal again after tx/rx is enabled.
Besides, remove the inline keyword from fc_pause_on_auto() and fc_pause_off_auto().
Fixes: 195aae321c82 ("r8152: support new chips") Signed-off-by: Hayes Wang hayeswang@realtek.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/usb/r8152.c | 56 ++++++++++++++++++++++++++--------------- 1 file changed, 36 insertions(+), 20 deletions(-)
diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c index 23da1d9dafd1f..b0ce524ef1a50 100644 --- a/drivers/net/usb/r8152.c +++ b/drivers/net/usb/r8152.c @@ -5986,6 +5986,25 @@ static void rtl8153_disable(struct r8152 *tp) r8153_aldps_en(tp, true); }
+static u32 fc_pause_on_auto(struct r8152 *tp) +{ + return (ALIGN(mtu_to_size(tp->netdev->mtu), 1024) + 6 * 1024); +} + +static u32 fc_pause_off_auto(struct r8152 *tp) +{ + return (ALIGN(mtu_to_size(tp->netdev->mtu), 1024) + 14 * 1024); +} + +static void r8156_fc_parameter(struct r8152 *tp) +{ + u32 pause_on = tp->fc_pause_on ? tp->fc_pause_on : fc_pause_on_auto(tp); + u32 pause_off = tp->fc_pause_off ? tp->fc_pause_off : fc_pause_off_auto(tp); + + ocp_write_word(tp, MCU_TYPE_PLA, PLA_RX_FIFO_FULL, pause_on / 16); + ocp_write_word(tp, MCU_TYPE_PLA, PLA_RX_FIFO_EMPTY, pause_off / 16); +} + static int rtl8156_enable(struct r8152 *tp) { u32 ocp_data; @@ -5994,6 +6013,7 @@ static int rtl8156_enable(struct r8152 *tp) if (test_bit(RTL8152_UNPLUG, &tp->flags)) return -ENODEV;
+ r8156_fc_parameter(tp); set_tx_qlen(tp); rtl_set_eee_plus(tp); r8153_set_rx_early_timeout(tp); @@ -6025,9 +6045,24 @@ static int rtl8156_enable(struct r8152 *tp) ocp_write_word(tp, MCU_TYPE_USB, USB_L1_CTRL, ocp_data); }
+ ocp_data = ocp_read_word(tp, MCU_TYPE_USB, USB_FW_TASK); + ocp_data &= ~FC_PATCH_TASK; + ocp_write_word(tp, MCU_TYPE_USB, USB_FW_TASK, ocp_data); + usleep_range(1000, 2000); + ocp_data |= FC_PATCH_TASK; + ocp_write_word(tp, MCU_TYPE_USB, USB_FW_TASK, ocp_data); + return rtl_enable(tp); }
+static void rtl8156_disable(struct r8152 *tp) +{ + ocp_write_word(tp, MCU_TYPE_PLA, PLA_RX_FIFO_FULL, 0); + ocp_write_word(tp, MCU_TYPE_PLA, PLA_RX_FIFO_EMPTY, 0); + + rtl8153_disable(tp); +} + static int rtl8156b_enable(struct r8152 *tp) { u32 ocp_data; @@ -6429,25 +6464,6 @@ static void rtl8153c_up(struct r8152 *tp) r8153b_u1u2en(tp, true); }
-static inline u32 fc_pause_on_auto(struct r8152 *tp) -{ - return (ALIGN(mtu_to_size(tp->netdev->mtu), 1024) + 6 * 1024); -} - -static inline u32 fc_pause_off_auto(struct r8152 *tp) -{ - return (ALIGN(mtu_to_size(tp->netdev->mtu), 1024) + 14 * 1024); -} - -static void r8156_fc_parameter(struct r8152 *tp) -{ - u32 pause_on = tp->fc_pause_on ? tp->fc_pause_on : fc_pause_on_auto(tp); - u32 pause_off = tp->fc_pause_off ? tp->fc_pause_off : fc_pause_off_auto(tp); - - ocp_write_word(tp, MCU_TYPE_PLA, PLA_RX_FIFO_FULL, pause_on / 16); - ocp_write_word(tp, MCU_TYPE_PLA, PLA_RX_FIFO_EMPTY, pause_off / 16); -} - static void rtl8156_change_mtu(struct r8152 *tp) { u32 rx_max_size = mtu_to_size(tp->netdev->mtu); @@ -9377,7 +9393,7 @@ static int rtl_ops_init(struct r8152 *tp) case RTL_VER_10: ops->init = r8156_init; ops->enable = rtl8156_enable; - ops->disable = rtl8153_disable; + ops->disable = rtl8156_disable; ops->up = rtl8156_up; ops->down = rtl8156_down; ops->unload = rtl8153_unload;
From: Hayes Wang hayeswang@realtek.com
[ Upstream commit 61b0ad6f58e2066e054c6d4839d67974d2861a7d ]
Fix poor throughput on 2.5G devices when changing the speed from auto mode to force mode. This patch notifies the MAC when the mode is changed.
Fixes: 195aae321c82 ("r8152: support new chips") Signed-off-by: Hayes Wang hayeswang@realtek.com Reviewed-by: Andrew Lunn andrew@lunn.ch Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/usb/r8152.c | 14 ++++++++++++++ 1 file changed, 14 insertions(+)
diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c index b0ce524ef1a50..a7665accc81c8 100644 --- a/drivers/net/usb/r8152.c +++ b/drivers/net/usb/r8152.c @@ -199,6 +199,7 @@ #define OCP_EEE_AR 0xa41a #define OCP_EEE_DATA 0xa41c #define OCP_PHY_STATUS 0xa420 +#define OCP_INTR_EN 0xa424 #define OCP_NCTL_CFG 0xa42c #define OCP_POWER_CFG 0xa430 #define OCP_EEE_CFG 0xa432 @@ -620,6 +621,9 @@ enum spd_duplex { #define PHY_STAT_LAN_ON 3 #define PHY_STAT_PWRDN 5
+/* OCP_INTR_EN */ +#define INTR_SPEED_FORCE BIT(3) + /* OCP_NCTL_CFG */ #define PGA_RETURN_EN BIT(1)
@@ -7554,6 +7558,11 @@ static void r8156_hw_phy_cfg(struct r8152 *tp) ((swap_a & 0x1f) << 8) | ((swap_a >> 8) & 0x1f)); } + + /* Notify the MAC when the speed is changed to force mode. */ + data = ocp_reg_read(tp, OCP_INTR_EN); + data |= INTR_SPEED_FORCE; + ocp_reg_write(tp, OCP_INTR_EN, data); break; default: break; @@ -7949,6 +7958,11 @@ static void r8156b_hw_phy_cfg(struct r8152 *tp) break; }
+ /* Notify the MAC when the speed is changed to force mode. */ + data = ocp_reg_read(tp, OCP_INTR_EN); + data |= INTR_SPEED_FORCE; + ocp_reg_write(tp, OCP_INTR_EN, data); + if (rtl_phy_patch_request(tp, true, true)) return;
From: Hayes Wang hayeswang@realtek.com
[ Upstream commit cce8334f4aacd9936309a002d4a4de92a07cd2c2 ]
Move the r8153b_rx_agg_chg_indicate() call for 2.5G devices. r8153b_rx_agg_chg_indicate() has to be called after tx/rx is enabled; otherwise, the coalescing settings have no effect.
Fixes: 195aae321c82 ("r8152: support new chips") Signed-off-by: Hayes Wang hayeswang@realtek.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/usb/r8152.c | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c index a7665accc81c8..059d610901d84 100644 --- a/drivers/net/usb/r8152.c +++ b/drivers/net/usb/r8152.c @@ -3027,12 +3027,16 @@ static int rtl_enable(struct r8152 *tp) ocp_write_byte(tp, MCU_TYPE_PLA, PLA_CR, ocp_data);
switch (tp->version) { - case RTL_VER_08: - case RTL_VER_09: - case RTL_VER_14: - r8153b_rx_agg_chg_indicate(tp); + case RTL_VER_01: + case RTL_VER_02: + case RTL_VER_03: + case RTL_VER_04: + case RTL_VER_05: + case RTL_VER_06: + case RTL_VER_07: break; default: + r8153b_rx_agg_chg_indicate(tp); break; }
@@ -3086,7 +3090,6 @@ static void r8153_set_rx_early_timeout(struct r8152 *tp) 640 / 8); ocp_write_word(tp, MCU_TYPE_USB, USB_RX_EXTRA_AGGR_TMR, ocp_data); - r8153b_rx_agg_chg_indicate(tp); break;
default: @@ -3120,7 +3123,6 @@ static void r8153_set_rx_early_size(struct r8152 *tp) case RTL_VER_15: ocp_write_word(tp, MCU_TYPE_USB, USB_RX_EARLY_SIZE, ocp_data / 8); - r8153b_rx_agg_chg_indicate(tp); break; default: WARN_ON_ONCE(1);
From: Andy Moreton andy.moreton@amd.com
[ Upstream commit 281900a923d4c50df109b52a22ae3cdac150159b ]
The sfc driver does not report QSFP module EEPROM contents correctly as only the first page is fetched from hardware.
Commit 0e1a2a3e6e7d ("ethtool: Add SFF-8436 and SFF-8636 max EEPROM length definitions") added ETH_MODULE_SFF_8436_MAX_LEN for the overall size of the EEPROM info, so use that to report the full EEPROM contents.
Fixes: 9b17010da57a ("sfc: Add ethtool -m support for QSFP modules") Signed-off-by: Andy Moreton andy.moreton@amd.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/sfc/mcdi_port_common.c | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ethernet/sfc/mcdi_port_common.c b/drivers/net/ethernet/sfc/mcdi_port_common.c index 899cc16710048..0ab14f3d01d4d 100644 --- a/drivers/net/ethernet/sfc/mcdi_port_common.c +++ b/drivers/net/ethernet/sfc/mcdi_port_common.c @@ -972,12 +972,15 @@ static u32 efx_mcdi_phy_module_type(struct efx_nic *efx)
/* A QSFP+ NIC may actually have an SFP+ module attached. * The ID is page 0, byte 0. + * QSFP28 is of type SFF_8636, however, this is treated + * the same by ethtool, so we can also treat them the same. */ switch (efx_mcdi_phy_get_module_eeprom_byte(efx, 0, 0)) { - case 0x3: + case 0x3: /* SFP */ return MC_CMD_MEDIA_SFP_PLUS; - case 0xc: - case 0xd: + case 0xc: /* QSFP */ + case 0xd: /* QSFP+ */ + case 0x11: /* QSFP28 */ return MC_CMD_MEDIA_QSFP_PLUS; default: return 0; @@ -1075,7 +1078,7 @@ int efx_mcdi_phy_get_module_info(struct efx_nic *efx, struct ethtool_modinfo *mo
case MC_CMD_MEDIA_QSFP_PLUS: modinfo->type = ETH_MODULE_SFF_8436; - modinfo->eeprom_len = ETH_MODULE_SFF_8436_LEN; + modinfo->eeprom_len = ETH_MODULE_SFF_8436_MAX_LEN; break;
default:
From: David Howells dhowells@redhat.com
[ Upstream commit 0d098d83c5d9e107b2df7f5e11f81492f56d2fe7 ]
The hard call timeout is specified in the RXRPC_SET_CALL_TIMEOUT cmsg in seconds, so fix the point at which sendmsg() applies it to the call to convert to jiffies from seconds, not milliseconds.
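A small runnable illustration of the unit mix-up (HZ and the msecs_to_jiffies() stand-in below are simplified assumptions, not the kernel implementation): the cmsg value is in seconds, so it must be multiplied by HZ; passing seconds to a milliseconds converter shrinks a 10-second hard timeout to roughly 10 milliseconds.

#include <stdio.h>

#define HZ 250                                            /* illustrative tick rate */
#define msecs_to_jiffies(ms) (((ms) * HZ + 999) / 1000)   /* simplified stand-in */

int main(void)
{
    unsigned long hard = 10;  /* RXRPC_SET_CALL_TIMEOUT hard timeout, in seconds */

    printf("wrong: %lu jiffies (seconds treated as milliseconds)\n",
           (unsigned long)msecs_to_jiffies(hard));
    printf("right: %lu jiffies (10 seconds)\n", hard * HZ);
    return 0;
}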
Fixes: a158bdd3247b ("rxrpc: Fix timeout of a call that hasn't yet been granted a channel") Signed-off-by: David Howells dhowells@redhat.com cc: Marc Dionne marc.dionne@auristor.com cc: "David S. Miller" davem@davemloft.net cc: Eric Dumazet edumazet@google.com cc: Jakub Kicinski kuba@kernel.org cc: Paolo Abeni pabeni@redhat.com cc: linux-afs@lists.infradead.org cc: netdev@vger.kernel.org cc: linux-kernel@vger.kernel.org Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/rxrpc/sendmsg.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c index d4e4e94f4f987..71e40f91dd398 100644 --- a/net/rxrpc/sendmsg.c +++ b/net/rxrpc/sendmsg.c @@ -736,7 +736,7 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len) fallthrough; case 1: if (p.call.timeouts.hard > 0) { - j = msecs_to_jiffies(p.call.timeouts.hard); + j = p.call.timeouts.hard * HZ; now = jiffies; j += now; WRITE_ONCE(call->expect_term_by, j);
From: Guo Ren guoren@linux.alibaba.com
[ Upstream commit f9c4bbddece7eff1155c70d48e3c9c2a01b9d778 ]
../arch/riscv/kernel/compat_syscall_table.c:12:41: warning: initialized field overwritten [-Woverride-init] 12 | #define __SYSCALL(nr, call) [nr] = (call), | ^ ../include/uapi/asm-generic/unistd.h:567:1: note: in expansion of macro '__SYSCALL' 567 | __SYSCALL(__NR_semget, sys_semget)
Fixes: 59c10c52f573 ("riscv: compat: syscall: Add compat_sys_call_table implementation") Reviewed-by: Conor Dooley conor.dooley@microchip.com Reported-by: kernel test robot lkp@intel.com Tested-by: Jisheng Zhang jszhang@kernel.org Signed-off-by: Guo Ren guoren@linux.alibaba.com Signed-off-by: Guo Ren guoren@kernel.org Signed-off-by: Drew Fustini dfustini@baylibre.com Link: https://lore.kernel.org/r/20230501223353.2833899-1-dfustini@baylibre.com Signed-off-by: Palmer Dabbelt palmer@rivosinc.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/riscv/kernel/Makefile | 1 + 1 file changed, 1 insertion(+)
diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile index db6e4b1294ba3..ab333cb792fd9 100644 --- a/arch/riscv/kernel/Makefile +++ b/arch/riscv/kernel/Makefile @@ -9,6 +9,7 @@ CFLAGS_REMOVE_patch.o = $(CC_FLAGS_FTRACE) CFLAGS_REMOVE_sbi.o = $(CC_FLAGS_FTRACE) endif CFLAGS_syscall_table.o += $(call cc-option,-Wno-override-init,) +CFLAGS_compat_syscall_table.o += $(call cc-option,-Wno-override-init,)
ifdef CONFIG_KEXEC AFLAGS_kexec_relocate.o := -mcmodel=medany $(call cc-option,-mno-relax)
From: Radhakrishna Sripada radhakrishna.sripada@intel.com
[ Upstream commit 6ece90e3665a9b7fb2637fcca26cebd42991580b ]
CPU transcoder mask is used to iterate over the available CPU transcoders in the macro for_each_cpu_transcoder().
The macro is broken on MTL; this got highlighted when audio state tracking for each transcoder was added in [1].
Add the missing CPU transcoder mask, which is similar to the ADL-P mask but without the DSI transcoders.
[1]: https://patchwork.freedesktop.org/patch/523723/
Fixes: 7835303982d1 ("drm/i915/mtl: Add MeteorLake PCI IDs") Cc: Ville Syrjälä ville.syrjala@linux.intel.com Signed-off-by: Radhakrishna Sripada radhakrishna.sripada@intel.com Acked-by: Haridhar Kalvala haridhar.kalvala@intel.com Reviewed-by: Gustavo Sousa gustavo.sousa@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20230420221248.2511314-1-radha... (cherry picked from commit bddc18913bd44adae5c828fd514d570f43ba1576) Signed-off-by: Joonas Lahtinen joonas.lahtinen@linux.intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/i915/i915_pci.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c index fe4f279aaeb3e..a2efc0b9d50c8 100644 --- a/drivers/gpu/drm/i915/i915_pci.c +++ b/drivers/gpu/drm/i915/i915_pci.c @@ -1133,6 +1133,8 @@ static const struct intel_gt_definition xelpmp_extra_gt[] = { static const struct intel_device_info mtl_info = { XE_HP_FEATURES, XE_LPDP_FEATURES, + .__runtime.cpu_transcoder_mask = BIT(TRANSCODER_A) | BIT(TRANSCODER_B) | + BIT(TRANSCODER_C) | BIT(TRANSCODER_D), /* * Real graphics IP version will be obtained from hardware GMD_ID * register. Value provided here is just for sanity checking.
From: Jeremy Sowden jeremy@azazel.net
[ Upstream commit de4773f0235acf74554f6a64ea60adc0d7b01895 ]
1. Don't hard-code pkg-config
2. Remove distro-specific default for CFLAGS
3. Use pkg-config for LDLIBS
Fixes: a50a88f026fb ("selftests: netfilter: fix a build error on openSUSE") Suggested-by: Jan Engelhardt jengelh@inai.de Signed-off-by: Jeremy Sowden jeremy@azazel.net Signed-off-by: Pablo Neira Ayuso pablo@netfilter.org Signed-off-by: Sasha Levin sashal@kernel.org --- tools/testing/selftests/netfilter/Makefile | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/netfilter/Makefile b/tools/testing/selftests/netfilter/Makefile index 4504ee07be08d..3686bfa6c58d7 100644 --- a/tools/testing/selftests/netfilter/Makefile +++ b/tools/testing/selftests/netfilter/Makefile @@ -8,8 +8,11 @@ TEST_PROGS := nft_trans_stress.sh nft_fib.sh nft_nat.sh bridge_brouter.sh \ ipip-conntrack-mtu.sh conntrack_tcp_unreplied.sh \ conntrack_vrf.sh nft_synproxy.sh rpath.sh
-CFLAGS += $(shell pkg-config --cflags libmnl 2>/dev/null || echo "-I/usr/include/libmnl") -LDLIBS = -lmnl +HOSTPKG_CONFIG := pkg-config + +CFLAGS += $(shell $(HOSTPKG_CONFIG) --cflags libmnl 2>/dev/null) +LDLIBS += $(shell $(HOSTPKG_CONFIG) --libs libmnl 2>/dev/null || echo -lmnl) + TEST_GEN_FILES = nf-queue connect_close
include ../lib.mk
From: Geetha sowjanya gakula@marvell.com
[ Upstream commit 048486f81d01db4d100af021ee2ea211d19732a0 ]
The APR table contains the lmtst base address of the PF/VFs. These entries are updated by the PF/VF during device probe. The lmtst address is fetched from HW using the "TXN_REQ" and "ADDR_RSP_STS" registers. The lock protects these registers from getting overwritten when multiple PFs invoke rvu_get_lmtaddr() simultaneously.
For example, if PF1 submits a request and is preempted before it reads the response, and PF2 gets scheduled and submits its own request, then PF1's response is overwritten by PF2's response.
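A minimal sketch of the pattern being serialized (the pthread mutex and register stubs below are hypothetical stand-ins for rvu->rsrc_lock and the RVU_AF_SMMU_* register accessors): because the translation request and response go through a single pair of shared registers, the whole submit/poll/read sequence has to sit inside one critical section.

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins for the shared AF SMMU translation registers. */
static uint64_t addr_req_reg, addr_rsp_reg;
static pthread_mutex_t rsrc_lock = PTHREAD_MUTEX_INITIALIZER;

static void hw_submit_req(uint64_t iova) { addr_req_reg = iova; addr_rsp_reg = iova << 12; }
static int  hw_poll_done(void)           { return 1; }  /* pretend the HW answered */
static uint64_t hw_read_rsp(void)        { return addr_rsp_reg; }

/* Submit -> poll -> read under one lock, mirroring the fixed rvu_get_lmtaddr().
 * Without the lock, a second PF could overwrite both registers between another
 * PF's submit and its read of the response. */
static int get_lmt_addr(uint64_t iova, uint64_t *lmt_addr)
{
    int err = 0;

    pthread_mutex_lock(&rsrc_lock);
    hw_submit_req(iova);
    if (!hw_poll_done()) {
        err = -1;
        goto exit;
    }
    *lmt_addr = hw_read_rsp();
exit:
    pthread_mutex_unlock(&rsrc_lock);
    return err;
}

int main(void)
{
    uint64_t pa;

    if (!get_lmt_addr(0x1234, &pa))
        printf("lmt_addr=0x%llx\n", (unsigned long long)pa);
    return 0;
}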
Fixes: 893ae97214c3 ("octeontx2-af: cn10k: Support configurable LMTST regions") Signed-off-by: Geetha sowjanya gakula@marvell.com Signed-off-by: Sunil Kovvuri Goutham sgoutham@marvell.com Signed-off-by: Sai Krishna saikrishnag@marvell.com Reviewed-by: Simon Horman simon.horman@corigine.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- .../net/ethernet/marvell/octeontx2/af/rvu_cn10k.c | 13 ++++++++----- 1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c index 7dbbc115cde42..f9faa5b23bb9d 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c @@ -60,13 +60,14 @@ static int rvu_get_lmtaddr(struct rvu *rvu, u16 pcifunc, u64 iova, u64 *lmt_addr) { u64 pa, val, pf; - int err; + int err = 0;
if (!iova) { dev_err(rvu->dev, "%s Requested Null address for transulation\n", __func__); return -EINVAL; }
+ mutex_lock(&rvu->rsrc_lock); rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_SMMU_ADDR_REQ, iova); pf = rvu_get_pf(pcifunc) & 0x1F; val = BIT_ULL(63) | BIT_ULL(14) | BIT_ULL(13) | pf << 8 | @@ -76,12 +77,13 @@ static int rvu_get_lmtaddr(struct rvu *rvu, u16 pcifunc, err = rvu_poll_reg(rvu, BLKADDR_RVUM, RVU_AF_SMMU_ADDR_RSP_STS, BIT_ULL(0), false); if (err) { dev_err(rvu->dev, "%s LMTLINE iova transulation failed\n", __func__); - return err; + goto exit; } val = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_SMMU_ADDR_RSP_STS); if (val & ~0x1ULL) { dev_err(rvu->dev, "%s LMTLINE iova transulation failed err:%llx\n", __func__, val); - return -EIO; + err = -EIO; + goto exit; } /* PA[51:12] = RVU_AF_SMMU_TLN_FLIT0[57:18] * PA[11:0] = IOVA[11:0] @@ -89,8 +91,9 @@ static int rvu_get_lmtaddr(struct rvu *rvu, u16 pcifunc, pa = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_SMMU_TLN_FLIT0) >> 18; pa &= GENMASK_ULL(39, 0); *lmt_addr = (pa << 12) | (iova & 0xFFF); - - return 0; +exit: + mutex_unlock(&rvu->rsrc_lock); + return err; }
static int rvu_update_lmtaddr(struct rvu *rvu, u16 pcifunc, u64 lmt_addr)
From: Ratheesh Kannoth rkannoth@marvell.com
[ Upstream commit c60a6b90e7890453f09e0d2163d6acadabe3415b ]
In the current driver, the NPC exact match feature was not getting enabled because the configured bit was not read properly. for_each_set_bit_from() needs its size argument to be one bit past the last position in the bitmap in order to read the NPC exact nibble enable bits properly. This patch fixes that.
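A runnable analogue of the off-by-one (a plain loop stands in for for_each_set_bit_from(); the bit positions are made up for illustration): the size argument is exclusive, so it must be one past the last bit index to scan, and passing the start position as the size scans nothing.

#include <stdio.h>

/* Userspace analogue of for_each_set_bit_from(): scan set bits in [start, size). */
static void scan_bits(unsigned long map, unsigned int start, unsigned int size)
{
    unsigned int bit;

    for (bit = start; bit < size; bit++)
        if (map & (1UL << bit))
            printf("  bit %u set\n", bit);
}

int main(void)
{
    /* Pretend the exact-match nibble enables live in bits 4..9. */
    unsigned long masked_cfg = 0x3f0;
    unsigned int start = 4, end = 9;

    printf("size = start (broken, finds nothing):\n");
    scan_bits(masked_cfg, start, start);

    printf("size = end + 1 (fixed, finds all six):\n");
    scan_bits(masked_cfg, start, end + 1);
    return 0;
}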
Fixes: b747923afff8 ("octeontx2-af: Exact match support") Signed-off-by: Ratheesh Kannoth rkannoth@marvell.com Signed-off-by: Sunil Kovvuri Goutham sgoutham@marvell.com Signed-off-by: Sai Krishna saikrishnag@marvell.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c index 7c4e1acd0f77b..67c85382eef62 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c @@ -553,8 +553,7 @@ static int npc_scan_kex(struct rvu *rvu, int blkaddr, u8 intf) */ masked_cfg = cfg & NPC_EXACT_NIBBLE; bitnr = NPC_EXACT_NIBBLE_START; - for_each_set_bit_from(bitnr, (unsigned long *)&masked_cfg, - NPC_EXACT_NIBBLE_START) { + for_each_set_bit_from(bitnr, (unsigned long *)&masked_cfg, NPC_EXACT_NIBBLE_END + 1) { npc_scan_exact_result(mcam, bitnr, key_nibble, intf); key_nibble++; }
From: Ratheesh Kannoth rkannoth@marvell.com
[ Upstream commit 60999cb83554ebcf6cfff8894bc2c3d99ea858ba ]
In the current driver, the NPC CAM and mem table sizes are read from the wrong bit fields of the register. This patch fixes the field offsets so that the correct values are populated on read.
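As background (a generic sketch, not the driver code): FIELD_GET() returns the field selected by a GENMASK_ULL() range, already shifted down to bit 0, so reading with the wrong mask silently yields a different field:

    u64 reg   = 0x12340007ULL;
    u64 depth = FIELD_GET(GENMASK_ULL(15, 0),  reg);  /* 0x0007 - bits 15:0  */
    u64 ways  = FIELD_GET(GENMASK_ULL(19, 16), reg);  /* 0x4    - bits 19:16 */
    u64 cam   = FIELD_GET(GENMASK_ULL(31, 24), reg);  /* 0x12   - bits 31:24 */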
Fixes: b747923afff8 ("octeontx2-af: Exact match support") Signed-off-by: Ratheesh Kannoth rkannoth@marvell.com Signed-off-by: Sunil Kovvuri Goutham sgoutham@marvell.com Signed-off-by: Sai Krishna saikrishnag@marvell.com Reviewed-by: Simon Horman simon.horman@corigine.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c index 594029007f85d..5aed3dbf3b991 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c @@ -1879,9 +1879,9 @@ int rvu_npc_exact_init(struct rvu *rvu) rvu->hw->table = table;
/* Read table size, ways and depth */ - table->mem_table.depth = FIELD_GET(GENMASK_ULL(31, 24), npc_const3); table->mem_table.ways = FIELD_GET(GENMASK_ULL(19, 16), npc_const3); - table->cam_table.depth = FIELD_GET(GENMASK_ULL(15, 0), npc_const3); + table->mem_table.depth = FIELD_GET(GENMASK_ULL(15, 0), npc_const3); + table->cam_table.depth = FIELD_GET(GENMASK_ULL(31, 24), npc_const3);
dev_dbg(rvu->dev, "%s: NPC exact match 4way_2k table(ways=%d, depth=%d)\n", __func__, table->mem_table.ways, table->cam_table.depth);
From: Ratheesh Kannoth rkannoth@marvell.com
[ Upstream commit 2a6eecc592b4d59a04d513aa25fc0f30d52100cd ]
CN10KB supports insertion of a larger number of DMAC filter flows. Increase the field size to accommodate them.
Fixes: b747923afff8 ("octeontx2-af: Exact match support") Signed-off-by: Ratheesh Kannoth rkannoth@marvell.com Signed-off-by: Sunil Kovvuri Goutham sgoutham@marvell.com Signed-off-by: Sai Krishna saikrishnag@marvell.com Reviewed-by: Simon Horman simon.horman@corigine.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h index 634ca4140346b..241016ca64d05 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h @@ -332,11 +332,11 @@ struct otx2_flow_config { #define OTX2_PER_VF_VLAN_FLOWS 2 /* Rx + Tx per VF */ #define OTX2_VF_VLAN_RX_INDEX 0 #define OTX2_VF_VLAN_TX_INDEX 1 - u16 max_flows; - u8 dmacflt_max_flows; u32 *bmap_to_dmacindex; unsigned long *dmacflt_bmap; struct list_head flow_list; + u32 dmacflt_max_flows; + u16 max_flows; };
struct otx2_tc_info {
From: Suman Ghosh sumang@marvell.com
[ Upstream commit 2cee6401c4eaa562abc1d437d5d03e80bbee79c1 ]
1. It is possible to have custom mkex profiles which do not extract the DMAC into the key at all. Hence allow mkex profiles which do not extract the DMAC to be loaded into the MCAM hardware. This patch also adds the debug prints needed to identify profiles with a wrong configuration.
2. If an mkex profile sets the "l2l3mb" field for the Rx interface, then the Rx multicast and broadcast entries should be configured.
Signed-off-by: Suman Ghosh sumang@marvell.com Link: https://lore.kernel.org/r/20221031090856.1404303-1-sumang@marvell.com Signed-off-by: Jakub Kicinski kuba@kernel.org Stable-dep-of: 406bed11fb91 ("octeontx2-af: Update/Fix NPC field hash extract feature") Signed-off-by: Sasha Levin sashal@kernel.org --- .../net/ethernet/marvell/octeontx2/af/npc.h | 1 + .../marvell/octeontx2/af/rvu_debugfs.c | 6 ++ .../marvell/octeontx2/af/rvu_npc_fs.c | 81 ++++++++++++++----- 3 files changed, 70 insertions(+), 18 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/npc.h b/drivers/net/ethernet/marvell/octeontx2/af/npc.h index f187293e3e084..d027c23b8ef8e 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/npc.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/npc.h @@ -620,6 +620,7 @@ struct rvu_npc_mcam_rule { bool vfvlan_cfg; u16 chan; u16 chan_mask; + u8 lxmb; };
#endif /* NPC_H */ diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c index 0cab27448399c..aadc352c2ffbd 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c @@ -2759,6 +2759,12 @@ static void rvu_dbg_npc_mcam_show_flows(struct seq_file *s, for_each_set_bit(bit, (unsigned long *)&rule->features, 64) { seq_printf(s, "\t%s ", npc_get_field_name(bit)); switch (bit) { + case NPC_LXMB: + if (rule->lxmb == 1) + seq_puts(s, "\tL2M nibble is set\n"); + else + seq_puts(s, "\tL2B nibble is set\n"); + break; case NPC_DMAC: seq_printf(s, "%pM ", rule->packet.dmac); seq_printf(s, "mask %pM\n", rule->mask.dmac); diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c index 67c85382eef62..282d85846647a 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c @@ -43,6 +43,7 @@ static const char * const npc_flow_names[] = { [NPC_DPORT_UDP] = "udp destination port", [NPC_SPORT_SCTP] = "sctp source port", [NPC_DPORT_SCTP] = "sctp destination port", + [NPC_LXMB] = "Mcast/Bcast header ", [NPC_UNKNOWN] = "unknown", };
@@ -340,8 +341,10 @@ static void npc_handle_multi_layer_fields(struct rvu *rvu, int blkaddr, u8 intf) vlan_tag2 = &key_fields[NPC_VLAN_TAG2];
/* if key profile programmed does not extract Ethertype at all */ - if (!etype_ether->nr_kws && !etype_tag1->nr_kws && !etype_tag2->nr_kws) + if (!etype_ether->nr_kws && !etype_tag1->nr_kws && !etype_tag2->nr_kws) { + dev_err(rvu->dev, "mkex: Ethertype is not extracted.\n"); goto vlan_tci; + }
/* if key profile programmed extracts Ethertype from one layer */ if (etype_ether->nr_kws && !etype_tag1->nr_kws && !etype_tag2->nr_kws) @@ -354,35 +357,45 @@ static void npc_handle_multi_layer_fields(struct rvu *rvu, int blkaddr, u8 intf) /* if key profile programmed extracts Ethertype from multiple layers */ if (etype_ether->nr_kws && etype_tag1->nr_kws) { for (i = 0; i < NPC_MAX_KWS_IN_KEY; i++) { - if (etype_ether->kw_mask[i] != etype_tag1->kw_mask[i]) + if (etype_ether->kw_mask[i] != etype_tag1->kw_mask[i]) { + dev_err(rvu->dev, "mkex: Etype pos is different for untagged and tagged pkts.\n"); goto vlan_tci; + } } key_fields[NPC_ETYPE] = *etype_tag1; } if (etype_ether->nr_kws && etype_tag2->nr_kws) { for (i = 0; i < NPC_MAX_KWS_IN_KEY; i++) { - if (etype_ether->kw_mask[i] != etype_tag2->kw_mask[i]) + if (etype_ether->kw_mask[i] != etype_tag2->kw_mask[i]) { + dev_err(rvu->dev, "mkex: Etype pos is different for untagged and double tagged pkts.\n"); goto vlan_tci; + } } key_fields[NPC_ETYPE] = *etype_tag2; } if (etype_tag1->nr_kws && etype_tag2->nr_kws) { for (i = 0; i < NPC_MAX_KWS_IN_KEY; i++) { - if (etype_tag1->kw_mask[i] != etype_tag2->kw_mask[i]) + if (etype_tag1->kw_mask[i] != etype_tag2->kw_mask[i]) { + dev_err(rvu->dev, "mkex: Etype pos is different for tagged and double tagged pkts.\n"); goto vlan_tci; + } } key_fields[NPC_ETYPE] = *etype_tag2; }
/* check none of higher layers overwrite Ethertype */ start_lid = key_fields[NPC_ETYPE].layer_mdata.lid + 1; - if (npc_check_overlap(rvu, blkaddr, NPC_ETYPE, start_lid, intf)) + if (npc_check_overlap(rvu, blkaddr, NPC_ETYPE, start_lid, intf)) { + dev_err(rvu->dev, "mkex: Ethertype is overwritten by higher layers.\n"); goto vlan_tci; + } *features |= BIT_ULL(NPC_ETYPE); vlan_tci: /* if key profile does not extract outer vlan tci at all */ - if (!vlan_tag1->nr_kws && !vlan_tag2->nr_kws) + if (!vlan_tag1->nr_kws && !vlan_tag2->nr_kws) { + dev_err(rvu->dev, "mkex: Outer vlan tci is not extracted.\n"); goto done; + }
/* if key profile extracts outer vlan tci from one layer */ if (vlan_tag1->nr_kws && !vlan_tag2->nr_kws) @@ -393,15 +406,19 @@ static void npc_handle_multi_layer_fields(struct rvu *rvu, int blkaddr, u8 intf) /* if key profile extracts outer vlan tci from multiple layers */ if (vlan_tag1->nr_kws && vlan_tag2->nr_kws) { for (i = 0; i < NPC_MAX_KWS_IN_KEY; i++) { - if (vlan_tag1->kw_mask[i] != vlan_tag2->kw_mask[i]) + if (vlan_tag1->kw_mask[i] != vlan_tag2->kw_mask[i]) { + dev_err(rvu->dev, "mkex: Out vlan tci pos is different for tagged and double tagged pkts.\n"); goto done; + } } key_fields[NPC_OUTER_VID] = *vlan_tag2; } /* check none of higher layers overwrite outer vlan tci */ start_lid = key_fields[NPC_OUTER_VID].layer_mdata.lid + 1; - if (npc_check_overlap(rvu, blkaddr, NPC_OUTER_VID, start_lid, intf)) + if (npc_check_overlap(rvu, blkaddr, NPC_OUTER_VID, start_lid, intf)) { + dev_err(rvu->dev, "mkex: Outer vlan tci is overwritten by higher layers.\n"); goto done; + } *features |= BIT_ULL(NPC_OUTER_VID); done: return; @@ -522,6 +539,10 @@ static void npc_set_features(struct rvu *rvu, int blkaddr, u8 intf) if (npc_check_field(rvu, blkaddr, NPC_LB, intf)) *features |= BIT_ULL(NPC_VLAN_ETYPE_CTAG) | BIT_ULL(NPC_VLAN_ETYPE_STAG); + + /* for L2M/L2B/L3M/L3B, check if the type is present in the key */ + if (npc_check_field(rvu, blkaddr, NPC_LXMB, intf)) + *features |= BIT_ULL(NPC_LXMB); }
/* Scan key extraction profile and record how fields of our interest @@ -598,16 +619,6 @@ static int npc_scan_verify_kex(struct rvu *rvu, int blkaddr) dev_err(rvu->dev, "Channel cannot be overwritten\n"); return -EINVAL; } - /* DMAC should be present in key for unicast filter to work */ - if (!npc_is_field_present(rvu, NPC_DMAC, NIX_INTF_RX)) { - dev_err(rvu->dev, "DMAC not present in Key\n"); - return -EINVAL; - } - /* check that none of the fields overwrite DMAC */ - if (npc_check_overlap(rvu, blkaddr, NPC_DMAC, 0, NIX_INTF_RX)) { - dev_err(rvu->dev, "DMAC cannot be overwritten\n"); - return -EINVAL; - }
npc_set_features(rvu, blkaddr, NIX_INTF_TX); npc_set_features(rvu, blkaddr, NIX_INTF_RX); @@ -850,6 +861,11 @@ static void npc_update_flow(struct rvu *rvu, struct mcam_entry *entry, npc_update_entry(rvu, NPC_LE, entry, NPC_LT_LE_ESP, 0, ~0ULL, 0, intf);
+ if (features & BIT_ULL(NPC_LXMB)) { + output->lxmb = is_broadcast_ether_addr(pkt->dmac) ? 2 : 1; + npc_update_entry(rvu, NPC_LXMB, entry, output->lxmb, 0, + output->lxmb, 0, intf); + } #define NPC_WRITE_FLOW(field, member, val_lo, val_hi, mask_lo, mask_hi) \ do { \ if (features & BIT_ULL((field))) { \ @@ -1152,6 +1168,7 @@ static int npc_install_flow(struct rvu *rvu, int blkaddr, u16 target, rule->chan_mask = write_req.entry_data.kw_mask[0] & NPC_KEX_CHAN_MASK; rule->chan = write_req.entry_data.kw[0] & NPC_KEX_CHAN_MASK; rule->chan &= rule->chan_mask; + rule->lxmb = dummy.lxmb; if (is_npc_intf_tx(req->intf)) rule->intf = pfvf->nix_tx_intf; else @@ -1214,6 +1231,34 @@ int rvu_mbox_handler_npc_install_flow(struct rvu *rvu, if (!is_npc_interface_valid(rvu, req->intf)) return NPC_FLOW_INTF_INVALID;
+ /* If DMAC is not extracted in MKEX, rules installed by AF + * can rely on L2MB bit set by hardware protocol checker for + * broadcast and multicast addresses. + */ + if (npc_check_field(rvu, blkaddr, NPC_DMAC, req->intf)) + goto process_flow; + + if (is_pffunc_af(req->hdr.pcifunc)) { + if (is_unicast_ether_addr(req->packet.dmac)) { + dev_err(rvu->dev, + "%s: mkex profile does not support ucast flow\n", + __func__); + return NPC_FLOW_NOT_SUPPORTED; + } + + if (!npc_is_field_present(rvu, NPC_LXMB, req->intf)) { + dev_err(rvu->dev, + "%s: mkex profile does not support bcast/mcast flow", + __func__); + return NPC_FLOW_NOT_SUPPORTED; + } + + /* Modify feature to use LXMB instead of DMAC */ + req->features &= ~BIT_ULL(NPC_DMAC); + req->features |= BIT_ULL(NPC_LXMB); + } + +process_flow: if (from_vf && req->default_rule) return NPC_FLOW_VF_PERM_DENIED;
From: Suman Ghosh sumang@marvell.com
[ Upstream commit 674b3e164238a31f236ac63f82d5d160f7d4c201 ]
1. If a profile does not support DMAC extraction then avoid installing NPC flow rules for unicast. Similarly, if LXMB (L2 and L3) extraction is not supported by the profile then avoid installing broadcast and multicast rules.
2. Allow MCAM entry insertion for promiscuous mode.
3. For profiles where the DMAC is not extracted into the MKEX key, the default unicast entry installed by the AF is not valid. Hence do not use the action from the AF-installed default unicast entry in such cases.
4. Adjacent packet header fields, such as the IP header source and destination addresses or the UDP/TCP source and destination ports, can be extracted together in an MKEX profile. Therefore an MKEX profile can be configured in two ways (see the sketch below):
   a. a total of 4 bytes from the start of the UDP header (source port + destination port), or
   b. two bytes from offset 0 and two bytes from offset 2.
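For the UDP case in point 4, the two equivalent extraction layouts look like this (an illustrative sketch of byte offsets, not profile syntax):

    UDP header: offset 0..1 = source port, offset 2..3 = destination port

    (a) one extractor  : offset 0, length 4  -> sport and dport together
    (b) two extractors : offset 0, length 2  -> sport
                         offset 2, length 2  -> dport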
Signed-off-by: Suman Ghosh sumang@marvell.com Signed-off-by: Subbaraya Sundeep sbhatta@marvell.com Link: https://lore.kernel.org/r/20221118053329.2288486-1-sumang@marvell.com Signed-off-by: Paolo Abeni pabeni@redhat.com Stable-dep-of: 406bed11fb91 ("octeontx2-af: Update/Fix NPC field hash extract feature") Signed-off-by: Sasha Levin sashal@kernel.org --- .../net/ethernet/marvell/octeontx2/af/mbox.h | 14 ++++ .../net/ethernet/marvell/octeontx2/af/rvu.h | 1 + .../ethernet/marvell/octeontx2/af/rvu_npc.c | 22 ++++++ .../marvell/octeontx2/af/rvu_npc_fs.c | 76 +++++++++++++++---- .../marvell/octeontx2/nic/otx2_flows.c | 27 ++++++- 5 files changed, 124 insertions(+), 16 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h index 8d5d5a0f68c44..c7c92c7510fa6 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h @@ -245,6 +245,9 @@ M(NPC_MCAM_GET_STATS, 0x6012, npc_mcam_entry_stats, \ M(NPC_GET_SECRET_KEY, 0x6013, npc_get_secret_key, \ npc_get_secret_key_req, \ npc_get_secret_key_rsp) \ +M(NPC_GET_FIELD_STATUS, 0x6014, npc_get_field_status, \ + npc_get_field_status_req, \ + npc_get_field_status_rsp) \ /* NIX mbox IDs (range 0x8000 - 0xFFFF) */ \ M(NIX_LF_ALLOC, 0x8000, nix_lf_alloc, \ nix_lf_alloc_req, nix_lf_alloc_rsp) \ @@ -1541,6 +1544,17 @@ struct ptp_rsp { u64 clk; };
+struct npc_get_field_status_req { + struct mbox_msghdr hdr; + u8 intf; + u8 field; +}; + +struct npc_get_field_status_rsp { + struct mbox_msghdr hdr; + u8 enable; +}; + struct set_vf_perm { struct mbox_msghdr hdr; u16 vf; diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h index 3ca8d2e5f979b..d493b533cf76e 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h @@ -851,6 +851,7 @@ int npc_install_mcam_drop_rule(struct rvu *rvu, int mcam_idx, u16 *counter_idx, u64 chan_val, u64 chan_mask, u64 exact_val, u64 exact_mask, u64 bcast_mcast_val, u64 bcast_mcast_mask); void npc_mcam_rsrcs_reserve(struct rvu *rvu, int blkaddr, int entry_idx); +bool npc_is_feature_supported(struct rvu *rvu, u64 features, u8 intf);
/* CPT APIs */ int rvu_cpt_register_interrupts(struct rvu *rvu); diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c index 1e348fd0d930e..16cfc802e348d 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c @@ -617,6 +617,12 @@ void rvu_npc_install_ucast_entry(struct rvu *rvu, u16 pcifunc, if (blkaddr < 0) return;
+ /* Ucast rule should not be installed if DMAC + * extraction is not supported by the profile. + */ + if (!npc_is_feature_supported(rvu, BIT_ULL(NPC_DMAC), pfvf->nix_rx_intf)) + return; + index = npc_get_nixlf_mcam_index(mcam, pcifunc, nixlf, NIXLF_UCAST_ENTRY);
@@ -778,6 +784,14 @@ void rvu_npc_install_bcast_match_entry(struct rvu *rvu, u16 pcifunc, /* Get 'pcifunc' of PF device */ pcifunc = pcifunc & ~RVU_PFVF_FUNC_MASK; pfvf = rvu_get_pfvf(rvu, pcifunc); + + /* Bcast rule should not be installed if both DMAC + * and LXMB extraction is not supported by the profile. + */ + if (!npc_is_feature_supported(rvu, BIT_ULL(NPC_DMAC), pfvf->nix_rx_intf) && + !npc_is_feature_supported(rvu, BIT_ULL(NPC_LXMB), pfvf->nix_rx_intf)) + return; + index = npc_get_nixlf_mcam_index(mcam, pcifunc, nixlf, NIXLF_BCAST_ENTRY);
@@ -848,6 +862,14 @@ void rvu_npc_install_allmulti_entry(struct rvu *rvu, u16 pcifunc, int nixlf, vf_func = pcifunc & RVU_PFVF_FUNC_MASK; pcifunc = pcifunc & ~RVU_PFVF_FUNC_MASK; pfvf = rvu_get_pfvf(rvu, pcifunc); + + /* Mcast rule should not be installed if both DMAC + * and LXMB extraction is not supported by the profile. + */ + if (!npc_is_feature_supported(rvu, BIT_ULL(NPC_DMAC), pfvf->nix_rx_intf) && + !npc_is_feature_supported(rvu, BIT_ULL(NPC_LXMB), pfvf->nix_rx_intf)) + return; + index = npc_get_nixlf_mcam_index(mcam, pcifunc, nixlf, NIXLF_ALLMULTI_ENTRY);
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c index 282d85846647a..0a3b591c1ef5a 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c @@ -47,6 +47,19 @@ static const char * const npc_flow_names[] = { [NPC_UNKNOWN] = "unknown", };
+bool npc_is_feature_supported(struct rvu *rvu, u64 features, u8 intf) +{ + struct npc_mcam *mcam = &rvu->hw->mcam; + u64 mcam_features; + u64 unsupported; + + mcam_features = is_npc_intf_tx(intf) ? mcam->tx_features : mcam->rx_features; + unsupported = (mcam_features ^ features) & ~mcam_features; + + /* Return false if at least one of the input flows is not extracted */ + return !unsupported; +} + const char *npc_get_field_name(u8 hdr) { if (hdr >= ARRAY_SIZE(npc_flow_names)) @@ -436,8 +449,6 @@ static void npc_scan_ldata(struct rvu *rvu, int blkaddr, u8 lid, nr_bytes = FIELD_GET(NPC_BYTESM, cfg) + 1; hdr = FIELD_GET(NPC_HDR_OFFSET, cfg); key = FIELD_GET(NPC_KEY_OFFSET, cfg); - start_kwi = key / 8; - offset = (key * 8) % 64;
/* For Tx, Layer A has NIX_INST_HDR_S(64 bytes) preceding * ethernet header. @@ -452,13 +463,18 @@ static void npc_scan_ldata(struct rvu *rvu, int blkaddr, u8 lid,
#define NPC_SCAN_HDR(name, hlid, hlt, hstart, hlen) \ do { \ + start_kwi = key / 8; \ + offset = (key * 8) % 64; \ if (lid == (hlid) && lt == (hlt)) { \ if ((hstart) >= hdr && \ ((hstart) + (hlen)) <= (hdr + nr_bytes)) { \ bit_offset = (hdr + nr_bytes - (hstart) - (hlen)) * 8; \ npc_set_layer_mdata(mcam, (name), cfg, lid, lt, intf); \ + offset += bit_offset; \ + start_kwi += offset / 64; \ + offset %= 64; \ npc_set_kw_masks(mcam, (name), (hlen) * 8, \ - start_kwi, offset + bit_offset, intf);\ + start_kwi, offset, intf); \ } \ } \ } while (0) @@ -649,9 +665,9 @@ static int npc_check_unsupported_flows(struct rvu *rvu, u64 features, u8 intf)
unsupported = (*mcam_features ^ features) & ~(*mcam_features); if (unsupported) { - dev_info(rvu->dev, "Unsupported flow(s):\n"); + dev_warn(rvu->dev, "Unsupported flow(s):\n"); for_each_set_bit(bit, (unsigned long *)&unsupported, 64) - dev_info(rvu->dev, "%s ", npc_get_field_name(bit)); + dev_warn(rvu->dev, "%s ", npc_get_field_name(bit)); return -EOPNOTSUPP; }
@@ -1006,8 +1022,20 @@ static void npc_update_rx_entry(struct rvu *rvu, struct rvu_pfvf *pfvf, action.match_id = req->match_id; action.flow_key_alg = req->flow_key_alg;
- if (req->op == NIX_RX_ACTION_DEFAULT && pfvf->def_ucast_rule) - action = pfvf->def_ucast_rule->rx_action; + if (req->op == NIX_RX_ACTION_DEFAULT) { + if (pfvf->def_ucast_rule) { + action = pfvf->def_ucast_rule->rx_action; + } else { + /* For profiles which do not extract DMAC, the default + * unicast entry is unused. Hence modify action for the + * requests which use same action as default unicast + * entry + */ + *(u64 *)&action = 0; + action.pf_func = target; + action.op = NIX_RX_ACTIONOP_UCAST; + } + }
entry->action = *(u64 *)&action;
@@ -1238,18 +1266,19 @@ int rvu_mbox_handler_npc_install_flow(struct rvu *rvu, if (npc_check_field(rvu, blkaddr, NPC_DMAC, req->intf)) goto process_flow;
- if (is_pffunc_af(req->hdr.pcifunc)) { + if (is_pffunc_af(req->hdr.pcifunc) && + req->features & BIT_ULL(NPC_DMAC)) { if (is_unicast_ether_addr(req->packet.dmac)) { - dev_err(rvu->dev, - "%s: mkex profile does not support ucast flow\n", - __func__); + dev_warn(rvu->dev, + "%s: mkex profile does not support ucast flow\n", + __func__); return NPC_FLOW_NOT_SUPPORTED; }
if (!npc_is_field_present(rvu, NPC_LXMB, req->intf)) { - dev_err(rvu->dev, - "%s: mkex profile does not support bcast/mcast flow", - __func__); + dev_warn(rvu->dev, + "%s: mkex profile does not support bcast/mcast flow", + __func__); return NPC_FLOW_NOT_SUPPORTED; }
@@ -1602,3 +1631,22 @@ int npc_install_mcam_drop_rule(struct rvu *rvu, int mcam_idx, u16 *counter_idx,
return 0; } + +int rvu_mbox_handler_npc_get_field_status(struct rvu *rvu, + struct npc_get_field_status_req *req, + struct npc_get_field_status_rsp *rsp) +{ + int blkaddr; + + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0); + if (blkaddr < 0) + return NPC_MCAM_INVALID_REQ; + + if (!is_npc_interface_valid(rvu, req->intf)) + return NPC_FLOW_INTF_INVALID; + + if (npc_check_field(rvu, blkaddr, req->field, req->intf)) + rsp->enable = 1; + + return 0; +} diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c index 0f7345a96965b..d0554f6d26731 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c @@ -164,6 +164,8 @@ EXPORT_SYMBOL(otx2_alloc_mcam_entries); static int otx2_mcam_entry_init(struct otx2_nic *pfvf) { struct otx2_flow_config *flow_cfg = pfvf->flow_cfg; + struct npc_get_field_status_req *freq; + struct npc_get_field_status_rsp *frsp; struct npc_mcam_alloc_entry_req *req; struct npc_mcam_alloc_entry_rsp *rsp; int vf_vlan_max_flows; @@ -214,8 +216,29 @@ static int otx2_mcam_entry_init(struct otx2_nic *pfvf) flow_cfg->rx_vlan_offset = flow_cfg->unicast_offset + OTX2_MAX_UNICAST_FLOWS; pfvf->flags |= OTX2_FLAG_UCAST_FLTR_SUPPORT; - pfvf->flags |= OTX2_FLAG_RX_VLAN_SUPPORT; - pfvf->flags |= OTX2_FLAG_VF_VLAN_SUPPORT; + + /* Check if NPC_DMAC field is supported + * by the mkex profile before setting VLAN support flag. + */ + freq = otx2_mbox_alloc_msg_npc_get_field_status(&pfvf->mbox); + if (!freq) { + mutex_unlock(&pfvf->mbox.lock); + return -ENOMEM; + } + + freq->field = NPC_DMAC; + if (otx2_sync_mbox_msg(&pfvf->mbox)) { + mutex_unlock(&pfvf->mbox.lock); + return -EINVAL; + } + + frsp = (struct npc_get_field_status_rsp *)otx2_mbox_get_rsp + (&pfvf->mbox.mbox, 0, &freq->hdr); + + if (frsp->enable) { + pfvf->flags |= OTX2_FLAG_RX_VLAN_SUPPORT; + pfvf->flags |= OTX2_FLAG_VF_VLAN_SUPPORT; + }
pfvf->flags |= OTX2_FLAG_MCAM_ENTRIES_ALLOC; mutex_unlock(&pfvf->mbox.lock);
From: Ratheesh Kannoth rkannoth@marvell.com
[ Upstream commit 406bed11fb91a0b35c26fe633d8700febaec6439 ]
1. As per the previous implementation, the mask and control parameters used to generate the field hash value were not passed to the caller. As part of the fix, the secret key mbox is updated to share that information as well.
2. The earlier implementation did not consider hash reduction of both the source and destination IPv6 addresses; only the source IPv6 address was considered. This fix solves that and provides an option to hash both.
Fixes: 56d9f5fd2246 ("octeontx2-af: Use hashed field in MCAM key") Signed-off-by: Ratheesh Kannoth rkannoth@marvell.com Signed-off-by: Sunil Kovvuri Goutham sgoutham@marvell.com Signed-off-by: Sai Krishna saikrishnag@marvell.com Reviewed-by: Simon Horman simon.horman@corigine.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- .../net/ethernet/marvell/octeontx2/af/mbox.h | 16 +++++--- .../marvell/octeontx2/af/rvu_npc_hash.c | 37 ++++++++++++------- .../marvell/octeontx2/af/rvu_npc_hash.h | 6 +++ 3 files changed, 41 insertions(+), 18 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h index c7c92c7510fa6..5921a78661e41 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h @@ -242,9 +242,9 @@ M(NPC_MCAM_READ_BASE_RULE, 0x6011, npc_read_base_steer_rule, \ M(NPC_MCAM_GET_STATS, 0x6012, npc_mcam_entry_stats, \ npc_mcam_get_stats_req, \ npc_mcam_get_stats_rsp) \ -M(NPC_GET_SECRET_KEY, 0x6013, npc_get_secret_key, \ - npc_get_secret_key_req, \ - npc_get_secret_key_rsp) \ +M(NPC_GET_FIELD_HASH_INFO, 0x6013, npc_get_field_hash_info, \ + npc_get_field_hash_info_req, \ + npc_get_field_hash_info_rsp) \ M(NPC_GET_FIELD_STATUS, 0x6014, npc_get_field_status, \ npc_get_field_status_req, \ npc_get_field_status_rsp) \ @@ -1513,14 +1513,20 @@ struct npc_mcam_get_stats_rsp { u8 stat_ena; /* enabled */ };
-struct npc_get_secret_key_req { +struct npc_get_field_hash_info_req { struct mbox_msghdr hdr; u8 intf; };
-struct npc_get_secret_key_rsp { +struct npc_get_field_hash_info_rsp { struct mbox_msghdr hdr; u64 secret_key[3]; +#define NPC_MAX_HASH 2 +#define NPC_MAX_HASH_MASK 2 + /* NPC_AF_INTF(0..1)_HASH(0..1)_MASK(0..1) */ + u64 hash_mask[NPC_MAX_INTF][NPC_MAX_HASH][NPC_MAX_HASH_MASK]; + /* NPC_AF_INTF(0..1)_HASH(0..1)_RESULT_CTRL */ + u64 hash_ctrl[NPC_MAX_INTF][NPC_MAX_HASH]; };
enum ptp_op { diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c index 5aed3dbf3b991..18f12f9156544 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c @@ -110,8 +110,8 @@ static u64 npc_update_use_hash(int lt, int ld) * in KEX_LD_CFG */ cfg = KEX_LD_CFG_USE_HASH(0x1, 0x03, - ld ? 0x8 : 0x18, - 0x1, 0x0, 0x10); + ld ? 0x18 : 0x8, + 0x1, 0x0, ld ? 0x14 : 0x10); break; }
@@ -134,7 +134,6 @@ static void npc_program_mkex_hash_rx(struct rvu *rvu, int blkaddr, if (mkex_hash->lid_lt_ld_hash_en[intf][lid][lt][ld]) { u64 cfg = npc_update_use_hash(lt, ld);
- hash_cnt++; if (hash_cnt == NPC_MAX_HASH) return;
@@ -149,6 +148,8 @@ static void npc_program_mkex_hash_rx(struct rvu *rvu, int blkaddr, mkex_hash->hash_mask[intf][ld][1]); SET_KEX_LD_HASH_CTRL(intf, ld, mkex_hash->hash_ctrl[intf][ld]); + + hash_cnt++; } } } @@ -171,7 +172,6 @@ static void npc_program_mkex_hash_tx(struct rvu *rvu, int blkaddr, if (mkex_hash->lid_lt_ld_hash_en[intf][lid][lt][ld]) { u64 cfg = npc_update_use_hash(lt, ld);
- hash_cnt++; if (hash_cnt == NPC_MAX_HASH) return;
@@ -187,8 +187,6 @@ static void npc_program_mkex_hash_tx(struct rvu *rvu, int blkaddr, SET_KEX_LD_HASH_CTRL(intf, ld, mkex_hash->hash_ctrl[intf][ld]); hash_cnt++; - if (hash_cnt == NPC_MAX_HASH) - return; } } } @@ -242,8 +240,8 @@ void npc_update_field_hash(struct rvu *rvu, u8 intf, struct flow_msg *omask) { struct npc_mcam_kex_hash *mkex_hash = rvu->kpu.mkex_hash; - struct npc_get_secret_key_req req; - struct npc_get_secret_key_rsp rsp; + struct npc_get_field_hash_info_req req; + struct npc_get_field_hash_info_rsp rsp; u64 ldata[2], cfg; u32 field_hash; u8 hash_idx; @@ -254,7 +252,7 @@ void npc_update_field_hash(struct rvu *rvu, u8 intf, }
req.intf = intf; - rvu_mbox_handler_npc_get_secret_key(rvu, &req, &rsp); + rvu_mbox_handler_npc_get_field_hash_info(rvu, &req, &rsp);
for (hash_idx = 0; hash_idx < NPC_MAX_HASH; hash_idx++) { cfg = rvu_read64(rvu, blkaddr, NPC_AF_INTFX_HASHX_CFG(intf, hash_idx)); @@ -315,13 +313,13 @@ void npc_update_field_hash(struct rvu *rvu, u8 intf, } }
-int rvu_mbox_handler_npc_get_secret_key(struct rvu *rvu, - struct npc_get_secret_key_req *req, - struct npc_get_secret_key_rsp *rsp) +int rvu_mbox_handler_npc_get_field_hash_info(struct rvu *rvu, + struct npc_get_field_hash_info_req *req, + struct npc_get_field_hash_info_rsp *rsp) { u64 *secret_key = rsp->secret_key; u8 intf = req->intf; - int blkaddr; + int i, j, blkaddr;
blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0); if (blkaddr < 0) { @@ -333,6 +331,19 @@ int rvu_mbox_handler_npc_get_secret_key(struct rvu *rvu, secret_key[1] = rvu_read64(rvu, blkaddr, NPC_AF_INTFX_SECRET_KEY1(intf)); secret_key[2] = rvu_read64(rvu, blkaddr, NPC_AF_INTFX_SECRET_KEY2(intf));
+ for (i = 0; i < NPC_MAX_HASH; i++) { + for (j = 0; j < NPC_MAX_HASH_MASK; j++) { + rsp->hash_mask[NIX_INTF_RX][i][j] = + GET_KEX_LD_HASH_MASK(NIX_INTF_RX, i, j); + rsp->hash_mask[NIX_INTF_TX][i][j] = + GET_KEX_LD_HASH_MASK(NIX_INTF_TX, i, j); + } + } + + for (i = 0; i < NPC_MAX_INTF; i++) + for (j = 0; j < NPC_MAX_HASH; j++) + rsp->hash_ctrl[i][j] = GET_KEX_LD_HASH_CTRL(i, j); + return 0; }
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.h index 3efeb09c58dec..65936f4aeaacf 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.h @@ -31,6 +31,12 @@ rvu_write64(rvu, blkaddr, \ NPC_AF_INTFX_HASHX_MASKX(intf, ld, mask_idx), cfg)
+#define GET_KEX_LD_HASH_CTRL(intf, ld) \ + rvu_read64(rvu, blkaddr, NPC_AF_INTFX_HASHX_RESULT_CTRL(intf, ld)) + +#define GET_KEX_LD_HASH_MASK(intf, ld, mask_idx) \ + rvu_read64(rvu, blkaddr, NPC_AF_INTFX_HASHX_MASKX(intf, ld, mask_idx)) + #define SET_KEX_LD_HASH_CTRL(intf, ld, cfg) \ rvu_write64(rvu, blkaddr, \ NPC_AF_INTFX_HASHX_RESULT_CTRL(intf, ld), cfg)
From: Ratheesh Kannoth rkannoth@marvell.com
[ Upstream commit f66155905959076619c9c519fb099e8ae6cb6f7b ]
1. Allow field hash configuration for both source and destination IPv6.
2. Configure hardware parser based on hash extract feature enable flag for IPv6.
3. Fix IPv6 endianness issue while updating the source/destination IP address via ntuple rule.
Fixes: 56d9f5fd2246 ("octeontx2-af: Use hashed field in MCAM key") Signed-off-by: Ratheesh Kannoth rkannoth@marvell.com Signed-off-by: Sunil Kovvuri Goutham sgoutham@marvell.com Signed-off-by: Sai Krishna saikrishnag@marvell.com Reviewed-by: Simon Horman simon.horman@corigine.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- .../marvell/octeontx2/af/rvu_npc_fs.c | 23 +++-- .../marvell/octeontx2/af/rvu_npc_fs.h | 4 + .../marvell/octeontx2/af/rvu_npc_hash.c | 88 ++++++++++--------- .../marvell/octeontx2/af/rvu_npc_hash.h | 4 +- 4 files changed, 69 insertions(+), 50 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c index 0a3b591c1ef5a..1eb5eb29a2ba6 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c @@ -13,11 +13,6 @@ #include "rvu_npc_fs.h" #include "rvu_npc_hash.h"
-#define NPC_BYTESM GENMASK_ULL(19, 16) -#define NPC_HDR_OFFSET GENMASK_ULL(15, 8) -#define NPC_KEY_OFFSET GENMASK_ULL(5, 0) -#define NPC_LDATA_EN BIT_ULL(7) - static const char * const npc_flow_names[] = { [NPC_DMAC] = "dmac", [NPC_SMAC] = "smac", @@ -440,6 +435,7 @@ static void npc_handle_multi_layer_fields(struct rvu *rvu, int blkaddr, u8 intf) static void npc_scan_ldata(struct rvu *rvu, int blkaddr, u8 lid, u8 lt, u64 cfg, u8 intf) { + struct npc_mcam_kex_hash *mkex_hash = rvu->kpu.mkex_hash; struct npc_mcam *mcam = &rvu->hw->mcam; u8 hdr, key, nr_bytes, bit_offset; u8 la_ltype, la_start; @@ -486,8 +482,21 @@ do { \ NPC_SCAN_HDR(NPC_TOS, NPC_LID_LC, NPC_LT_LC_IP, 1, 1); NPC_SCAN_HDR(NPC_SIP_IPV4, NPC_LID_LC, NPC_LT_LC_IP, 12, 4); NPC_SCAN_HDR(NPC_DIP_IPV4, NPC_LID_LC, NPC_LT_LC_IP, 16, 4); - NPC_SCAN_HDR(NPC_SIP_IPV6, NPC_LID_LC, NPC_LT_LC_IP6, 8, 16); - NPC_SCAN_HDR(NPC_DIP_IPV6, NPC_LID_LC, NPC_LT_LC_IP6, 24, 16); + if (rvu->hw->cap.npc_hash_extract) { + if (mkex_hash->lid_lt_ld_hash_en[intf][lid][lt][0]) + NPC_SCAN_HDR(NPC_SIP_IPV6, NPC_LID_LC, NPC_LT_LC_IP6, 8, 4); + else + NPC_SCAN_HDR(NPC_SIP_IPV6, NPC_LID_LC, NPC_LT_LC_IP6, 8, 16); + + if (mkex_hash->lid_lt_ld_hash_en[intf][lid][lt][1]) + NPC_SCAN_HDR(NPC_DIP_IPV6, NPC_LID_LC, NPC_LT_LC_IP6, 24, 4); + else + NPC_SCAN_HDR(NPC_DIP_IPV6, NPC_LID_LC, NPC_LT_LC_IP6, 24, 16); + } else { + NPC_SCAN_HDR(NPC_SIP_IPV6, NPC_LID_LC, NPC_LT_LC_IP6, 8, 16); + NPC_SCAN_HDR(NPC_DIP_IPV6, NPC_LID_LC, NPC_LT_LC_IP6, 24, 16); + } + NPC_SCAN_HDR(NPC_SPORT_UDP, NPC_LID_LD, NPC_LT_LD_UDP, 0, 2); NPC_SCAN_HDR(NPC_DPORT_UDP, NPC_LID_LD, NPC_LT_LD_UDP, 2, 2); NPC_SCAN_HDR(NPC_SPORT_TCP, NPC_LID_LD, NPC_LT_LD_TCP, 0, 2); diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.h index bdd65ce56a32d..3f5c9042d10e7 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.h @@ -9,6 +9,10 @@ #define __RVU_NPC_FS_H
#define IPV6_WORDS 4 +#define NPC_BYTESM GENMASK_ULL(19, 16) +#define NPC_HDR_OFFSET GENMASK_ULL(15, 8) +#define NPC_KEY_OFFSET GENMASK_ULL(5, 0) +#define NPC_LDATA_EN BIT_ULL(7)
void npc_update_entry(struct rvu *rvu, enum key_fields type, struct mcam_entry *entry, u64 val_lo, diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c index 18f12f9156544..3182adb7b9a80 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c @@ -78,42 +78,43 @@ static u32 rvu_npc_toeplitz_hash(const u64 *data, u64 *key, size_t data_bit_len, return hash_out; }
-u32 npc_field_hash_calc(u64 *ldata, struct npc_mcam_kex_hash *mkex_hash, - u64 *secret_key, u8 intf, u8 hash_idx) +u32 npc_field_hash_calc(u64 *ldata, struct npc_get_field_hash_info_rsp rsp, + u8 intf, u8 hash_idx) { u64 hash_key[3]; u64 data_padded[2]; u32 field_hash;
- hash_key[0] = secret_key[1] << 31; - hash_key[0] |= secret_key[2]; - hash_key[1] = secret_key[1] >> 33; - hash_key[1] |= secret_key[0] << 31; - hash_key[2] = secret_key[0] >> 33; + hash_key[0] = rsp.secret_key[1] << 31; + hash_key[0] |= rsp.secret_key[2]; + hash_key[1] = rsp.secret_key[1] >> 33; + hash_key[1] |= rsp.secret_key[0] << 31; + hash_key[2] = rsp.secret_key[0] >> 33;
- data_padded[0] = mkex_hash->hash_mask[intf][hash_idx][0] & ldata[0]; - data_padded[1] = mkex_hash->hash_mask[intf][hash_idx][1] & ldata[1]; + data_padded[0] = rsp.hash_mask[intf][hash_idx][0] & ldata[0]; + data_padded[1] = rsp.hash_mask[intf][hash_idx][1] & ldata[1]; field_hash = rvu_npc_toeplitz_hash(data_padded, hash_key, 128, 159);
- field_hash &= mkex_hash->hash_ctrl[intf][hash_idx] >> 32; - field_hash |= mkex_hash->hash_ctrl[intf][hash_idx]; + field_hash &= FIELD_GET(GENMASK(63, 32), rsp.hash_ctrl[intf][hash_idx]); + field_hash += FIELD_GET(GENMASK(31, 0), rsp.hash_ctrl[intf][hash_idx]); return field_hash; }
-static u64 npc_update_use_hash(int lt, int ld) +static u64 npc_update_use_hash(struct rvu *rvu, int blkaddr, + u8 intf, int lid, int lt, int ld) { - u64 cfg = 0; - - switch (lt) { - case NPC_LT_LC_IP6: - /* Update use_hash(bit-20) and bytesm1 (bit-16:19) - * in KEX_LD_CFG - */ - cfg = KEX_LD_CFG_USE_HASH(0x1, 0x03, - ld ? 0x18 : 0x8, - 0x1, 0x0, ld ? 0x14 : 0x10); - break; - } + u8 hdr, key; + u64 cfg; + + cfg = rvu_read64(rvu, blkaddr, NPC_AF_INTFX_LIDX_LTX_LDX_CFG(intf, lid, lt, ld)); + hdr = FIELD_GET(NPC_HDR_OFFSET, cfg); + key = FIELD_GET(NPC_KEY_OFFSET, cfg); + + /* Update use_hash(bit-20) to 'true' and + * bytesm1(bit-16:19) to '0x3' in KEX_LD_CFG + */ + cfg = KEX_LD_CFG_USE_HASH(0x1, 0x03, + hdr, 0x1, 0x0, key);
return cfg; } @@ -132,11 +133,13 @@ static void npc_program_mkex_hash_rx(struct rvu *rvu, int blkaddr, for (lt = 0; lt < NPC_MAX_LT; lt++) { for (ld = 0; ld < NPC_MAX_LD; ld++) { if (mkex_hash->lid_lt_ld_hash_en[intf][lid][lt][ld]) { - u64 cfg = npc_update_use_hash(lt, ld); + u64 cfg;
if (hash_cnt == NPC_MAX_HASH) return;
+ cfg = npc_update_use_hash(rvu, blkaddr, + intf, lid, lt, ld); /* Set updated KEX configuration */ SET_KEX_LD(intf, lid, lt, ld, cfg); /* Set HASH configuration */ @@ -170,11 +173,13 @@ static void npc_program_mkex_hash_tx(struct rvu *rvu, int blkaddr, for (lt = 0; lt < NPC_MAX_LT; lt++) { for (ld = 0; ld < NPC_MAX_LD; ld++) if (mkex_hash->lid_lt_ld_hash_en[intf][lid][lt][ld]) { - u64 cfg = npc_update_use_hash(lt, ld); + u64 cfg;
if (hash_cnt == NPC_MAX_HASH) return;
+ cfg = npc_update_use_hash(rvu, blkaddr, + intf, lid, lt, ld); /* Set updated KEX configuration */ SET_KEX_LD(intf, lid, lt, ld, cfg); /* Set HASH configuration */ @@ -268,44 +273,45 @@ void npc_update_field_hash(struct rvu *rvu, u8 intf, * is hashed to 32 bit value. */ case NPC_LT_LC_IP6: - if (features & BIT_ULL(NPC_SIP_IPV6)) { + /* ld[0] == hash_idx[0] == Source IPv6 + * ld[1] == hash_idx[1] == Destination IPv6 + */ + if ((features & BIT_ULL(NPC_SIP_IPV6)) && !hash_idx) { u32 src_ip[IPV6_WORDS];
be32_to_cpu_array(src_ip, pkt->ip6src, IPV6_WORDS); - ldata[0] = (u64)src_ip[0] << 32 | src_ip[1]; - ldata[1] = (u64)src_ip[2] << 32 | src_ip[3]; + ldata[1] = (u64)src_ip[0] << 32 | src_ip[1]; + ldata[0] = (u64)src_ip[2] << 32 | src_ip[3]; field_hash = npc_field_hash_calc(ldata, - mkex_hash, - rsp.secret_key, + rsp, intf, hash_idx); npc_update_entry(rvu, NPC_SIP_IPV6, entry, - field_hash, 0, 32, 0, intf); + field_hash, 0, + GENMASK(31, 0), 0, intf); memcpy(&opkt->ip6src, &pkt->ip6src, sizeof(pkt->ip6src)); memcpy(&omask->ip6src, &mask->ip6src, sizeof(mask->ip6src)); - break; - } - - if (features & BIT_ULL(NPC_DIP_IPV6)) { + } else if ((features & BIT_ULL(NPC_DIP_IPV6)) && hash_idx) { u32 dst_ip[IPV6_WORDS];
be32_to_cpu_array(dst_ip, pkt->ip6dst, IPV6_WORDS); - ldata[0] = (u64)dst_ip[0] << 32 | dst_ip[1]; - ldata[1] = (u64)dst_ip[2] << 32 | dst_ip[3]; + ldata[1] = (u64)dst_ip[0] << 32 | dst_ip[1]; + ldata[0] = (u64)dst_ip[2] << 32 | dst_ip[3]; field_hash = npc_field_hash_calc(ldata, - mkex_hash, - rsp.secret_key, + rsp, intf, hash_idx); npc_update_entry(rvu, NPC_DIP_IPV6, entry, - field_hash, 0, 32, 0, intf); + field_hash, 0, + GENMASK(31, 0), 0, intf); memcpy(&opkt->ip6dst, &pkt->ip6dst, sizeof(pkt->ip6dst)); memcpy(&omask->ip6dst, &mask->ip6dst, sizeof(mask->ip6dst)); } + break; } } diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.h index 65936f4aeaacf..a1c3d987b8044 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.h @@ -62,8 +62,8 @@ void npc_update_field_hash(struct rvu *rvu, u8 intf, struct flow_msg *omask); void npc_config_secret_key(struct rvu *rvu, int blkaddr); void npc_program_mkex_hash(struct rvu *rvu, int blkaddr); -u32 npc_field_hash_calc(u64 *ldata, struct npc_mcam_kex_hash *mkex_hash, - u64 *secret_key, u8 intf, u8 hash_idx); +u32 npc_field_hash_calc(u64 *ldata, struct npc_get_field_hash_info_rsp rsp, + u8 intf, u8 hash_idx);
static struct npc_mcam_kex_hash npc_mkex_hash_default __maybe_unused = { .lid_lt_ld_hash_en = {
From: Ratheesh Kannoth rkannoth@marvell.com
[ Upstream commit 5eb1b7220948a69298a436148a735f32ec325289 ]
Firmware enables PFs and allocates mbox resources for each of them. Currently the PF driver configures mbox resources without checking whether the PF is enabled or not, which results in a crash. This patch fixes the issue by skipping mbox initialization for disabled PFs.
Fixes: 9bdc47a6e328 ("octeontx2-af: Mbox communication support btw AF and it's VFs") Signed-off-by: Ratheesh Kannoth rkannoth@marvell.com Signed-off-by: Sunil Kovvuri Goutham sgoutham@marvell.com Signed-off-by: Sai Krishna saikrishnag@marvell.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- .../net/ethernet/marvell/octeontx2/af/mbox.c | 5 +- .../net/ethernet/marvell/octeontx2/af/mbox.h | 3 +- .../net/ethernet/marvell/octeontx2/af/rvu.c | 49 +++++++++++++++---- 3 files changed, 46 insertions(+), 11 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.c b/drivers/net/ethernet/marvell/octeontx2/af/mbox.c index 2898931d5260a..9690ac01f02c8 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.c @@ -157,7 +157,7 @@ EXPORT_SYMBOL(otx2_mbox_init); */ int otx2_mbox_regions_init(struct otx2_mbox *mbox, void **hwbase, struct pci_dev *pdev, void *reg_base, - int direction, int ndevs) + int direction, int ndevs, unsigned long *pf_bmap) { struct otx2_mbox_dev *mdev; int devid, err; @@ -169,6 +169,9 @@ int otx2_mbox_regions_init(struct otx2_mbox *mbox, void **hwbase, mbox->hwbase = hwbase[0];
for (devid = 0; devid < ndevs; devid++) { + if (!test_bit(devid, pf_bmap)) + continue; + mdev = &mbox->dev[devid]; mdev->mbase = hwbase[devid]; mdev->hwbase = hwbase[devid]; diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h index 5921a78661e41..11eeb36cf9a54 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h @@ -96,9 +96,10 @@ void otx2_mbox_destroy(struct otx2_mbox *mbox); int otx2_mbox_init(struct otx2_mbox *mbox, void __force *hwbase, struct pci_dev *pdev, void __force *reg_base, int direction, int ndevs); + int otx2_mbox_regions_init(struct otx2_mbox *mbox, void __force **hwbase, struct pci_dev *pdev, void __force *reg_base, - int direction, int ndevs); + int direction, int ndevs, unsigned long *bmap); void otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid); int otx2_mbox_wait_for_rsp(struct otx2_mbox *mbox, int devid); int otx2_mbox_busy_poll_for_rsp(struct otx2_mbox *mbox, int devid); diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c index 3f5e09b77d4bd..873f081c030de 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c @@ -2274,7 +2274,7 @@ static inline void rvu_afvf_mbox_up_handler(struct work_struct *work) }
static int rvu_get_mbox_regions(struct rvu *rvu, void **mbox_addr, - int num, int type) + int num, int type, unsigned long *pf_bmap) { struct rvu_hwinfo *hw = rvu->hw; int region; @@ -2286,6 +2286,9 @@ static int rvu_get_mbox_regions(struct rvu *rvu, void **mbox_addr, */ if (type == TYPE_AFVF) { for (region = 0; region < num; region++) { + if (!test_bit(region, pf_bmap)) + continue; + if (hw->cap.per_pf_mbox_regs) { bar4 = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_PFX_BAR4_ADDR(0)) + @@ -2307,6 +2310,9 @@ static int rvu_get_mbox_regions(struct rvu *rvu, void **mbox_addr, * RVU_AF_PF_BAR4_ADDR register. */ for (region = 0; region < num; region++) { + if (!test_bit(region, pf_bmap)) + continue; + if (hw->cap.per_pf_mbox_regs) { bar4 = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_PFX_BAR4_ADDR(region)); @@ -2335,20 +2341,41 @@ static int rvu_mbox_init(struct rvu *rvu, struct mbox_wq_info *mw, int err = -EINVAL, i, dir, dir_up; void __iomem *reg_base; struct rvu_work *mwork; + unsigned long *pf_bmap; void **mbox_regions; const char *name; + u64 cfg;
- mbox_regions = kcalloc(num, sizeof(void *), GFP_KERNEL); - if (!mbox_regions) + pf_bmap = bitmap_zalloc(num, GFP_KERNEL); + if (!pf_bmap) return -ENOMEM;
+ /* RVU VFs */ + if (type == TYPE_AFVF) + bitmap_set(pf_bmap, 0, num); + + if (type == TYPE_AFPF) { + /* Mark enabled PFs in bitmap */ + for (i = 0; i < num; i++) { + cfg = rvu_read64(rvu, BLKADDR_RVUM, RVU_PRIV_PFX_CFG(i)); + if (cfg & BIT_ULL(20)) + set_bit(i, pf_bmap); + } + } + + mbox_regions = kcalloc(num, sizeof(void *), GFP_KERNEL); + if (!mbox_regions) { + err = -ENOMEM; + goto free_bitmap; + } + switch (type) { case TYPE_AFPF: name = "rvu_afpf_mailbox"; dir = MBOX_DIR_AFPF; dir_up = MBOX_DIR_AFPF_UP; reg_base = rvu->afreg_base; - err = rvu_get_mbox_regions(rvu, mbox_regions, num, TYPE_AFPF); + err = rvu_get_mbox_regions(rvu, mbox_regions, num, TYPE_AFPF, pf_bmap); if (err) goto free_regions; break; @@ -2357,7 +2384,7 @@ static int rvu_mbox_init(struct rvu *rvu, struct mbox_wq_info *mw, dir = MBOX_DIR_PFVF; dir_up = MBOX_DIR_PFVF_UP; reg_base = rvu->pfreg_base; - err = rvu_get_mbox_regions(rvu, mbox_regions, num, TYPE_AFVF); + err = rvu_get_mbox_regions(rvu, mbox_regions, num, TYPE_AFVF, pf_bmap); if (err) goto free_regions; break; @@ -2388,16 +2415,19 @@ static int rvu_mbox_init(struct rvu *rvu, struct mbox_wq_info *mw, }
err = otx2_mbox_regions_init(&mw->mbox, mbox_regions, rvu->pdev, - reg_base, dir, num); + reg_base, dir, num, pf_bmap); if (err) goto exit;
err = otx2_mbox_regions_init(&mw->mbox_up, mbox_regions, rvu->pdev, - reg_base, dir_up, num); + reg_base, dir_up, num, pf_bmap); if (err) goto exit;
for (i = 0; i < num; i++) { + if (!test_bit(i, pf_bmap)) + continue; + mwork = &mw->mbox_wrk[i]; mwork->rvu = rvu; INIT_WORK(&mwork->work, mbox_handler); @@ -2406,8 +2436,7 @@ static int rvu_mbox_init(struct rvu *rvu, struct mbox_wq_info *mw, mwork->rvu = rvu; INIT_WORK(&mwork->work, mbox_up_handler); } - kfree(mbox_regions); - return 0; + goto free_regions;
exit: destroy_workqueue(mw->mbox_wq); @@ -2416,6 +2445,8 @@ static int rvu_mbox_init(struct rvu *rvu, struct mbox_wq_info *mw, iounmap((void __iomem *)mbox_regions[num]); free_regions: kfree(mbox_regions); +free_bitmap: + bitmap_free(pf_bmap); return err; }
From: Subbaraya Sundeep sbhatta@marvell.com
[ Upstream commit c926252205c424c4842dbdbe02f8e3296f623204 ]
When enabling packet I/O in otx2_open, if a mailbox timeout occurs then the interface ends up in the down state whereas hardware packet I/O is enabled. Hence also disable packet I/O before bailing out.
Fixes: 1ea0166da050 ("octeontx2-pf: Fix the device state on error") Signed-off-by: Subbaraya Sundeep sbhatta@marvell.com Signed-off-by: Sunil Kovvuri Goutham sgoutham@marvell.com Signed-off-by: Sai Krishna saikrishnag@marvell.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c index 2221220abcaae..ed911d9946277 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c @@ -1835,13 +1835,22 @@ int otx2_open(struct net_device *netdev) otx2_dmacflt_reinstall_flows(pf);
err = otx2_rxtx_enable(pf, true); - if (err) + /* If a mbox communication error happens at this point then interface + * will end up in a state such that it is in down state but hardware + * mcam entries are enabled to receive the packets. Hence disable the + * packet I/O. + */ + if (err == EIO) + goto err_disable_rxtx; + else if (err) goto err_tx_stop_queues;
otx2_do_set_rx_mode(pf);
return 0;
+err_disable_rxtx: + otx2_rxtx_enable(pf, false); err_tx_stop_queues: netif_tx_stop_all_queues(netdev); netif_carrier_off(netdev);
From: Subbaraya Sundeep sbhatta@marvell.com
[ Upstream commit 99ae1260fdb5f15beab8a3adfb93a9041c87a2c1 ]
When a VF device probe fails due to an error in MSIX vector allocation, the NIX and NPA LF resources are not detached. Fix this by detaching the LFs when MSIX vector allocation fails.
Fixes: 3184fb5ba96e ("octeontx2-vf: Virtual function driver support") Signed-off-by: Subbaraya Sundeep sbhatta@marvell.com Signed-off-by: Sunil Kovvuri Goutham sgoutham@marvell.com Signed-off-by: Sai Krishna saikrishnag@marvell.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c index ab126f8706c74..53366dbfbf27c 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c @@ -621,7 +621,7 @@ static int otx2vf_probe(struct pci_dev *pdev, const struct pci_device_id *id)
err = otx2vf_realloc_msix_vectors(vf); if (err) - goto err_mbox_destroy; + goto err_detach_rsrc;
err = otx2_set_real_num_queues(netdev, qcount, qcount); if (err)
From: Shannon Nelson shannon.nelson@amd.com
[ Upstream commit 3711d44fac1f80ea69ecb7315fed05b3812a7401 ]
It seems that ethtool is calling into .get_rxnfc more often with ETHTOOL_GRXCLSRLCNT, which ionic doesn't know about. We don't need to log a message about it; just return not supported.
Fixes: aa3198819bea6 ("ionic: Add RSS support") Signed-off-by: Shannon Nelson shannon.nelson@amd.com Reviewed-by: Simon Horman simon.horman@corigine.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/pensando/ionic/ionic_ethtool.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c b/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c index 01c22701482d9..d7370fb60a168 100644 --- a/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c +++ b/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c @@ -691,7 +691,7 @@ static int ionic_get_rxnfc(struct net_device *netdev, info->data = lif->nxqs; break; default: - netdev_err(netdev, "Command parameter %d is not supported\n", + netdev_dbg(netdev, "Command parameter %d is not supported\n", info->cmd); err = -EOPNOTSUPP; }
From: Ido Schimmel idosch@nvidia.com
[ Upstream commit 9ad685dbfe7e856bbf17a7177b64676d324d6ed7 ]
It is not possible to set the number of lanes when setting link modes using the legacy IOCTL ethtool interface. Since 'struct ethtool_link_ksettings' is not initialized in this path, drivers receive an uninitialized number of lanes in 'struct ethtool_link_ksettings::lanes'.
When this information is later queried from drivers, it results in the ethtool code making decisions based on uninitialized memory, leading to the following KMSAN splat [1]. In practice, this most likely only happens with the tun driver that simply returns whatever it got in the set operation.
As far as I can tell, this uninitialized memory is not leaked to user space thanks to the 'ethtool_ops->cap_link_lanes_supported' check in linkmodes_prepare_data().
Fix by initializing the structure in the IOCTL path. Did not find any more call sites that pass an uninitialized structure when calling 'ethtool_ops::set_link_ksettings()'.
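For reference, the difference in C (a generic sketch of the language rule, not the exact kernel code):

    struct ethtool_link_ksettings ks1;        /* ks1.lanes holds stack garbage */
    struct ethtool_link_ksettings ks2 = {};   /* every member, including ks2.lanes,
                                                 is zero-initialized */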
[1] BUG: KMSAN: uninit-value in ethnl_update_linkmodes net/ethtool/linkmodes.c:273 [inline] BUG: KMSAN: uninit-value in ethnl_set_linkmodes+0x190b/0x19d0 net/ethtool/linkmodes.c:333 ethnl_update_linkmodes net/ethtool/linkmodes.c:273 [inline] ethnl_set_linkmodes+0x190b/0x19d0 net/ethtool/linkmodes.c:333 ethnl_default_set_doit+0x88d/0xde0 net/ethtool/netlink.c:640 genl_family_rcv_msg_doit net/netlink/genetlink.c:968 [inline] genl_family_rcv_msg net/netlink/genetlink.c:1048 [inline] genl_rcv_msg+0x141a/0x14c0 net/netlink/genetlink.c:1065 netlink_rcv_skb+0x3f8/0x750 net/netlink/af_netlink.c:2577 genl_rcv+0x40/0x60 net/netlink/genetlink.c:1076 netlink_unicast_kernel net/netlink/af_netlink.c:1339 [inline] netlink_unicast+0xf41/0x1270 net/netlink/af_netlink.c:1365 netlink_sendmsg+0x127d/0x1430 net/netlink/af_netlink.c:1942 sock_sendmsg_nosec net/socket.c:724 [inline] sock_sendmsg net/socket.c:747 [inline] ____sys_sendmsg+0xa24/0xe40 net/socket.c:2501 ___sys_sendmsg+0x2a1/0x3f0 net/socket.c:2555 __sys_sendmsg net/socket.c:2584 [inline] __do_sys_sendmsg net/socket.c:2593 [inline] __se_sys_sendmsg net/socket.c:2591 [inline] __x64_sys_sendmsg+0x36b/0x540 net/socket.c:2591 do_syscall_x64 arch/x86/entry/common.c:50 [inline] do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80 entry_SYSCALL_64_after_hwframe+0x63/0xcd
Uninit was stored to memory at: tun_get_link_ksettings+0x37/0x60 drivers/net/tun.c:3544 __ethtool_get_link_ksettings+0x17b/0x260 net/ethtool/ioctl.c:441 ethnl_set_linkmodes+0xee/0x19d0 net/ethtool/linkmodes.c:327 ethnl_default_set_doit+0x88d/0xde0 net/ethtool/netlink.c:640 genl_family_rcv_msg_doit net/netlink/genetlink.c:968 [inline] genl_family_rcv_msg net/netlink/genetlink.c:1048 [inline] genl_rcv_msg+0x141a/0x14c0 net/netlink/genetlink.c:1065 netlink_rcv_skb+0x3f8/0x750 net/netlink/af_netlink.c:2577 genl_rcv+0x40/0x60 net/netlink/genetlink.c:1076 netlink_unicast_kernel net/netlink/af_netlink.c:1339 [inline] netlink_unicast+0xf41/0x1270 net/netlink/af_netlink.c:1365 netlink_sendmsg+0x127d/0x1430 net/netlink/af_netlink.c:1942 sock_sendmsg_nosec net/socket.c:724 [inline] sock_sendmsg net/socket.c:747 [inline] ____sys_sendmsg+0xa24/0xe40 net/socket.c:2501 ___sys_sendmsg+0x2a1/0x3f0 net/socket.c:2555 __sys_sendmsg net/socket.c:2584 [inline] __do_sys_sendmsg net/socket.c:2593 [inline] __se_sys_sendmsg net/socket.c:2591 [inline] __x64_sys_sendmsg+0x36b/0x540 net/socket.c:2591 do_syscall_x64 arch/x86/entry/common.c:50 [inline] do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80 entry_SYSCALL_64_after_hwframe+0x63/0xcd
Uninit was stored to memory at: tun_set_link_ksettings+0x37/0x60 drivers/net/tun.c:3553 ethtool_set_link_ksettings+0x600/0x690 net/ethtool/ioctl.c:609 __dev_ethtool net/ethtool/ioctl.c:3024 [inline] dev_ethtool+0x1db9/0x2a70 net/ethtool/ioctl.c:3078 dev_ioctl+0xb07/0x1270 net/core/dev_ioctl.c:524 sock_do_ioctl+0x295/0x540 net/socket.c:1213 sock_ioctl+0x729/0xd90 net/socket.c:1316 vfs_ioctl fs/ioctl.c:51 [inline] __do_sys_ioctl fs/ioctl.c:870 [inline] __se_sys_ioctl+0x222/0x400 fs/ioctl.c:856 __x64_sys_ioctl+0x96/0xe0 fs/ioctl.c:856 do_syscall_x64 arch/x86/entry/common.c:50 [inline] do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80 entry_SYSCALL_64_after_hwframe+0x63/0xcd
Local variable link_ksettings created at: ethtool_set_link_ksettings+0x54/0x690 net/ethtool/ioctl.c:577 __dev_ethtool net/ethtool/ioctl.c:3024 [inline] dev_ethtool+0x1db9/0x2a70 net/ethtool/ioctl.c:3078
Fixes: 012ce4dd3102 ("ethtool: Extend link modes settings uAPI with lanes") Reported-and-tested-by: syzbot+ef6edd9f1baaa54d6235@syzkaller.appspotmail.com Link: https://lore.kernel.org/netdev/0000000000004bb41105fa70f361@google.com/ Reviewed-by: Danielle Ratson danieller@nvidia.com Signed-off-by: Ido Schimmel idosch@nvidia.com Reviewed-by: Leon Romanovsky leonro@nvidia.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/ethtool/ioctl.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/ethtool/ioctl.c b/net/ethtool/ioctl.c index 038398d41a937..940c0e27be735 100644 --- a/net/ethtool/ioctl.c +++ b/net/ethtool/ioctl.c @@ -580,8 +580,8 @@ static int ethtool_get_link_ksettings(struct net_device *dev, static int ethtool_set_link_ksettings(struct net_device *dev, void __user *useraddr) { + struct ethtool_link_ksettings link_ksettings = {}; int err; - struct ethtool_link_ksettings link_ksettings;
ASSERT_RTNL();
From: Shannon Nelson shannon.nelson@amd.com
[ Upstream commit 4a54903ff68ddb33b6463c94b4eb37fc584ef760 ]
Add a check for NULL on the alloc return. If devlink_alloc() fails and we try to use devlink_priv() on the NULL return, the kernel gets very unhappy and panics. With this fix, the driver load will still fail, but at least it won't panic the kernel.
Fixes: df69ba43217d ("ionic: Add basic framework for IONIC Network device driver") Signed-off-by: Shannon Nelson shannon.nelson@amd.com Reviewed-by: Simon Horman simon.horman@corigine.com Reviewed-by: Jiri Pirko jiri@nvidia.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/pensando/ionic/ionic_devlink.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_devlink.c b/drivers/net/ethernet/pensando/ionic/ionic_devlink.c index 4297ed9024c01..2696dac21b096 100644 --- a/drivers/net/ethernet/pensando/ionic/ionic_devlink.c +++ b/drivers/net/ethernet/pensando/ionic/ionic_devlink.c @@ -65,6 +65,8 @@ struct ionic *ionic_devlink_alloc(struct device *dev) struct devlink *dl;
dl = devlink_alloc(&ionic_dl_ops, sizeof(struct ionic), dev); + if (!dl) + return NULL;
return devlink_priv(dl); }
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit 6a341729fb31b4c5df9f74f24b4b1c98410c9b87 ]
syzkaller reported a warning below [0].
We can reproduce it by sending 0-byte data from the (AF_PACKET, SOCK_PACKET) socket via some devices whose dev->hard_header_len is 0.
struct sockaddr_pkt addr = {
	.spkt_family = AF_PACKET,
	.spkt_device = "tun0",
};
int fd;

fd = socket(AF_PACKET, SOCK_PACKET, 0);
sendto(fd, NULL, 0, 0, (struct sockaddr *)&addr, sizeof(addr));
We have a similar fix for the (AF_PACKET, SOCK_RAW) socket as commit dc633700f00f ("net/af_packet: check len when min_header_len equals to 0").
Let's add the same test for the SOCK_PACKET socket.
[0]: skb_assert_len WARNING: CPU: 1 PID: 19945 at include/linux/skbuff.h:2552 skb_assert_len include/linux/skbuff.h:2552 [inline] WARNING: CPU: 1 PID: 19945 at include/linux/skbuff.h:2552 __dev_queue_xmit+0x1f26/0x31d0 net/core/dev.c:4159 Modules linked in: CPU: 1 PID: 19945 Comm: syz-executor.0 Not tainted 6.3.0-rc7-02330-gca6270c12e20 #1 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014 RIP: 0010:skb_assert_len include/linux/skbuff.h:2552 [inline] RIP: 0010:__dev_queue_xmit+0x1f26/0x31d0 net/core/dev.c:4159 Code: 89 de e8 1d a2 85 fd 84 db 75 21 e8 64 a9 85 fd 48 c7 c6 80 2a 1f 86 48 c7 c7 c0 06 1f 86 c6 05 23 cf 27 04 01 e8 fa ee 56 fd <0f> 0b e8 43 a9 85 fd 0f b6 1d 0f cf 27 04 31 ff 89 de e8 e3 a1 85 RSP: 0018:ffff8880217af6e0 EFLAGS: 00010282 RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffffc90001133000 RDX: 0000000000040000 RSI: ffffffff81186922 RDI: 0000000000000001 RBP: ffff8880217af8b0 R08: 0000000000000001 R09: 0000000000000000 R10: 0000000000000001 R11: 0000000000000001 R12: ffff888030045640 R13: ffff8880300456b0 R14: ffff888030045650 R15: ffff888030045718 FS: 00007fc5864da640(0000) GS:ffff88806cd00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000020005740 CR3: 000000003f856003 CR4: 0000000000770ee0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 PKRU: 55555554 Call Trace: <TASK> dev_queue_xmit include/linux/netdevice.h:3085 [inline] packet_sendmsg_spkt+0xc4b/0x1230 net/packet/af_packet.c:2066 sock_sendmsg_nosec net/socket.c:724 [inline] sock_sendmsg+0x1b4/0x200 net/socket.c:747 ____sys_sendmsg+0x331/0x970 net/socket.c:2503 ___sys_sendmsg+0x11d/0x1c0 net/socket.c:2557 __sys_sendmmsg+0x18c/0x430 net/socket.c:2643 __do_sys_sendmmsg net/socket.c:2672 [inline] __se_sys_sendmmsg net/socket.c:2669 [inline] __x64_sys_sendmmsg+0x9c/0x100 net/socket.c:2669 do_syscall_x64 arch/x86/entry/common.c:50 [inline] do_syscall_64+0x3c/0x90 arch/x86/entry/common.c:80 entry_SYSCALL_64_after_hwframe+0x72/0xdc RIP: 0033:0x7fc58791de5d Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 73 9f 1b 00 f7 d8 64 89 01 48 RSP: 002b:00007fc5864d9cc8 EFLAGS: 00000246 ORIG_RAX: 0000000000000133 RAX: ffffffffffffffda RBX: 00000000004bbf80 RCX: 00007fc58791de5d RDX: 0000000000000001 RSI: 0000000020005740 RDI: 0000000000000004 RBP: 00000000004bbf80 R08: 0000000000000000 R09: 0000000000000000 R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000 R13: 000000000000000b R14: 00007fc58797e530 R15: 0000000000000000 </TASK> ---[ end trace 0000000000000000 ]--- skb len=0 headroom=16 headlen=0 tailroom=304 mac=(16,0) net=(16,-1) trans=-1 shinfo(txflags=0 nr_frags=0 gso(size=0 type=0 segs=0)) csum(0x0 ip_summed=0 complete_sw=0 valid=0 level=0) hash(0x0 sw=0 l4=0) proto=0x0000 pkttype=0 iif=0 dev name=sit0 feat=0x00000006401d7869 sk family=17 type=10 proto=0
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Reported-by: syzbot syzkaller@googlegroups.com Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Reviewed-by: Willem de Bruijn willemb@google.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/packet/af_packet.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c index ac9335d76fb73..2af2ab924d64a 100644 --- a/net/packet/af_packet.c +++ b/net/packet/af_packet.c @@ -2035,7 +2035,7 @@ static int packet_sendmsg_spkt(struct socket *sock, struct msghdr *msg, goto retry; }
- if (!dev_validate_header(dev, skb->data, len)) { + if (!dev_validate_header(dev, skb->data, len) || !skb->len) { err = -EINVAL; goto out_unlock; }
From: Chia-I Wu olvaffe@gmail.com
[ Upstream commit 2397e3d8d2e120355201a8310b61929f5a8bd2c0 ]
mgr->ctx_handles should be protected by mgr->lock.
v2: improve commit message
v3: add a Fixes tag
Signed-off-by: Chia-I Wu olvaffe@gmail.com Reviewed-by: Christian König christian.koenig@amd.com Fixes: 52c6a62c64fa ("drm/amdgpu: add interface for editing a foreign process's priority v3") Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c index e9b45089a28a6..863b2a34b2d64 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c @@ -38,6 +38,7 @@ static int amdgpu_sched_process_priority_override(struct amdgpu_device *adev, { struct fd f = fdget(fd); struct amdgpu_fpriv *fpriv; + struct amdgpu_ctx_mgr *mgr; struct amdgpu_ctx *ctx; uint32_t id; int r; @@ -51,8 +52,11 @@ static int amdgpu_sched_process_priority_override(struct amdgpu_device *adev, return r; }
- idr_for_each_entry(&fpriv->ctx_mgr.ctx_handles, ctx, id) + mgr = &fpriv->ctx_mgr; + mutex_lock(&mgr->lock); + idr_for_each_entry(&mgr->ctx_handles, ctx, id) amdgpu_ctx_priority_override(ctx, priority); + mutex_unlock(&mgr->lock);
fdput(f); return 0;
From: Ruliang Lin u202112092@hust.edu.cn
[ Upstream commit 0d727e1856ef22dd9337199430258cb64cbbc658 ]
Smatch complains that: snd_usb_caiaq_input_init() warn: missing error code 'ret'
This patch adds a new case to handle the situation where the device does not support any input methods in the `snd_usb_caiaq_input_init` function. It returns an `-EINVAL` error code to indicate that no input methods are supported on the device.
Fixes: 523f1dce3743 ("[ALSA] Add Native Instrument usb audio device support") Signed-off-by: Ruliang Lin u202112092@hust.edu.cn Reviewed-by: Dongliang Mu dzm91@hust.edu.cn Acked-by: Daniel Mack daniel@zonque.org Link: https://lore.kernel.org/r/20230504065054.3309-1-u202112092@hust.edu.cn Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Sasha Levin sashal@kernel.org --- sound/usb/caiaq/input.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/sound/usb/caiaq/input.c b/sound/usb/caiaq/input.c index 1e2cf2f08eecd..84f26dce7f5d0 100644 --- a/sound/usb/caiaq/input.c +++ b/sound/usb/caiaq/input.c @@ -804,6 +804,7 @@ int snd_usb_caiaq_input_init(struct snd_usb_caiaqdev *cdev)
default: /* no input methods supported on this device */ + ret = -EINVAL; goto exit_free_idev; }
From: Claudio Imbrenda imbrenda@linux.ibm.com
[ Upstream commit c148dc8e2fa403be501612ee409db866eeed35c0 ]
Fix a potential race in gmap_make_secure() and remove the last user of follow_page() without FOLL_GET.
The old code is locking something it doesn't have a reference to, and as explained by Jason and David in this discussion: https://lore.kernel.org/linux-mm/Y9J4P%2FRNvY1Ztn0Q@nvidia.com/ it can lead to all kinds of bad things, including the page getting unmapped (MADV_DONTNEED), freed, or reallocated as a larger folio, in which case the unlock_page() would target the wrong bit. There is also another race involving FOLL_WRITE, between the follow_page() and the get_locked_pte() calls.
The main point is to remove the last use of follow_page() without FOLL_GET or FOLL_PIN; removing the races can be considered a nice bonus.
Link: https://lore.kernel.org/linux-mm/Y9J4P%2FRNvY1Ztn0Q@nvidia.com/ Suggested-by: Jason Gunthorpe jgg@nvidia.com Fixes: 214d9bbcd3a6 ("s390/mm: provide memory management functions for protected KVM guests") Reviewed-by: Jason Gunthorpe jgg@nvidia.com Signed-off-by: Claudio Imbrenda imbrenda@linux.ibm.com Message-Id: 20230428092753.27913-2-imbrenda@linux.ibm.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/s390/kernel/uv.c | 32 +++++++++++--------------------- 1 file changed, 11 insertions(+), 21 deletions(-)
diff --git a/arch/s390/kernel/uv.c b/arch/s390/kernel/uv.c index f9810d2a267c6..5caa0ed2b594a 100644 --- a/arch/s390/kernel/uv.c +++ b/arch/s390/kernel/uv.c @@ -192,21 +192,10 @@ static int expected_page_refs(struct page *page) return res; }
-static int make_secure_pte(pte_t *ptep, unsigned long addr, - struct page *exp_page, struct uv_cb_header *uvcb) +static int make_page_secure(struct page *page, struct uv_cb_header *uvcb) { - pte_t entry = READ_ONCE(*ptep); - struct page *page; int expected, cc = 0;
- if (!pte_present(entry)) - return -ENXIO; - if (pte_val(entry) & _PAGE_INVALID) - return -ENXIO; - - page = pte_page(entry); - if (page != exp_page) - return -ENXIO; if (PageWriteback(page)) return -EAGAIN; expected = expected_page_refs(page); @@ -297,17 +286,18 @@ int gmap_make_secure(struct gmap *gmap, unsigned long gaddr, void *uvcb) goto out;
rc = -ENXIO; - page = follow_page(vma, uaddr, FOLL_WRITE); - if (IS_ERR_OR_NULL(page)) - goto out; - - lock_page(page); ptep = get_locked_pte(gmap->mm, uaddr, &ptelock); - if (should_export_before_import(uvcb, gmap->mm)) - uv_convert_from_secure(page_to_phys(page)); - rc = make_secure_pte(ptep, uaddr, page, uvcb); + if (pte_present(*ptep) && !(pte_val(*ptep) & _PAGE_INVALID) && pte_write(*ptep)) { + page = pte_page(*ptep); + rc = -EAGAIN; + if (trylock_page(page)) { + if (should_export_before_import(uvcb, gmap->mm)) + uv_convert_from_secure(page_to_phys(page)); + rc = make_page_secure(page, uvcb); + unlock_page(page); + } + } pte_unmap_unlock(ptep, ptelock); - unlock_page(page); out: mmap_read_unlock(gmap->mm);
From: Arınç ÜNAL arinc.unal@arinc9.com
[ Upstream commit 37c218d8021e36e226add4bab93d071d30fe0704 ]
The multi-chip module MT7530 switch with a 40 MHz oscillator on the MT7621AT, MT7621DAT, and MT7621ST SoCs forwards corrupt frames using trgmii.
This is caused by the assumption that MT7621 SoCs have got 150 MHz PLL, hence using the ncpo1 value, 0x0780.
My testing shows this value works on Unielec U7621-06, Bartel's testing shows it won't work on Hi-Link HLK-MT7621A and Netgear WAC104. All devices tested have got 40 MHz oscillators.
Using the value for 125 MHz PLL, 0x0640, works on all boards at hand. The definitions for 125 MHz PLL exist in the Banana Pi BPI-R2 BSP source code, whilst ones for 150 MHz PLL don't.
Forwarding frames using trgmii on the MCM MT7530 switch with a 25 MHz oscillator on the said MT7621 SoCs works fine because the ncpo1 value defined for it is for 125 MHz PLL.
Change the 150 MHz PLL comment to 125 MHz PLL, and use the 125 MHz PLL ncpo1 values for both oscillator frequencies.
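For reference, the ncpo1 values quoted above are mutually consistent with reading ncpo1 as a multiplier over a fixed divisor of 512, i.e. PLL = xtal * ncpo1 / 512; this relation is only inferred from the numbers here, not taken from a datasheet:
0x0780 * 40 MHz / 512 = 1920 * 40 / 512 = 150 MHz (old 40 MHz oscillator value)
0x0640 * 40 MHz / 512 = 1600 * 40 / 512 = 125 MHz (new 40 MHz oscillator value)
0x0a00 * 25 MHz / 512 = 2560 * 25 / 512 = 125 MHz (25 MHz oscillator value)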
Link: https://github.com/BPI-SINOVOIP/BPI-R2-bsp/blob/81d24bbce7d99524d0771a8bdb2d... Fixes: 7ef6f6f8d237 ("net: dsa: mt7530: Add MT7621 TRGMII mode support") Tested-by: Bartel Eerdekens bartel.eerdekens@constell8.be Signed-off-by: Arınç ÜNAL arinc.unal@arinc9.com Reviewed-by: Florian Fainelli f.fainelli@gmail.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/dsa/mt7530.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c index 38bf760b5b5ee..3ef92ac7815c1 100644 --- a/drivers/net/dsa/mt7530.c +++ b/drivers/net/dsa/mt7530.c @@ -446,9 +446,9 @@ mt7530_pad_clk_setup(struct dsa_switch *ds, phy_interface_t interface) else ssc_delta = 0x87; if (priv->id == ID_MT7621) { - /* PLL frequency: 150MHz: 1.2GBit */ + /* PLL frequency: 125MHz: 1.0GBit */ if (xtal == HWTRAP_XTAL_40MHZ) - ncpo1 = 0x0780; + ncpo1 = 0x0640; if (xtal == HWTRAP_XTAL_25MHZ) ncpo1 = 0x0a00; } else { /* PLL frequency: 250MHz: 2.0Gbit */
From: Daniel Golle daniel@makrotopia.org
[ Upstream commit 7f54cc9772ced2d76ac11832f0ada43798443ac9 ]
MT7988 shares a significant part of the setup function with MT7531. Split off those parts into a shared function which is also going to be used by mt7988_setup.
Signed-off-by: Daniel Golle daniel@makrotopia.org Reviewed-by: Andrew Lunn andrew@lunn.ch Signed-off-by: David S. Miller davem@davemloft.net Stable-dep-of: 120a56b01bee ("net: dsa: mt7530: fix network connectivity with multiple CPU ports") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/dsa/mt7530.c | 99 ++++++++++++++++++++++------------------ 1 file changed, 55 insertions(+), 44 deletions(-)
diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c index 3ef92ac7815c1..4e63f14c42228 100644 --- a/drivers/net/dsa/mt7530.c +++ b/drivers/net/dsa/mt7530.c @@ -2312,12 +2312,65 @@ mt7530_setup(struct dsa_switch *ds) return 0; }
+static int +mt7531_setup_common(struct dsa_switch *ds) +{ + struct mt7530_priv *priv = ds->priv; + struct dsa_port *cpu_dp; + int ret, i; + + /* BPDU to CPU port */ + dsa_switch_for_each_cpu_port(cpu_dp, ds) { + mt7530_rmw(priv, MT7531_CFC, MT7531_CPU_PMAP_MASK, + BIT(cpu_dp->index)); + break; + } + mt7530_rmw(priv, MT753X_BPC, MT753X_BPDU_PORT_FW_MASK, + MT753X_BPDU_CPU_ONLY); + + /* Enable and reset MIB counters */ + mt7530_mib_reset(ds); + + for (i = 0; i < MT7530_NUM_PORTS; i++) { + /* Disable forwarding by default on all ports */ + mt7530_rmw(priv, MT7530_PCR_P(i), PCR_MATRIX_MASK, + PCR_MATRIX_CLR); + + /* Disable learning by default on all ports */ + mt7530_set(priv, MT7530_PSC_P(i), SA_DIS); + + mt7530_set(priv, MT7531_DBG_CNT(i), MT7531_DIS_CLR); + + if (dsa_is_cpu_port(ds, i)) { + ret = mt753x_cpu_port_enable(ds, i); + if (ret) + return ret; + } else { + mt7530_port_disable(ds, i); + + /* Set default PVID to 0 on all user ports */ + mt7530_rmw(priv, MT7530_PPBV1_P(i), G0_PORT_VID_MASK, + G0_PORT_VID_DEF); + } + + /* Enable consistent egress tag */ + mt7530_rmw(priv, MT7530_PVC_P(i), PVC_EG_TAG_MASK, + PVC_EG_TAG(MT7530_VLAN_EG_CONSISTENT)); + } + + /* Flush the FDB table */ + ret = mt7530_fdb_cmd(priv, MT7530_FDB_FLUSH, NULL); + if (ret < 0) + return ret; + + return 0; +} + static int mt7531_setup(struct dsa_switch *ds) { struct mt7530_priv *priv = ds->priv; struct mt7530_dummy_poll p; - struct dsa_port *cpu_dp; u32 val, id; int ret, i;
@@ -2395,44 +2448,7 @@ mt7531_setup(struct dsa_switch *ds) mt7531_ind_c45_phy_write(priv, MT753X_CTRL_PHY_ADDR, MDIO_MMD_VEND2, CORE_PLL_GROUP4, val);
- /* BPDU to CPU port */ - dsa_switch_for_each_cpu_port(cpu_dp, ds) { - mt7530_rmw(priv, MT7531_CFC, MT7531_CPU_PMAP_MASK, - BIT(cpu_dp->index)); - break; - } - mt7530_rmw(priv, MT753X_BPC, MT753X_BPDU_PORT_FW_MASK, - MT753X_BPDU_CPU_ONLY); - - /* Enable and reset MIB counters */ - mt7530_mib_reset(ds); - - for (i = 0; i < MT7530_NUM_PORTS; i++) { - /* Disable forwarding by default on all ports */ - mt7530_rmw(priv, MT7530_PCR_P(i), PCR_MATRIX_MASK, - PCR_MATRIX_CLR); - - /* Disable learning by default on all ports */ - mt7530_set(priv, MT7530_PSC_P(i), SA_DIS); - - mt7530_set(priv, MT7531_DBG_CNT(i), MT7531_DIS_CLR); - - if (dsa_is_cpu_port(ds, i)) { - ret = mt753x_cpu_port_enable(ds, i); - if (ret) - return ret; - } else { - mt7530_port_disable(ds, i); - - /* Set default PVID to 0 on all user ports */ - mt7530_rmw(priv, MT7530_PPBV1_P(i), G0_PORT_VID_MASK, - G0_PORT_VID_DEF); - } - - /* Enable consistent egress tag */ - mt7530_rmw(priv, MT7530_PVC_P(i), PVC_EG_TAG_MASK, - PVC_EG_TAG(MT7530_VLAN_EG_CONSISTENT)); - } + mt7531_setup_common(ds);
/* Setup VLAN ID 0 for VLAN-unaware bridges */ ret = mt7530_setup_vlan0(priv); @@ -2442,11 +2458,6 @@ mt7531_setup(struct dsa_switch *ds) ds->assisted_learning_on_cpu_port = true; ds->mtu_enforcement_ingress = true;
- /* Flush the FDB table */ - ret = mt7530_fdb_cmd(priv, MT7530_FDB_FLUSH, NULL); - if (ret < 0) - return ret; - return 0; }
From: Arınç ÜNAL arinc.unal@arinc9.com
[ Upstream commit 120a56b01beed51ab5956a734adcfd2760307107 ]
In mt753x_cpu_port_enable() there's code that enables flooding for the CPU port only. Since mt753x_cpu_port_enable() runs twice when both CPU ports are enabled, port 6 becomes the only port to forward the frames to. But port 5 is the active port, so no frames received from the user ports will be forwarded to port 5, which breaks network connectivity.
Every bit of the BC_FFP, UNM_FFP, and UNU_FFP bits represents a port. Fix this issue by setting the bit that corresponds to the CPU port without overwriting the other bits.
Clear the bits beforehand only for the MT7531 switch. According to the documents MT7621 Giga Switch Programming Guide v0.3 and MT7531 Reference Manual for Development Board v1.0, after reset, the BC_FFP, UNM_FFP, and UNU_FFP bits are set to 1 for MT7531, 0 for MT7530.
The commit 5e5502e012b8 ("net: dsa: mt7530: fix roaming from DSA user ports") silently changed the method to set the bits on the MT7530_MFC. Instead of clearing the relevant bits before mt7530_cpu_port_enable() which runs under a for loop, the commit started doing it on mt7530_cpu_port_enable().
Back then, this didn't really matter as only a single CPU port could be used since the CPU port number was hardcoded. The driver was later changed with commit 1f9a6abecf53 ("net: dsa: mt7530: get cpu-port via dp->cpu_dp instead of constant") to retrieve the CPU port via dp->cpu_dp. With that, this silent change became an issue when using multiple CPU ports, as the sketch below illustrates.
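A minimal standalone sketch of the difference, assuming a simplified register model with one flooding bit per port (the real driver also touches UNM_FFP and UNU_FFP and goes through the mt7530 register helpers):

#include <stdio.h>

#define BC_FFP(x)    ((unsigned int)(x))   /* hypothetical one-bit-per-port field */
#define BC_FFP_MASK  BC_FFP(0xff)

int main(void)
{
	unsigned int mfc_rmw = 0, mfc_set = 0;
	int port;

	for (port = 5; port <= 6; port++) {
		/* Old behaviour: the rmw helper cleared the whole field before
		 * writing one bit, so the second CPU port overwrote the first. */
		mfc_rmw = (mfc_rmw & ~BC_FFP_MASK) | BC_FFP(1u << port);

		/* New behaviour: only OR in the bit of the port being enabled. */
		mfc_set |= BC_FFP(1u << port);
	}

	printf("rmw: 0x%02x  set: 0x%02x\n", mfc_rmw, mfc_set);
	/* Prints "rmw: 0x40  set: 0x60": with rmw only port 6 ends up with
	 * flooding enabled, with set both port 5 and port 6 do. */
	return 0;
}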
Fixes: 5e5502e012b8 ("net: dsa: mt7530: fix roaming from DSA user ports") Signed-off-by: Arınç ÜNAL arinc.unal@arinc9.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/dsa/mt7530.c | 10 +++++++--- 1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c index 4e63f14c42228..855220c5ce339 100644 --- a/drivers/net/dsa/mt7530.c +++ b/drivers/net/dsa/mt7530.c @@ -1015,9 +1015,9 @@ mt753x_cpu_port_enable(struct dsa_switch *ds, int port) mt7530_write(priv, MT7530_PVC_P(port), PORT_SPEC_TAG);
- /* Disable flooding by default */ - mt7530_rmw(priv, MT7530_MFC, BC_FFP_MASK | UNM_FFP_MASK | UNU_FFP_MASK, - BC_FFP(BIT(port)) | UNM_FFP(BIT(port)) | UNU_FFP(BIT(port))); + /* Enable flooding on the CPU port */ + mt7530_set(priv, MT7530_MFC, BC_FFP(BIT(port)) | UNM_FFP(BIT(port)) | + UNU_FFP(BIT(port)));
/* Set CPU port number */ if (priv->id == ID_MT7621) @@ -2331,6 +2331,10 @@ mt7531_setup_common(struct dsa_switch *ds) /* Enable and reset MIB counters */ mt7530_mib_reset(ds);
+ /* Disable flooding on all ports */ + mt7530_clear(priv, MT7530_MFC, BC_FFP_MASK | UNM_FFP_MASK | + UNU_FFP_MASK); + for (i = 0; i < MT7530_NUM_PORTS; i++) { /* Disable forwarding by default on all ports */ mt7530_rmw(priv, MT7530_PCR_P(i), PCR_MATRIX_MASK,
From: Michal Swiatkowski michal.swiatkowski@linux.intel.com
[ Upstream commit 9f699b71c2f31c51bd1483a20e1c8ddc5986a8c9 ]
VF to VF traffic shouldn't go outside. To enforce it, set only the loopback enable bit for all ingress type rules added via the tc tool.
Fixes: 0d08a441fb1a ("ice: ndo_setup_tc implementation for PF") Reported-by: Sujai Buvaneswaran Sujai.Buvaneswaran@intel.com Signed-off-by: Michal Swiatkowski michal.swiatkowski@linux.intel.com Tested-by: George Kuruvinakunnel george.kuruvinakunnel@intel.com Reviewed-by: Simon Horman simon.horman@corigine.com Signed-off-by: Tony Nguyen anthony.l.nguyen@intel.com Reviewed-by: Leon Romanovsky leonro@nvidia.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/intel/ice/ice_tc_lib.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/intel/ice/ice_tc_lib.c b/drivers/net/ethernet/intel/ice/ice_tc_lib.c index 71cb15fcf63b9..652ef09eeb305 100644 --- a/drivers/net/ethernet/intel/ice/ice_tc_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_tc_lib.c @@ -693,17 +693,18 @@ ice_eswitch_add_tc_fltr(struct ice_vsi *vsi, struct ice_tc_flower_fltr *fltr) * results into order of switch rule evaluation. */ rule_info.priority = 7; + rule_info.flags_info.act_valid = true;
if (fltr->direction == ICE_ESWITCH_FLTR_INGRESS) { rule_info.sw_act.flag |= ICE_FLTR_RX; rule_info.sw_act.src = hw->pf_id; rule_info.rx = true; + rule_info.flags_info.act = ICE_SINGLE_ACT_LB_ENABLE; } else { rule_info.sw_act.flag |= ICE_FLTR_TX; rule_info.sw_act.src = vsi->idx; rule_info.rx = false; rule_info.flags_info.act = ICE_SINGLE_ACT_LAN_ENABLE; - rule_info.flags_info.act_valid = true; }
/* specify the cookie as filter_rule_id */
From: Wenliang Wang wangwenliang.1995@bytedance.com
[ Upstream commit f8bb5104394560e29017c25bcade4c6b7aabd108 ]
For the multi-queue and large ring-size use case, the following error occurred in free_unused_bufs(): rcu: INFO: rcu_sched self-detected stall on CPU. Detaching the unused buffers of every queue in one uninterrupted loop can keep the CPU busy long enough to trigger this stall, so yield with cond_resched() between queues.
Fixes: 986a4f4d452d ("virtio_net: multiqueue support") Signed-off-by: Wenliang Wang wangwenliang.1995@bytedance.com Acked-by: Michael S. Tsirkin mst@redhat.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/virtio_net.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 3f1883814ce21..9a612b13b4e46 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -3405,12 +3405,14 @@ static void free_unused_bufs(struct virtnet_info *vi) struct virtqueue *vq = vi->sq[i].vq; while ((buf = virtqueue_detach_unused_buf(vq)) != NULL) virtnet_sq_free_unused_buf(vq, buf); + cond_resched(); }
for (i = 0; i < vi->max_queue_pairs; i++) { struct virtqueue *vq = vi->rq[i].vq; while ((buf = virtqueue_detach_unused_buf(vq)) != NULL) virtnet_rq_free_unused_buf(vq, buf); + cond_resched(); } }
From: Wei Fang wei.fang@nxp.com
[ Upstream commit 299efdc2380aac588557f4d0b2ce7bee05bd0cf2 ]
We should check whether the current SFI (Stream Filter Instance) table is full before creating a new SFI entry. However, the previous logic checks the handle by mistake and might lead to unpredictable behavior.
Fixes: 888ae5a3952b ("net: enetc: add tc flower psfp offload driver") Signed-off-by: Wei Fang wei.fang@nxp.com Reviewed-by: Leon Romanovsky leonro@nvidia.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/freescale/enetc/enetc_qos.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/freescale/enetc/enetc_qos.c b/drivers/net/ethernet/freescale/enetc/enetc_qos.c index fcebb54224c09..a8539a8554a13 100644 --- a/drivers/net/ethernet/freescale/enetc/enetc_qos.c +++ b/drivers/net/ethernet/freescale/enetc/enetc_qos.c @@ -1255,7 +1255,7 @@ static int enetc_psfp_parse_clsflower(struct enetc_ndev_priv *priv, int index;
index = enetc_get_free_index(priv); - if (sfi->handle < 0) { + if (index < 0) { NL_SET_ERR_MSG_MOD(extack, "No Stream Filter resource!"); err = -ENOSPC; goto free_fmi;
From: Florian Fainelli f.fainelli@gmail.com
[ Upstream commit 93e0401e0fc0c54b0ac05b687cd135c2ac38187c ]
The call to phy_stop() races with the later call to phy_disconnect(), resulting in concurrent phy_suspend() calls being run from different CPUs. The final call to phy_disconnect() ensures that the PHY is stopped and suspended, too.
Fixes: c96e731c93ff ("net: bcmgenet: connect and disconnect from the PHY state machine") Signed-off-by: Florian Fainelli f.fainelli@gmail.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/broadcom/genet/bcmgenet.c | 1 - 1 file changed, 1 deletion(-)
diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c index f679ed54b3ef2..9860fd66f3bca 100644 --- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c +++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c @@ -3460,7 +3460,6 @@ static void bcmgenet_netif_stop(struct net_device *dev) /* Disable MAC transmit. TX DMA disabled must be done before this */ umac_enable_set(priv, CMD_TX_EN, false);
- phy_stop(dev->phydev); bcmgenet_disable_rx_napi(priv); bcmgenet_intr_disable(priv);
On 5/15/2023 9:25 AM, Greg Kroah-Hartman wrote:
> From: Florian Fainelli f.fainelli@gmail.com
> [ Upstream commit 93e0401e0fc0c54b0ac05b687cd135c2ac38187c ]
> The call to phy_stop() races with the later call to phy_disconnect(), resulting in concurrent phy_suspend() calls being run from different CPUs. The final call to phy_disconnect() ensures that the PHY is stopped and suspended, too.
> Fixes: c96e731c93ff ("net: bcmgenet: connect and disconnect from the PHY state machine") Signed-off-by: Florian Fainelli f.fainelli@gmail.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org
Please drop this patch until https://lore.kernel.org/lkml/20230515025608.2587012-1-f.fainelli@gmail.com/ is merged, thanks!
From: Kan Liang kan.liang@linux.intel.com
[ Upstream commit 07d85ba9d04e1ebd282f656a29ddf08c5b7b32a2 ]
Hundreds of "read LOST count failed" error messages may be displayed, when the below command is launched.
perf record -e '{cpu/mem-loads-aux/,cpu/event=0xcd,umask=0x1/}:S' -a
According to the commit 89e3106fa25fb1b6 ("libperf: Handle read format in perf_evsel__read()"), the PERF_FORMAT_GROUP is only available for the leader. However, the record__read_lost_samples() goes through every entry of an evlist, which includes both leader and member. The member event errors out and triggers the error message. Since there may be hundreds of CPUs on a server, the message will be printed hundreds of times, which is very annoying.
The message itself is correct, but the pr_err() is overkill. The other error messages in record__read_lost_samples() are all pr_debug(). To make the output consistent, change the pr_err("read LOST count failed\n"); to pr_debug("read LOST count failed\n");. The user can still get the message via the -v option.
Fixes: e3a23261ad06d598 ("perf record: Read and inject LOST_SAMPLES events") Signed-off-by: Kan Liang kan.liang@linux.intel.com Acked-by: Namhyung Kim namhyung@kernel.org Cc: Ian Rogers irogers@google.com Cc: Ingo Molnar mingo@redhat.com Cc: Jiri Olsa jolsa@kernel.org Cc: Namhyung Kim namhyung@kernel.org Cc: Peter Zijlstra peterz@infradead.org Link: https://lore.kernel.org/r/20230301150413.27011-1-kan.liang@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- tools/perf/builtin-record.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c index 48c3461b496c4..7314183cdcb6c 100644 --- a/tools/perf/builtin-record.c +++ b/tools/perf/builtin-record.c @@ -1864,7 +1864,7 @@ static void __record__read_lost_samples(struct record *rec, struct evsel *evsel, int id_hdr_size;
if (perf_evsel__read(&evsel->core, cpu_idx, thread_idx, &count) < 0) { - pr_err("read LOST count failed\n"); + pr_debug("read LOST count failed\n"); return; }
From: Roman Lozko lozko.roma@gmail.com
[ Upstream commit 1f64cfdebfe0494264271e8d7a3a47faf5f58ec7 ]
Integers are not converted to floats during division in Python 2, which results in incorrect IPC values. Fix by switching to the new division behavior.
Fixes: a483e64c0b62e93a ("perf scripting python: intel-pt-events.py: Add --insn-trace and --src-trace") Signed-off-by: Roman Lozko lozko.roma@gmail.com Acked-by: Adrian Hunter adrian.hunter@intel.com Cc: Adrian Hunter adrian.hunter@intel.com Link: https://lore.kernel.org/r/20230310150445.2925841-1-lozko.roma@gmail.com Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- tools/perf/scripts/python/intel-pt-events.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/perf/scripts/python/intel-pt-events.py b/tools/perf/scripts/python/intel-pt-events.py index 6be7fd8fd6158..8058b2fc2b686 100644 --- a/tools/perf/scripts/python/intel-pt-events.py +++ b/tools/perf/scripts/python/intel-pt-events.py @@ -11,7 +11,7 @@ # FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for # more details.
-from __future__ import print_function +from __future__ import division, print_function
import os import sys
From: Thomas Richter tmricht@linux.ibm.com
[ Upstream commit eb2feb68cb7d404288493c41480843bc9f404789 ]
Commit 7f76b31130680fb3 ("perf list: Add IBM z16 event description for s390") contains the verbal description for the z16 extended counter set.
However, some entries of the public description contain UTF-8 characters, which breaks the build on some distros.
Fix this and remove the UTF-8 characters.
Fixes: 7f76b31130680fb3 ("perf list: Add IBM z16 event description for s390") Reported-by: Arnaldo Carvalho de Melo acme@redhat.com Suggested-by: Heiko Carstens hca@linux.ibm.com Signed-off-by: Thomas Richter tmricht@linux.ibm.com Tested-by: Arnaldo Carvalho de Melo acme@redhat.com Cc: Sumanth Korikkar sumanthk@linux.ibm.com Cc: Sven Schnelle svens@linux.ibm.com Cc: Thomas Richter tmricht@linux.ibm.com Cc: Vasily Gorbik gor@linux.ibm.com Link: https://lore.kernel.org/r/ZBwkl77/I31AQk12@osiris Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- tools/perf/pmu-events/arch/s390/cf_z16/extended.json | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/tools/perf/pmu-events/arch/s390/cf_z16/extended.json b/tools/perf/pmu-events/arch/s390/cf_z16/extended.json index c306190fc06f2..c2b10ec1c6e01 100644 --- a/tools/perf/pmu-events/arch/s390/cf_z16/extended.json +++ b/tools/perf/pmu-events/arch/s390/cf_z16/extended.json @@ -95,28 +95,28 @@ "EventCode": "145", "EventName": "DCW_REQ", "BriefDescription": "Directory Write Level 1 Data Cache from Cache", - "PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from the requestor’s Level-2 cache." + "PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from the requestors Level-2 cache." }, { "Unit": "CPU-M-CF", "EventCode": "146", "EventName": "DCW_REQ_IV", "BriefDescription": "Directory Write Level 1 Data Cache from Cache with Intervention", - "PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from the requestor’s Level-2 cache with intervention." + "PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from the requestors Level-2 cache with intervention." }, { "Unit": "CPU-M-CF", "EventCode": "147", "EventName": "DCW_REQ_CHIP_HIT", "BriefDescription": "Directory Write Level 1 Data Cache from Cache with Chip HP Hit", - "PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from the requestor’s Level-2 cache after using chip level horizontal persistence, Chip-HP hit." + "PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from the requestors Level-2 cache after using chip level horizontal persistence, Chip-HP hit." }, { "Unit": "CPU-M-CF", "EventCode": "148", "EventName": "DCW_REQ_DRAWER_HIT", "BriefDescription": "Directory Write Level 1 Data Cache from Cache with Drawer HP Hit", - "PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from the requestor’s Level-2 cache after using drawer level horizontal persistence, Drawer-HP hit." + "PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from the requestors Level-2 cache after using drawer level horizontal persistence, Drawer-HP hit." }, { "Unit": "CPU-M-CF", @@ -284,7 +284,7 @@ "EventCode": "172", "EventName": "ICW_REQ_DRAWER_HIT", "BriefDescription": "Directory Write Level 1 Instruction Cache from Cache with Drawer HP Hit", - "PublicDescription": "A directory write to the Level-1 Instruction cache directory where the returned cache line was sourced from the requestor’s Level-2 cache using drawer level horizontal persistence, Drawer-HP hit." + "PublicDescription": "A directory write to the Level-1 Instruction cache directory where the returned cache line was sourced from the requestors Level-2 cache using drawer level horizontal persistence, Drawer-HP hit." }, { "Unit": "CPU-M-CF",
From: Patrice Duroux patrice.duroux@gmail.com
[ Upstream commit 9835b742ac3ee16dee361e7ccda8022f99d1cd94 ]
It's not 2&>1; the correct redirection is 2>&1.
Fixes: ade1d0307b2fb3d9 ("perf offcpu: Update offcpu test for child process") Signed-off-by: Patrice Duroux patrice.duroux@gmail.com Acked-by: Ian Rogers irogers@google.com Cc: Namhyung Kim namhyung@kernel.org Link: https://lore.kernel.org/r/20230303193058.21274-1-patrice.duroux@gmail.com Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- tools/perf/tests/shell/record_offcpu.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/perf/tests/shell/record_offcpu.sh b/tools/perf/tests/shell/record_offcpu.sh index d2eba583a2ac9..054272750aa9c 100755 --- a/tools/perf/tests/shell/record_offcpu.sh +++ b/tools/perf/tests/shell/record_offcpu.sh @@ -65,7 +65,7 @@ test_offcpu_child() {
# perf bench sched messaging creates 400 processes if ! perf record --off-cpu -e dummy -o ${perfdata} -- \ - perf bench sched messaging -g 10 > /dev/null 2&>1 + perf bench sched messaging -g 10 > /dev/null 2>&1 then echo "Child task off-cpu test [Failed record]" err=1
From: Yang Jihong yangjihong1@huawei.com
[ Upstream commit ecd4960d908e27e40b63a7046df2f942c148c6f6 ]
If no target is specified for the 'latency' subcommand, the execution fails because -1 (an invalid value) is written to the set_ftrace_pid tracefs file. Make system wide the default target, which is the same as the default behavior of the 'trace' subcommand.
Before the fix:
# perf ftrace latency -T schedule failed to set ftrace pid
After the fix:
# perf ftrace latency -T schedule ^C# DURATION | COUNT | GRAPH | 0 - 1 us | 0 | | 1 - 2 us | 0 | | 2 - 4 us | 0 | | 4 - 8 us | 2828 | #### | 8 - 16 us | 23953 | ######################################## | 16 - 32 us | 408 | | 32 - 64 us | 318 | | 64 - 128 us | 4 | | 128 - 256 us | 3 | | 256 - 512 us | 0 | | 512 - 1024 us | 1 | | 1 - 2 ms | 4 | | 2 - 4 ms | 0 | | 4 - 8 ms | 0 | | 8 - 16 ms | 0 | | 16 - 32 ms | 0 | | 32 - 64 ms | 0 | | 64 - 128 ms | 0 | | 128 - 256 ms | 4 | | 256 - 512 ms | 2 | | 512 - 1024 ms | 0 | | 1 - ... s | 0 | |
Fixes: 53be50282269b46c ("perf ftrace: Add 'latency' subcommand") Signed-off-by: Yang Jihong yangjihong1@huawei.com Acked-by: Namhyung Kim namhyung@kernel.org Cc: Adrian Hunter adrian.hunter@intel.com Cc: Alexander Shishkin alexander.shishkin@linux.intel.com Cc: Ian Rogers irogers@google.com Cc: Ingo Molnar mingo@redhat.com Cc: Jiri Olsa jolsa@kernel.org Cc: Mark Rutland mark.rutland@arm.com Cc: Peter Zijlstra peterz@infradead.org Link: https://lore.kernel.org/r/20230324032702.109964-1-yangjihong1@huawei.com Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- tools/perf/builtin-ftrace.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/tools/perf/builtin-ftrace.c b/tools/perf/builtin-ftrace.c index 7de07bb16d235..4bc5b7cf3e04b 100644 --- a/tools/perf/builtin-ftrace.c +++ b/tools/perf/builtin-ftrace.c @@ -1228,10 +1228,12 @@ int cmd_ftrace(int argc, const char **argv) goto out_delete_filters; }
+ /* Make system wide (-a) the default target. */ + if (!argc && target__none(&ftrace.target)) + ftrace.target.system_wide = true; + switch (subcmd) { case PERF_FTRACE_TRACE: - if (!argc && target__none(&ftrace.target)) - ftrace.target.system_wide = true; cmd_func = __cmd_ftrace; break; case PERF_FTRACE_LATENCY:
From: Kajol Jain kjain@linux.ibm.com
[ Upstream commit 5d9df8731c0941f3add30f96745a62586a0c9d52 ]
Commit 3c22ba5243040c13 ("perf vendor events powerpc: Update POWER9 events") added and updated power9 PMU JSON events. However, some of the JSON events which are part of the other.json and pipeline.json files contain UTF-8 characters in their brief description. Having UTF-8 characters could break the perf build on some distros.
Fix this issue by removing the UTF-8 characters from other.json and pipeline.json files.
Result without the fix:
[command]# file -i pmu-events/arch/powerpc/power9/* pmu-events/arch/powerpc/power9/cache.json: application/json; charset=us-ascii pmu-events/arch/powerpc/power9/floating-point.json: application/json; charset=us-ascii pmu-events/arch/powerpc/power9/frontend.json: application/json; charset=us-ascii pmu-events/arch/powerpc/power9/marked.json: application/json; charset=us-ascii pmu-events/arch/powerpc/power9/memory.json: application/json; charset=us-ascii pmu-events/arch/powerpc/power9/metrics.json: application/json; charset=us-ascii pmu-events/arch/powerpc/power9/nest_metrics.json: application/json; charset=us-ascii pmu-events/arch/powerpc/power9/other.json: application/json; charset=utf-8 pmu-events/arch/powerpc/power9/pipeline.json: application/json; charset=utf-8 pmu-events/arch/powerpc/power9/pmc.json: application/json; charset=us-ascii pmu-events/arch/powerpc/power9/translation.json: application/json; charset=us-ascii [command]#
Result with the fix:
[command]# file -i pmu-events/arch/powerpc/power9/* pmu-events/arch/powerpc/power9/cache.json: application/json; charset=us-ascii pmu-events/arch/powerpc/power9/floating-point.json: application/json; charset=us-ascii pmu-events/arch/powerpc/power9/frontend.json: application/json; charset=us-ascii pmu-events/arch/powerpc/power9/marked.json: application/json; charset=us-ascii pmu-events/arch/powerpc/power9/memory.json: application/json; charset=us-ascii pmu-events/arch/powerpc/power9/metrics.json: application/json; charset=us-ascii pmu-events/arch/powerpc/power9/nest_metrics.json: application/json; charset=us-ascii pmu-events/arch/powerpc/power9/other.json: application/json; charset=us-ascii pmu-events/arch/powerpc/power9/pipeline.json: application/json; charset=us-ascii pmu-events/arch/powerpc/power9/pmc.json: application/json; charset=us-ascii pmu-events/arch/powerpc/power9/translation.json: application/json; charset=us-ascii [command]#
Fixes: 3c22ba5243040c13 ("perf vendor events powerpc: Update POWER9 events") Reported-by: Arnaldo Carvalho de Melo acme@kernel.com Signed-off-by: Kajol Jain kjain@linux.ibm.com Acked-by: Ian Rogers irogers@google.com Tested-by: Arnaldo Carvalho de Melo acme@redhat.com Cc: Athira Rajeev atrajeev@linux.vnet.ibm.com Cc: Disha Goel disgoel@linux.ibm.com Cc: Jiri Olsa jolsa@kernel.org Cc: Madhavan Srinivasan maddy@linux.ibm.com Cc: Sukadev Bhattiprolu sukadev@linux.vnet.ibm.com Cc: linuxppc-dev@lists.ozlabs.org Link: https://lore.kernel.org/lkml/ZBxP77deq7ikTxwG@kernel.org/ Link: https://lore.kernel.org/r/20230328112908.113158-1-kjain@linux.ibm.com Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- tools/perf/pmu-events/arch/powerpc/power9/other.json | 4 ++-- tools/perf/pmu-events/arch/powerpc/power9/pipeline.json | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/tools/perf/pmu-events/arch/powerpc/power9/other.json b/tools/perf/pmu-events/arch/powerpc/power9/other.json index 3f69422c21f99..f10bd554521a0 100644 --- a/tools/perf/pmu-events/arch/powerpc/power9/other.json +++ b/tools/perf/pmu-events/arch/powerpc/power9/other.json @@ -1417,7 +1417,7 @@ { "EventCode": "0x45054", "EventName": "PM_FMA_CMPL", - "BriefDescription": "two flops operation completed (fmadd, fnmadd, fmsub, fnmsub) Scalar instructions only. " + "BriefDescription": "two flops operation completed (fmadd, fnmadd, fmsub, fnmsub) Scalar instructions only." }, { "EventCode": "0x201E8", @@ -2017,7 +2017,7 @@ { "EventCode": "0xC0BC", "EventName": "PM_LSU_FLUSH_OTHER", - "BriefDescription": "Other LSU flushes including: Sync (sync ack from L2 caused search of LRQ for oldest snooped load, This will either signal a Precise Flush of the oldest snooped loa or a Flush Next PPC); Data Valid Flush Next (several cases of this, one example is store and reload are lined up such that a store-hit-reload scenario exists and the CDF has already launched and has gotten bad/stale data); Bad Data Valid Flush Next (might be a few cases of this, one example is a larxa (D$ hit) return data and dval but can't allocate to LMQ (LMQ full or other reason). Already gave dval but can't watch it for snoop_hit_larx. Need to take the “bad dval” back and flush all younger ops)" + "BriefDescription": "Other LSU flushes including: Sync (sync ack from L2 caused search of LRQ for oldest snooped load, This will either signal a Precise Flush of the oldest snooped loa or a Flush Next PPC); Data Valid Flush Next (several cases of this, one example is store and reload are lined up such that a store-hit-reload scenario exists and the CDF has already launched and has gotten bad/stale data); Bad Data Valid Flush Next (might be a few cases of this, one example is a larxa (D$ hit) return data and dval but can't allocate to LMQ (LMQ full or other reason). Already gave dval but can't watch it for snoop_hit_larx. Need to take the 'bad dval' back and flush all younger ops)" }, { "EventCode": "0x5094", diff --git a/tools/perf/pmu-events/arch/powerpc/power9/pipeline.json b/tools/perf/pmu-events/arch/powerpc/power9/pipeline.json index d0265f255de2b..723bffa41c448 100644 --- a/tools/perf/pmu-events/arch/powerpc/power9/pipeline.json +++ b/tools/perf/pmu-events/arch/powerpc/power9/pipeline.json @@ -442,7 +442,7 @@ { "EventCode": "0x4D052", "EventName": "PM_2FLOP_CMPL", - "BriefDescription": "DP vector version of fmul, fsub, fcmp, fsel, fabs, fnabs, fres ,fsqrte, fneg " + "BriefDescription": "DP vector version of fmul, fsub, fcmp, fsel, fabs, fnabs, fres ,fsqrte, fneg" }, { "EventCode": "0x1F142",
From: Arnaldo Carvalho de Melo acme@redhat.com
[ Upstream commit 57f14b5ae1a97537f2abd2828ee7212cada7036e ]
An audit showed just this one problem with zfree(), fix it.
Fixes: 9fbc61f832ebf432 ("perf pmu: Add support for PMU capabilities") Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- tools/perf/util/pmu.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c index 03284059175f7..9a762c0cc53ce 100644 --- a/tools/perf/util/pmu.c +++ b/tools/perf/util/pmu.c @@ -1845,7 +1845,7 @@ static int perf_pmu__new_caps(struct list_head *list, char *name, char *value) return 0;
free_name: - zfree(caps->name); + zfree(&caps->name); free_caps: free(caps);
From: Markus Elfring Markus.Elfring@web.de
[ Upstream commit c160118a90d4acf335993d8d59b02ae2147a524e ]
Addresses of two data structure members were determined before corresponding null pointer checks in the implementation of the function “sort__sym_from_cmp”.
Thus avoid the risk of undefined behaviour by removing the extra initialisations of the local variables “from_l” and “from_r” (also because they are already reassigned with the same values after this pointer check).
This issue was detected by using the Coccinelle software.
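A minimal sketch of the pattern, using hypothetical stand-in types rather than the actual perf structures: the member addresses are only computed after the pointers they hang off have been checked for NULL.

#include <stdint.h>

struct branch_info { int from; };
struct hist_entry  { struct branch_info *branch_info; };

/* Addresses of members are taken only after the NULL checks. */
int64_t cmp_from(const struct hist_entry *left, const struct hist_entry *right)
{
	const int *from_l, *from_r;

	if (!left->branch_info || !right->branch_info)
		return !!right->branch_info - !!left->branch_info; /* stand-in for cmp_null() */

	from_l = &left->branch_info->from;
	from_r = &right->branch_info->from;
	return (int64_t)*from_r - *from_l;
}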
Fixes: 1b9e97a2a95e4941 ("perf tools: Fix report -F symbol_from for data without branch info") Signed-off-by: elfring@users.sourceforge.net Acked-by: Ian Rogers irogers@google.com Cc: Adrian Hunter adrian.hunter@intel.com Cc: Alexander Shishkin alexander.shishkin@linux.intel.com Cc: Andi Kleen ak@linux.intel.com Cc: German Gomez german.gomez@arm.com Cc: Ingo Molnar mingo@redhat.com Cc: Jiri Olsa jolsa@kernel.org Cc: Kan Liang kan.liang@linux.intel.com Cc: Mark Rutland mark.rutland@arm.com Cc: Namhyung Kim namhyung@kernel.org Link: https://lore.kernel.org/cocci/54a21fea-64e3-de67-82ef-d61b90ffad05@web.de/ Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- tools/perf/util/sort.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c index 2e7330867e2ef..6882b17144994 100644 --- a/tools/perf/util/sort.c +++ b/tools/perf/util/sort.c @@ -876,8 +876,7 @@ static int hist_entry__dso_to_filter(struct hist_entry *he, int type, static int64_t sort__sym_from_cmp(struct hist_entry *left, struct hist_entry *right) { - struct addr_map_symbol *from_l = &left->branch_info->from; - struct addr_map_symbol *from_r = &right->branch_info->from; + struct addr_map_symbol *from_l, *from_r;
if (!left->branch_info || !right->branch_info) return cmp_null(left->branch_info, right->branch_info);
From: James Clark james.clark@arm.com
[ Upstream commit 449067f3fc9f340da54e383738286881e6634d0b ]
In this context, timeless refers to the trace data rather than the perf event data. But when detecting whether there are timestamps in the trace data or not, the presence of a timestamp flag on any perf event is used.
Since commit f42c0ce573df ("perf record: Always get text_poke events with --kcore option"), timestamps were added to a tracking event when --kcore is used, which breaks this detection mechanism. Fix it by detecting whether trace timestamps exist by looking at the ETM config flags. This would have always been a more accurate way of doing it anyway.
This fixes the following error message when using --kcore with Coresight:
$ perf record --kcore -e cs_etm// --per-thread $ perf report The perf.data/data data has no samples!
Fixes: f42c0ce573df79d1 ("perf record: Always get text_poke events with --kcore option") Reported-by: Yang Shi shy828301@gmail.com Signed-off-by: James Clark james.clark@arm.com Cc: Adrian Hunter adrian.hunter@intel.com Cc: Alexander Shishkin alexander.shishkin@linux.intel.com Cc: Ian Rogers irogers@google.com Cc: Ingo Molnar mingo@redhat.com Cc: Jiri Olsa jolsa@kernel.org Cc: John Garry john.g.garry@oracle.com Cc: Leo Yan leo.yan@linaro.org Cc: Mark Rutland mark.rutland@arm.com Cc: Mathieu Poirier mathieu.poirier@linaro.org Cc: Mike Leach mike.leach@linaro.org Cc: Namhyung Kim namhyung@kernel.org Cc: Peter Zijlstra peterz@infradead.org Cc: Suzuki Poulouse suzuki.poulose@arm.com Cc: Will Deacon will@kernel.org Cc: coresight@lists.linaro.org Cc: denik@google.com Cc: linux-arm-kernel@lists.infradead.org Link: https://lore.kernel.org/lkml/CAHbLzkrJQTrYBtPkf=jf3OpQ-yBcJe7XkvQstX9j2frz4W... Link: https://lore.kernel.org/r/20230424134748.228137-2-james.clark@arm.com Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- tools/perf/util/cs-etm.c | 30 ++++++++++++++++++------------ 1 file changed, 18 insertions(+), 12 deletions(-)
diff --git a/tools/perf/util/cs-etm.c b/tools/perf/util/cs-etm.c index 16db965ac995e..09e240e4477d0 100644 --- a/tools/perf/util/cs-etm.c +++ b/tools/perf/util/cs-etm.c @@ -2488,26 +2488,29 @@ static int cs_etm__process_auxtrace_event(struct perf_session *session, return 0; }
-static bool cs_etm__is_timeless_decoding(struct cs_etm_auxtrace *etm) +static int cs_etm__setup_timeless_decoding(struct cs_etm_auxtrace *etm) { struct evsel *evsel; struct evlist *evlist = etm->session->evlist; - bool timeless_decoding = true;
/* Override timeless mode with user input from --itrace=Z */ - if (etm->synth_opts.timeless_decoding) - return true; + if (etm->synth_opts.timeless_decoding) { + etm->timeless_decoding = true; + return 0; + }
/* - * Circle through the list of event and complain if we find one - * with the time bit set. + * Find the cs_etm evsel and look at what its timestamp setting was */ - evlist__for_each_entry(evlist, evsel) { - if ((evsel->core.attr.sample_type & PERF_SAMPLE_TIME)) - timeless_decoding = false; - } + evlist__for_each_entry(evlist, evsel) + if (cs_etm__evsel_is_auxtrace(etm->session, evsel)) { + etm->timeless_decoding = + !(evsel->core.attr.config & BIT(ETM_OPT_TS)); + return 0; + }
- return timeless_decoding; + pr_err("CS ETM: Couldn't find ETM evsel\n"); + return -EINVAL; }
static const char * const cs_etm_global_header_fmts[] = { @@ -3051,7 +3054,6 @@ int cs_etm__process_auxtrace_info(union perf_event *event, etm->snapshot_mode = (hdr[CS_ETM_SNAPSHOT] != 0); etm->metadata = metadata; etm->auxtrace_type = auxtrace_info->type; - etm->timeless_decoding = cs_etm__is_timeless_decoding(etm);
etm->auxtrace.process_event = cs_etm__process_event; etm->auxtrace.process_auxtrace_event = cs_etm__process_auxtrace_event; @@ -3061,6 +3063,10 @@ int cs_etm__process_auxtrace_info(union perf_event *event, etm->auxtrace.evsel_is_auxtrace = cs_etm__evsel_is_auxtrace; session->auxtrace = &etm->auxtrace;
+ err = cs_etm__setup_timeless_decoding(etm); + if (err) + return err; + etm->unknown_thread = thread__new(999999999, 999999999); if (!etm->unknown_thread) { err = -ENOMEM;
From: Christophe JAILLET christophe.jaillet@wanadoo.fr
[ Upstream commit 8fd91151ebcb21b3f2f2bf158ac6092192550b2b ]
SS_ENCRYPTION is (0 << 7 = 0), so the test can never be true. Use a direct comparison to SS_ENCRYPTION instead.
The same kind of test is already done the same way in sun8i_ss_run_task().
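A tiny standalone illustration of why the bitwise test is dead code when the encryption direction is encoded as 0; only SS_ENCRYPTION = (0 << 7) comes from the commit message, the SS_DECRYPTION value below is assumed purely for the example:

#include <stdio.h>

#define SS_ENCRYPTION  (0 << 7)	/* == 0, as stated in the commit message */
#define SS_DECRYPTION  (1 << 7)	/* assumed value, for illustration only */

int main(void)
{
	int dirs[] = { SS_ENCRYPTION, SS_DECRYPTION };
	int i;

	for (i = 0; i < 2; i++) {
		/* (op_dir & 0) is always 0, so the old bitwise test never fires;
		 * the direct comparison does tell the two directions apart. */
		printf("op_dir=%#x  op_dir & SS_ENCRYPTION -> %d  op_dir == SS_ENCRYPTION -> %d\n",
		       dirs[i], dirs[i] & SS_ENCRYPTION, dirs[i] == SS_ENCRYPTION);
	}
	return 0;
}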
Fixes: 359e893e8af4 ("crypto: sun8i-ss - rework handling of IV") Signed-off-by: Christophe JAILLET christophe.jaillet@wanadoo.fr Signed-off-by: Herbert Xu herbert@gondor.apana.org.au Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c index 902f6be057ec6..e97fb203690ae 100644 --- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c +++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c @@ -151,7 +151,7 @@ static int sun8i_ss_setup_ivs(struct skcipher_request *areq) } rctx->p_iv[i] = a; /* we need to setup all others IVs only in the decrypt way */ - if (rctx->op_dir & SS_ENCRYPTION) + if (rctx->op_dir == SS_ENCRYPTION) return 0; todo = min(len, sg_dma_len(sg)); len -= todo;
From: Herbert Xu herbert@gondor.apana.org.au
[ Upstream commit c35e03eaece71101ff6cbf776b86403860ac8cc3 ]
The crypto completion function currently takes a pointer to a struct crypto_async_request object. However, in reality the API does not allow the use of any part of the object apart from the data field. For example, ahash/shash will create a fake object on the stack to pass along a different data field.
This leads to potential bugs where the user may try to dereference or otherwise use the crypto_async_request object.
This patch adds some temporary scaffolding so that the completion function can take a void * instead. Once affected users have been converted this can be removed.
The helper crypto_request_complete will remain even after the conversion is complete. It should be used instead of calling the completion function directly.
Signed-off-by: Herbert Xu herbert@gondor.apana.org.au Reviewed-by: Giovanni Cabiddu giovanni.cabiddu@intel.com Signed-off-by: Herbert Xu herbert@gondor.apana.org.au Stable-dep-of: 4140aafcff16 ("crypto: engine - fix crypto_queue backlog handling") Signed-off-by: Sasha Levin sashal@kernel.org --- include/crypto/algapi.h | 7 +++++++ include/linux/crypto.h | 6 ++++++ 2 files changed, 13 insertions(+)
diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h index f50c5d1725da5..224b860647083 100644 --- a/include/crypto/algapi.h +++ b/include/crypto/algapi.h @@ -265,4 +265,11 @@ enum { CRYPTO_MSG_ALG_LOADED, };
+static inline void crypto_request_complete(struct crypto_async_request *req, + int err) +{ + crypto_completion_t complete = req->complete; + complete(req, err); +} + #endif /* _CRYPTO_ALGAPI_H */ diff --git a/include/linux/crypto.h b/include/linux/crypto.h index 2324ab6f1846b..e3c4be29aaccb 100644 --- a/include/linux/crypto.h +++ b/include/linux/crypto.h @@ -176,6 +176,7 @@ struct crypto_async_request; struct crypto_tfm; struct crypto_type;
+typedef struct crypto_async_request crypto_completion_data_t; typedef void (*crypto_completion_t)(struct crypto_async_request *req, int err);
/** @@ -595,6 +596,11 @@ struct crypto_wait { /* * Async ops completion helper functioons */ +static inline void *crypto_get_completion_data(crypto_completion_data_t *req) +{ + return req->data; +} + void crypto_req_done(struct crypto_async_request *req, int err);
static inline int crypto_wait_req(int err, struct crypto_wait *wait)
From: Herbert Xu herbert@gondor.apana.org.au
[ Upstream commit 6909823d47c17cba84e9244d04050b5db8d53789 ]
Use the crypto_request_complete helper instead of calling the completion function directly.
Signed-off-by: Herbert Xu herbert@gondor.apana.org.au Stable-dep-of: 4140aafcff16 ("crypto: engine - fix crypto_queue backlog handling") Signed-off-by: Sasha Levin sashal@kernel.org --- crypto/crypto_engine.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/crypto/crypto_engine.c b/crypto/crypto_engine.c index bb8e77077f020..48c15f4079bb8 100644 --- a/crypto/crypto_engine.c +++ b/crypto/crypto_engine.c @@ -54,7 +54,7 @@ static void crypto_finalize_request(struct crypto_engine *engine, } } lockdep_assert_in_softirq(); - req->complete(req, err); + crypto_request_complete(req, err);
kthread_queue_work(engine->kworker, &engine->pump_requests); } @@ -130,7 +130,7 @@ static void crypto_pump_requests(struct crypto_engine *engine, engine->cur_req = async_req;
if (backlog) - backlog->complete(backlog, -EINPROGRESS); + crypto_request_complete(backlog, -EINPROGRESS);
if (engine->busy) was_busy = true; @@ -214,7 +214,7 @@ static void crypto_pump_requests(struct crypto_engine *engine, }
req_err_2: - async_req->complete(async_req, ret); + crypto_request_complete(async_req, ret);
retry: /* If retry mechanism is supported, send new requests to engine */
From: Olivier Bacon olivierb89@gmail.com
[ Upstream commit 4140aafcff167b5b9e8dae6a1709a6de7cac6f74 ]
CRYPTO_TFM_REQ_MAY_BACKLOG tells the crypto driver that it should internally backlog requests until the crypto hw's queue becomes full. At that point, crypto_engine backlogs the request and returns -EBUSY. Calling driver such as dm-crypt then waits until the complete() function is called with a status of -EINPROGRESS before sending a new request.
The problem lies in the call to complete() with a value of -EINPROGRESS that is made when a backlog item is present on the queue. The call is done before the successful execution of the crypto request. In the case that do_one_request() returns < 0 and the retry support is available, the request is put back in the queue. This leads upper drivers to send a new request even if the queue is still full.
The problem can be reproduced by doing a large dd into a crypto dm-crypt device. This is pretty easy to see when using the Freescale CAAM crypto driver and SWIOTLB DMA. Since the actual number of requests that can be held in the queue is unlimited, we get IO errors and DMA allocation failures.
The fix is to call complete with a value of -EINPROGRESS only if the request is not enqueued back in crypto_queue. This is done by calling complete() later in the code. In order to delay the decision, crypto_queue is modified to correctly set the backlog pointer when a request is enqueued back.
Fixes: 6a89f492f8e5 ("crypto: engine - support for parallel requests based on retry mechanism") Co-developed-by: Sylvain Ouellet souellet@genetec.com Signed-off-by: Sylvain Ouellet souellet@genetec.com Signed-off-by: Olivier Bacon obacon@genetec.com Signed-off-by: Herbert Xu herbert@gondor.apana.org.au Signed-off-by: Sasha Levin sashal@kernel.org --- crypto/algapi.c | 3 +++ crypto/crypto_engine.c | 6 +++--- 2 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/crypto/algapi.c b/crypto/algapi.c index c72622f20f52b..8c3a869cc43a9 100644 --- a/crypto/algapi.c +++ b/crypto/algapi.c @@ -944,6 +944,9 @@ EXPORT_SYMBOL_GPL(crypto_enqueue_request); void crypto_enqueue_request_head(struct crypto_queue *queue, struct crypto_async_request *request) { + if (unlikely(queue->qlen >= queue->max_qlen)) + queue->backlog = queue->backlog->prev; + queue->qlen++; list_add(&request->list, &queue->list); } diff --git a/crypto/crypto_engine.c b/crypto/crypto_engine.c index 48c15f4079bb8..50bac2ab55f17 100644 --- a/crypto/crypto_engine.c +++ b/crypto/crypto_engine.c @@ -129,9 +129,6 @@ static void crypto_pump_requests(struct crypto_engine *engine, if (!engine->retry_support) engine->cur_req = async_req;
- if (backlog) - crypto_request_complete(backlog, -EINPROGRESS); - if (engine->busy) was_busy = true; else @@ -217,6 +214,9 @@ static void crypto_pump_requests(struct crypto_engine *engine, crypto_request_complete(async_req, ret);
retry: + if (backlog) + crypto_request_complete(backlog, -EINPROGRESS); + /* If retry mechanism is supported, send new requests to engine */ if (engine->retry_support) { spin_lock_irqsave(&engine->queue_lock, flags);
From: Yang Jihong yangjihong1@huawei.com
[ Upstream commit 1511e4696acb715a4fe48be89e1e691daec91c0e ]
In elf_read_build_id(), if a GNU build_id is found, the function should return the size of the data actually copied. If descsz is greater than BUILD_ID_SIZE, an out-of-bounds data access may occur in write_buildid().
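For illustration only (a standalone sketch with simplified names, not the perf source): the helper has to report how many bytes it actually copied, otherwise a caller whose buffer is sized for BUILD_ID_SIZE may be told that more data is valid than the buffer holds.

#include <stdio.h>
#include <string.h>

#define BUILD_ID_SIZE 20	/* what the caller's buffer can hold */

/* Copy at most 'size' bytes of a note descriptor of length 'descsz'. */
static int read_build_id(void *bf, size_t size, const void *desc, size_t descsz)
{
	size_t sz = size < descsz ? size : descsz;

	memcpy(bf, desc, sz);
	memset((char *)bf + sz, 0, size - sz);
	return (int)sz;		/* report what was copied, not descsz */
}

int main(void)
{
	unsigned char note[32];			/* descsz = 32 > BUILD_ID_SIZE */
	unsigned char buf[BUILD_ID_SIZE];
	int len;

	memset(note, 0xab, sizeof(note));
	len = read_build_id(buf, sizeof(buf), note, sizeof(note));

	/* A caller trusting a return value of 32 would read past buf[19]. */
	printf("copied %d bytes into a %zu-byte buffer\n", len, sizeof(buf));
	return 0;
}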
Fixes: be96ea8ffa788dcc ("perf symbols: Fix issue with binaries using 16-bytes buildids (v2)") Reported-by: Will Ochowicz Will.Ochowicz@genusplc.com Signed-off-by: Yang Jihong yangjihong1@huawei.com Tested-by: Will Ochowicz Will.Ochowicz@genusplc.com Acked-by: Adrian Hunter adrian.hunter@intel.com Cc: Alexander Shishkin alexander.shishkin@linux.intel.com Cc: Ian Rogers irogers@google.com Cc: Ingo Molnar mingo@redhat.com Cc: Jiri Olsa jolsa@kernel.org Cc: Leo Yan leo.yan@linaro.org Cc: Mark Rutland mark.rutland@arm.com Cc: Namhyung Kim namhyung@kernel.org Cc: Peter Zijlstra peterz@infradead.org Cc: Stephane Eranian eranian@google.com Link: https://lore.kernel.org/lkml/CWLP265MB49702F7BA3D6D8F13E4B1A719C649@CWLP265M... Link: https://lore.kernel.org/r/20230427012841.231729-1-yangjihong1@huawei.com Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- tools/perf/util/symbol-elf.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c index 80345695b1360..29c9348c30f00 100644 --- a/tools/perf/util/symbol-elf.c +++ b/tools/perf/util/symbol-elf.c @@ -553,7 +553,7 @@ static int elf_read_build_id(Elf *elf, void *bf, size_t size) size_t sz = min(size, descsz); memcpy(bf, ptr, sz); memset(bf + sz, 0, size - sz); - err = descsz; + err = sz; break; } }
From: Yang Jihong yangjihong1@huawei.com
[ Upstream commit 9b86c49710eec7b4fbb78a0232b2dd0972a2b576 ]
When is_valid_tracepoint() returns 1, we need to call put_events_file() to free `dir_path`.
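A minimal sketch of the get/put pairing being restored (the helpers here just strdup()/free() and are stand-ins, not the perf code): every successful get must be matched by a put on every return path, including the early "found it" one.

#include <stdlib.h>
#include <string.h>

static char *get_events_dir(void)
{
	return strdup("/sys/kernel/tracing/events");	/* "get" allocates */
}

static void put_events_dir(char *dir)
{
	free(dir);					/* "put" releases */
}

static int is_valid_tracepoint(const char *wanted, const char *found)
{
	char *dir_path = get_events_dir();

	if (!dir_path)
		return 0;

	if (!strcmp(wanted, found)) {
		put_events_dir(dir_path);	/* the put this fix adds */
		return 1;
	}

	put_events_dir(dir_path);
	return 0;
}

int main(void)
{
	return !is_valid_tracepoint("sched:sched_switch", "sched:sched_switch");
}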
Fixes: 25a7d914274de386 ("perf parse-events: Use get/put_events_file()") Signed-off-by: Yang Jihong yangjihong1@huawei.com Cc: Adrian Hunter adrian.hunter@intel.com Cc: Alexander Shishkin alexander.shishkin@linux.intel.com Cc: Ian Rogers irogers@google.com Cc: Ingo Molnar mingo@redhat.com Cc: Jiri Olsa jolsa@kernel.org Cc: Mark Rutland mark.rutland@arm.com Cc: Namhyung Kim namhyung@kernel.org Cc: Peter Zijlstra peterz@infradead.org Link: https://lore.kernel.org/r/20230421025953.173826-1-yangjihong1@huawei.com Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- tools/perf/util/tracepoint.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/tools/perf/util/tracepoint.c b/tools/perf/util/tracepoint.c index 89ef56c433110..92dd8b455b902 100644 --- a/tools/perf/util/tracepoint.c +++ b/tools/perf/util/tracepoint.c @@ -50,6 +50,7 @@ int is_valid_tracepoint(const char *event_string) sys_dirent->d_name, evt_dirent->d_name); if (!strcmp(evt_path, event_string)) { closedir(evt_dir); + put_events_file(dir_path); closedir(sys_dir); return 1; }
From: Dmitrii Dolgov 9erthalion6@gmail.com
[ Upstream commit ecc68ee216c6c5b2f84915e1441adf436f1b019b ]
It seems that perf stat -b <prog id> doesn't produce any results:
  $ perf stat -e cycles -b 4 -I 10000 -vvv
  Control descriptor is not initialized
  cycles: 0 0 0
              time             counts unit events
      10.007641640      <not supported>      cycles
Looks like this happens because fentry/fexit progs are getting loaded, but the corresponding perf event is not enabled and not added into the events bpf map. I think there is some mixing up between two types of bpf support, one for bperf and one for bpf_profiler. Both are identified via evsel__is_bpf, based on which perf events are enabled, but for the latter (bpf_profiler) a perf event is required. Using evsel__is_bperf to check only for bperf produces the expected results:
  $ perf stat -e cycles -b 4 -I 10000 -vvv
  Control descriptor is not initialized
  ------------------------------------------------------------
  perf_event_attr:
    size                             136
    sample_type                      IDENTIFIER
    read_format                      TOTAL_TIME_ENABLED|TOTAL_TIME_RUNNING
    disabled                         1
    exclude_guest                    1
  ------------------------------------------------------------
  sys_perf_event_open: pid -1  cpu 0  group_fd -1  flags 0x8 = 3
  ------------------------------------------------------------
  [...perf_event_attr for other CPUs...]
  ------------------------------------------------------------
  cycles: 309426 169009 169009
              time             counts unit events
      10.010091271             309426      cycles
The final numbers correspond (at least in order of magnitude) to the same metric obtained via bpftool.
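A rough sketch of the distinction between the two predicates (structs and list handling simplified, not the real evsel definition): both kinds of BPF-backed counters set bpf_counter_ops, but only bperf leaves bpf_counter_list empty, so testing the ops pointer alone mixes the two up.

#include <stdbool.h>
#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

static bool list_empty(const struct list_head *h) { return h->next == h; }

struct evsel {
	const void	*bpf_counter_ops;	/* set for bperf and bpf profilers */
	struct list_head bpf_counter_list;	/* non-empty only for bpf profilers */
};

static bool evsel__is_bpf(const struct evsel *e)
{
	return e->bpf_counter_ops != NULL;
}

static bool evsel__is_bperf(const struct evsel *e)
{
	return e->bpf_counter_ops != NULL && list_empty(&e->bpf_counter_list);
}

int main(void)
{
	struct list_head dummy;
	struct evsel profiler = { .bpf_counter_ops = (void *)1 };
	struct evsel bperf    = { .bpf_counter_ops = (void *)1 };

	/* the bpf profiler keeps per-target entries on its list */
	profiler.bpf_counter_list.next = &dummy;
	profiler.bpf_counter_list.prev = &dummy;
	/* a bperf counter's list stays empty (points at itself) */
	bperf.bpf_counter_list.next = &bperf.bpf_counter_list;
	bperf.bpf_counter_list.prev = &bperf.bpf_counter_list;

	printf("profiler: is_bpf=%d is_bperf=%d\n",
	       evsel__is_bpf(&profiler), evsel__is_bperf(&profiler));
	printf("bperf:    is_bpf=%d is_bperf=%d\n",
	       evsel__is_bpf(&bperf), evsel__is_bperf(&bperf));
	return 0;
}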
Fixes: 112cb56164bc2108 ("perf stat: Introduce config stat.bpf-counter-events") Reviewed-by: Song Liu song@kernel.org Signed-off-by: Dmitrii Dolgov 9erthalion6@gmail.com Tested-by: Song Liu song@kernel.org Cc: Ian Rogers irogers@google.com Cc: Ingo Molnar mingo@redhat.com Cc: Jiri Olsa jolsa@kernel.org Cc: Namhyung Kim namhyung@kernel.org Cc: Song Liu song@kernel.org Link: https://lore.kernel.org/r/20230412182316.11628-1-9erthalion6@gmail.com Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- tools/perf/builtin-stat.c | 4 ++-- tools/perf/util/evsel.h | 5 +++++ 2 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c index f6427e3a47421..a2c74a34e4a44 100644 --- a/tools/perf/builtin-stat.c +++ b/tools/perf/builtin-stat.c @@ -765,7 +765,7 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx) counter->reset_group = false; if (bpf_counter__load(counter, &target)) return -1; - if (!evsel__is_bpf(counter)) + if (!(evsel__is_bperf(counter))) all_counters_use_bpf = false; }
@@ -781,7 +781,7 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
if (counter->reset_group || counter->errored) continue; - if (evsel__is_bpf(counter)) + if (evsel__is_bperf(counter)) continue; try_again: if (create_perf_stat_counter(counter, &stat_config, &target, diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h index 989865e16aadd..8ce30329a0772 100644 --- a/tools/perf/util/evsel.h +++ b/tools/perf/util/evsel.h @@ -263,6 +263,11 @@ static inline bool evsel__is_bpf(struct evsel *evsel) return evsel->bpf_counter_ops != NULL; }
+static inline bool evsel__is_bperf(struct evsel *evsel) +{ + return evsel->bpf_counter_ops != NULL && list_empty(&evsel->bpf_counter_list); +} + #define EVSEL__MAX_ALIASES 8
extern const char *const evsel__hw_cache[PERF_COUNT_HW_CACHE_MAX][EVSEL__MAX_ALIASES];
From: Conor Dooley conor.dooley@microchip.com
commit 9493e6f3ce02f44c21aa19f3cbf3b9aa05479d06 upstream.
Guenter reported a splat during boot, that Samuel pointed out was the lockdep assertion failing in patch_insn_write():
WARNING: CPU: 0 PID: 0 at arch/riscv/kernel/patch.c:63 patch_insn_write+0x222/0x2f6 epc : patch_insn_write+0x222/0x2f6 ra : patch_insn_write+0x21e/0x2f6 epc : ffffffff800068c6 ra : ffffffff800068c2 sp : ffffffff81803df0 gp : ffffffff81a1ab78 tp : ffffffff81814f80 t0 : ffffffffffffe000 t1 : 0000000000000001 t2 : 4c45203a76637369 s0 : ffffffff81803e40 s1 : 0000000000000004 a0 : 0000000000000000 a1 : ffffffffffffffff a2 : 0000000000000004 a3 : 0000000000000000 a4 : 0000000000000001 a5 : 0000000000000000 a6 : 0000000000000000 a7 : 0000000052464e43 s2 : ffffffff80b4889c s3 : 000000000000082c s4 : ffffffff80b48828 s5 : 0000000000000828 s6 : ffffffff8131a0a0 s7 : 0000000000000fff s8 : 0000000008000200 s9 : ffffffff8131a520 s10: 0000000000000018 s11: 000000000000000b t3 : 0000000000000001 t4 : 000000000000000d t5 : ffffffffd8180000 t6 : ffffffff81803bc8 status: 0000000200000100 badaddr: 0000000000000000 cause: 0000000000000003 [<ffffffff800068c6>] patch_insn_write+0x222/0x2f6 [<ffffffff80006a36>] patch_text_nosync+0xc/0x2a [<ffffffff80003b86>] riscv_cpufeature_patch_func+0x52/0x98 [<ffffffff80003348>] _apply_alternatives+0x46/0x86 [<ffffffff80c02d36>] apply_boot_alternatives+0x3c/0xfa [<ffffffff80c03ad8>] setup_arch+0x584/0x5b8 [<ffffffff80c0075a>] start_kernel+0xa2/0x8f8
This issue was exposed by 702e64550b12 ("riscv: fpu: switch has_fpu() to riscv_has_extension_likely()"), as it is the patching in has_fpu() that triggers the splats in Guenter's report.
Take the text_mutex before doing any code patching to satisfy lockdep.
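As a rough userspace analogue (a pthread mutex standing in for text_mutex, invented helper names), this is the pattern the patch applies at each call site: take the lock around the patching helper so its held-lock assertion is satisfied.

#include <assert.h>
#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>

static pthread_mutex_t text_mutex = PTHREAD_MUTEX_INITIALIZER;
static char text[32] = "old insn";

/* Stand-in for patch_text_nosync(): insists the caller holds text_mutex. */
static void patch_text(const char *insn)
{
	assert(pthread_mutex_trylock(&text_mutex) == EBUSY);
	strncpy(text, insn, sizeof(text) - 1);
}

static void apply_alternative(const char *insn)
{
	pthread_mutex_lock(&text_mutex);
	patch_text(insn);
	pthread_mutex_unlock(&text_mutex);
}

int main(void)
{
	apply_alternative("new insn");
	printf("%s\n", text);
	return 0;
}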
Fixes: ff689fd21cb1 ("riscv: add RISC-V Svpbmt extension support") Fixes: a35707c3d850 ("riscv: add memory-type errata for T-Head") Fixes: 1a0e5dbd3723 ("riscv: sifive: Add SiFive alternative ports") Reported-by: Guenter Roeck linux@roeck-us.net Link: https://lore.kernel.org/all/20230212154333.GA3760469@roeck-us.net/ Signed-off-by: Conor Dooley conor.dooley@microchip.com Reviewed-by: Samuel Holland samuel@sholland.org Tested-by: Guenter Roeck linux@roeck-us.net Link: https://lore.kernel.org/r/20230212194735.491785-1-conor@kernel.org Cc: stable@vger.kernel.org Signed-off-by: Palmer Dabbelt palmer@rivosinc.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/riscv/errata/sifive/errata.c | 3 +++ arch/riscv/errata/thead/errata.c | 8 ++++++-- arch/riscv/kernel/cpufeature.c | 6 +++++- 3 files changed, 14 insertions(+), 3 deletions(-)
diff --git a/arch/riscv/errata/sifive/errata.c b/arch/riscv/errata/sifive/errata.c index 1031038423e74..5b77d7310e391 100644 --- a/arch/riscv/errata/sifive/errata.c +++ b/arch/riscv/errata/sifive/errata.c @@ -4,6 +4,7 @@ */
#include <linux/kernel.h> +#include <linux/memory.h> #include <linux/module.h> #include <linux/string.h> #include <linux/bug.h> @@ -107,7 +108,9 @@ void __init_or_module sifive_errata_patch_func(struct alt_entry *begin,
tmp = (1U << alt->errata_id); if (cpu_req_errata & tmp) { + mutex_lock(&text_mutex); patch_text_nosync(alt->old_ptr, alt->alt_ptr, alt->alt_len); + mutex_lock(&text_mutex); cpu_apply_errata |= tmp; } } diff --git a/arch/riscv/errata/thead/errata.c b/arch/riscv/errata/thead/errata.c index 21546937db39b..32a34ed735098 100644 --- a/arch/riscv/errata/thead/errata.c +++ b/arch/riscv/errata/thead/errata.c @@ -5,6 +5,7 @@
#include <linux/bug.h> #include <linux/kernel.h> +#include <linux/memory.h> #include <linux/module.h> #include <linux/string.h> #include <linux/uaccess.h> @@ -78,11 +79,14 @@ void __init_or_module thead_errata_patch_func(struct alt_entry *begin, struct al tmp = (1U << alt->errata_id); if (cpu_req_errata & tmp) { /* On vm-alternatives, the mmu isn't running yet */ - if (stage == RISCV_ALTERNATIVES_EARLY_BOOT) + if (stage == RISCV_ALTERNATIVES_EARLY_BOOT) { memcpy((void *)__pa_symbol(alt->old_ptr), (void *)__pa_symbol(alt->alt_ptr), alt->alt_len); - else + } else { + mutex_lock(&text_mutex); patch_text_nosync(alt->old_ptr, alt->alt_ptr, alt->alt_len); + mutex_unlock(&text_mutex); + } } }
diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c index 694267d1fe814..fd1238df61497 100644 --- a/arch/riscv/kernel/cpufeature.c +++ b/arch/riscv/kernel/cpufeature.c @@ -9,6 +9,7 @@ #include <linux/bitmap.h> #include <linux/ctype.h> #include <linux/libfdt.h> +#include <linux/memory.h> #include <linux/module.h> #include <linux/of.h> #include <asm/alternative.h> @@ -316,8 +317,11 @@ void __init_or_module riscv_cpufeature_patch_func(struct alt_entry *begin, }
tmp = (1U << alt->errata_id); - if (cpu_req_feature & tmp) + if (cpu_req_feature & tmp) { + mutex_lock(&text_mutex); patch_text_nosync(alt->old_ptr, alt->alt_ptr, alt->alt_len); + mutex_unlock(&text_mutex); + } } } #endif
From: Conor Dooley conor.dooley@microchip.com
commit bf89b7ee52af5a5944fa3539e86089f72475055b upstream.
Chris pointed out that some bonehead, *cough* me *cough*, added two mutex_locks() to the SiFive errata patching. The second was meant to have been a mutex_unlock().
This results in errors such as
Unable to handle kernel NULL pointer dereference at virtual address 0000000000000030 Oops [#1] Modules linked in: CPU: 0 PID: 0 Comm: swapper Not tainted 6.2.0-rc1-starlight-00079-g9493e6f3ce02 #229 Hardware name: BeagleV Starlight Beta (DT) epc : __schedule+0x42/0x500 ra : schedule+0x46/0xce epc : ffffffff8065957c ra : ffffffff80659a80 sp : ffffffff81203c80 gp : ffffffff812d50a0 tp : ffffffff8120db40 t0 : ffffffff81203d68 t1 : 0000000000000001 t2 : 4c45203a76637369 s0 : ffffffff81203cf0 s1 : ffffffff8120db40 a0 : 0000000000000000 a1 : ffffffff81213958 a2 : ffffffff81213958 a3 : 0000000000000000 a4 : 0000000000000000 a5 : ffffffff80a1bd00 a6 : 0000000000000000 a7 : 0000000052464e43 s2 : ffffffff8120db41 s3 : ffffffff80a1ad00 s4 : 0000000000000000 s5 : 0000000000000002 s6 : ffffffff81213938 s7 : 0000000000000000 s8 : 0000000000000000 s9 : 0000000000000001 s10: ffffffff812d7204 s11: ffffffff80d3c920 t3 : 0000000000000001 t4 : ffffffff812e6dd7 t5 : ffffffff812e6dd8 t6 : ffffffff81203bb8 status: 0000000200000100 badaddr: 0000000000000030 cause: 000000000000000d [<ffffffff80659a80>] schedule+0x46/0xce [<ffffffff80659dce>] schedule_preempt_disabled+0x16/0x28 [<ffffffff8065ae0c>] __mutex_lock.constprop.0+0x3fe/0x652 [<ffffffff8065b138>] __mutex_lock_slowpath+0xe/0x16 [<ffffffff8065b182>] mutex_lock+0x42/0x4c [<ffffffff8000ad94>] sifive_errata_patch_func+0xf6/0x18c [<ffffffff80002b92>] _apply_alternatives+0x74/0x76 [<ffffffff80802ee8>] apply_boot_alternatives+0x3c/0xfa [<ffffffff80803cb0>] setup_arch+0x60c/0x640 [<ffffffff80800926>] start_kernel+0x8e/0x99c ---[ end trace 0000000000000000 ]---
Reported-by: Chris Hofstaedtler zeha@debian.org Fixes: 9493e6f3ce02 ("RISC-V: take text_mutex during alternative patching") Signed-off-by: Conor Dooley conor.dooley@microchip.com Tested-by: Geert Uytterhoeven geert+renesas@glider.be Link: https://lore.kernel.org/r/20230302174154.970746-1-conor@kernel.org [Palmer: pick up Geert's bug report from the thread] Signed-off-by: Palmer Dabbelt palmer@rivosinc.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/riscv/errata/sifive/errata.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/riscv/errata/sifive/errata.c b/arch/riscv/errata/sifive/errata.c index 5b77d7310e391..c24a349dd026d 100644 --- a/arch/riscv/errata/sifive/errata.c +++ b/arch/riscv/errata/sifive/errata.c @@ -110,7 +110,7 @@ void __init_or_module sifive_errata_patch_func(struct alt_entry *begin, if (cpu_req_errata & tmp) { mutex_lock(&text_mutex); patch_text_nosync(alt->old_ptr, alt->alt_ptr, alt->alt_len); - mutex_lock(&text_mutex); + mutex_unlock(&text_mutex); cpu_apply_errata |= tmp; } }
From: Borislav Petkov (AMD) bp@alien8.de
commit 9a48d604672220545d209e9996c2a1edbb5637f6 upstream.
SYM_FUNC_START_LOCAL_NOALIGN() adds an endbr leading to this layout (leaving only the last 2 bytes of the address):
  3bff <zen_untrain_ret>:
  3bff:       f3 0f 1e fa             endbr64
  3c03:       f6                      test   $0xcc,%bl
  3c04 <__x86_return_thunk>:
  3c04:       c3                      ret
  3c05:       cc                      int3
  3c06:       0f ae e8                lfence
However, "the RET at __x86_return_thunk must be on a 64 byte boundary, for alignment within the BTB."
Use SYM_START instead.
Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Reviewed-by: Thomas Gleixner tglx@linutronix.de Cc: stable@kernel.org Signed-off-by: Linus Torvalds torvalds@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/lib/retpoline.S | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/arch/x86/lib/retpoline.S +++ b/arch/x86/lib/retpoline.S @@ -87,8 +87,8 @@ SYM_CODE_END(__x86_indirect_thunk_array) */ .align 64 .skip 63, 0xcc -SYM_FUNC_START_NOALIGN(zen_untrain_ret); - +SYM_START(zen_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE) + ANNOTATE_NOENDBR /* * As executed from zen_untrain_ret, this is: *
From: Filipe Manana fdmanana@suse.com
commit 6f932d4ef007d6a4ae03badcb749fbb8f49196f6 upstream.
A call to btrfs_prev_leaf() may end up returning a path that points to the same item (key) again. This happens if, while in btrfs_prev_leaf() and after we release the path, a concurrent insertion moves items from a sibling into the front of the previous leaf, and an item with the computed previous key does not exist.
For example, suppose we have the two following leaves:
Leaf A
 -------------------------------------------------------------
 | ...   key (300 96 10)   key (300 96 15)   key (300 96 16) |
 -------------------------------------------------------------
            slot 20           slot 21           slot 22
Leaf B
 -------------------------------------------------------------
 | key (300 96 20)   key (300 96 21)   key (300 96 22) ...   |
 -------------------------------------------------------------
      slot 0             slot 1             slot 2
If we call btrfs_prev_leaf(), from btrfs_previous_item() for example, with a path pointing to leaf B and slot 0 and the following happens:
1) At btrfs_prev_leaf() we compute the previous key to search as: (300 96 19), which is a key that does not exist in the tree;
2) Then we call btrfs_release_path() at btrfs_prev_leaf();
3) Some other task inserts a key at leaf A, that sorts before the key at slot 20, for example it has an objectid of 299. In order to make room for the new key, the key at slot 22 is moved to the front of leaf B. This happens at push_leaf_right(), called from split_leaf().
After this leaf B now looks like:
 --------------------------------------------------------------------------------
 | key (300 96 16)   key (300 96 20)   key (300 96 21)   key (300 96 22) ...    |
 --------------------------------------------------------------------------------
      slot 0             slot 1             slot 2             slot 3
4) At btrfs_prev_leaf() we call btrfs_search_slot() for the computed previous key: (300 96 19). Since the key does not exist, btrfs_search_slot() returns 1, with the path pointing to leaf B and slot 1, the item with key (300 96 20);
5) This makes btrfs_prev_leaf() return a path that points to slot 1 of leaf B, the same key as before it was called, since the key at slot 0 of leaf B (300 96 16) is less than the computed previous key, which is (300 96 19);
6) As a consequence btrfs_previous_item() returns a path that points again to the item with key (300 96 20).
For some users of btrfs_prev_leaf() or btrfs_previous_item() this may not be a functional problem, even though it makes no sense to return a new path pointing to the same item/key again. However for a caller such as tree-log.c:log_dir_items(), this has a bad consequence, as it can result in not logging some dir index deletions in case the directory is being logged without holding the inode's VFS lock (logging triggered while logging a child inode for example) - for the example scenario above, in case the dir index keys 17, 18 and 19 were deleted in the current transaction.
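A condensed sketch of the extra check the patch adds (keys and helpers reduced to plain structs and integers; this is not the btrfs code): after the re-search, detect that we landed back on the original key and step one slot back, or report that there is nothing before it.

#include <stdio.h>

struct key { unsigned long long objectid, type, offset; };

static int comp_keys(const struct key *a, const struct key *b)
{
	if (a->objectid != b->objectid) return a->objectid < b->objectid ? -1 : 1;
	if (a->type != b->type)         return a->type < b->type ? -1 : 1;
	if (a->offset != b->offset)     return a->offset < b->offset ? -1 : 1;
	return 0;
}

/*
 * After re-searching for the computed previous key, make sure we did not
 * land back on the original key: if we did, step one slot back, or return 1
 * ("nothing before this") when we are already at slot 0.
 */
static int fixup_after_search(const struct key *leaf, int nritems, int *slot,
			      const struct key *orig_key)
{
	if (*slot < nritems && comp_keys(&leaf[*slot], orig_key) == 0) {
		if (*slot > 0) {
			(*slot)--;
			return 0;
		}
		return 1;	/* orig_key is the leftmost key in the tree */
	}
	return 0;
}

int main(void)
{
	/* leaf B after the concurrent insertion from the example above */
	struct key leaf[] = { {300,96,16}, {300,96,20}, {300,96,21}, {300,96,22} };
	struct key orig   = {300, 96, 20};
	int slot = 1;	/* where the re-search for (300 96 19) landed */

	fixup_after_search(leaf, 4, &slot, &orig);
	printf("slot now %d -> key (300 96 %llu)\n", slot, leaf[slot].offset);
	return 0;
}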
CC: stable@vger.kernel.org # 4.14+ Reviewed-by: Josef Bacik josef@toxicpanda.com Signed-off-by: Filipe Manana fdmanana@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/ctree.c | 32 +++++++++++++++++++++++++++++++- 1 file changed, 31 insertions(+), 1 deletion(-)
--- a/fs/btrfs/ctree.c +++ b/fs/btrfs/ctree.c @@ -4411,10 +4411,12 @@ int btrfs_del_items(struct btrfs_trans_h int btrfs_prev_leaf(struct btrfs_root *root, struct btrfs_path *path) { struct btrfs_key key; + struct btrfs_key orig_key; struct btrfs_disk_key found_key; int ret;
btrfs_item_key_to_cpu(path->nodes[0], &key, 0); + orig_key = key;
if (key.offset > 0) { key.offset--; @@ -4431,8 +4433,36 @@ int btrfs_prev_leaf(struct btrfs_root *r
btrfs_release_path(path); ret = btrfs_search_slot(NULL, root, &key, path, 0, 0); - if (ret < 0) + if (ret <= 0) return ret; + + /* + * Previous key not found. Even if we were at slot 0 of the leaf we had + * before releasing the path and calling btrfs_search_slot(), we now may + * be in a slot pointing to the same original key - this can happen if + * after we released the path, one of more items were moved from a + * sibling leaf into the front of the leaf we had due to an insertion + * (see push_leaf_right()). + * If we hit this case and our slot is > 0 and just decrement the slot + * so that the caller does not process the same key again, which may or + * may not break the caller, depending on its logic. + */ + if (path->slots[0] < btrfs_header_nritems(path->nodes[0])) { + btrfs_item_key(path->nodes[0], &found_key, path->slots[0]); + ret = comp_keys(&found_key, &orig_key); + if (ret == 0) { + if (path->slots[0] > 0) { + path->slots[0]--; + return 0; + } + /* + * At slot 0, same key as before, it means orig_key is + * the lowest, leftmost, key in the tree. We're done. + */ + return 1; + } + } + btrfs_item_key(path->nodes[0], &found_key, 0); ret = comp_keys(&found_key, &key); /*
From: Naohiro Aota naohiro.aota@wdc.com
commit 631003e2333c12cc1b52df06a707365b7363a159 upstream.
find_next_bit and find_next_zero_bit take @size as the second parameter and @offset as the third parameter. They are passed in the opposite order in btrfs_ensure_empty_zones(). Thanks to the loop that follows, it never failed to detect the empty zones. Fix the calls and (maybe) return the result a bit faster.
Note: the naming is a bit confusing, size has two meanings here, bitmap and our range size.
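A small self-contained illustration of the argument order (a naive bit-scan helper written for the sketch, same signature order as the kernel's find_next_bit(addr, size, offset)): swapping @size and @offset turns the scan into an empty range and the "all conventional" quick check stops matching.

#include <stdio.h>

/* Naive stand-in for find_next_bit(addr, size, offset). */
static unsigned long find_next_bit(const unsigned long *addr,
				   unsigned long size, unsigned long offset)
{
	for (unsigned long i = offset; i < size; i++)
		if (addr[i / (8 * sizeof(long))] & (1UL << (i % (8 * sizeof(long)))))
			return i;
	return size;	/* not found */
}

int main(void)
{
	unsigned long seq_zones = 0;	/* all zones conventional in [begin, end) */
	unsigned long begin = 4, end = 12;

	/* Correct: scan [begin, end); "not found" compares equal to end. */
	printf("correct order: %lu (== end, so all zones are conventional)\n",
	       find_next_bit(&seq_zones, end, begin));
	/* Swapped: scans the empty range [end, begin) and returns begin. */
	printf("swapped order: %lu (!= end, quick check fails)\n",
	       find_next_bit(&seq_zones, begin, end));
	return 0;
}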
Fixes: 1cd6121f2a38 ("btrfs: zoned: implement zoned chunk allocator") CC: stable@vger.kernel.org # 5.15+ Signed-off-by: Naohiro Aota naohiro.aota@wdc.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/zoned.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
--- a/fs/btrfs/zoned.c +++ b/fs/btrfs/zoned.c @@ -1163,12 +1163,12 @@ int btrfs_ensure_empty_zones(struct btrf return -ERANGE;
/* All the zones are conventional */ - if (find_next_bit(zinfo->seq_zones, begin, end) == end) + if (find_next_bit(zinfo->seq_zones, end, begin) == end) return 0;
/* All the zones are sequential and empty */ - if (find_next_zero_bit(zinfo->seq_zones, begin, end) == end && - find_next_zero_bit(zinfo->empty_zones, begin, end) == end) + if (find_next_zero_bit(zinfo->seq_zones, end, begin) == end && + find_next_zero_bit(zinfo->empty_zones, end, begin) == end) return 0;
for (pos = start; pos < start + size; pos += zinfo->zone_size) {
From: Qu Wenruo wqu@suse.com
commit 64b5d5b2852661284ccbb038c697562cc56231bf upstream.
[BUG] With block-group-tree feature enabled, mounting it with clear_cache would cause the following transaction abort at mount or remount:
  BTRFS info (device dm-4): force clearing of disk cache
  BTRFS info (device dm-4): using free space tree
  BTRFS info (device dm-4): auto enabling async discard
  BTRFS info (device dm-4): clearing free space tree
  BTRFS info (device dm-4): clearing compat-ro feature flag for FREE_SPACE_TREE (0x1)
  BTRFS info (device dm-4): clearing compat-ro feature flag for FREE_SPACE_TREE_VALID (0x2)
  BTRFS error (device dm-4): block-group-tree feature requires fres-space-tree and no-holes
  BTRFS error (device dm-4): super block corruption detected before writing it to disk
  BTRFS: error (device dm-4) in write_all_supers:4288: errno=-117 Filesystem corrupted (unexpected superblock corruption detected)
  BTRFS warning (device dm-4: state E): Skipping commit of aborted transaction.
[CAUSE] For block-group-tree feature, we have an artificial dependency on free-space-tree.
This means that if we detect block-group-tree without the v2 cache, we treat it as corruption, which causes the problem.
The clear_cache mount option temporarily disables the v2 cache, then re-enables it.
But unfortunately, while the v2 cache is temporarily disabled, we refuse to write a superblock with only the bg tree flag set, which leads to the transaction abort above.
[FIX] For now, just reject the clear_cache and v1 cache mount options for block group tree. Now we get a graceful rejection rather than a transaction abort:
  BTRFS info (device dm-4): force clearing of disk cache
  BTRFS error (device dm-4): cannot disable free space tree with block-group-tree feature
  BTRFS error (device dm-4): open_ctree failed
CC: stable@vger.kernel.org # 6.1+ Signed-off-by: Qu Wenruo wqu@suse.com Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/super.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-)
--- a/fs/btrfs/super.c +++ b/fs/btrfs/super.c @@ -1136,7 +1136,12 @@ out: !btrfs_test_opt(info, CLEAR_CACHE)) { btrfs_err(info, "cannot disable free space tree"); ret = -EINVAL; - + } + if (btrfs_fs_compat_ro(info, BLOCK_GROUP_TREE) && + (btrfs_test_opt(info, CLEAR_CACHE) || + !btrfs_test_opt(info, FREE_SPACE_TREE))) { + btrfs_err(info, "cannot disable free space tree with block-group-tree feature"); + ret = -EINVAL; } if (!ret) ret = btrfs_check_mountopts_zoned(info);
From: xiaoshoukui xiaoshoukui@gmail.com
commit ac868bc9d136cde6e3eb5de77019a63d57a540ff upstream.
Balance as an exclusive state is compatible with paused balance and device add, which makes some things more complicated. The assertion of valid states when starting from paused balance needs to take into account two more states; these combinations can be hit when there are several threads racing to start balance and device add. This won't typically happen when the commands are started from the command line.
Scenario 1: With exclusive_operation state == BTRFS_EXCLOP_NONE.
When concurrently adding multiple devices to the same mount point, if btrfs_exclop_finish() finishes before the assertion in btrfs_exclop_balance() is evaluated, exclusive_operation will have changed to the BTRFS_EXCLOP_NONE state, which leads to the assertion failure:
  fs_info->exclusive_operation == BTRFS_EXCLOP_BALANCE ||
  fs_info->exclusive_operation == BTRFS_EXCLOP_DEV_ADD, in fs/btrfs/ioctl.c:456

  Call Trace:
  <TASK>
   btrfs_exclop_balance+0x13c/0x310
   ? memdup_user+0xab/0xc0
   ? PTR_ERR+0x17/0x20
   btrfs_ioctl_add_dev+0x2ee/0x320
   btrfs_ioctl+0x9d5/0x10d0
   ? btrfs_ioctl_encoded_write+0xb80/0xb80
   __x64_sys_ioctl+0x197/0x210
   do_syscall_64+0x3c/0xb0
   entry_SYSCALL_64_after_hwframe+0x63/0xcd
Scenario 2: With exclusive_operation state == BTRFS_EXCLOP_BALANCE_PAUSED.
When concurrently adding multiple devices to the same mount point, if btrfs_exclop_balance() in one thread finishes before the other thread evaluates the assertion in btrfs_exclop_balance(), exclusive_operation will have changed to the BTRFS_EXCLOP_BALANCE_PAUSED state, which leads to the assertion failure:
  fs_info->exclusive_operation == BTRFS_EXCLOP_BALANCE ||
  fs_info->exclusive_operation == BTRFS_EXCLOP_DEV_ADD ||
  fs_info->exclusive_operation == BTRFS_EXCLOP_NONE, fs/btrfs/ioctl.c:458

  Call Trace:
  <TASK>
   btrfs_exclop_balance+0x240/0x410
   ? memdup_user+0xab/0xc0
   ? PTR_ERR+0x17/0x20
   btrfs_ioctl_add_dev+0x2ee/0x320
   btrfs_ioctl+0x9d5/0x10d0
   ? btrfs_ioctl_encoded_write+0xb80/0xb80
   __x64_sys_ioctl+0x197/0x210
   do_syscall_64+0x3c/0xb0
   entry_SYSCALL_64_after_hwframe+0x63/0xcd
An example of the failed assertion is below; it shows that the paused balance state also needs to be checked.
root@syzkaller:/home/xsk# ./repro Failed to add device /dev/vda, errno 14 Failed to add device /dev/vda, errno 14 Failed to add device /dev/vda, errno 14 Failed to add device /dev/vda, errno 14 Failed to add device /dev/vda, errno 14 Failed to add device /dev/vda, errno 14 Failed to add device /dev/vda, errno 14 Failed to add device /dev/vda, errno 14 Failed to add device /dev/vda, errno 14 [ 416.611428][ T7970] BTRFS info (device loop0): fs_info exclusive_operation: 0 Failed to add device /dev/vda, errno 14 [ 416.613973][ T7971] BTRFS info (device loop0): fs_info exclusive_operation: 3 Failed to add device /dev/vda, errno 14 [ 416.615456][ T7972] BTRFS info (device loop0): fs_info exclusive_operation: 3 Failed to add device /dev/vda, errno 14 [ 416.617528][ T7973] BTRFS info (device loop0): fs_info exclusive_operation: 3 Failed to add device /dev/vda, errno 14 [ 416.618359][ T7974] BTRFS info (device loop0): fs_info exclusive_operation: 3 Failed to add device /dev/vda, errno 14 [ 416.622589][ T7975] BTRFS info (device loop0): fs_info exclusive_operation: 3 Failed to add device /dev/vda, errno 14 [ 416.624034][ T7976] BTRFS info (device loop0): fs_info exclusive_operation: 3 Failed to add device /dev/vda, errno 14 [ 416.626420][ T7977] BTRFS info (device loop0): fs_info exclusive_operation: 3 Failed to add device /dev/vda, errno 14 [ 416.627643][ T7978] BTRFS info (device loop0): fs_info exclusive_operation: 3 Failed to add device /dev/vda, errno 14 [ 416.629006][ T7979] BTRFS info (device loop0): fs_info exclusive_operation: 3 [ 416.630298][ T7980] BTRFS info (device loop0): fs_info exclusive_operation: 3 Failed to add device /dev/vda, errno 14 Failed to add device /dev/vda, errno 14 [ 416.632787][ T7981] BTRFS info (device loop0): fs_info exclusive_operation: 3 Failed to add device /dev/vda, errno 14 [ 416.634282][ T7982] BTRFS info (device loop0): fs_info exclusive_operation: 3 Failed to add device /dev/vda, errno 14 [ 416.636202][ T7983] BTRFS info (device loop0): fs_info exclusive_operation: 3 [ 416.637012][ T7984] BTRFS info (device loop0): fs_info exclusive_operation: 1 Failed to add device /dev/vda, errno 14 [ 416.637759][ T7984] assertion failed: fs_info->exclusive_operation == BTRFS_EXCLOP_BALANCE || fs_info->exclusive_operation == BTRFS_EXCLOP_DEV_ADD || fs_info->exclusive_operation == BTRFS_EXCLOP_NONE, in fs/btrfs/ioctl.c:458 [ 416.639845][ T7984] invalid opcode: 0000 [#1] PREEMPT SMP KASAN [ 416.640485][ T7984] CPU: 0 PID: 7984 Comm: repro Not tainted 6.2.0 #7 [ 416.641172][ T7984] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.1 04/01/2014 [ 416.642090][ T7984] RIP: 0010:btrfs_assertfail+0x2c/0x2e [ 416.644423][ T7984] RSP: 0018:ffffc90003ea7e28 EFLAGS: 00010282 [ 416.645018][ T7984] RAX: 00000000000000cc RBX: 0000000000000000 RCX: 0000000000000000 [ 416.645763][ T7984] RDX: ffff88801d030000 RSI: ffffffff81637e7c RDI: fffff520007d4fb7 [ 416.646554][ T7984] RBP: ffffffff8a533de0 R08: 00000000000000cc R09: 0000000000000000 [ 416.647299][ T7984] R10: 0000000000000001 R11: 0000000000000001 R12: ffffffff8a533da0 [ 416.648041][ T7984] R13: 00000000000001ca R14: 000000005000940a R15: 0000000000000000 [ 416.648785][ T7984] FS: 00007fa2985d4640(0000) GS:ffff88802cc00000(0000) knlGS:0000000000000000 [ 416.649616][ T7984] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 416.650238][ T7984] CR2: 0000000000000000 CR3: 0000000018e5e000 CR4: 0000000000750ef0 [ 416.650980][ T7984] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [ 416.651725][ 
T7984] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 [ 416.652502][ T7984] PKRU: 55555554 [ 416.652888][ T7984] Call Trace: [ 416.653241][ T7984] <TASK> [ 416.653527][ T7984] btrfs_exclop_balance+0x240/0x410 [ 416.654036][ T7984] ? memdup_user+0xab/0xc0 [ 416.654465][ T7984] ? PTR_ERR+0x17/0x20 [ 416.654874][ T7984] btrfs_ioctl_add_dev+0x2ee/0x320 [ 416.655380][ T7984] btrfs_ioctl+0x9d5/0x10d0 [ 416.655822][ T7984] ? btrfs_ioctl_encoded_write+0xb80/0xb80 [ 416.656400][ T7984] __x64_sys_ioctl+0x197/0x210 [ 416.656874][ T7984] do_syscall_64+0x3c/0xb0 [ 416.657346][ T7984] entry_SYSCALL_64_after_hwframe+0x63/0xcd [ 416.657922][ T7984] RIP: 0033:0x4546af [ 416.660170][ T7984] RSP: 002b:00007fa2985d4150 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 [ 416.660972][ T7984] RAX: ffffffffffffffda RBX: 00007fa2985d4640 RCX: 00000000004546af [ 416.661714][ T7984] RDX: 0000000000000000 RSI: 000000005000940a RDI: 0000000000000003 [ 416.662449][ T7984] RBP: 00007fa2985d41d0 R08: 0000000000000000 R09: 00007ffee37a4c4f [ 416.663195][ T7984] R10: 0000000000000000 R11: 0000000000000246 R12: 00007fa2985d4640 [ 416.663951][ T7984] R13: 0000000000000009 R14: 000000000041b320 R15: 00007fa297dd4000 [ 416.664703][ T7984] </TASK> [ 416.665040][ T7984] Modules linked in: [ 416.665590][ T7984] ---[ end trace 0000000000000000 ]--- [ 416.666176][ T7984] RIP: 0010:btrfs_assertfail+0x2c/0x2e [ 416.668775][ T7984] RSP: 0018:ffffc90003ea7e28 EFLAGS: 00010282 [ 416.669425][ T7984] RAX: 00000000000000cc RBX: 0000000000000000 RCX: 0000000000000000 [ 416.670235][ T7984] RDX: ffff88801d030000 RSI: ffffffff81637e7c RDI: fffff520007d4fb7 [ 416.671050][ T7984] RBP: ffffffff8a533de0 R08: 00000000000000cc R09: 0000000000000000 [ 416.671867][ T7984] R10: 0000000000000001 R11: 0000000000000001 R12: ffffffff8a533da0 [ 416.672685][ T7984] R13: 00000000000001ca R14: 000000005000940a R15: 0000000000000000 [ 416.673501][ T7984] FS: 00007fa2985d4640(0000) GS:ffff88802cc00000(0000) knlGS:0000000000000000 [ 416.674425][ T7984] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 416.675114][ T7984] CR2: 0000000000000000 CR3: 0000000018e5e000 CR4: 0000000000750ef0 [ 416.675933][ T7984] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [ 416.676760][ T7984] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Link: https://lore.kernel.org/linux-btrfs/20230324031611.98986-1-xiaoshoukui@gmail... CC: stable@vger.kernel.org # 6.1+ Signed-off-by: xiaoshoukui xiaoshoukui@ruijie.com.cn Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/ioctl.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
--- a/fs/btrfs/ioctl.c +++ b/fs/btrfs/ioctl.c @@ -443,7 +443,9 @@ void btrfs_exclop_balance(struct btrfs_f case BTRFS_EXCLOP_BALANCE_PAUSED: spin_lock(&fs_info->super_lock); ASSERT(fs_info->exclusive_operation == BTRFS_EXCLOP_BALANCE || - fs_info->exclusive_operation == BTRFS_EXCLOP_DEV_ADD); + fs_info->exclusive_operation == BTRFS_EXCLOP_DEV_ADD || + fs_info->exclusive_operation == BTRFS_EXCLOP_NONE || + fs_info->exclusive_operation == BTRFS_EXCLOP_BALANCE_PAUSED); fs_info->exclusive_operation = BTRFS_EXCLOP_BALANCE_PAUSED; spin_unlock(&fs_info->super_lock); break;
From: Boris Burkov boris@bur.io
commit e7db9e5c6b9615b287d01f0231904fbc1fbde9c5 upstream.
We have observed a btrfs filesystem corruption on workloads using no-holes and encoded writes via send stream v2. The symptom is that a file appears to be truncated to the end of its last aligned extent, even though the final unaligned extent and even the file extent and otherwise correctly updated inode item have been written.
So if we were writing out a 1MiB+X file via 8 128K extents and one extent of length X, i_size would be set to 1MiB, but the ninth extent, nbytes, etc. would all appear correct otherwise.
The source of the race is a narrow (one line of code) window in which a no-holes fs has read in an updated i_size, but has not yet set a shared disk_i_size variable to write. Therefore, if two ordered extents run in parallel (par for the course for receive workloads), the following sequence can play out: (following "threads" a bit loosely, since there are callbacks involved for endio but extra threads aren't needed to cause the issue)
 ENC-WR1 (second to last)               ENC-WR2 (last)
 -----------------------                --------------
 btrfs_do_encoded_write
   set i_size = 1M
   submit bio B1 ending at 1M
 endio B1
 btrfs_inode_safe_disk_i_size_write
   local i_size = 1M
   falls off a cliff for some reason
                                        btrfs_do_encoded_write
                                          set i_size = 1M+X
                                          submit bio B2 ending at 1M+X
                                        endio B2
                                        btrfs_inode_safe_disk_i_size_write
                                          local i_size = 1M+X
                                          disk_i_size = 1M+X
   disk_i_size = 1M
 btrfs_delayed_update_inode
                                        btrfs_delayed_update_inode
And the delayed inode ends up filled with nbytes=1M+X and isize=1M, and writes respect i_size and present a corrupted file missing its last extents.
Fix this by holding the inode lock in the no-holes case so that a thread can't sneak in a write to disk_i_size that gets overwritten with an out of date i_size.
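A loose userspace analogue of the locked pattern after the fix (toy types and names; not the btrfs function): the read of i_size and the store to disk_i_size sit in one critical section, including the no-holes branch, so a stale value can no longer be published after a newer one.

#include <pthread.h>
#include <stdio.h>

typedef unsigned long long u64;

struct toy_inode {
	pthread_mutex_t lock;
	u64 i_size;		/* in-memory size, advanced by writers */
	u64 disk_i_size;	/* what would go into the inode item */
	int no_holes;
};

/* Shape of the function after the fix: the lock now also covers the
 * no-holes branch, so reading i_size and storing disk_i_size cannot be
 * interleaved with another ordered-extent completion. */
static void safe_disk_i_size_write(struct toy_inode *inode, u64 new_i_size)
{
	pthread_mutex_lock(&inode->lock);
	u64 isz = new_i_size ? new_i_size : inode->i_size;

	if (inode->no_holes) {
		inode->disk_i_size = isz;
		goto out_unlock;
	}
	/* (the extent-tree based path of the real function is elided) */
	inode->disk_i_size = isz;
out_unlock:
	pthread_mutex_unlock(&inode->lock);
}

int main(void)
{
	struct toy_inode ino = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.i_size = (1ULL << 20) + 4096,	/* 1M+X */
		.no_holes = 1,
	};

	safe_disk_i_size_write(&ino, 0);
	printf("disk_i_size = %llu\n", ino.disk_i_size);
	return 0;
}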
Fixes: 41a2ee75aab0 ("btrfs: introduce per-inode file extent tree") CC: stable@vger.kernel.org # 5.10+ Reviewed-by: Josef Bacik josef@toxicpanda.com Signed-off-by: Boris Burkov boris@bur.io Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/file-item.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)
--- a/fs/btrfs/file-item.c +++ b/fs/btrfs/file-item.c @@ -47,13 +47,13 @@ void btrfs_inode_safe_disk_i_size_write( u64 start, end, i_size; int ret;
+ spin_lock(&inode->lock); i_size = new_i_size ?: i_size_read(&inode->vfs_inode); if (btrfs_fs_incompat(fs_info, NO_HOLES)) { inode->disk_i_size = i_size; - return; + goto out_unlock; }
- spin_lock(&inode->lock); ret = find_contiguous_extent_bit(&inode->file_extent_tree, 0, &start, &end, EXTENT_DIRTY); if (!ret && start == 0) @@ -61,6 +61,7 @@ void btrfs_inode_safe_disk_i_size_write( else i_size = 0; inode->disk_i_size = i_size; +out_unlock: spin_unlock(&inode->lock); }
From: Josef Bacik josef@toxicpanda.com
commit d246331b78cbef86237f9c22389205bc9b4e1cc1 upstream.
Boris noticed in his simple quotas testing that he was getting a leak with Sweet Tea's change to subvol create that stopped doing a transaction commit. This was just a side effect of that change.
In the delayed inode code we have an optimization that will free extra reservations if we think we can pack a dir item into an already modified leaf. Previously this wouldn't be triggered in the subvolume create case because we'd commit the transaction; it was still possible, just much harder to trigger. It could actually be triggered if we did a mkdir && subvol create with qgroups enabled.
This occurs because in btrfs_insert_delayed_dir_index(), which gets called when we're adding the dir item, we do the following:
btrfs_block_rsv_release(fs_info, trans->block_rsv, bytes, NULL);
if we're able to skip reserving space.
The problem here is that trans->block_rsv points at the temporary block rsv for the subvolume create, which has qgroup reservations in the block rsv.
This is a problem because btrfs_block_rsv_release() will do the following:
	if (block_rsv->qgroup_rsv_reserved >= block_rsv->qgroup_rsv_size) {
		qgroup_to_release = block_rsv->qgroup_rsv_reserved -
				    block_rsv->qgroup_rsv_size;
		block_rsv->qgroup_rsv_reserved = block_rsv->qgroup_rsv_size;
	}
The temporary block rsv just has ->qgroup_rsv_reserved set, ->qgroup_rsv_size == 0. The optimization in btrfs_insert_delayed_dir_index() sets ->qgroup_rsv_reserved = 0. Then later on when we call btrfs_subvolume_release_metadata() which has
	btrfs_block_rsv_release(fs_info, rsv, (u64)-1, &qgroup_to_release);
	btrfs_qgroup_convert_reserved_meta(root, qgroup_to_release);
qgroup_to_release is set to 0, and we do not convert the reserved metadata space.
The problem here is that the block rsv code has been unconditionally messing with ->qgroup_rsv_reserved, because the main place this is used is delalloc, and any time we call btrfs_block_rsv_release() we do it with qgroup_to_release set, and thus do the proper accounting.
The subvolume code is the only other code that uses the qgroup reservation stuff, but it's intermingled with the above optimization, and thus was getting its reservation freed out from underneath it and thus leaking the reserved space.
The solution is to simply not mess with the qgroup reservations if we don't have qgroup_to_release set. This works with the existing code as anything that messes with the delalloc reservations always have qgroup_to_release set. This fixes the leak that Boris was observing.
Reviewed-by: Qu Wenruo wqu@suse.com CC: stable@vger.kernel.org # 5.4+ Signed-off-by: Josef Bacik josef@toxicpanda.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/block-rsv.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/fs/btrfs/block-rsv.c +++ b/fs/btrfs/block-rsv.c @@ -122,7 +122,8 @@ static u64 block_rsv_release_bytes(struc } else { num_bytes = 0; } - if (block_rsv->qgroup_rsv_reserved >= block_rsv->qgroup_rsv_size) { + if (qgroup_to_release_ret && + block_rsv->qgroup_rsv_reserved >= block_rsv->qgroup_rsv_size) { qgroup_to_release = block_rsv->qgroup_rsv_reserved - block_rsv->qgroup_rsv_size; block_rsv->qgroup_rsv_reserved = block_rsv->qgroup_rsv_size;
From: Christoph Hellwig hch@lst.de
commit c83b56d1dd87cf67492bb770c26d6f87aee70ed6 upstream.
btrfs_redirty_list_add zeroes the buffer data and sets the EXTENT_BUFFER_NO_CHECK to make sure writeback is fine with a bogus header. But it does that after already marking the buffer dirty, which means that writeback could already be looking at the buffer.
Switch the order of operations around so that the buffer is only marked dirty when we're ready to write it.
Fixes: d3575156f662 ("btrfs: zoned: redirty released extent buffers") CC: stable@vger.kernel.org # 5.15+ Signed-off-by: Christoph Hellwig hch@lst.de Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/zoned.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/fs/btrfs/zoned.c +++ b/fs/btrfs/zoned.c @@ -1605,11 +1605,11 @@ void btrfs_redirty_list_add(struct btrfs !list_empty(&eb->release_list)) return;
+ memzero_extent_buffer(eb, 0, eb->len); + set_bit(EXTENT_BUFFER_NO_CHECK, &eb->bflags); set_extent_buffer_dirty(eb); set_extent_bits_nowait(&trans->dirty_pages, eb->start, eb->start + eb->len - 1, EXTENT_DIRTY); - memzero_extent_buffer(eb, 0, eb->len); - set_bit(EXTENT_BUFFER_NO_CHECK, &eb->bflags);
spin_lock(&trans->releasing_ebs_lock); list_add_tail(&eb->release_list, &trans->releasing_ebs);
From: Qu Wenruo wqu@suse.com
commit 1d6a4fc85717677e00fefffd847a50fc5928ce69 upstream.
Previously clear_cache mount option would simply disable free-space-tree feature temporarily then re-enable it to rebuild the whole free space tree.
But this is problematic for block-group-tree feature, as we have an artificial dependency on free-space-tree feature.
If we go the existing method, after clearing the free-space-tree feature, we would flip the filesystem to read-only mode, as we detect a super block write with block-group-tree but no free-space-tree feature.
This patch would change the behavior by properly rebuilding the free space tree without disabling this feature, thus allowing clear_cache mount option to work with block group tree.
Now we can mount a filesystem with the block-group-tree feature and the clear_cache mount option:
  $ mkfs.btrfs -O block-group-tree /dev/test/scratch1 -f
  $ sudo mount /dev/test/scratch1 /mnt/btrfs -o clear_cache
  $ sudo dmesg -t | head -n 5
  BTRFS info (device dm-1): force clearing of disk cache
  BTRFS info (device dm-1): using free space tree
  BTRFS info (device dm-1): auto enabling async discard
  BTRFS info (device dm-1): rebuilding free space tree
  BTRFS info (device dm-1): checking UUID tree
CC: stable@vger.kernel.org # 6.1+ Signed-off-by: Qu Wenruo wqu@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/disk-io.c | 25 ++++++++++++++++------ fs/btrfs/free-space-tree.c | 50 ++++++++++++++++++++++++++++++++++++++++++++- fs/btrfs/free-space-tree.h | 3 +- fs/btrfs/super.c | 3 -- 4 files changed, 70 insertions(+), 11 deletions(-)
--- a/fs/btrfs/disk-io.c +++ b/fs/btrfs/disk-io.c @@ -3215,23 +3215,34 @@ int btrfs_start_pre_rw_mount(struct btrf { int ret; const bool cache_opt = btrfs_test_opt(fs_info, SPACE_CACHE); - bool clear_free_space_tree = false; + bool rebuild_free_space_tree = false;
if (btrfs_test_opt(fs_info, CLEAR_CACHE) && btrfs_fs_compat_ro(fs_info, FREE_SPACE_TREE)) { - clear_free_space_tree = true; + rebuild_free_space_tree = true; } else if (btrfs_fs_compat_ro(fs_info, FREE_SPACE_TREE) && !btrfs_fs_compat_ro(fs_info, FREE_SPACE_TREE_VALID)) { btrfs_warn(fs_info, "free space tree is invalid"); - clear_free_space_tree = true; + rebuild_free_space_tree = true; }
- if (clear_free_space_tree) { - btrfs_info(fs_info, "clearing free space tree"); - ret = btrfs_clear_free_space_tree(fs_info); + if (rebuild_free_space_tree) { + btrfs_info(fs_info, "rebuilding free space tree"); + ret = btrfs_rebuild_free_space_tree(fs_info); if (ret) { btrfs_warn(fs_info, - "failed to clear free space tree: %d", ret); + "failed to rebuild free space tree: %d", ret); + goto out; + } + } + + if (btrfs_fs_compat_ro(fs_info, FREE_SPACE_TREE) && + !btrfs_test_opt(fs_info, FREE_SPACE_TREE)) { + btrfs_info(fs_info, "disabling free space tree"); + ret = btrfs_delete_free_space_tree(fs_info); + if (ret) { + btrfs_warn(fs_info, + "failed to disable free space tree: %d", ret); goto out; } } --- a/fs/btrfs/free-space-tree.c +++ b/fs/btrfs/free-space-tree.c @@ -1247,7 +1247,7 @@ out: return ret; }
-int btrfs_clear_free_space_tree(struct btrfs_fs_info *fs_info) +int btrfs_delete_free_space_tree(struct btrfs_fs_info *fs_info) { struct btrfs_trans_handle *trans; struct btrfs_root *tree_root = fs_info->tree_root; @@ -1290,6 +1290,54 @@ int btrfs_clear_free_space_tree(struct b abort: btrfs_abort_transaction(trans, ret); btrfs_end_transaction(trans); + return ret; +} + +int btrfs_rebuild_free_space_tree(struct btrfs_fs_info *fs_info) +{ + struct btrfs_trans_handle *trans; + struct btrfs_key key = { + .objectid = BTRFS_FREE_SPACE_TREE_OBJECTID, + .type = BTRFS_ROOT_ITEM_KEY, + .offset = 0, + }; + struct btrfs_root *free_space_root = btrfs_global_root(fs_info, &key); + struct rb_node *node; + int ret; + + trans = btrfs_start_transaction(free_space_root, 1); + if (IS_ERR(trans)) + return PTR_ERR(trans); + + set_bit(BTRFS_FS_CREATING_FREE_SPACE_TREE, &fs_info->flags); + set_bit(BTRFS_FS_FREE_SPACE_TREE_UNTRUSTED, &fs_info->flags); + + ret = clear_free_space_tree(trans, free_space_root); + if (ret) + goto abort; + + node = rb_first_cached(&fs_info->block_group_cache_tree); + while (node) { + struct btrfs_block_group *block_group; + + block_group = rb_entry(node, struct btrfs_block_group, + cache_node); + ret = populate_free_space_tree(trans, block_group); + if (ret) + goto abort; + node = rb_next(node); + } + + btrfs_set_fs_compat_ro(fs_info, FREE_SPACE_TREE); + btrfs_set_fs_compat_ro(fs_info, FREE_SPACE_TREE_VALID); + clear_bit(BTRFS_FS_CREATING_FREE_SPACE_TREE, &fs_info->flags); + + ret = btrfs_commit_transaction(trans); + clear_bit(BTRFS_FS_FREE_SPACE_TREE_UNTRUSTED, &fs_info->flags); + return ret; +abort: + btrfs_abort_transaction(trans, ret); + btrfs_end_transaction(trans); return ret; }
--- a/fs/btrfs/free-space-tree.h +++ b/fs/btrfs/free-space-tree.h @@ -18,7 +18,8 @@ struct btrfs_caching_control;
void set_free_space_tree_thresholds(struct btrfs_block_group *block_group); int btrfs_create_free_space_tree(struct btrfs_fs_info *fs_info); -int btrfs_clear_free_space_tree(struct btrfs_fs_info *fs_info); +int btrfs_delete_free_space_tree(struct btrfs_fs_info *fs_info); +int btrfs_rebuild_free_space_tree(struct btrfs_fs_info *fs_info); int load_free_space_tree(struct btrfs_caching_control *caching_ctl); int add_block_group_free_space(struct btrfs_trans_handle *trans, struct btrfs_block_group *block_group); --- a/fs/btrfs/super.c +++ b/fs/btrfs/super.c @@ -1138,8 +1138,7 @@ out: ret = -EINVAL; } if (btrfs_fs_compat_ro(info, BLOCK_GROUP_TREE) && - (btrfs_test_opt(info, CLEAR_CACHE) || - !btrfs_test_opt(info, FREE_SPACE_TREE))) { + !btrfs_test_opt(info, FREE_SPACE_TREE)) { btrfs_err(info, "cannot disable free space tree with block-group-tree feature"); ret = -EINVAL; }
From: Anastasia Belova abelova@astralinux.ru
commit c87f318e6f47696b4040b58f460d5c17ea0280e6 upstream.
Change the alignment check in print_extent_item() from nodesize to sectorsize. The comment already states that sectorsize is the correct granularity, and a similar check is done elsewhere in these functions.
Found by Linux Verification Center (linuxtesting.org) with SVACE.
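A trivial standalone check of the granularity difference (IS_ALIGNED copied in for the sketch): a parent block address that is sector-aligned but not node-aligned is perfectly valid per the Fixes commit, yet the old check would flag it.

#include <stdio.h>

#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

int main(void)
{
	unsigned long long offset = 261ULL * 4096;	/* sector aligned only */
	unsigned long long sectorsize = 4096, nodesize = 16384;

	printf("aligned to sectorsize: %d\n", (int)IS_ALIGNED(offset, sectorsize));
	printf("aligned to nodesize:   %d\n", (int)IS_ALIGNED(offset, nodesize));
	return 0;
}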
Fixes: ea57788eb76d ("btrfs: require only sector size alignment for parent eb bytenr") CC: stable@vger.kernel.org # 4.14+ Reviewed-by: Qu Wenruo wqu@suse.com Signed-off-by: Anastasia Belova abelova@astralinux.ru Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/print-tree.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
--- a/fs/btrfs/print-tree.c +++ b/fs/btrfs/print-tree.c @@ -148,10 +148,10 @@ static void print_extent_item(struct ext pr_cont("shared data backref parent %llu count %u\n", offset, btrfs_shared_data_ref_count(eb, sref)); /* - * offset is supposed to be a tree block which - * must be aligned to nodesize. + * Offset is supposed to be a tree block which must be + * aligned to sectorsize. */ - if (!IS_ALIGNED(offset, eb->fs_info->nodesize)) + if (!IS_ALIGNED(offset, eb->fs_info->sectorsize)) pr_info( "\t\t\t(parent %llu not aligned to sectorsize %u)\n", offset, eb->fs_info->sectorsize);
From: Filipe Manana fdmanana@suse.com
commit 0004ff15ea26015a0a3a6182dca3b9d1df32e2b7 upstream.
When loading a free space cache from disk, at __load_free_space_cache(), if we fail to insert a bitmap entry, we still increment the number of total bitmaps in the btrfs_free_space_ctl structure, which is incorrect since we failed to add the bitmap entry. On error we then empty the cache by calling __btrfs_remove_free_space_cache(), which will result in getting the total bitmaps counter set to 1.
A failure to load a free space cache is not critical, so if a failure happens we just rebuild the cache by scanning the extent tree, which happens at block-group.c:caching_thread(). Yet the failure will result in the total bitmaps count of the btrfs_free_space_ctl always being bigger by 1 than the number of bitmap entries we have. So fix this by having the total bitmaps counter be incremented only if we successfully added the bitmap entry.
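A tiny model of the accounting rule the fix restores (invented structure, not the free-space-cache code): the bitmap counter is bumped only after the entry is actually linked, so a rejected duplicate never skews the count.

#include <stdio.h>

struct ctl {
	int entries;
	int total_bitmaps;
};

/* Pretend insert: fails with -EEXIST on duplicates. */
static int link_free_space(struct ctl *ctl, int key, int *seen)
{
	for (int i = 0; i < ctl->entries; i++)
		if (seen[i] == key)
			return -17;	/* -EEXIST */
	seen[ctl->entries++] = key;
	return 0;
}

static void load_bitmap(struct ctl *ctl, int key, int *seen)
{
	int ret = link_free_space(ctl, key, seen);

	if (ret)
		return;			/* failed insert: do NOT bump the counter */
	ctl->total_bitmaps++;		/* counted only once the entry is linked */
}

int main(void)
{
	struct ctl ctl = { 0, 0 };
	int seen[8];

	load_bitmap(&ctl, 42, seen);
	load_bitmap(&ctl, 42, seen);	/* duplicate, rejected */
	printf("entries=%d total_bitmaps=%d\n", ctl.entries, ctl.total_bitmaps);
	return 0;
}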
Fixes: a67509c30079 ("Btrfs: add a io_ctl struct and helpers for dealing with the space cache") Reviewed-by: Anand Jain anand.jain@oracle.com CC: stable@vger.kernel.org # 4.4+ Signed-off-by: Filipe Manana fdmanana@suse.com Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/free-space-cache.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-)
--- a/fs/btrfs/free-space-cache.c +++ b/fs/btrfs/free-space-cache.c @@ -861,15 +861,16 @@ static int __load_free_space_cache(struc } spin_lock(&ctl->tree_lock); ret = link_free_space(ctl, e); - ctl->total_bitmaps++; - recalculate_thresholds(ctl); - spin_unlock(&ctl->tree_lock); if (ret) { + spin_unlock(&ctl->tree_lock); btrfs_err(fs_info, "Duplicate entries in free space cache, dumping"); kmem_cache_free(btrfs_free_space_cachep, e); goto free_cache; } + ctl->total_bitmaps++; + recalculate_thresholds(ctl); + spin_unlock(&ctl->tree_lock); list_add_tail(&e->list, &bitmaps); }
From: Naohiro Aota Naohiro.Aota@wdc.com
commit f84353c7c20536ea7e01eca79430eccdf3cc7348 upstream.
For data block groups, we zone finish a zone (or, just deactivate it) when seeing the last IO in btrfs_finish_ordered_io(). That is only called for IOs using ZONE_APPEND, but we use a regular WRITE command for data relocation IOs. Detect it and call btrfs_zone_finish_endio() properly.
Fixes: be1a1d7a5d24 ("btrfs: zoned: finish fully written block group") CC: stable@vger.kernel.org # 6.1+ Reviewed-by: Johannes Thumshirn johannes.thumshirn@wdc.com Signed-off-by: Naohiro Aota naohiro.aota@wdc.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/inode.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -3237,6 +3237,9 @@ int btrfs_finish_ordered_io(struct btrfs btrfs_rewrite_logical_zoned(ordered_extent); btrfs_zone_finish_endio(fs_info, ordered_extent->disk_bytenr, ordered_extent->disk_num_bytes); + } else if (btrfs_is_data_reloc_root(inode->root)) { + btrfs_zone_finish_endio(fs_info, ordered_extent->disk_bytenr, + ordered_extent->disk_num_bytes); }
btrfs_free_io_failure_record(inode, start, end);
From: Naohiro Aota Naohiro.Aota@wdc.com
commit 02ca9e6fb5f66a031df4fac508b8e477ca69e918 upstream.
When both of the superblock zones are full, we need to check which superblock is newer. The calculation of last superblock position is wrong as it does not consider zone_capacity and uses the length.
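A small arithmetic sketch of the difference (a 4 KiB superblock size and made-up zone geometry are assumed here): using the zone length points past the writable area when the usable capacity is smaller, while aligning down from start + capacity lands on the last superblock copy that can actually exist.

#include <stdio.h>

#define SECTOR_SHIFT	9
#define SB_SIZE		4096ULL			/* superblock size assumed by the sketch */
#define ALIGN_DOWN(x, a)	((x) & ~((a) - 1))

struct zone { unsigned long long start, len, capacity; };	/* in sectors */

int main(void)
{
	/* A zone whose usable capacity is smaller than its advertised length. */
	struct zone z = {
		.start = 0,
		.len      = 512ULL << 11,	/* 512 MiB */
		.capacity = 500ULL << 11,	/* 500 MiB */
	};

	unsigned long long wrong = ((z.start + z.len) << SECTOR_SHIFT) - SB_SIZE;
	unsigned long long zone_end = (z.start + z.capacity) << SECTOR_SHIFT;
	unsigned long long right = ALIGN_DOWN(zone_end, SB_SIZE) - SB_SIZE;

	printf("last sb copy using len:      %llu\n", wrong);
	printf("last sb copy using capacity: %llu\n", right);
	return 0;
}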
Fixes: 9658b72ef300 ("btrfs: zoned: locate superblock position using zone capacity") CC: stable@vger.kernel.org # 6.1+ Reviewed-by: Johannes Thumshirn johannes.thumshirn@wdc.com Signed-off-by: Naohiro Aota naohiro.aota@wdc.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/zoned.c | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-)
--- a/fs/btrfs/zoned.c +++ b/fs/btrfs/zoned.c @@ -119,10 +119,9 @@ static int sb_write_pointer(struct block int i;
for (i = 0; i < BTRFS_NR_SB_LOG_ZONES; i++) { - u64 bytenr; - - bytenr = ((zones[i].start + zones[i].len) - << SECTOR_SHIFT) - BTRFS_SUPER_INFO_SIZE; + u64 zone_end = (zones[i].start + zones[i].capacity) << SECTOR_SHIFT; + u64 bytenr = ALIGN_DOWN(zone_end, BTRFS_SUPER_INFO_SIZE) - + BTRFS_SUPER_INFO_SIZE;
page[i] = read_cache_page_gfp(mapping, bytenr >> PAGE_SHIFT, GFP_NOFS);
From: Pawel Witek pawel.ireneusz.witek@gmail.com
commit d66cde50c3c868af7abddafce701bb86e4a93039 upstream.
Change type of pcchunk->Length from u32 to u64 to match smb2_copychunk_range arguments type. Fixes the problem where performing server-side copy with CIFS_IOC_COPYCHUNK_FILE ioctl resulted in incomplete copy of large files while returning -EINVAL.
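A self-contained demonstration of the truncation (min_t reimplemented for the sketch): clamping a 64-bit remaining length through a u32 intermediate wraps for copies of 4 GiB and larger, producing the zero-length chunk that made the ioctl fail with -EINVAL.

#include <stdio.h>
#include <stdint.h>

#define min_t(type, a, b)	((type)(a) < (type)(b) ? (type)(a) : (type)(b))

int main(void)
{
	uint64_t len = 4ULL << 30;		/* exactly 4 GiB left to copy */
	uint32_t max_bytes_chunk = 1 << 20;	/* server's 1 MiB chunk limit */

	uint32_t as_u32 = min_t(uint32_t, len, max_bytes_chunk);
	uint64_t as_u64 = min_t(uint64_t, len, max_bytes_chunk);

	/* the u32 cast of 4 GiB wraps to 0: a zero-length chunk request */
	printf("min_t(u32, len, chunk) = %u\n", (unsigned)as_u32);
	printf("min_t(u64, len, chunk) = %llu\n", (unsigned long long)as_u64);
	return 0;
}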
Fixes: 9bf0c9cd4314 ("CIFS: Fix SMB2/SMB3 Copy offload support (refcopy) for large files") Cc: stable@vger.kernel.org Signed-off-by: Pawel Witek pawel.ireneusz.witek@gmail.com Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/cifs/smb2ops.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/fs/cifs/smb2ops.c +++ b/fs/cifs/smb2ops.c @@ -1682,7 +1682,7 @@ smb2_copychunk_range(const unsigned int pcchunk->SourceOffset = cpu_to_le64(src_off); pcchunk->TargetOffset = cpu_to_le64(dest_off); pcchunk->Length = - cpu_to_le32(min_t(u32, len, tcon->max_bytes_chunk)); + cpu_to_le32(min_t(u64, len, tcon->max_bytes_chunk));
/* Request server copy to target from src identified by key */ kfree(retbuf);
From: Steve French stfrench@microsoft.com
commit d39fc592ef8ae9a89c5e85c8d9f760937a57d5ba upstream.
We should not be caching closed files when freeze is invoked on an fs (so we can release resources more gracefully).
Fixes xfstests generic/068 generic/390 generic/491
Reviewed-by: David Howells dhowells@redhat.com Cc: stable@vger.kernel.org Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/cifs/cifsfs.c | 15 +++++++++++++++ 1 file changed, 15 insertions(+)
--- a/fs/cifs/cifsfs.c +++ b/fs/cifs/cifsfs.c @@ -758,6 +758,20 @@ static void cifs_umount_begin(struct sup return; }
+static int cifs_freeze(struct super_block *sb) +{ + struct cifs_sb_info *cifs_sb = CIFS_SB(sb); + struct cifs_tcon *tcon; + + if (cifs_sb == NULL) + return 0; + + tcon = cifs_sb_master_tcon(cifs_sb); + + cifs_close_all_deferred_files(tcon); + return 0; +} + #ifdef CONFIG_CIFS_STATS2 static int cifs_show_stats(struct seq_file *s, struct dentry *root) { @@ -796,6 +810,7 @@ static const struct super_operations cif as opens */ .show_options = cifs_show_options, .umount_begin = cifs_umount_begin, + .freeze_fs = cifs_freeze, #ifdef CONFIG_CIFS_STATS2 .show_stats = cifs_show_stats, #endif
From: Srinivas Pandruvada srinivas.pandruvada@linux.intel.com
commit 75e406b540c3eca67625d97bbefd4e3787eafbfe upstream.
Currently when the uncore_write() returns error, it is silently ignored. Return error to user space when uncore_write() fails.
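A minimal sketch of the change (invented write callback; count/ret semantics as in a sysfs store handler): capture the helper's return value while the lock is held and hand the error back instead of always returning count.

#include <errno.h>
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t uncore_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for uncore_write(): pretend the firmware rejects the value. */
static int uncore_write(unsigned int khz)
{
	return khz > 4000000 ? -EINVAL : 0;
}

/* sysfs-style store: return the consumed byte count on success,
 * or the error from uncore_write() instead of silently ignoring it. */
static long store_min_max_freq_khz(unsigned int input, long count)
{
	int ret;

	pthread_mutex_lock(&uncore_lock);
	ret = uncore_write(input);
	pthread_mutex_unlock(&uncore_lock);

	if (ret)
		return ret;
	return count;
}

int main(void)
{
	printf("ok:  %ld\n", store_min_max_freq_khz(2400000, 8));
	printf("bad: %ld\n", store_min_max_freq_khz(9999999, 8));
	return 0;
}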
Fixes: 49a474c7ba51 ("platform/x86: Add support for Uncore frequency control") Signed-off-by: Srinivas Pandruvada srinivas.pandruvada@linux.intel.com Reviewed-by: Zhang Rui rui.zhang@intel.com Tested-by: Wendy Wang wendy.wang@intel.com Reviewed-by: Ilpo Järvinen ilpo.jarvinen@linux.intel.com Link: https://lore.kernel.org/r/20230418153230.679094-1-srinivas.pandruvada@linux.... Cc: stable@vger.kernel.org Signed-off-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/platform/x86/intel/uncore-frequency/uncore-frequency-common.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
--- a/drivers/platform/x86/intel/uncore-frequency/uncore-frequency-common.c +++ b/drivers/platform/x86/intel/uncore-frequency/uncore-frequency-common.c @@ -44,14 +44,18 @@ static ssize_t store_min_max_freq_khz(st int min_max) { unsigned int input; + int ret;
if (kstrtouint(buf, 10, &input)) return -EINVAL;
mutex_lock(&uncore_lock); - uncore_write(data, input, min_max); + ret = uncore_write(data, input, min_max); mutex_unlock(&uncore_lock);
+ if (ret) + return ret; + return count; }
From: Hans de Goede hdegoede@redhat.com
commit 6abfa99ce52f61a31bcfc2aaaae09006f5665495 upstream.
The Juno Computers Juno Tablet has an upside-down mounted Goodix touchscreen. Add a quirk to invert both axis to correct for this.
Link: https://junocomputers.com/us/product/juno-tablet/ Cc: stable@vger.kernel.org Signed-off-by: Hans de Goede hdegoede@redhat.com Link: https://lore.kernel.org/r/20230505210323.43177-1-hdegoede@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/platform/x86/touchscreen_dmi.c | 17 +++++++++++++++++ 1 file changed, 17 insertions(+)
--- a/drivers/platform/x86/touchscreen_dmi.c +++ b/drivers/platform/x86/touchscreen_dmi.c @@ -378,6 +378,11 @@ static const struct ts_dmi_data gdix1001 .properties = gdix1001_upside_down_props, };
+static const struct ts_dmi_data gdix1002_00_upside_down_data = { + .acpi_name = "GDIX1002:00", + .properties = gdix1001_upside_down_props, +}; + static const struct property_entry gp_electronic_t701_props[] = { PROPERTY_ENTRY_U32("touchscreen-size-x", 960), PROPERTY_ENTRY_U32("touchscreen-size-y", 640), @@ -1296,6 +1301,18 @@ const struct dmi_system_id touchscreen_d }, }, { + /* Juno Tablet */ + .driver_data = (void *)&gdix1002_00_upside_down_data, + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "Default string"), + /* Both product- and board-name being "Default string" is somewhat rare */ + DMI_MATCH(DMI_PRODUCT_NAME, "Default string"), + DMI_MATCH(DMI_BOARD_NAME, "Default string"), + /* Above matches are too generic, add partial bios-version match */ + DMI_MATCH(DMI_BIOS_VERSION, "JP2V1."), + }, + }, + { /* Mediacom WinPad 7.0 W700 (same hw as Wintron surftab 7") */ .driver_data = (void *)&trekstor_surftab_wintron70_data, .matches = {
From: Mark Pearson mpearson-lenovo@squebb.ca
commit 0c0cd3e25a5b64b541dd83ba6e032475a9d77432 upstream.
I had incorrectly thought that PSC profiles were not usable on Intel platforms so had blocked them in the driver initialisation. This broke platform profiles on the T490.
After discussion with the FW team PSC does work on Intel platforms and should be allowed.
Note - it's possible this may impact other platforms where PSC is advertised but where special driver support that only Windows has is needed. If it does, then they will need fixing via quirks. Please report any issues to me so I can get them addressed - but I haven't found any problems in testing yet.
Fixes: bce6243f767f ("platform/x86: thinkpad_acpi: do not use PSC mode on Intel platforms") Link: https://bugzilla.redhat.com/show_bug.cgi?id=2177962 Cc: stable@vger.kernel.org Signed-off-by: Mark Pearson mpearson-lenovo@squebb.ca Link: https://lore.kernel.org/r/20230505132523.214338-1-mpearson-lenovo@squebb.ca Reviewed-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/platform/x86/thinkpad_acpi.c | 5 ----- 1 file changed, 5 deletions(-)
--- a/drivers/platform/x86/thinkpad_acpi.c +++ b/drivers/platform/x86/thinkpad_acpi.c @@ -10597,11 +10597,6 @@ static int tpacpi_dytc_profile_init(stru dytc_mmc_get_available = true; } } else if (dytc_capabilities & BIT(DYTC_FC_PSC)) { /* PSC MODE */ - /* Support for this only works on AMD platforms */ - if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD) { - dbg_printk(TPACPI_DBG_INIT, "PSC not support on Intel platforms\n"); - return -ENODEV; - } pr_debug("PSC is supported\n"); } else { dbg_printk(TPACPI_DBG_INIT, "No DYTC support available\n");
From: Andrey Avdeev jamesstoun@gmail.com
commit 4b65f95c87c35699bc6ad540d6b9dd7f950d0924 upstream.
Add touchscreen info for the Dexp Ursus KX210i
Signed-off-by: Andrey Avdeev jamesstoun@gmail.com Link: https://lore.kernel.org/r/ZE4gRgzRQCjXFYD0@avdeevavpc Cc: stable@vger.kernel.org Reviewed-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/platform/x86/touchscreen_dmi.c | 24 ++++++++++++++++++++++++ 1 file changed, 24 insertions(+)
--- a/drivers/platform/x86/touchscreen_dmi.c +++ b/drivers/platform/x86/touchscreen_dmi.c @@ -336,6 +336,22 @@ static const struct ts_dmi_data dexp_urs .properties = dexp_ursus_7w_props, };
+static const struct property_entry dexp_ursus_kx210i_props[] = { + PROPERTY_ENTRY_U32("touchscreen-min-x", 5), + PROPERTY_ENTRY_U32("touchscreen-min-y", 2), + PROPERTY_ENTRY_U32("touchscreen-size-x", 1720), + PROPERTY_ENTRY_U32("touchscreen-size-y", 1137), + PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-dexp-ursus-kx210i.fw"), + PROPERTY_ENTRY_U32("silead,max-fingers", 10), + PROPERTY_ENTRY_BOOL("silead,home-button"), + { } +}; + +static const struct ts_dmi_data dexp_ursus_kx210i_data = { + .acpi_name = "MSSL1680:00", + .properties = dexp_ursus_kx210i_props, +}; + static const struct property_entry digma_citi_e200_props[] = { PROPERTY_ENTRY_U32("touchscreen-size-x", 1980), PROPERTY_ENTRY_U32("touchscreen-size-y", 1500), @@ -1191,6 +1207,14 @@ const struct dmi_system_id touchscreen_d }, }, { + /* DEXP Ursus KX210i */ + .driver_data = (void *)&dexp_ursus_kx210i_data, + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "INSYDE Corp."), + DMI_MATCH(DMI_PRODUCT_NAME, "S107I"), + }, + }, + { /* Digma Citi E200 */ .driver_data = (void *)&digma_citi_e200_data, .matches = {
From: Mark Pearson mpearson-lenovo@squebb.ca
commit 1684878952929e20a864af5df7b498941c750f45 upstream.
There has been a lot of confusion around which platform profiles are supported on various platforms and it would be useful to have a debug method to be able to override the profile mode that is selected.
I don't expect this to be used for anything other than debugging in conjunction with Lenovo engineers - but it does give a way to get a system working whilst we wait for either FW fixes or a driver fix to land upstream, if something is wonky in the mode detection logic.
Signed-off-by: Mark Pearson mpearson-lenovo@squebb.ca Link: https://lore.kernel.org/r/20230505132523.214338-2-mpearson-lenovo@squebb.ca Cc: stable@vger.kernel.org Reviewed-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/platform/x86/thinkpad_acpi.c | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+)
--- a/drivers/platform/x86/thinkpad_acpi.c +++ b/drivers/platform/x86/thinkpad_acpi.c @@ -10322,6 +10322,7 @@ static atomic_t dytc_ignore_event = ATOM static DEFINE_MUTEX(dytc_mutex); static int dytc_capabilities; static bool dytc_mmc_get_available; +static int profile_force;
static int convert_dytc_to_profile(int funcmode, int dytcmode, enum platform_profile_option *profile) @@ -10584,6 +10585,21 @@ static int tpacpi_dytc_profile_init(stru if (err) return err;
+ /* Check if user wants to override the profile selection */ + if (profile_force) { + switch (profile_force) { + case -1: + dytc_capabilities = 0; + break; + case 1: + dytc_capabilities = BIT(DYTC_FC_MMC); + break; + case 2: + dytc_capabilities = BIT(DYTC_FC_PSC); + break; + } + pr_debug("Profile selection forced: 0x%x\n", dytc_capabilities); + } if (dytc_capabilities & BIT(DYTC_FC_MMC)) { /* MMC MODE */ pr_debug("MMC is supported\n"); /* @@ -11645,6 +11661,9 @@ MODULE_PARM_DESC(uwb_state, "Initial state of the emulated UWB switch"); #endif
+module_param(profile_force, int, 0444); +MODULE_PARM_DESC(profile_force, "Force profile mode. -1=off, 1=MMC, 2=PSC"); + static void thinkpad_acpi_module_exit(void) { struct ibm_struct *ibm, *itmp;
From: Jan Kara jack@suse.cz
commit c915d8f5918bea7c3962b09b8884ca128bfd9b0c upstream.
When inotify_freeing_mark() races with inotify_handle_inode_event() it can happen that inotify_handle_inode_event() sees that i_mark->wd got already reset to -1 and reports this value to userspace which can confuse the inotify listener. Avoid the problem by validating that wd is sensible (and pretend the mark got removed before the event got generated otherwise).
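The general shape of the fix - sample the field that a concurrent path may reset, exactly once, and validate the sample before using it - is sketched below with hypothetical structures (this is not the actual inotify code):

#include <linux/compiler.h>

struct example_mark {
        int wd;         /* reset to -1 when the mark is detached */
};

static int example_report_event(struct example_mark *mark)
{
        int wd = READ_ONCE(mark->wd);   /* sample the racy field once */

        /* Pretend the mark was removed before the event was generated. */
        if (wd == -1)
                return 0;

        /* ... build and queue the event using only the sampled wd ... */
        return wd;
}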
CC: stable@vger.kernel.org Fixes: 7e790dd5fc93 ("inotify: fix error paths in inotify_update_watch") Message-Id: 20230424163219.9250-1-jack@suse.cz Reported-by: syzbot+4a06d4373fd52f0b2f9c@syzkaller.appspotmail.com Reviewed-by: Amir Goldstein amir73il@gmail.com Signed-off-by: Jan Kara jack@suse.cz Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/notify/inotify/inotify_fsnotify.c | 11 +++++++++-- 1 file changed, 9 insertions(+), 2 deletions(-)
--- a/fs/notify/inotify/inotify_fsnotify.c +++ b/fs/notify/inotify/inotify_fsnotify.c @@ -65,7 +65,7 @@ int inotify_handle_inode_event(struct fs struct fsnotify_event *fsn_event; struct fsnotify_group *group = inode_mark->group; int ret; - int len = 0; + int len = 0, wd; int alloc_len = sizeof(struct inotify_event_info); struct mem_cgroup *old_memcg;
@@ -81,6 +81,13 @@ int inotify_handle_inode_event(struct fs fsn_mark);
/* + * We can be racing with mark being detached. Don't report event with + * invalid wd. + */ + wd = READ_ONCE(i_mark->wd); + if (wd == -1) + return 0; + /* * Whoever is interested in the event, pays for the allocation. Do not * trigger OOM killer in the target monitoring memcg as it may have * security repercussion. @@ -110,7 +117,7 @@ int inotify_handle_inode_event(struct fs fsn_event = &event->fse; fsnotify_init_event(fsn_event); event->mask = mask; - event->wd = i_mark->wd; + event->wd = wd; event->sync_cookie = cookie; event->name_len = len; if (len)
From: Steve French stfrench@microsoft.com
commit 716a3cf317456fa01d54398bb14ab354f50ed6a2 upstream.
xfstests generic/392 showed a problem where even after a shutdown call was made on a mount, we would still attempt to use the (now inaccessible) superblock if another mount was attempted for the same share.
Reported-by: David Howells dhowells@redhat.com Reviewed-by: David Howells dhowells@redhat.com Cc: stable@vger.kernel.org Fixes: 087f757b0129 ("cifs: add shutdown support") Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/cifs/connect.c | 7 +++++++ 1 file changed, 7 insertions(+)
--- a/fs/cifs/connect.c +++ b/fs/cifs/connect.c @@ -2742,6 +2742,13 @@ cifs_match_super(struct super_block *sb,
spin_lock(&cifs_tcp_ses_lock); cifs_sb = CIFS_SB(sb); + + /* We do not want to use a superblock that has been shutdown */ + if (CIFS_MOUNT_SHUTDOWN & cifs_sb->mnt_cifs_flags) { + spin_unlock(&cifs_tcp_ses_lock); + return 0; + } + tlink = cifs_get_tlink(cifs_sb_master_tlink(cifs_sb)); if (tlink == NULL) { /* can not match superblock if tlink were ever null */
From: Steve French stfrench@microsoft.com
commit 2cb6f968775a9fd60c90a6042b9550bcec3ea087 upstream.
In investigating a failure with xfstest generic/392 it was noticed that mounts were reusing a superblock that should already have been freed. This turned out to be related to deferred close files keeping a reference count until the closetimeo expired.
Currently the only way an fs knows that an unmount is beginning is when a forced unmount is called, but when this, ie umount_begin(), is called all deferred close files on the share (tree connection) should be closed immediately (unless shared by another mount) to avoid using excess resources on the server and to avoid reusing a superblock which should already be freed.
In umount_begin, close all deferred close handles for that share if this is the last mount using that share on this client (ie send the SMB3 close request over the wire for those that have been already closed by the app but that we have kept a handle lease open for and have not sent closes to the server for yet).
Reported-by: David Howells dhowells@redhat.com Acked-by: Bharath SM bharathsm@microsoft.com Cc: stable@vger.kernel.org Fixes: 78c09634f7dc ("Cifs: Fix kernel oops caused by deferred close for files.") Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/cifs/cifsfs.c | 1 + 1 file changed, 1 insertion(+)
--- a/fs/cifs/cifsfs.c +++ b/fs/cifs/cifsfs.c @@ -743,6 +743,7 @@ static void cifs_umount_begin(struct sup spin_unlock(&tcon->tc_lock); spin_unlock(&cifs_tcp_ses_lock);
+ cifs_close_all_deferred_files(tcon); /* cancel_brl_requests(tcon); */ /* BB mark all brl mids as exiting */ /* cancel_notify_requests(tcon); */ if (tcon->ses && tcon->ses->server) {
From: Randy Dunlap rdunlap@infradead.org
commit 58a49ad90939386a8682e842c474a0d2c00ec39c upstream.
Fix a warning that was reported by the kernel test robot:
In file included from ../include/math-emu/soft-fp.h:27,
                 from ../arch/sh/math-emu/math.c:22:
../arch/sh/include/asm/sfp-machine.h:17: warning: "__BYTE_ORDER" redefined
   17 | #define __BYTE_ORDER __BIG_ENDIAN
In file included from ../arch/sh/math-emu/math.c:21:
../arch/sh/math-emu/sfp-util.h:71: note: this is the location of the previous definition
   71 | #define __BYTE_ORDER __LITTLE_ENDIAN
Fixes: b929926f01f2 ("sh: define __BIG_ENDIAN for math-emu") Signed-off-by: Randy Dunlap rdunlap@infradead.org Reported-by: kernel test robot lkp@intel.com Link: lore.kernel.org/r/202111121827.6v6SXtVv-lkp@intel.com Cc: John Paul Adrian Glaubitz glaubitz@physik.fu-berlin.de Cc: Yoshinori Sato ysato@users.sourceforge.jp Cc: Rich Felker dalias@libc.org Cc: linux-sh@vger.kernel.org Reviewed-by: Geert Uytterhoeven geert+renesas@glider.be Cc: stable@vger.kernel.org Reviewed-by: John Paul Adrian Glaubitz glaubitz@physik.fu-berlin.de Link: https://lore.kernel.org/r/20230306040037.20350-5-rdunlap@infradead.org Signed-off-by: John Paul Adrian Glaubitz glaubitz@physik.fu-berlin.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/sh/math-emu/sfp-util.h | 4 ---- 1 file changed, 4 deletions(-)
--- a/arch/sh/math-emu/sfp-util.h +++ b/arch/sh/math-emu/sfp-util.h @@ -67,7 +67,3 @@ } while (0)
#define abort() return 0 - -#define __BYTE_ORDER __LITTLE_ENDIAN - -
From: Randy Dunlap rdunlap@infradead.org
commit c2bd1e18c6f85c0027da2e5e7753b9bfd9f8e6dc upstream.
Fix a build error in mcount.S when CONFIG_PRINTK is not enabled:
sh2-linux-ld: arch/sh/lib/mcount.o: in function `stack_panic':
(.text+0xec): undefined reference to `dump_stack'
Fixes: e460ab27b6c3 ("sh: Fix up stack overflow check with ftrace disabled.") Signed-off-by: Randy Dunlap rdunlap@infradead.org Cc: John Paul Adrian Glaubitz glaubitz@physik.fu-berlin.de Cc: Yoshinori Sato ysato@users.sourceforge.jp Cc: Rich Felker dalias@libc.org Suggested-by: Geert Uytterhoeven geert@linux-m68k.org Cc: stable@vger.kernel.org Reviewed-by: John Paul Adrian Glaubitz glaubitz@physik.fu-berlin.de Link: https://lore.kernel.org/r/20230306040037.20350-8-rdunlap@infradead.org Signed-off-by: John Paul Adrian Glaubitz glaubitz@physik.fu-berlin.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/sh/Kconfig.debug | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/sh/Kconfig.debug +++ b/arch/sh/Kconfig.debug @@ -15,7 +15,7 @@ config SH_STANDARD_BIOS
config STACK_DEBUG bool "Check for stack overflows" - depends on DEBUG_KERNEL + depends on DEBUG_KERNEL && PRINTK help This option will cause messages to be printed if free stack space drops below a certain limit. Saying Y here will add overhead to
From: Randy Dunlap rdunlap@infradead.org
commit 6cba655543c7959f8a6d2979b9d40a6a66b7ed4f upstream.
When CONFIG_OF_EARLY_FLATTREE and CONFIG_SH_DEVICE_TREE are not set, SH3 build fails with a call to early_init_dt_scan(), so in arch/sh/kernel/setup.c and arch/sh/kernel/head_32.S, use CONFIG_OF_EARLY_FLATTREE instead of CONFIG_OF_FLATTREE.
Fixes this build error:

../arch/sh/kernel/setup.c: In function 'sh_fdt_init':
../arch/sh/kernel/setup.c:262:26: error: implicit declaration of function 'early_init_dt_scan' [-Werror=implicit-function-declaration]
  262 |         if (!dt_virt || !early_init_dt_scan(dt_virt)) {
Fixes: 03767daa1387 ("sh: fix build regression with CONFIG_OF && !CONFIG_OF_FLATTREE") Fixes: eb6b6930a70f ("sh: fix memory corruption of unflattened device tree") Signed-off-by: Randy Dunlap rdunlap@infradead.org Suggested-by: Rob Herring robh+dt@kernel.org Cc: Frank Rowand frowand.list@gmail.com Cc: devicetree@vger.kernel.org Cc: Rich Felker dalias@libc.org Cc: Yoshinori Sato ysato@users.sourceforge.jp Cc: Geert Uytterhoeven geert+renesas@glider.be Cc: John Paul Adrian Glaubitz glaubitz@physik.fu-berlin.de Cc: linux-sh@vger.kernel.org Cc: stable@vger.kernel.org Reviewed-by: John Paul Adrian Glaubitz glaubitz@physik.fu-berlin.de Link: https://lore.kernel.org/r/20230306040037.20350-4-rdunlap@infradead.org Signed-off-by: John Paul Adrian Glaubitz glaubitz@physik.fu-berlin.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/sh/kernel/head_32.S | 6 +++--- arch/sh/kernel/setup.c | 4 ++-- 2 files changed, 5 insertions(+), 5 deletions(-)
--- a/arch/sh/kernel/head_32.S +++ b/arch/sh/kernel/head_32.S @@ -64,7 +64,7 @@ ENTRY(_stext) ldc r0, r6_bank #endif
-#ifdef CONFIG_OF_FLATTREE +#ifdef CONFIG_OF_EARLY_FLATTREE mov r4, r12 ! Store device tree blob pointer in r12 #endif @@ -315,7 +315,7 @@ ENTRY(_stext) 10: #endif
-#ifdef CONFIG_OF_FLATTREE +#ifdef CONFIG_OF_EARLY_FLATTREE mov.l 8f, r0 ! Make flat device tree available early. jsr @r0 mov r12, r4 @@ -346,7 +346,7 @@ ENTRY(stack_start) 5: .long start_kernel 6: .long cpu_init 7: .long init_thread_union -#if defined(CONFIG_OF_FLATTREE) +#if defined(CONFIG_OF_EARLY_FLATTREE) 8: .long sh_fdt_init #endif
--- a/arch/sh/kernel/setup.c +++ b/arch/sh/kernel/setup.c @@ -244,7 +244,7 @@ void __init __weak plat_early_device_set { }
-#ifdef CONFIG_OF_FLATTREE +#ifdef CONFIG_OF_EARLY_FLATTREE void __ref sh_fdt_init(phys_addr_t dt_phys) { static int done = 0; @@ -326,7 +326,7 @@ void __init setup_arch(char **cmdline_p) /* Let earlyprintk output early console messages */ sh_early_platform_driver_probe("earlyprintk", 1, 1);
-#ifdef CONFIG_OF_FLATTREE +#ifdef CONFIG_OF_EARLY_FLATTREE #ifdef CONFIG_USE_BUILTIN_DTB unflatten_and_copy_device_tree(); #else
From: Randy Dunlap rdunlap@infradead.org
commit d1155e4132de712a9d3066e2667ceaad39a539c5 upstream.
__setup() handlers should return 1 to obsolete_checksetup() in init/main.c to indicate that the boot option has been handled. A return of 0 causes the boot option/value to be listed as an Unknown kernel parameter and added to init's (limited) argument or environment strings. Also, error return codes don't mean anything to obsolete_checksetup() -- only non-zero (usually 1) or zero. So return 1 from nmi_debug_setup().
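As a minimal sketch of that rule (hypothetical boot option, not the nmi_debug code itself), a __setup() handler reports the option as handled by returning 1 even when the value is missing or malformed:

#include <linux/init.h>
#include <linux/types.h>

static bool example_enabled __initdata;

static int __init example_debug_setup(char *str)
{
        if (*str != '=')
                return 1;       /* nothing to parse, but the option is ours */

        example_enabled = (str[1] == '1');

        return 1;               /* tell obsolete_checksetup() it was handled */
}
__setup("example_debug", example_debug_setup);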
Fixes: 1e1030dccb10 ("sh: nmi_debug support.") Signed-off-by: Randy Dunlap rdunlap@infradead.org Reported-by: Igor Zhbanov izh1979@gmail.com Link: lore.kernel.org/r/64644a2f-4a20-bab3-1e15-3b2cdd0defe3@omprussia.ru Cc: John Paul Adrian Glaubitz glaubitz@physik.fu-berlin.de Cc: Yoshinori Sato ysato@users.sourceforge.jp Cc: Rich Felker dalias@libc.org Cc: linux-sh@vger.kernel.org Cc: stable@vger.kernel.org Reviewed-by: John Paul Adrian Glaubitz glaubitz@physik.fu-berlin.de Link: https://lore.kernel.org/r/20230306040037.20350-3-rdunlap@infradead.org Signed-off-by: John Paul Adrian Glaubitz glaubitz@physik.fu-berlin.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/sh/kernel/nmi_debug.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/arch/sh/kernel/nmi_debug.c +++ b/arch/sh/kernel/nmi_debug.c @@ -49,7 +49,7 @@ static int __init nmi_debug_setup(char * register_die_notifier(&nmi_debug_nb);
if (*str != '=') - return 0; + return 1;
for (p = str + 1; *p; p = sep + 1) { sep = strchr(p, ','); @@ -70,6 +70,6 @@ static int __init nmi_debug_setup(char * break; }
- return 0; + return 1; } __setup("nmi_debug", nmi_debug_setup);
From: Luis Chamberlain mcgrof@kernel.org
commit 67ff32289acad9ed338cd9f2351b44939e55163e upstream.
Update the docs for __register_sysctl_table() to make it clear that no child entries can be passed. When the child pointer is set, these are non-leaf entries on the ctl table and sysctl treats them as directories. The point of __register_sysctl_table() is to deal only with directories not part of the ctl table where they may reside, to be simple and avoid recursion.
While at it, hint towards using long on extra1 and extra2 later.
Cc: stable@vger.kernel.org # v5.17 Cc: Christian Brauner brauner@kernel.org Cc: Kefeng Wang wangkefeng.wang@huawei.com Signed-off-by: Luis Chamberlain mcgrof@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/proc/proc_sysctl.c | 14 +++++++++++--- 1 file changed, 11 insertions(+), 3 deletions(-)
--- a/fs/proc/proc_sysctl.c +++ b/fs/proc/proc_sysctl.c @@ -1281,7 +1281,7 @@ out: * __register_sysctl_table - register a leaf sysctl table * @set: Sysctl tree to register on * @path: The path to the directory the sysctl table is in. - * @table: the top-level table structure + * @table: the top-level table structure without any child * * Register a sysctl table hierarchy. @table should be a filled in ctl_table * array. A completely 0 filled entry terminates the table. @@ -1302,9 +1302,12 @@ out: * proc_handler - the text handler routine (described below) * * extra1, extra2 - extra pointers usable by the proc handler routines + * XXX: we should eventually modify these to use long min / max [0] + * [0] https://lkml.kernel.org/87zgpte9o4.fsf@email.froward.int.ebiederm.org * * Leaf nodes in the sysctl tree will be represented by a single file - * under /proc; non-leaf nodes will be represented by directories. + * under /proc; non-leaf nodes (where child is not NULL) are not allowed, + * sysctl_check_table() verifies this. * * There must be a proc_handler routine for any terminal nodes. * Several default handlers are available to cover common cases - @@ -1346,7 +1349,7 @@ struct ctl_table_header *__register_sysc
spin_lock(&sysctl_lock); dir = &set->dir; - /* Reference moved down the diretory tree get_subdir */ + /* Reference moved down the directory tree get_subdir */ dir->header.nreg++; spin_unlock(&sysctl_lock);
@@ -1363,6 +1366,11 @@ struct ctl_table_header *__register_sysc if (namelen == 0) continue;
+ /* + * namelen ensures if name is "foo/bar/yay" only foo is + * registered first. We traverse as if using mkdir -p and + * return a ctl_dir for the last directory entry. + */ dir = get_subdir(dir, name, namelen); if (IS_ERR(dir)) goto fail;
From: Luis Chamberlain mcgrof@kernel.org
commit 1dc8689e4cc651e21566e10206a84c4006e81fb1 upstream.
Expand documentation to clarify:
o that paths don't need to exist for the new API callers
o clarify that we *require* callers to keep the memory of the table around during the lifetime of the sysctls
o annotate routines we are trying to deprecate and later remove
Cc: stable@vger.kernel.org # v5.17 Cc: Christian Brauner brauner@kernel.org Cc: Kefeng Wang wangkefeng.wang@huawei.com Signed-off-by: Luis Chamberlain mcgrof@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/proc/proc_sysctl.c | 25 ++++++++++++++++++++----- 1 file changed, 20 insertions(+), 5 deletions(-)
--- a/fs/proc/proc_sysctl.c +++ b/fs/proc/proc_sysctl.c @@ -1281,7 +1281,10 @@ out: * __register_sysctl_table - register a leaf sysctl table * @set: Sysctl tree to register on * @path: The path to the directory the sysctl table is in. - * @table: the top-level table structure without any child + * @table: the top-level table structure without any child. This table + * should not be free'd after registration. So it should not be + * used on stack. It can either be a global or dynamically allocated + * by the caller and free'd later after sysctl unregistration. * * Register a sysctl table hierarchy. @table should be a filled in ctl_table * array. A completely 0 filled entry terminates the table. @@ -1396,8 +1399,15 @@ fail:
/** * register_sysctl - register a sysctl table - * @path: The path to the directory the sysctl table is in. - * @table: the table structure + * @path: The path to the directory the sysctl table is in. If the path + * doesn't exist we will create it for you. + * @table: the table structure. The calller must ensure the life of the @table + * will be kept during the lifetime use of the syctl. It must not be freed + * until unregister_sysctl_table() is called with the given returned table + * with this registration. If your code is non modular then you don't need + * to call unregister_sysctl_table() and can instead use something like + * register_sysctl_init() which does not care for the result of the syctl + * registration. * * Register a sysctl table. @table should be a filled in ctl_table * array. A completely 0 filled entry terminates the table. @@ -1413,8 +1423,11 @@ EXPORT_SYMBOL(register_sysctl);
/** * __register_sysctl_init() - register sysctl table to path - * @path: path name for sysctl base - * @table: This is the sysctl table that needs to be registered to the path + * @path: path name for sysctl base. If that path doesn't exist we will create + * it for you. + * @table: This is the sysctl table that needs to be registered to the path. + * The caller must ensure the life of the @table will be kept during the + * lifetime use of the sysctl. * @table_name: The name of sysctl table, only used for log printing when * registration fails * @@ -1559,6 +1572,7 @@ out: * * Register a sysctl table hierarchy. @table should be a filled in ctl_table * array. A completely 0 filled entry terminates the table. + * We are slowly deprecating this call so avoid its use. * * See __register_sysctl_table for more details. */ @@ -1630,6 +1644,7 @@ err_register_leaves: * * Register a sysctl table hierarchy. @table should be a filled in ctl_table * array. A completely 0 filled entry terminates the table. + * We are slowly deprecating this caller so avoid future uses of it. * * See __register_sysctl_paths for more details. */
From: Mathieu Poirier mathieu.poirier@linaro.org
commit ccadca5baf5124a880f2bb50ed1ec265415f025b upstream.
Function of_phandle_iterator_next() calls of_node_put() on the last device_node it iterated over, but when the loop exits prematurely it has to be called explicitly.
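The same pattern recurs in the remoteproc fixes that follow; a minimal sketch of the corrected loop (hypothetical driver code, not taken from any of them) is:

#include <linux/errno.h>
#include <linux/of.h>
#include <linux/of_reserved_mem.h>

static int example_parse_memory_regions(struct device_node *np)
{
        struct of_phandle_iterator it;
        struct reserved_mem *rmem;

        of_phandle_iterator_init(&it, np, "memory-region", NULL, 0);
        while (of_phandle_iterator_next(&it) == 0) {
                rmem = of_reserved_mem_lookup(it.node);
                if (!rmem) {
                        /* Early exit: the iterator will not drop this
                         * reference for us, so do it explicitly. */
                        of_node_put(it.node);
                        return -EINVAL;
                }
                /* ... register the carveout described by rmem ... */
        }

        /* Normal termination: the last reference was already dropped. */
        return 0;
}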
Fixes: 13140de09cc2 ("remoteproc: stm32: add an ST stm32_rproc driver") Cc: stable@vger.kernel.org Signed-off-by: Mathieu Poirier mathieu.poirier@linaro.org Reviewed-by: Arnaud Pouliquen arnaud.pouliquen@foss.st.com Link: https://lore.kernel.org/r/20230320221826.2728078-2-mathieu.poirier@linaro.or... Signed-off-by: Mathieu Poirier mathieu.poirier@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/remoteproc/stm32_rproc.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
--- a/drivers/remoteproc/stm32_rproc.c +++ b/drivers/remoteproc/stm32_rproc.c @@ -223,11 +223,13 @@ static int stm32_rproc_prepare(struct rp while (of_phandle_iterator_next(&it) == 0) { rmem = of_reserved_mem_lookup(it.node); if (!rmem) { + of_node_put(it.node); dev_err(dev, "unable to acquire memory-region\n"); return -EINVAL; }
if (stm32_rproc_pa_to_da(rproc, rmem->base, &da) < 0) { + of_node_put(it.node); dev_err(dev, "memory region not valid %pa\n", &rmem->base); return -EINVAL; @@ -254,8 +256,10 @@ static int stm32_rproc_prepare(struct rp it.node->name); }
- if (!mem) + if (!mem) { + of_node_put(it.node); return -ENOMEM; + }
rproc_add_carveout(rproc, mem); index++;
From: Mathieu Poirier mathieu.poirier@linaro.org
commit 8a74918948b40317a5b5bab9739d13dcb5de2784 upstream.
Function of_phandle_iterator_next() calls of_node_put() on the last device_node it iterated over, but when the loop exits prematurely it has to be called explicitly.
Fixes: 3df52ed7f269 ("remoteproc: st: add reserved memory support") Cc: stable@vger.kernel.org Signed-off-by: Mathieu Poirier mathieu.poirier@linaro.org Reviewed-by: Arnaud Pouliquen arnaud.pouliquen@foss.st.com Link: https://lore.kernel.org/r/20230320221826.2728078-3-mathieu.poirier@linaro.or... Signed-off-by: Mathieu Poirier mathieu.poirier@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/remoteproc/st_remoteproc.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
--- a/drivers/remoteproc/st_remoteproc.c +++ b/drivers/remoteproc/st_remoteproc.c @@ -129,6 +129,7 @@ static int st_rproc_parse_fw(struct rpro while (of_phandle_iterator_next(&it) == 0) { rmem = of_reserved_mem_lookup(it.node); if (!rmem) { + of_node_put(it.node); dev_err(dev, "unable to acquire memory-region\n"); return -EINVAL; } @@ -150,8 +151,10 @@ static int st_rproc_parse_fw(struct rpro it.node->name); }
- if (!mem) + if (!mem) { + of_node_put(it.node); return -ENOMEM; + }
rproc_add_carveout(rproc, mem); index++;
From: Mathieu Poirier mathieu.poirier@linaro.org
commit e0e01de8ee146986872e54e8365f4b4654819412 upstream.
Function of_phandle_iterator_next() calls of_node_put() on the last device_node it iterated over, but when the loop exits prematurely it has to be called explicitly.
Fixes: ec0e5549f358 ("remoteproc: imx_dsp_rproc: Add remoteproc driver for DSP on i.MX") Cc: stable@vger.kernel.org Signed-off-by: Mathieu Poirier mathieu.poirier@linaro.org Acked-by: Shengjiu Wang shengjiu.wang@gmail.com Link: https://lore.kernel.org/r/20230320221826.2728078-6-mathieu.poirier@linaro.or... Signed-off-by: Mathieu Poirier mathieu.poirier@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/remoteproc/imx_dsp_rproc.c | 12 +++++++++--- 1 file changed, 9 insertions(+), 3 deletions(-)
--- a/drivers/remoteproc/imx_dsp_rproc.c +++ b/drivers/remoteproc/imx_dsp_rproc.c @@ -627,15 +627,19 @@ static int imx_dsp_rproc_add_carveout(st
rmem = of_reserved_mem_lookup(it.node); if (!rmem) { + of_node_put(it.node); dev_err(dev, "unable to acquire memory-region\n"); return -EINVAL; }
- if (imx_dsp_rproc_sys_to_da(priv, rmem->base, rmem->size, &da)) + if (imx_dsp_rproc_sys_to_da(priv, rmem->base, rmem->size, &da)) { + of_node_put(it.node); return -EINVAL; + }
cpu_addr = devm_ioremap_wc(dev, rmem->base, rmem->size); if (!cpu_addr) { + of_node_put(it.node); dev_err(dev, "failed to map memory %p\n", &rmem->base); return -ENOMEM; } @@ -644,10 +648,12 @@ static int imx_dsp_rproc_add_carveout(st mem = rproc_mem_entry_init(dev, (void __force *)cpu_addr, (dma_addr_t)rmem->base, rmem->size, da, NULL, NULL, it.node->name);
- if (mem) + if (mem) { rproc_coredump_add_segment(rproc, da, rmem->size); - else + } else { + of_node_put(it.node); return -ENOMEM; + }
rproc_add_carveout(rproc, mem); }
From: Mathieu Poirier mathieu.poirier@linaro.org
commit 5ef074e805ecfd9a16dbb7b6b88bbfa8abad7054 upstream.
Function of_phandle_iterator_next() calls of_node_put() on the last device_node it iterated over, but when the loop exits prematurely it has to be called explicitly.
Fixes: b29b4249f8f0 ("remoteproc: imx_rproc: add i.MX specific parse fw hook") Cc: stable@vger.kernel.org Signed-off-by: Mathieu Poirier mathieu.poirier@linaro.org Reviewed-by: Peng Fan peng.fan@nxp.com Link: https://lore.kernel.org/r/20230320221826.2728078-5-mathieu.poirier@linaro.or... Signed-off-by: Mathieu Poirier mathieu.poirier@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/remoteproc/imx_rproc.c | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-)
--- a/drivers/remoteproc/imx_rproc.c +++ b/drivers/remoteproc/imx_rproc.c @@ -460,6 +460,7 @@ static int imx_rproc_prepare(struct rpro
rmem = of_reserved_mem_lookup(it.node); if (!rmem) { + of_node_put(it.node); dev_err(priv->dev, "unable to acquire memory-region\n"); return -EINVAL; } @@ -472,10 +473,12 @@ static int imx_rproc_prepare(struct rpro imx_rproc_mem_alloc, imx_rproc_mem_release, it.node->name);
- if (mem) + if (mem) { rproc_coredump_add_segment(rproc, da, rmem->size); - else + } else { + of_node_put(it.node); return -ENOMEM; + }
rproc_add_carveout(rproc, mem); }
From: Mathieu Poirier mathieu.poirier@linaro.org
commit f8bae637d3d5e082b4ced71e28b16eb3ee0683c1 upstream.
Function of_phandle_iterator_next() calls of_node_put() on the last device_node it iterated over, but when the loop exits prematurely it has to be called explicitly.
Fixes: 285892a74f13 ("remoteproc: Add Renesas rcar driver") Cc: stable@vger.kernel.org Signed-off-by: Mathieu Poirier mathieu.poirier@linaro.org Link: https://lore.kernel.org/r/20230320221826.2728078-4-mathieu.poirier@linaro.or... Signed-off-by: Mathieu Poirier mathieu.poirier@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/remoteproc/rcar_rproc.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-)
--- a/drivers/remoteproc/rcar_rproc.c +++ b/drivers/remoteproc/rcar_rproc.c @@ -62,13 +62,16 @@ static int rcar_rproc_prepare(struct rpr
rmem = of_reserved_mem_lookup(it.node); if (!rmem) { + of_node_put(it.node); dev_err(&rproc->dev, "unable to acquire memory-region\n"); return -EINVAL; }
- if (rmem->base > U32_MAX) + if (rmem->base > U32_MAX) { + of_node_put(it.node); return -EINVAL; + }
/* No need to translate pa to da, R-Car use same map */ da = rmem->base; @@ -79,8 +82,10 @@ static int rcar_rproc_prepare(struct rpr rcar_rproc_mem_release, it.node->name);
- if (!mem) + if (!mem) { + of_node_put(it.node); return -ENOMEM; + }
rproc_add_carveout(rproc, mem); }
From: Luis Chamberlain mcgrof@kernel.org
commit 228b09de936395ddd740df3522ea35ae934830d8 upstream.
The relatively new docs which I added hinted that the base directories needed to be created beforehand, which is wrong, so remove that incorrect comment. This has been hinted at by Eric twice already [0] [1], I had just not verified it until now. Now that I have verified it, update the docs to relax the context described.
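Concretely, with the relaxed context a caller no longer needs to ensure the parent directory was registered first; a hedged example (hypothetical path and table) would be:

#include <linux/errno.h>
#include <linux/init.h>
#include <linux/sysctl.h>

static int example_value;

static struct ctl_table example_table[] = {
        {
                .procname       = "value",
                .data           = &example_value,
                .maxlen         = sizeof(int),
                .mode           = 0644,
                .proc_handler   = proc_dointvec,
        },
        { }
};

static int __init example_sysctl_init(void)
{
        /* Any missing directories in "kernel/example" are created here. */
        if (!register_sysctl("kernel/example", example_table))
                return -ENOMEM;

        return 0;
}
late_initcall(example_sysctl_init);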
[0] https://lkml.kernel.org/r/875ys0azt8.fsf@email.froward.int.ebiederm.org [1] https://lkml.kernel.org/r/87ftbiud6s.fsf@x220.int.ebiederm.org
Cc: stable@vger.kernel.org # v5.17 Cc: Christian Brauner brauner@kernel.org Cc: Kefeng Wang wangkefeng.wang@huawei.com Suggested-by: Eric W. Biederman ebiederm@xmission.com Signed-off-by: Luis Chamberlain mcgrof@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/proc/proc_sysctl.c | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-)
--- a/fs/proc/proc_sysctl.c +++ b/fs/proc/proc_sysctl.c @@ -1439,10 +1439,7 @@ EXPORT_SYMBOL(register_sysctl); * register_sysctl() failing on init are extremely low, and so for both reasons * this function does not return any error as it is used by initialization code. * - * Context: Can only be called after your respective sysctl base path has been - * registered. So for instance, most base directories are registered early on - * init before init levels are processed through proc_sys_init() and - * sysctl_init_bases(). + * Context: if your base directory does not exist it will be created for you. */ void __init __register_sysctl_init(const char *path, struct ctl_table *table, const char *table_name)
From: Zev Weiss zev@bewilderbeest.net
commit 9dedb724446913ea7b1591b4b3d2e3e909090980 upstream.
While I'm not aware of any problems that have occurred running these at 100 MHz, the official word from ASRock is that 50 MHz is the correct speed to use, so let's be safe and use that instead.
Signed-off-by: Zev Weiss zev@bewilderbeest.net Cc: stable@vger.kernel.org Fixes: 2b81613ce417 ("ARM: dts: aspeed: Add ASRock E3C246D4I BMC") Fixes: a9a3d60b937a ("ARM: dts: aspeed: Add ASRock ROMED8HM3 BMC") Link: https://lore.kernel.org/r/20230224000400.12226-4-zev@bewilderbeest.net Signed-off-by: Joel Stanley joel@jms.id.au Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/arm/boot/dts/aspeed-bmc-asrock-e3c246d4i.dts | 2 +- arch/arm/boot/dts/aspeed-bmc-asrock-romed8hm3.dts | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arm/boot/dts/aspeed-bmc-asrock-e3c246d4i.dts b/arch/arm/boot/dts/aspeed-bmc-asrock-e3c246d4i.dts index 67a75aeafc2b..c4b2efbfdf56 100644 --- a/arch/arm/boot/dts/aspeed-bmc-asrock-e3c246d4i.dts +++ b/arch/arm/boot/dts/aspeed-bmc-asrock-e3c246d4i.dts @@ -63,7 +63,7 @@ flash@0 { status = "okay"; m25p,fast-read; label = "bmc"; - spi-max-frequency = <100000000>; /* 100 MHz */ + spi-max-frequency = <50000000>; /* 50 MHz */ #include "openbmc-flash-layout.dtsi" }; }; diff --git a/arch/arm/boot/dts/aspeed-bmc-asrock-romed8hm3.dts b/arch/arm/boot/dts/aspeed-bmc-asrock-romed8hm3.dts index 00efe1a93a69..4554abf0c7cd 100644 --- a/arch/arm/boot/dts/aspeed-bmc-asrock-romed8hm3.dts +++ b/arch/arm/boot/dts/aspeed-bmc-asrock-romed8hm3.dts @@ -51,7 +51,7 @@ flash@0 { status = "okay"; m25p,fast-read; label = "bmc"; - spi-max-frequency = <100000000>; /* 100 MHz */ + spi-max-frequency = <50000000>; /* 50 MHz */ #include "openbmc-flash-layout-64.dtsi" }; };
From: Krzysztof Kozlowski krzysztof.kozlowski@linaro.org
commit 6c950c20da38debf1ed531e0b972bd8b53d1c11f upstream.
The WM8960 Linux driver expects the clock to be named "mclk". Otherwise the clock will be ignored and not prepared/enabled by the driver.
Cc: stable@vger.kernel.org Fixes: 339b2fb36a67 ("ARM: dts: exynos: Add TOPEET itop elite based board") Link: https://lore.kernel.org/r/20230217150627.779764-3-krzysztof.kozlowski@linaro... Signed-off-by: Krzysztof Kozlowski krzysztof.kozlowski@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/arm/boot/dts/exynos4412-itop-elite.dts | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/arm/boot/dts/exynos4412-itop-elite.dts +++ b/arch/arm/boot/dts/exynos4412-itop-elite.dts @@ -182,7 +182,7 @@ compatible = "wlf,wm8960"; reg = <0x1a>; clocks = <&pmu_system_controller 0>; - clock-names = "MCLK1"; + clock-names = "mclk"; wlf,shared-lrclk; #sound-dai-cells = <0>; };
From: Krzysztof Kozlowski krzysztof.kozlowski@linaro.org
commit 665b9459bb53b8f19bd1541567e1fe9782c83c4b upstream.
The Samsung S5P/Exynos MIPI CSIS bindings and Linux driver expect first clock name to be "csis". Otherwise the driver fails to probe.
Fixes: 94ad0f6d9278 ("ARM: dts: Add Device tree for s5pv210 SoC") Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20230212185818.43503-2-krzysztof.kozlowski@linaro.... Signed-off-by: Krzysztof Kozlowski krzysztof.kozlowski@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/arm/boot/dts/s5pv210.dtsi | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/arm/boot/dts/s5pv210.dtsi +++ b/arch/arm/boot/dts/s5pv210.dtsi @@ -566,7 +566,7 @@ interrupts = <29>; clocks = <&clocks CLK_CSIS>, <&clocks SCLK_CSIS>; - clock-names = "clk_csis", + clock-names = "csis", "sclk_csis"; bus-width = <4>; status = "disabled";
From: Zev Weiss zev@bewilderbeest.net
commit a3fd10732d276d7cf372c6746a78a1c8b6aa7541 upstream.
Turns out it's in fact not the same as the heartbeat LED.
Signed-off-by: Zev Weiss zev@bewilderbeest.net Cc: stable@vger.kernel.org # v5.18+ Fixes: a9a3d60b937a ("ARM: dts: aspeed: Add ASRock ROMED8HM3 BMC") Link: https://lore.kernel.org/r/20230224000400.12226-2-zev@bewilderbeest.net Signed-off-by: Joel Stanley joel@jms.id.au Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/arm/boot/dts/aspeed-bmc-asrock-romed8hm3.dts | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/arm/boot/dts/aspeed-bmc-asrock-romed8hm3.dts +++ b/arch/arm/boot/dts/aspeed-bmc-asrock-romed8hm3.dts @@ -31,7 +31,7 @@ };
system-fault { - gpios = <&gpio ASPEED_GPIO(Z, 2) GPIO_ACTIVE_LOW>; + gpios = <&gpio ASPEED_GPIO(Z, 2) GPIO_ACTIVE_HIGH>; panic-indicator; }; };
From: Johan Hovold johan+linaro@kernel.org
commit 0d997f95b70f98987ae031a89677c13e0e223670 upstream.
A recent commit moved enabling of runtime PM to GPU load time (first open()) but failed to update the error paths so that runtime PM is disabled if initialisation of the GPU fails. This would trigger a warning about the unbalanced disable count on the next open() attempt.
Note that pm_runtime_put_noidle() is sufficient to balance the usage count when pm_runtime_get_sync() fails (and is chosen over pm_runtime_resume_and_get() for consistency reasons).
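A rough sketch of the resulting error handling (hypothetical load helper, not the actual adreno code) looks like this:

#include <linux/pm_runtime.h>

/* Hypothetical stand-in for the hardware init step. */
static int example_hw_init(struct device *dev)
{
        return 0;
}

static int example_load(struct device *dev)
{
        int ret;

        pm_runtime_enable(dev);

        ret = pm_runtime_get_sync(dev);
        if (ret < 0) {
                /* The usage count was bumped even though the get failed;
                 * rebalance it without triggering another resume. */
                pm_runtime_put_noidle(dev);
                goto err_disable_rpm;
        }

        ret = example_hw_init(dev);
        if (ret)
                goto err_put_rpm;

        pm_runtime_put_autosuspend(dev);
        return 0;

err_put_rpm:
        pm_runtime_put_sync(dev);
err_disable_rpm:
        pm_runtime_disable(dev);
        return ret;
}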
Fixes: 4b18299b3365 ("drm/msm/adreno: Defer enabling runpm until hw_init()") Cc: stable@vger.kernel.org # 6.0 Signed-off-by: Johan Hovold johan+linaro@kernel.org Patchwork: https://patchwork.freedesktop.org/patch/524971/ Link: https://lore.kernel.org/r/20230303164807.13124-3-johan+linaro@kernel.org Signed-off-by: Rob Clark robdclark@chromium.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/msm/adreno/adreno_device.c | 16 ++++++++++++---- 1 file changed, 12 insertions(+), 4 deletions(-)
--- a/drivers/gpu/drm/msm/adreno/adreno_device.c +++ b/drivers/gpu/drm/msm/adreno/adreno_device.c @@ -440,20 +440,21 @@ struct msm_gpu *adreno_load_gpu(struct d
ret = pm_runtime_get_sync(&pdev->dev); if (ret < 0) { - pm_runtime_put_sync(&pdev->dev); + pm_runtime_put_noidle(&pdev->dev); DRM_DEV_ERROR(dev->dev, "Couldn't power up the GPU: %d\n", ret); - return NULL; + goto err_disable_rpm; }
mutex_lock(&gpu->lock); ret = msm_gpu_hw_init(gpu); mutex_unlock(&gpu->lock); - pm_runtime_put_autosuspend(&pdev->dev); if (ret) { DRM_DEV_ERROR(dev->dev, "gpu hw init failed: %d\n", ret); - return NULL; + goto err_put_rpm; }
+ pm_runtime_put_autosuspend(&pdev->dev); + #ifdef CONFIG_DEBUG_FS if (gpu->funcs->debugfs_init) { gpu->funcs->debugfs_init(gpu, dev->primary); @@ -462,6 +463,13 @@ struct msm_gpu *adreno_load_gpu(struct d #endif
return gpu; + +err_put_rpm: + pm_runtime_put_sync(&pdev->dev); +err_disable_rpm: + pm_runtime_disable(&pdev->dev); + + return NULL; }
static int find_chipid(struct device *dev, struct adreno_rev *rev)
From: Francesco Dolcini francesco.dolcini@toradex.com
commit f435b7ef3b360d689df2ffa8326352cd07940d92 upstream.
The LT8912 DSI port supports only Non-Burst mode video operation with Sync Events and a continuous clock on the clock lane, so correct the dsi mode flags accordingly by removing the MIPI_DSI_MODE_VIDEO_BURST flag.
Cc: stable@vger.kernel.org Fixes: 30e2ae943c26 ("drm/bridge: Introduce LT8912B DSI to HDMI bridge") Signed-off-by: Francesco Dolcini francesco.dolcini@toradex.com Reviewed-by: Robert Foss rfoss@kernel.org Signed-off-by: Robert Foss rfoss@kernel.org Link: https://patchwork.freedesktop.org/patch/msgid/20230330093131.424828-1-france... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/bridge/lontium-lt8912b.c | 1 - 1 file changed, 1 deletion(-)
--- a/drivers/gpu/drm/bridge/lontium-lt8912b.c +++ b/drivers/gpu/drm/bridge/lontium-lt8912b.c @@ -504,7 +504,6 @@ static int lt8912_attach_dsi(struct lt89 dsi->format = MIPI_DSI_FMT_RGB888;
dsi->mode_flags = MIPI_DSI_MODE_VIDEO | - MIPI_DSI_MODE_VIDEO_BURST | MIPI_DSI_MODE_LPM | MIPI_DSI_MODE_NO_EOT_PACKET;
From: Chaitanya Kumar Borah chaitanya.kumar.borah@intel.com
commit 2efc8e1001acfdc143cf2d25a08a4974c322e2a8 upstream.
Replace _PLANE_INPUT_CSC_RY_GY_2_* with _PLANE_CSC_RY_GY_2_* for Plane CSC
Fixes: 6eba56f64d5d ("drm/i915/pxp: black pixels on pxp disabled")
Signed-off-by: Chaitanya Kumar Borah chaitanya.kumar.borah@intel.com Reviewed-by: Uma Shankar uma.shankar@intel.com Signed-off-by: Animesh Manna animesh.manna@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20230330150104.2923519-1-chait... (cherry picked from commit e39c76b2160bbd005587f978d29603ef790aefcd) Signed-off-by: Joonas Lahtinen joonas.lahtinen@linux.intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/i915/i915_reg.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/gpu/drm/i915/i915_reg.h +++ b/drivers/gpu/drm/i915/i915_reg.h @@ -7840,8 +7840,8 @@ enum skl_power_gate {
#define _PLANE_CSC_RY_GY_1(pipe) _PIPE(pipe, _PLANE_CSC_RY_GY_1_A, \ _PLANE_CSC_RY_GY_1_B) -#define _PLANE_CSC_RY_GY_2(pipe) _PIPE(pipe, _PLANE_INPUT_CSC_RY_GY_2_A, \ - _PLANE_INPUT_CSC_RY_GY_2_B) +#define _PLANE_CSC_RY_GY_2(pipe) _PIPE(pipe, _PLANE_CSC_RY_GY_2_A, \ + _PLANE_CSC_RY_GY_2_B) #define PLANE_CSC_COEFF(pipe, plane, index) _MMIO_PLANE(plane, \ _PLANE_CSC_RY_GY_1(pipe) + (index) * 4, \ _PLANE_CSC_RY_GY_2(pipe) + (index) * 4)
From: Johan Hovold johan+linaro@kernel.org
commit a465353b9250802f87b97123e33a17f51277f0b1 upstream.
In case of early initialisation errors and on platforms that do not use the DPU controller, the deinitialisation code can be called with the kms pointer set to NULL.
Fixes: 98659487b845 ("drm/msm: add support to take dpu snapshot") Cc: stable@vger.kernel.org # 5.14 Cc: Abhinav Kumar quic_abhinavk@quicinc.com Signed-off-by: Johan Hovold johan+linaro@kernel.org Reviewed-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org Patchwork: https://patchwork.freedesktop.org/patch/525099/ Link: https://lore.kernel.org/r/20230306100722.28485-4-johan+linaro@kernel.org Signed-off-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/msm/msm_drv.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -241,7 +241,8 @@ static int msm_drm_uninit(struct device msm_fbdev_free(ddev); #endif
- msm_disp_snapshot_destroy(ddev); + if (kms) + msm_disp_snapshot_destroy(ddev);
drm_mode_config_cleanup(ddev);
From: Johan Hovold johan+linaro@kernel.org
commit cd459c005de3e2b855a8cc7768e633ce9d018e9f upstream.
In case of early initialisation errors and on platforms that do not use the DPU controller, the deinitialisation code can be called with the kms pointer set to NULL.
Fixes: f026e431cf86 ("drm/msm: Convert to Linux IRQ interfaces") Cc: stable@vger.kernel.org # 5.14 Cc: Thomas Zimmermann tzimmermann@suse.de Signed-off-by: Johan Hovold johan+linaro@kernel.org Reviewed-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org Patchwork: https://patchwork.freedesktop.org/patch/525104/ Link: https://lore.kernel.org/r/20230306100722.28485-5-johan+linaro@kernel.org Signed-off-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/msm/msm_drv.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-)
--- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -250,9 +250,11 @@ static int msm_drm_uninit(struct device drm_bridge_remove(priv->bridges[i]); priv->num_bridges = 0;
- pm_runtime_get_sync(dev); - msm_irq_uninstall(ddev); - pm_runtime_put_sync(dev); + if (kms) { + pm_runtime_get_sync(dev); + msm_irq_uninstall(ddev); + pm_runtime_put_sync(dev); + }
if (kms && kms->funcs) kms->funcs->destroy(kms);
From: Johan Hovold johan+linaro@kernel.org
commit 214b09db61978497df24efcb3959616814bca46b upstream.
Make sure to free the DRM device also in case of early errors during bind().
Fixes: 2027e5b3413d ("drm/msm: Initialize MDSS irq domain at probe time") Cc: stable@vger.kernel.org # 5.17 Cc: Dmitry Baryshkov dmitry.baryshkov@linaro.org Signed-off-by: Johan Hovold johan+linaro@kernel.org Reviewed-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org Patchwork: https://patchwork.freedesktop.org/patch/525097/ Link: https://lore.kernel.org/r/20230306100722.28485-6-johan+linaro@kernel.org Signed-off-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/msm/msm_drv.c | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-)
--- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -446,12 +446,12 @@ static int msm_drm_init(struct device *d
ret = msm_init_vram(ddev); if (ret) - return ret; + goto err_put_dev;
/* Bind all our sub-components: */ ret = component_bind_all(dev, ddev); if (ret) - return ret; + goto err_put_dev;
dma_set_max_seg_size(dev, UINT_MAX);
@@ -546,6 +546,12 @@ static int msm_drm_init(struct device *d
err_msm_uninit: msm_drm_uninit(dev); + + return ret; + +err_put_dev: + drm_dev_put(ddev); + return ret; }
From: Johan Hovold johan+linaro@kernel.org
commit 60d476af96015891c7959f30838ae7a9749932bf upstream.
Make sure to release the VRAM buffer also in case a subcomponent fails to bind.
Fixes: d863f0c7b536 ("drm/msm: Call msm_init_vram before binding the gpu") Cc: stable@vger.kernel.org # 5.11 Cc: Craig Tatlor ctatlor97@gmail.com Signed-off-by: Johan Hovold johan+linaro@kernel.org Reviewed-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org Patchwork: https://patchwork.freedesktop.org/patch/525094/ Link: https://lore.kernel.org/r/20230306100722.28485-7-johan+linaro@kernel.org Signed-off-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/msm/msm_drv.c | 26 +++++++++++++++++++------- 1 file changed, 19 insertions(+), 7 deletions(-)
--- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -50,6 +50,8 @@ #define MSM_VERSION_MINOR 9 #define MSM_VERSION_PATCHLEVEL 0
+static void msm_deinit_vram(struct drm_device *ddev); + static const struct drm_mode_config_funcs mode_config_funcs = { .fb_create = msm_framebuffer_create, .output_poll_changed = drm_fb_helper_output_poll_changed, @@ -259,12 +261,7 @@ static int msm_drm_uninit(struct device if (kms && kms->funcs) kms->funcs->destroy(kms);
- if (priv->vram.paddr) { - unsigned long attrs = DMA_ATTR_NO_KERNEL_MAPPING; - drm_mm_takedown(&priv->vram.mm); - dma_free_attrs(dev, priv->vram.size, NULL, - priv->vram.paddr, attrs); - } + msm_deinit_vram(ddev);
component_unbind_all(dev, ddev);
@@ -404,6 +401,19 @@ static int msm_init_vram(struct drm_devi return ret; }
+static void msm_deinit_vram(struct drm_device *ddev) +{ + struct msm_drm_private *priv = ddev->dev_private; + unsigned long attrs = DMA_ATTR_NO_KERNEL_MAPPING; + + if (!priv->vram.paddr) + return; + + drm_mm_takedown(&priv->vram.mm); + dma_free_attrs(ddev->dev, priv->vram.size, NULL, priv->vram.paddr, + attrs); +} + static int msm_drm_init(struct device *dev, const struct drm_driver *drv) { struct msm_drm_private *priv = dev_get_drvdata(dev); @@ -451,7 +461,7 @@ static int msm_drm_init(struct device *d /* Bind all our sub-components: */ ret = component_bind_all(dev, ddev); if (ret) - goto err_put_dev; + goto err_deinit_vram;
dma_set_max_seg_size(dev, UINT_MAX);
@@ -549,6 +559,8 @@ err_msm_uninit:
return ret;
+err_deinit_vram: + msm_deinit_vram(ddev); err_put_dev: drm_dev_put(ddev);
From: Johan Hovold johan+linaro@kernel.org
commit a75b49db6529b2af049eafd938fae888451c3685 upstream.
Make sure to destroy the workqueue also in case of early errors during bind (e.g. a subcomponent failing to bind).
Since commit c3b790ea07a1 ("drm: Manage drm_mode_config_init with drmm_") the mode config will be freed when the drm device is released also when using the legacy interface, but add an explicit cleanup for consistency and to facilitate backporting.
Fixes: 060530f1ea67 ("drm/msm: use componentised device support") Cc: stable@vger.kernel.org # 3.15 Cc: Rob Clark robdclark@gmail.com Signed-off-by: Johan Hovold johan+linaro@kernel.org Reviewed-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org Patchwork: https://patchwork.freedesktop.org/patch/525093/ Link: https://lore.kernel.org/r/20230306100722.28485-9-johan+linaro@kernel.org Signed-off-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/msm/msm_drv.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
--- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -456,7 +456,7 @@ static int msm_drm_init(struct device *d
ret = msm_init_vram(ddev); if (ret) - goto err_put_dev; + goto err_cleanup_mode_config;
/* Bind all our sub-components: */ ret = component_bind_all(dev, ddev); @@ -561,6 +561,9 @@ err_msm_uninit:
err_deinit_vram: msm_deinit_vram(ddev); +err_cleanup_mode_config: + drm_mode_config_cleanup(ddev); + destroy_workqueue(priv->wq); err_put_dev: drm_dev_put(ddev);
From: Hans de Goede hdegoede@redhat.com
commit c8c2969bfcba5fcba3a5b078315c1b586d927d9f upstream.
The intel_dsi_msleep() helper skips sleeping if the MIPI-sequences have a version of 3 or newer and the panel is in vid-mode.
This is based on the big comment around line 730 which starts with "Panel enable/disable sequences from the VBT spec.", where the "v3 video mode seq" column does not have any wait t# entries.
Checking the Windows driver shows that it does always honor the VBT delays independent of the version of the VBT sequences.
Commit 6fdb335f1c9c ("drm/i915/dsi: Use unconditional msleep for the panel_on_delay when there is no reset-deassert MIPI-sequence") switched to a direct msleep() instead of intel_dsi_msleep() when there is no MIPI_SEQ_DEASSERT_RESET sequence, to fix the panel on an Acer Aspire Switch 10 E SW3-016 not turning on.
And now testing on a Nextbook Ares 8A shows that panel_on_delay must always be honored otherwise the panel will not turn on.
Instead of only using a regular msleep() for panel_on_delay, do as Windows does and always use a regular msleep() everywhere intel_dsi_msleep() is used, and drop the intel_dsi_msleep() helper.
Changes in v2:
- Replace all intel_dsi_msleep() calls instead of just the intel_dsi_msleep(panel_on_delay) call
Cc: stable@vger.kernel.org Fixes: 6fdb335f1c9c ("drm/i915/dsi: Use unconditional msleep for the panel_on_delay when there is no reset-deassert MIPI-sequence") Signed-off-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Ville Syrjälä ville.syrjala@linux.intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20230425194441.68086-1-hdegoed... (cherry picked from commit fa83c12132f71302f7d4b02758dc0d46048d3f5f) Signed-off-by: Joonas Lahtinen joonas.lahtinen@linux.intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/i915/display/icl_dsi.c | 2 +- drivers/gpu/drm/i915/display/intel_dsi_vbt.c | 11 ----------- drivers/gpu/drm/i915/display/intel_dsi_vbt.h | 1 - drivers/gpu/drm/i915/display/vlv_dsi.c | 22 +++++----------------- 4 files changed, 6 insertions(+), 30 deletions(-)
--- a/drivers/gpu/drm/i915/display/icl_dsi.c +++ b/drivers/gpu/drm/i915/display/icl_dsi.c @@ -1210,7 +1210,7 @@ static void gen11_dsi_powerup_panel(stru
/* panel power on related mipi dsi vbt sequences */ intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_POWER_ON); - intel_dsi_msleep(intel_dsi, intel_dsi->panel_on_delay); + msleep(intel_dsi->panel_on_delay); intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_DEASSERT_RESET); intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_INIT_OTP); intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_DISPLAY_ON); --- a/drivers/gpu/drm/i915/display/intel_dsi_vbt.c +++ b/drivers/gpu/drm/i915/display/intel_dsi_vbt.c @@ -762,17 +762,6 @@ void intel_dsi_vbt_exec_sequence(struct gpiod_set_value_cansleep(intel_dsi->gpio_backlight, 0); }
-void intel_dsi_msleep(struct intel_dsi *intel_dsi, int msec) -{ - struct intel_connector *connector = intel_dsi->attached_connector; - - /* For v3 VBTs in vid-mode the delays are part of the VBT sequences */ - if (is_vid_mode(intel_dsi) && connector->panel.vbt.dsi.seq_version >= 3) - return; - - msleep(msec); -} - void intel_dsi_log_params(struct intel_dsi *intel_dsi) { struct drm_i915_private *i915 = to_i915(intel_dsi->base.base.dev); --- a/drivers/gpu/drm/i915/display/intel_dsi_vbt.h +++ b/drivers/gpu/drm/i915/display/intel_dsi_vbt.h @@ -16,7 +16,6 @@ void intel_dsi_vbt_gpio_init(struct inte void intel_dsi_vbt_gpio_cleanup(struct intel_dsi *intel_dsi); void intel_dsi_vbt_exec_sequence(struct intel_dsi *intel_dsi, enum mipi_seq seq_id); -void intel_dsi_msleep(struct intel_dsi *intel_dsi, int msec); void intel_dsi_log_params(struct intel_dsi *intel_dsi);
#endif /* __INTEL_DSI_VBT_H__ */ --- a/drivers/gpu/drm/i915/display/vlv_dsi.c +++ b/drivers/gpu/drm/i915/display/vlv_dsi.c @@ -782,7 +782,6 @@ static void intel_dsi_pre_enable(struct { struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder); struct intel_crtc *crtc = to_intel_crtc(pipe_config->uapi.crtc); - struct intel_connector *connector = to_intel_connector(conn_state->connector); struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); enum pipe pipe = crtc->pipe; enum port port; @@ -830,21 +829,10 @@ static void intel_dsi_pre_enable(struct if (!IS_GEMINILAKE(dev_priv)) intel_dsi_prepare(encoder, pipe_config);
+ /* Give the panel time to power-on and then deassert its reset */ intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_POWER_ON); - - /* - * Give the panel time to power-on and then deassert its reset. - * Depending on the VBT MIPI sequences version the deassert-seq - * may contain the necessary delay, intel_dsi_msleep() will skip - * the delay in that case. If there is no deassert-seq, then an - * unconditional msleep is used to give the panel time to power-on. - */ - if (connector->panel.vbt.dsi.sequence[MIPI_SEQ_DEASSERT_RESET]) { - intel_dsi_msleep(intel_dsi, intel_dsi->panel_on_delay); - intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_DEASSERT_RESET); - } else { - msleep(intel_dsi->panel_on_delay); - } + msleep(intel_dsi->panel_on_delay); + intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_DEASSERT_RESET);
if (IS_GEMINILAKE(dev_priv)) { glk_cold_boot = glk_dsi_enable_io(encoder); @@ -878,7 +866,7 @@ static void intel_dsi_pre_enable(struct msleep(20); /* XXX */ for_each_dsi_port(port, intel_dsi->ports) dpi_send_cmd(intel_dsi, TURN_ON, false, port); - intel_dsi_msleep(intel_dsi, 100); + msleep(100);
intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_DISPLAY_ON);
@@ -1006,7 +994,7 @@ static void intel_dsi_post_disable(struc /* Assert reset */ intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_ASSERT_RESET);
- intel_dsi_msleep(intel_dsi, intel_dsi->panel_off_delay); + msleep(intel_dsi->panel_off_delay); intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_POWER_OFF);
intel_dsi->panel_power_off_time = ktime_get_boottime();
From: Jaegeuk Kim jaegeuk@kernel.org
commit da6ea0b050fa720302b56fbb59307e7c7531a342 upstream.
We get a kernel panic if old_addr is NULL.
https://bugzilla.kernel.org/show_bug.cgi?id=217266
BUG: kernel NULL pointer dereference, address: 0000000000000000
Call Trace:
 <TASK>
 f2fs_commit_atomic_write+0x619/0x990 [f2fs a1b985b80f5babd6f3ea778384908880812bfa43]
 __f2fs_ioctl+0xd8e/0x4080 [f2fs a1b985b80f5babd6f3ea778384908880812bfa43]
 ? vfs_write+0x2ae/0x3f0
 ? vfs_write+0x2ae/0x3f0
 __x64_sys_ioctl+0x91/0xd0
 do_syscall_64+0x5c/0x90
 entry_SYSCALL_64_after_hwframe+0x72/0xdc
RIP: 0033:0x7f69095fe53f
Fixes: 2f3a9ae990a7 ("f2fs: introduce trace_f2fs_replace_atomic_write_block") Cc: stable@vger.kernel.org Reviewed-by: Chao Yu chao@kernel.org Signed-off-by: Jaegeuk Kim jaegeuk@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/f2fs/segment.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/fs/f2fs/segment.c +++ b/fs/f2fs/segment.c @@ -262,7 +262,7 @@ retry: f2fs_put_dnode(&dn);
trace_f2fs_replace_atomic_write_block(inode, F2FS_I(inode)->cow_inode, - index, *old_addr, new_addr, recover); + index, old_addr ? *old_addr : 0, new_addr, recover); return 0; }
From: Jaegeuk Kim jaegeuk@kernel.org
commit d94772154e524b329a168678836745d2773a6e02 upstream.
F2FS has the same issue as the one in ext4_rename, causing a crash revealed by xfstests generic/707.
See also commit 0813299c586b ("ext4: Fix possible corruption when moving a directory")
CC: stable@vger.kernel.org Signed-off-by: Jaegeuk Kim jaegeuk@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/f2fs/namei.c | 16 +++++++++++++++- 1 file changed, 15 insertions(+), 1 deletion(-)
--- a/fs/f2fs/namei.c +++ b/fs/f2fs/namei.c @@ -1002,12 +1002,20 @@ static int f2fs_rename(struct user_names goto out; }
+ /* + * Copied from ext4_rename: we need to protect against old.inode + * directory getting converted from inline directory format into + * a normal one. + */ + if (S_ISDIR(old_inode->i_mode)) + inode_lock_nested(old_inode, I_MUTEX_NONDIR2); + err = -ENOENT; old_entry = f2fs_find_entry(old_dir, &old_dentry->d_name, &old_page); if (!old_entry) { if (IS_ERR(old_page)) err = PTR_ERR(old_page); - goto out; + goto out_unlock_old; }
if (S_ISDIR(old_inode->i_mode)) { @@ -1115,6 +1123,9 @@ static int f2fs_rename(struct user_names
f2fs_unlock_op(sbi);
+ if (S_ISDIR(old_inode->i_mode)) + inode_unlock(old_inode); + if (IS_DIRSYNC(old_dir) || IS_DIRSYNC(new_dir)) f2fs_sync_fs(sbi->sb, 1);
@@ -1129,6 +1140,9 @@ out_dir: f2fs_put_page(old_dir_page, 0); out_old: f2fs_put_page(old_page, 0); +out_unlock_old: + if (S_ISDIR(old_inode->i_mode)) + inode_unlock(old_inode); out: iput(whiteout); return err;
From: Jianmin Lv lvjianmin@loongson.cn
commit 48ce2d722f7f108f27bedddf54bee3423a57ce57 upstream.
For the dual-bridges scenario, pch_pic_acpi_init() will be called in the following path:

cpuintc_acpi_init
  acpi_cascade_irqdomain_init (in the cpuintc driver)
    acpi_table_parse_madt
      eiointc_parse_madt
        eiointc_acpi_init
            /* called twice, corresponding to the two eiointc entries
               in the MADT under the dual-bridges scenario */
          acpi_cascade_irqdomain_init (in the eiointc driver)
            acpi_table_parse_madt
              pch_pic_parse_madt
                pch_pic_acpi_init
                    /* called once or twice per eiointc_acpi_init() call,
                       depending on whether a valid parent IRQ domain handle
                       is found for the two pchpic entries in the MADT under
                       the dual-bridges scenario */
During the first eiointc_acpi_init() call, pch_pic_acpi_init() is called only once, since only one valid parent IRQ domain handle is found for the current eiointc IRQ domain.

During the second eiointc_acpi_init() call, pch_pic_acpi_init() is called twice, since two valid parent IRQ domain handles are found. So pch_pic_acpi_init() needs a reasonable way to avoid creating the same pch_pic IRQ domain a second time.

The patch matches the GSI base of already-created pch_pic IRQ domains to check whether the target domain has already been created, avoiding the bug described above.
Cc: stable@vger.kernel.org Signed-off-by: Jianmin Lv lvjianmin@loongson.cn Signed-off-by: Marc Zyngier maz@kernel.org Link: https://lore.kernel.org/r/20230407083453.6305-6-lvjianmin@loongson.cn Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/irqchip/irq-loongson-pch-pic.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/drivers/irqchip/irq-loongson-pch-pic.c +++ b/drivers/irqchip/irq-loongson-pch-pic.c @@ -350,6 +350,9 @@ int __init pch_pic_acpi_init(struct irq_ int ret, vec_base; struct fwnode_handle *domain_handle;
+ if (find_pch_pic(acpi_pchpic->gsi_base) >= 0) + return 0; + vec_base = acpi_pchpic->gsi_base - GSI_MIN_PCH_IRQ;
domain_handle = irq_domain_alloc_fwnode(&acpi_pchpic->address);
From: Jianmin Lv lvjianmin@loongson.cn
commit 112eaa8fec5ea75f1be003ec55760b09a86799f8 upstream.
In pch_pic_parse_madt(), acpi_get_vec_parent() returns a NULL parent pointer for the second pch-pic domain (the one behind the second bridge) during the first eiointc_acpi_init() call, because its parent has not been initialized yet; it will be initialized during the second eiointc_acpi_init() call. So it is reasonable to return zero there, so that acpi_table_parse_madt() does not fail; otherwise acpi_cascade_irqdomain_init() would return early and initialization of the following pch_msi domain would be skipped.

Although it does not matter that pch_msi_parse_madt() returns -EINVAL when no valid parent is found, it is also reasonable to return zero there.
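As a simplified sketch of why the return value matters (hypothetical helper names; the real callbacks are pch_pic_parse_madt() and pch_msi_parse_madt() in the hunks below), a MADT sub-table handler that reports "parent not ready yet" as an error makes the whole acpi_table_parse_madt() walk fail, so it has to report success instead:

#include <linux/acpi.h>
#include <linux/irqdomain.h>

/* Sketch only: lookup_parent_domain() and child_domain_init() are
 * hypothetical stand-ins for the real parent lookup and child init. */
static int __init example_parse_madt(union acpi_subtable_headers *header,
				     const unsigned long end)
{
	struct irq_domain *parent = lookup_parent_domain(header);

	if (parent)
		return child_domain_init(parent, header);

	/* Parent not initialized yet: return 0 so acpi_table_parse_madt()
	 * does not fail and the remaining cascade init still runs. */
	return 0;
}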
Cc: stable@vger.kernel.org Signed-off-by: Jianmin Lv lvjianmin@loongson.cn Signed-off-by: Marc Zyngier maz@kernel.org Link: https://lore.kernel.org/r/20230407083453.6305-2-lvjianmin@loongson.cn Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/irqchip/irq-loongson-eiointc.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/irqchip/irq-loongson-eiointc.c +++ b/drivers/irqchip/irq-loongson-eiointc.c @@ -312,7 +312,7 @@ pch_pic_parse_madt(union acpi_subtable_h if (parent) return pch_pic_acpi_init(parent, pchpic_entry);
- return -EINVAL; + return 0; }
static int __init @@ -325,7 +325,7 @@ pch_msi_parse_madt(union acpi_subtable_h if (parent) return pch_msi_acpi_init(parent, pchmsi_entry);
- return -EINVAL; + return 0; }
static int __init acpi_cascade_irqdomain_init(void)
From: James Cowgill james.cowgill@blaize.com
commit ab4f869fba6119997f7630d600049762a2b014fa upstream.
This is the logical place to put the backlight device, and it also fixes a kernel crash if the MIPI host is removed. Previously the backlight device would be unregistered twice when this happened - once as a child of the MIPI host through `mipi_dsi_host_unregister`, and once when the panel device is destroyed.
Fixes: 12a6cbd4f3f1 ("drm/panel: otm8009a: Use new backlight API") Signed-off-by: James Cowgill james.cowgill@blaize.com Cc: stable@vger.kernel.org Reviewed-by: Neil Armstrong neil.armstrong@linaro.org Signed-off-by: Neil Armstrong neil.armstrong@linaro.org Link: https://patchwork.freedesktop.org/patch/msgid/20230412173450.199592-1-james.... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/panel/panel-orisetech-otm8009a.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/gpu/drm/panel/panel-orisetech-otm8009a.c +++ b/drivers/gpu/drm/panel/panel-orisetech-otm8009a.c @@ -471,7 +471,7 @@ static int otm8009a_probe(struct mipi_ds DRM_MODE_CONNECTOR_DSI);
ctx->bl_dev = devm_backlight_device_register(dev, dev_name(dev), - dsi->host->dev, ctx, + dev, ctx, &otm8009a_backlight_ops, NULL); if (IS_ERR(ctx->bl_dev)) {
From: Nicholas Kazlauskas nicholas.kazlauskas@amd.com
commit d29fb7baab09b6a1dc484c9c67933253883e770a upstream.
[Why] While scanning the top_pipe connections we can run into a case where the bottom pipe is still connected to a top_pipe but with a NULL plane_state.
[How] Treat a NULL plane_state the same as the plane being invisible for pipe cursor disable logic.
Cc: stable@vger.kernel.org Cc: Mario Limonciello mario.limonciello@amd.com Reviewed-by: Charlene Liu Charlene.Liu@amd.com Acked-by: Qingqing Zhuo qingqing.zhuo@amd.com Signed-off-by: Nicholas Kazlauskas nicholas.kazlauskas@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c +++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c @@ -3369,7 +3369,9 @@ static bool dcn10_can_pipe_disable_curso for (test_pipe = pipe_ctx->top_pipe; test_pipe; test_pipe = test_pipe->top_pipe) { // Skip invisible layer and pipe-split plane on same layer - if (!test_pipe->plane_state->visible || test_pipe->plane_state->layer_index == cur_layer) + if (!test_pipe->plane_state || + !test_pipe->plane_state->visible || + test_pipe->plane_state->layer_index == cur_layer) continue;
r2 = test_pipe->plane_res.scl_data.recout;
From: Nicholas Kazlauskas nicholas.kazlauskas@amd.com
commit bf224e00a9f54e2bf14b4d720a09c3d2f4aa4aa8 upstream.
[Why] DPP Root clock optimization when combined with 4to1 MPC combine results in the screen turning black.
This is because the DPPCLK is stopped during the middle of an optimize_bandwidth sequence during commit_minimal_transition without going through plane power down/power up.
[How] The intent of a 0Hz DPP clock through update_clocks is to disable the DTO. This differs from the behavior of stopping the DPPCLK entirely (utilizing a 0Hz clock on some ASIC) so it's better to move this logic to reside next to plane power up/power down where we gate the HUBP/DPP DOMAIN.
The new sequence should be: Power down: PG enabled -> RCO on Power up: RCO off -> PG disabled
Rename power_on_plane to power_on_plane_resources to reflect the actual operation that's occurring.
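A minimal sketch of that ordering, using the dpp_root_clock_control() hwseq hook added in this patch; pg_control() below is a hypothetical stand-in for the existing dpp/hubp power-gating programming (true meaning the block is powered, i.e. PG released):

/* Sketch only, not the literal hwseq code. */
static void power_on_plane_resources_sketch(struct dce_hwseq *hws, unsigned int inst)
{
	if (hws->funcs.dpp_root_clock_control)
		hws->funcs.dpp_root_clock_control(hws, inst, true); /* RCO off: root clock running */
	pg_control(hws, inst, true);                                /* then PG disabled */
}

static void plane_power_down_sketch(struct dce_hwseq *hws, unsigned int inst)
{
	pg_control(hws, inst, false);                                /* PG enabled first */
	if (hws->funcs.dpp_root_clock_control)
		hws->funcs.dpp_root_clock_control(hws, inst, false); /* then RCO on: root clock gated */
}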
Cc: stable@vger.kernel.org Cc: Mario Limonciello mario.limonciello@amd.com Reviewed-by: Jun Lei Jun.Lei@amd.com Acked-by: Qingqing Zhuo qingqing.zhuo@amd.com Signed-off-by: Nicholas Kazlauskas nicholas.kazlauskas@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c | 12 ++++++- drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c | 8 +++- drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.c | 13 +------ drivers/gpu/drm/amd/display/dc/dcn314/dcn314_dccg.c | 23 ++++++++++++++ drivers/gpu/drm/amd/display/dc/dcn314/dcn314_hwseq.c | 10 ++++++ drivers/gpu/drm/amd/display/dc/dcn314/dcn314_hwseq.h | 2 + drivers/gpu/drm/amd/display/dc/dcn314/dcn314_init.c | 1 drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h | 23 +++++++------- drivers/gpu/drm/amd/display/dc/inc/hw_sequencer_private.h | 4 ++ 9 files changed, 71 insertions(+), 25 deletions(-)
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c +++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c @@ -726,11 +726,15 @@ void dcn10_hubp_pg_control( } }
-static void power_on_plane( +static void power_on_plane_resources( struct dce_hwseq *hws, int plane_id) { DC_LOGGER_INIT(hws->ctx->logger); + + if (hws->funcs.dpp_root_clock_control) + hws->funcs.dpp_root_clock_control(hws, plane_id, true); + if (REG(DC_IP_REQUEST_CNTL)) { REG_SET(DC_IP_REQUEST_CNTL, 0, IP_REQUEST_EN, 1); @@ -1237,11 +1241,15 @@ void dcn10_plane_atomic_power_down(struc hws->funcs.hubp_pg_control(hws, hubp->inst, false);
dpp->funcs->dpp_reset(dpp); + REG_SET(DC_IP_REQUEST_CNTL, 0, IP_REQUEST_EN, 0); DC_LOG_DEBUG( "Power gated front end %d\n", hubp->inst); } + + if (hws->funcs.dpp_root_clock_control) + hws->funcs.dpp_root_clock_control(hws, dpp->inst, false); }
/* disable HW used by plane. @@ -2450,7 +2458,7 @@ static void dcn10_enable_plane(
undo_DEGVIDCN10_253_wa(dc);
- power_on_plane(dc->hwseq, + power_on_plane_resources(dc->hwseq, pipe_ctx->plane_res.hubp->inst);
/* enable DCFCLK current DCHUB */ --- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c +++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c @@ -1087,11 +1087,15 @@ void dcn20_blank_pixel_data( }
-static void dcn20_power_on_plane( +static void dcn20_power_on_plane_resources( struct dce_hwseq *hws, struct pipe_ctx *pipe_ctx) { DC_LOGGER_INIT(hws->ctx->logger); + + if (hws->funcs.dpp_root_clock_control) + hws->funcs.dpp_root_clock_control(hws, pipe_ctx->plane_res.dpp->inst, true); + if (REG(DC_IP_REQUEST_CNTL)) { REG_SET(DC_IP_REQUEST_CNTL, 0, IP_REQUEST_EN, 1); @@ -1115,7 +1119,7 @@ static void dcn20_enable_plane(struct dc //if (dc->debug.sanity_checks) { // dcn10_verify_allow_pstate_change_high(dc); //} - dcn20_power_on_plane(dc->hwseq, pipe_ctx); + dcn20_power_on_plane_resources(dc->hwseq, pipe_ctx);
/* enable DCFCLK current DCHUB */ pipe_ctx->plane_res.hubp->funcs->hubp_clk_cntl(pipe_ctx->plane_res.hubp, true); --- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.c +++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.c @@ -66,17 +66,8 @@ void dccg31_update_dpp_dto(struct dccg * REG_UPDATE(DPPCLK_DTO_CTRL, DPPCLK_DTO_ENABLE[dpp_inst], 1); } else { - //DTO must be enabled to generate a 0Hz clock output - if (dccg->ctx->dc->debug.root_clock_optimization.bits.dpp) { - REG_UPDATE(DPPCLK_DTO_CTRL, - DPPCLK_DTO_ENABLE[dpp_inst], 1); - REG_SET_2(DPPCLK_DTO_PARAM[dpp_inst], 0, - DPPCLK0_DTO_PHASE, 0, - DPPCLK0_DTO_MODULO, 1); - } else { - REG_UPDATE(DPPCLK_DTO_CTRL, - DPPCLK_DTO_ENABLE[dpp_inst], 0); - } + REG_UPDATE(DPPCLK_DTO_CTRL, + DPPCLK_DTO_ENABLE[dpp_inst], 0); } dccg->pipe_dppclk_khz[dpp_inst] = req_dppclk; } --- a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_dccg.c +++ b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_dccg.c @@ -289,8 +289,31 @@ static void dccg314_set_valid_pixel_rate dccg314_set_dtbclk_dto(dccg, &dto_params); }
+static void dccg314_dpp_root_clock_control( + struct dccg *dccg, + unsigned int dpp_inst, + bool clock_on) +{ + struct dcn_dccg *dccg_dcn = TO_DCN_DCCG(dccg); + + if (clock_on) { + /* turn off the DTO and leave phase/modulo at max */ + REG_UPDATE(DPPCLK_DTO_CTRL, DPPCLK_DTO_ENABLE[dpp_inst], 0); + REG_SET_2(DPPCLK_DTO_PARAM[dpp_inst], 0, + DPPCLK0_DTO_PHASE, 0xFF, + DPPCLK0_DTO_MODULO, 0xFF); + } else { + /* turn on the DTO to generate a 0hz clock */ + REG_UPDATE(DPPCLK_DTO_CTRL, DPPCLK_DTO_ENABLE[dpp_inst], 1); + REG_SET_2(DPPCLK_DTO_PARAM[dpp_inst], 0, + DPPCLK0_DTO_PHASE, 0, + DPPCLK0_DTO_MODULO, 1); + } +} + static const struct dccg_funcs dccg314_funcs = { .update_dpp_dto = dccg31_update_dpp_dto, + .dpp_root_clock_control = dccg314_dpp_root_clock_control, .get_dccg_ref_freq = dccg31_get_dccg_ref_freq, .dccg_init = dccg31_init, .set_dpstreamclk = dccg314_set_dpstreamclk, --- a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_hwseq.c +++ b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_hwseq.c @@ -392,6 +392,16 @@ void dcn314_set_pixels_per_cycle(struct pix_per_cycle); }
+void dcn314_dpp_root_clock_control(struct dce_hwseq *hws, unsigned int dpp_inst, bool clock_on) +{ + if (!hws->ctx->dc->debug.root_clock_optimization.bits.dpp) + return; + + if (hws->ctx->dc->res_pool->dccg->funcs->dpp_root_clock_control) + hws->ctx->dc->res_pool->dccg->funcs->dpp_root_clock_control( + hws->ctx->dc->res_pool->dccg, dpp_inst, clock_on); +} + void dcn314_hubp_pg_control(struct dce_hwseq *hws, unsigned int hubp_inst, bool power_on) { struct dc_context *ctx = hws->ctx; --- a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_hwseq.h +++ b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_hwseq.h @@ -43,4 +43,6 @@ void dcn314_set_pixels_per_cycle(struct
void dcn314_hubp_pg_control(struct dce_hwseq *hws, unsigned int hubp_inst, bool power_on);
+void dcn314_dpp_root_clock_control(struct dce_hwseq *hws, unsigned int dpp_inst, bool clock_on); + #endif /* __DC_HWSS_DCN314_H__ */ --- a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_init.c +++ b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_init.c @@ -137,6 +137,7 @@ static const struct hwseq_private_funcs .plane_atomic_disable = dcn20_plane_atomic_disable, .plane_atomic_power_down = dcn10_plane_atomic_power_down, .enable_power_gating_plane = dcn314_enable_power_gating_plane, + .dpp_root_clock_control = dcn314_dpp_root_clock_control, .hubp_pg_control = dcn314_hubp_pg_control, .program_all_writeback_pipes_in_tree = dcn30_program_all_writeback_pipes_in_tree, .update_odm = dcn314_update_odm, --- a/drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h +++ b/drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h @@ -148,18 +148,21 @@ struct dccg_funcs { struct dccg *dccg, int inst);
-void (*set_pixel_rate_div)( - struct dccg *dccg, - uint32_t otg_inst, - enum pixel_rate_div k1, - enum pixel_rate_div k2); + void (*set_pixel_rate_div)(struct dccg *dccg, + uint32_t otg_inst, + enum pixel_rate_div k1, + enum pixel_rate_div k2);
-void (*set_valid_pixel_rate)( - struct dccg *dccg, - int ref_dtbclk_khz, - int otg_inst, - int pixclk_khz); + void (*set_valid_pixel_rate)( + struct dccg *dccg, + int ref_dtbclk_khz, + int otg_inst, + int pixclk_khz);
+ void (*dpp_root_clock_control)( + struct dccg *dccg, + unsigned int dpp_inst, + bool clock_on); };
#endif //__DAL_DCCG_H__ --- a/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer_private.h +++ b/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer_private.h @@ -115,6 +115,10 @@ struct hwseq_private_funcs { void (*plane_atomic_disable)(struct dc *dc, struct pipe_ctx *pipe_ctx); void (*enable_power_gating_plane)(struct dce_hwseq *hws, bool enable); + void (*dpp_root_clock_control)( + struct dce_hwseq *hws, + unsigned int dpp_inst, + bool clock_on); void (*dpp_pg_control)(struct dce_hwseq *hws, unsigned int dpp_inst, bool power_on);
From: Samson Tam Samson.Tam@amd.com
commit 682439fffad9fa9a38d37dd1b1318e9374232213 upstream.
[Why] Reading pipe_fuses from the register may return invalid bits, which can erroneously affect num_pipes.

[How] Add a read_pipe_fuses() call and filter the bits based on the expected number of pipes.
Reviewed-by: Alvin Lee Alvin.Lee2@amd.com Acked-by: Alan Liu HaoPing.Liu@amd.com Signed-off-by: Samson Tam Samson.Tam@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org # 6.1.x Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c | 10 +++++++++- drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c | 10 +++++++++- 2 files changed, 18 insertions(+), 2 deletions(-)
--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c +++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c @@ -2037,6 +2037,14 @@ static struct resource_funcs dcn32_res_p .remove_phantom_pipes = dcn32_remove_phantom_pipes, };
+static uint32_t read_pipe_fuses(struct dc_context *ctx) +{ + uint32_t value = REG_READ(CC_DC_PIPE_DIS); + /* DCN32 support max 4 pipes */ + value = value & 0xf; + return value; +} +
static bool dcn32_resource_construct( uint8_t num_virtual_links, @@ -2079,7 +2087,7 @@ static bool dcn32_resource_construct( pool->base.res_cap = &res_cap_dcn32; /* max number of pipes for ASIC before checking for pipe fuses */ num_pipes = pool->base.res_cap->num_timing_generator; - pipe_fuses = REG_READ(CC_DC_PIPE_DIS); + pipe_fuses = read_pipe_fuses(ctx);
for (i = 0; i < pool->base.res_cap->num_timing_generator; i++) if (pipe_fuses & 1 << i) --- a/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c +++ b/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c @@ -1621,6 +1621,14 @@ static struct resource_funcs dcn321_res_ .remove_phantom_pipes = dcn32_remove_phantom_pipes, };
+static uint32_t read_pipe_fuses(struct dc_context *ctx) +{ + uint32_t value = REG_READ(CC_DC_PIPE_DIS); + /* DCN321 support max 4 pipes */ + value = value & 0xf; + return value; +} +
static bool dcn321_resource_construct( uint8_t num_virtual_links, @@ -1663,7 +1671,7 @@ static bool dcn321_resource_construct( pool->base.res_cap = &res_cap_dcn321; /* max number of pipes for ASIC before checking for pipe fuses */ num_pipes = pool->base.res_cap->num_timing_generator; - pipe_fuses = REG_READ(CC_DC_PIPE_DIS); + pipe_fuses = read_pipe_fuses(ctx);
for (i = 0; i < pool->base.res_cap->num_timing_generator; i++) if (pipe_fuses & 1 << i)
From: Hamza Mahfooz hamza.mahfooz@amd.com
commit 08da182175db4c7f80850354849d95f2670e8cd9 upstream.
Currently, on a handful of ASICs, we allow the framebuffer for a given plane to exist in either VRAM or GTT. However, if the plane's new framebuffer is in a different memory domain than its previous framebuffer, flipping between them can cause the screen to flicker. So, to fix this, don't perform an immediate flip in that case.
Cc: stable@vger.kernel.org Link: https://gitlab.freedesktop.org/drm/amd/-/issues/2354 Reviewed-by: Roman Li Roman.Li@amd.com Fixes: 81d0bcf99009 ("drm/amdgpu: make display pinning more flexible (v2)") Signed-off-by: Hamza Mahfooz hamza.mahfooz@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 13 +++++++++++-- 1 file changed, 11 insertions(+), 2 deletions(-)
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c @@ -7593,6 +7593,13 @@ static void amdgpu_dm_commit_cursors(str handle_cursor_update(plane, old_plane_state); }
+static inline uint32_t get_mem_type(struct drm_framebuffer *fb) +{ + struct amdgpu_bo *abo = gem_to_amdgpu_bo(fb->obj[0]); + + return abo->tbo.resource ? abo->tbo.resource->mem_type : 0; +} + static void amdgpu_dm_commit_planes(struct drm_atomic_state *state, struct dc_state *dc_state, struct drm_device *dev, @@ -7710,11 +7717,13 @@ static void amdgpu_dm_commit_planes(stru
/* * Only allow immediate flips for fast updates that don't - * change FB pitch, DCC state, rotation or mirroing. + * change memory domain, FB pitch, DCC state, rotation or + * mirroring. */ bundle->flip_addrs[planes_count].flip_immediate = crtc->state->async_flip && - acrtc_state->update_type == UPDATE_TYPE_FAST; + acrtc_state->update_type == UPDATE_TYPE_FAST && + get_mem_type(old_plane_state->fb) == get_mem_type(fb);
timestamp_ns = ktime_get_ns(); bundle->flip_addrs[planes_count].flip_timestamp_in_us = div_u64(timestamp_ns, 1000);
From: Horatio Zhang Hongkun.Zhang@amd.com
commit 08c677cb0b436a96a836792bb35a8ec5de4999c2 upstream.
The gmc.ecc_irq is enabled by firmware per IFWI setting, and the host driver is not privileged to enable/disable the interrupt. So, it is meaningless to use the amdgpu_irq_put function in gmc_v10_0_hw_fini, which also leads to the call trace.
[ 82.340264] Call Trace: [ 82.340265] <TASK> [ 82.340269] gmc_v10_0_hw_fini+0x83/0xa0 [amdgpu] [ 82.340447] gmc_v10_0_suspend+0xe/0x20 [amdgpu] [ 82.340623] amdgpu_device_ip_suspend_phase2+0x127/0x1c0 [amdgpu] [ 82.340789] amdgpu_device_ip_suspend+0x3d/0x80 [amdgpu] [ 82.340955] amdgpu_device_pre_asic_reset+0xdd/0x2b0 [amdgpu] [ 82.341122] amdgpu_device_gpu_recover.cold+0x4dd/0xbb2 [amdgpu] [ 82.341359] amdgpu_debugfs_reset_work+0x4c/0x70 [amdgpu] [ 82.341529] process_one_work+0x21d/0x3f0 [ 82.341535] worker_thread+0x1fa/0x3c0 [ 82.341538] ? process_one_work+0x3f0/0x3f0 [ 82.341540] kthread+0xff/0x130 [ 82.341544] ? kthread_complete_and_exit+0x20/0x20 [ 82.341547] ret_from_fork+0x22/0x30
Signed-off-by: Horatio Zhang Hongkun.Zhang@amd.com Reviewed-by: Hawking Zhang Hawking.Zhang@amd.com Reviewed-by: Guchun Chen guchun.chen@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Link: https://gitlab.freedesktop.org/drm/amd/-/issues/2522 Fixes: c8b5a95b5709 ("drm/amdgpu: Fix desktop freezed after gpu-reset") Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c | 1 - 1 file changed, 1 deletion(-)
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c @@ -1142,7 +1142,6 @@ static int gmc_v10_0_hw_fini(void *handl return 0; }
- amdgpu_irq_put(adev, &adev->gmc.ecc_irq, 0); amdgpu_irq_put(adev, &adev->gmc.vm_fault, 0);
return 0;
From: Hamza Mahfooz hamza.mahfooz@amd.com
commit 922a76ba31adf84e72bc947267385be420c689ee upstream.
As mentioned in commit 08c677cb0b43 ("drm/amdgpu: fix amdgpu_irq_put call trace in gmc_v10_0_hw_fini") and commit 13af556104fa ("drm/amdgpu: fix amdgpu_irq_put call trace in gmc_v11_0_hw_fini"), it is meaningless to call amdgpu_irq_put() for gmc.ecc_irq. So, remove it from gmc_v9_0_hw_fini().
Link: https://gitlab.freedesktop.org/drm/amd/-/issues/2522 Fixes: 3029c855d79f ("drm/amdgpu: Fix desktop freezed after gpu-reset") Reviewed-by: Mario Limonciello mario.limonciello@amd.com Signed-off-by: Hamza Mahfooz hamza.mahfooz@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 1 - 1 file changed, 1 deletion(-)
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c @@ -1898,7 +1898,6 @@ static int gmc_v9_0_hw_fini(void *handle if (adev->mmhub.funcs->update_power_gating) adev->mmhub.funcs->update_power_gating(adev, false);
- amdgpu_irq_put(adev, &adev->gmc.ecc_irq, 0); amdgpu_irq_put(adev, &adev->gmc.vm_fault, 0);
return 0;
From: Horatio Zhang Hongkun.Zhang@amd.com
commit 13af556104fa93b1945c70bbf8a0a62cd2c92879 upstream.
The gmc.ecc_irq is enabled by firmware per IFWI setting, and the host driver is not privileged to enable/disable the interrupt. So, it is meaningless to use the amdgpu_irq_put function in gmc_v11_0_hw_fini, which also leads to the call trace.
[ 102.980303] Call Trace: [ 102.980303] <TASK> [ 102.980304] gmc_v11_0_hw_fini+0x54/0x90 [amdgpu] [ 102.980357] gmc_v11_0_suspend+0xe/0x20 [amdgpu] [ 102.980409] amdgpu_device_ip_suspend_phase2+0x240/0x460 [amdgpu] [ 102.980459] amdgpu_device_ip_suspend+0x3d/0x80 [amdgpu] [ 102.980520] amdgpu_device_pre_asic_reset+0xd9/0x490 [amdgpu] [ 102.980573] amdgpu_device_gpu_recover.cold+0x548/0xce6 [amdgpu] [ 102.980687] amdgpu_debugfs_reset_work+0x4c/0x70 [amdgpu] [ 102.980740] process_one_work+0x21f/0x3f0 [ 102.980741] worker_thread+0x200/0x3e0 [ 102.980742] ? process_one_work+0x3f0/0x3f0 [ 102.980743] kthread+0xfd/0x130 [ 102.980743] ? kthread_complete_and_exit+0x20/0x20 [ 102.980744] ret_from_fork+0x22/0x30
Signed-off-by: Horatio Zhang Hongkun.Zhang@amd.com Reviewed-by: Hawking Zhang Hawking.Zhang@amd.com Reviewed-by: Guchun Chen guchun.chen@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Link: https://gitlab.freedesktop.org/drm/amd/-/issues/2522 Fixes: c8b5a95b5709 ("drm/amdgpu: Fix desktop freezed after gpu-reset") Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c | 1 - 1 file changed, 1 deletion(-)
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c @@ -931,7 +931,6 @@ static int gmc_v11_0_hw_fini(void *handl return 0; }
- amdgpu_irq_put(adev, &adev->gmc.ecc_irq, 0); amdgpu_irq_put(adev, &adev->gmc.vm_fault, 0); gmc_v11_0_gart_disable(adev);
From: Guchun Chen guchun.chen@amd.com
commit 4a76680311330aefe5074bed8f06afa354b85c48 upstream.
gfx9's cp_ecc_error_irq is only enabled when legacy gfx RAS is asserted. So in gfx_v9_0_hw_fini, the interrupt disablement for cp_ecc_error_irq should only be executed under that condition; otherwise, an amdgpu_irq_put call trace will occur.
[ 7283.170322] RIP: 0010:amdgpu_irq_put+0x45/0x70 [amdgpu] [ 7283.170964] RSP: 0018:ffff9a5fc3967d00 EFLAGS: 00010246 [ 7283.170967] RAX: ffff98d88afd3040 RBX: ffff98d89da20000 RCX: 0000000000000000 [ 7283.170969] RDX: 0000000000000000 RSI: ffff98d89da2bef8 RDI: ffff98d89da20000 [ 7283.170971] RBP: ffff98d89da20000 R08: ffff98d89da2ca18 R09: 0000000000000006 [ 7283.170973] R10: ffffd5764243c008 R11: 0000000000000000 R12: 0000000000001050 [ 7283.170975] R13: ffff98d89da38978 R14: ffffffff999ae15a R15: ffff98d880130105 [ 7283.170978] FS: 0000000000000000(0000) GS:ffff98d996f00000(0000) knlGS:0000000000000000 [ 7283.170981] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 7283.170983] CR2: 00000000f7a9d178 CR3: 00000001c42ea000 CR4: 00000000003506e0 [ 7283.170986] Call Trace: [ 7283.170988] <TASK> [ 7283.170989] gfx_v9_0_hw_fini+0x1c/0x6d0 [amdgpu] [ 7283.171655] amdgpu_device_ip_suspend_phase2+0x101/0x1a0 [amdgpu] [ 7283.172245] amdgpu_device_suspend+0x103/0x180 [amdgpu] [ 7283.172823] amdgpu_pmops_freeze+0x21/0x60 [amdgpu] [ 7283.173412] pci_pm_freeze+0x54/0xc0 [ 7283.173419] ? __pfx_pci_pm_freeze+0x10/0x10 [ 7283.173425] dpm_run_callback+0x98/0x200 [ 7283.173430] __device_suspend+0x164/0x5f0
v2: drop gfx11, as it is fixed by a different solution that retires the cp_ecc_irq funcs (Hawking)
Link: https://gitlab.freedesktop.org/drm/amd/-/issues/2522 Signed-off-by: Guchun Chen guchun.chen@amd.com Reviewed-by: Tao Zhou tao.zhou1@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c @@ -3797,7 +3797,8 @@ static int gfx_v9_0_hw_fini(void *handle { struct amdgpu_device *adev = (struct amdgpu_device *)handle;
- amdgpu_irq_put(adev, &adev->gfx.cp_ecc_error_irq, 0); + if (amdgpu_ras_is_supported(adev, AMDGPU_RAS_BLOCK__GFX)) + amdgpu_irq_put(adev, &adev->gfx.cp_ecc_error_irq, 0); amdgpu_irq_put(adev, &adev->gfx.priv_reg_irq, 0); amdgpu_irq_put(adev, &adev->gfx.priv_inst_irq, 0);
From: Saleemkhan Jamadar saleemkhan.jamadar@amd.com
commit 5b94db73e45e2e6c2840f39c022fd71dfa47fc58 upstream.
Register CC_UVD_HARVESTING is obsolete for JPEG 3.1.2
Signed-off-by: Saleemkhan Jamadar saleemkhan.jamadar@amd.com Reviewed-by: Veerabadhran Gopalakrishnan Veerabadhran.Gopalakrishnan@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org # 6.1.x Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c index c55e09432e26..1c2292cc5f2c 100644 --- a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c +++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c @@ -54,6 +54,7 @@ static int jpeg_v3_0_early_init(void *handle)
switch (adev->ip_versions[UVD_HWIP][0]) { case IP_VERSION(3, 1, 1): + case IP_VERSION(3, 1, 2): break; default: harvest = RREG32_SOC15(JPEG, 0, mmCC_UVD_HARVESTING);
From: Yifan Zhang yifan1.zhang@amd.com
commit 996e93a3fe74dcf9d467ae3020aea42cc3ff65e3 upstream.
gfx 11.0.4 range starts from 0x80.
Fixes: 311d52367d0a ("drm/amdgpu: add soc21 common ip block support for GC 11.0.4") Cc: stable@vger.kernel.org Signed-off-by: Yifan Zhang yifan1.zhang@amd.com Reported-by: Yogesh Mohan Marimuthu Yogesh.Mohanmarimuthu@amd.com Acked-by: Alex Deucher alexander.deucher@amd.com Reviewed-by: Tim Huang Tim.Huang@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/amdgpu/soc21.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/gpu/drm/amd/amdgpu/soc21.c +++ b/drivers/gpu/drm/amd/amdgpu/soc21.c @@ -715,7 +715,7 @@ static int soc21_common_early_init(void AMD_PG_SUPPORT_VCN_DPG | AMD_PG_SUPPORT_GFX_PG | AMD_PG_SUPPORT_JPEG; - adev->external_rev_id = adev->rev_id + 0x1; + adev->external_rev_id = adev->rev_id + 0x80; break;
default:
From: Lin.Cao lincao12@amd.com
commit 6c032c37ac3ef3b7df30937c785ecc4da428edc0 upstream.
v1: vmbo->shadow is used to back up the VRAM BO when VRAM is lost, so we should use vmbo->shadow as the shadow to recover vmbo->bo.
v2: change "if (vmbo->shadow) shadow = vmbo->shadow" to "if (!vmbo->shadow) continue;"
Fixes: e18aaea733da ("drm/amdgpu: move shadow_list to amdgpu_bo_vm") Reviewed-by: Christian König christian.koenig@amd.com Signed-off-by: Lin.Cao lincao12@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c @@ -4483,7 +4483,11 @@ static int amdgpu_device_recover_vram(st dev_info(adev->dev, "recover vram bo from shadow start\n"); mutex_lock(&adev->shadow_list_lock); list_for_each_entry(vmbo, &adev->shadow_list, shadow_list) { - shadow = &vmbo->bo; + /* If vm is compute context or adev is APU, shadow will be NULL */ + if (!vmbo->shadow) + continue; + shadow = vmbo->shadow; + /* No need to recover an evicted BO */ if (shadow->tbo.resource->mem_type != TTM_PL_TT || shadow->tbo.resource->start == AMDGPU_BO_INVALID_OFFSET ||
From: Alvin Lee Alvin.Lee2@amd.com
commit b504f99ccaa64da364443431e388ecf30b604e38 upstream.
[Description]
- Due to bandwidth / arbitration issues at 200MHz DCFCLK, we want to enforce a minimum of 60us of prefetch to avoid intermittent underflow issues.
- Since 60us of prefetch is already enforced for UCLK DPM0, and many DCFCLKs > 200MHz are mapped to UCLK DPM1, in theory there should not be any UCLK DPM regressions from enforcing greater prefetch.
Reviewed-by: Nevenko Stupar Nevenko.Stupar@amd.com Reviewed-by: Jun Lei Jun.Lei@amd.com Cc: Mario Limonciello mario.limonciello@amd.com Cc: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Acked-by: Alex Hung alex.hung@amd.com Signed-off-by: Alvin Lee Alvin.Lee2@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.c | 5 +++-- drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.h | 1 + 2 files changed, 4 insertions(+), 2 deletions(-)
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.c +++ b/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.c @@ -807,7 +807,8 @@ static void DISPCLKDPPCLKDCFCLKDeepSleep v->SwathHeightY[k], v->SwathHeightC[k], TWait, - v->DRAMSpeedPerState[mode_lib->vba.VoltageLevel] <= MEM_STROBE_FREQ_MHZ ? + (v->DRAMSpeedPerState[mode_lib->vba.VoltageLevel] <= MEM_STROBE_FREQ_MHZ || + v->DCFCLKPerState[mode_lib->vba.VoltageLevel] <= MIN_DCFCLK_FREQ_MHZ) ? mode_lib->vba.ip.min_prefetch_in_strobe_us : 0, /* Output */ &v->DSTXAfterScaler[k], @@ -3288,7 +3289,7 @@ void dml32_ModeSupportAndSystemConfigura v->swath_width_chroma_ub_this_state[k], v->SwathHeightYThisState[k], v->SwathHeightCThisState[k], v->TWait, - v->DRAMSpeedPerState[i] <= MEM_STROBE_FREQ_MHZ ? + (v->DRAMSpeedPerState[i] <= MEM_STROBE_FREQ_MHZ || v->DCFCLKState[i][j] <= MIN_DCFCLK_FREQ_MHZ) ? mode_lib->vba.ip.min_prefetch_in_strobe_us : 0,
/* Output */ --- a/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.h +++ b/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.h @@ -52,6 +52,7 @@ #define BPP_BLENDED_PIPE 0xffffffff
#define MEM_STROBE_FREQ_MHZ 1600 +#define MIN_DCFCLK_FREQ_MHZ 200 #define MEM_STROBE_MAX_DELIVERY_TIME_US 60.0
struct display_mode_lib;
From: Guchun Chen guchun.chen@amd.com
commit 58d9b9a14b47c2a3da6effcbb01607ad7edc0275 upstream.
amdgpu_dpm_is_overdrive_supported is a common API across all ASICs, so we should cast pp_handle into the correct structure for the different power frameworks.

v2: use return directly to simplify the code
v3: SI ASICs do not carry the od_enabled member in pp_handle; also update the Fixes tag
Link: https://gitlab.freedesktop.org/drm/amd/-/issues/2541 Fixes: eb4900aa4c49 ("drm/amdgpu: Fix kernel NULL pointer dereference in dpm functions") Suggested-by: Mario Limonciello mario.limonciello@amd.com Signed-off-by: Guchun Chen guchun.chen@amd.com Reviewed-by: Mario Limonciello mario.limonciello@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/pm/amdgpu_dpm.c | 20 +++++++++++++------- 1 file changed, 13 insertions(+), 7 deletions(-)
--- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c +++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c @@ -1414,15 +1414,21 @@ int amdgpu_dpm_get_smu_prv_buf_details(s
int amdgpu_dpm_is_overdrive_supported(struct amdgpu_device *adev) { - struct pp_hwmgr *hwmgr = adev->powerplay.pp_handle; - struct smu_context *smu = adev->powerplay.pp_handle; + if (is_support_sw_smu(adev)) { + struct smu_context *smu = adev->powerplay.pp_handle;
- if ((is_support_sw_smu(adev) && smu->od_enabled) || - (is_support_sw_smu(adev) && smu->is_apu) || - (!is_support_sw_smu(adev) && hwmgr->od_enabled)) - return true; + return (smu->od_enabled || smu->is_apu); + } else { + struct pp_hwmgr *hwmgr;
- return false; + /* SI asic does not carry od_enabled */ + if (adev->family == AMDGPU_FAMILY_SI) + return false; + + hwmgr = (struct pp_hwmgr *)adev->powerplay.pp_handle; + + return hwmgr->od_enabled; + } }
int amdgpu_dpm_set_pp_table(struct amdgpu_device *adev,
From: Guchun Chen guchun.chen@amd.com
commit 8b229ada2669b74fdae06c83fbfda5a5a99fc253 upstream.
The sdma_v4_0 IP is shared across a few ASICs, but in sdma_v4_0_hw_fini the driver unconditionally disables ecc_irq, which is only enabled on the ASICs that enable SDMA ECC. This introduces a warning in the suspend cycle on chips that have SDMA IP v4.0 but no SDMA ECC. So this patch corrects that.
[ 7283.166354] RIP: 0010:amdgpu_irq_put+0x45/0x70 [amdgpu] [ 7283.167001] RSP: 0018:ffff9a5fc3967d08 EFLAGS: 00010246 [ 7283.167019] RAX: ffff98d88afd3770 RBX: 0000000000000001 RCX: 0000000000000000 [ 7283.167023] RDX: 0000000000000000 RSI: ffff98d89da30390 RDI: ffff98d89da20000 [ 7283.167025] RBP: ffff98d89da20000 R08: 0000000000036838 R09: 0000000000000006 [ 7283.167028] R10: ffffd5764243c008 R11: 0000000000000000 R12: ffff98d89da30390 [ 7283.167030] R13: ffff98d89da38978 R14: ffffffff999ae15a R15: ffff98d880130105 [ 7283.167032] FS: 0000000000000000(0000) GS:ffff98d996f00000(0000) knlGS:0000000000000000 [ 7283.167036] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 7283.167039] CR2: 00000000f7a9d178 CR3: 00000001c42ea000 CR4: 00000000003506e0 [ 7283.167041] Call Trace: [ 7283.167046] <TASK> [ 7283.167048] sdma_v4_0_hw_fini+0x38/0xa0 [amdgpu] [ 7283.167704] amdgpu_device_ip_suspend_phase2+0x101/0x1a0 [amdgpu] [ 7283.168296] amdgpu_device_suspend+0x103/0x180 [amdgpu] [ 7283.168875] amdgpu_pmops_freeze+0x21/0x60 [amdgpu] [ 7283.169464] pci_pm_freeze+0x54/0xc0
Link: https://gitlab.freedesktop.org/drm/amd/-/issues/2522 Signed-off-by: Guchun Chen guchun.chen@amd.com Reviewed-by: Tao Zhou tao.zhou1@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-)
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c @@ -1941,9 +1941,11 @@ static int sdma_v4_0_hw_fini(void *handl return 0; }
- for (i = 0; i < adev->sdma.num_instances; i++) { - amdgpu_irq_put(adev, &adev->sdma.ecc_irq, - AMDGPU_SDMA_IRQ_INSTANCE0 + i); + if (amdgpu_ras_is_supported(adev, AMDGPU_RAS_BLOCK__SDMA)) { + for (i = 0; i < adev->sdma.num_instances; i++) { + amdgpu_irq_put(adev, &adev->sdma.ecc_irq, + AMDGPU_SDMA_IRQ_INSTANCE0 + i); + } }
sdma_v4_0_ctx_switch_enable(adev, false);
From: Guchun Chen guchun.chen@amd.com
commit 5247f05eadf1081a74b2233f291cee2efed25e3a upstream.
Prevent further dpm casting on legacy ASICs that lack od_enabled in amdgpu_dpm_is_overdrive_supported. This avoids a UBSAN complaint during the init sequence.

v2: add a macro to check for legacy dpm instead of checking the ASIC family/type
v3: refine the macro name for naming consistency
Suggested-by: Evan Quan evan.quan@amd.com Signed-off-by: Guchun Chen guchun.chen@amd.com Reviewed-by: Lijo Lazar lijo.lazar@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/pm/amdgpu_dpm.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-)
--- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c +++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c @@ -36,6 +36,8 @@ #define amdgpu_dpm_enable_bapm(adev, e) \ ((adev)->powerplay.pp_funcs->enable_bapm((adev)->powerplay.pp_handle, (e)))
+#define amdgpu_dpm_is_legacy_dpm(adev) ((adev)->powerplay.pp_handle == (adev)) + int amdgpu_dpm_get_sclk(struct amdgpu_device *adev, bool low) { const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs; @@ -1421,8 +1423,11 @@ int amdgpu_dpm_is_overdrive_supported(st } else { struct pp_hwmgr *hwmgr;
- /* SI asic does not carry od_enabled */ - if (adev->family == AMDGPU_FAMILY_SI) + /* + * dpm on some legacy asics don't carry od_enabled member + * as its pp_handle is casted directly from adev. + */ + if (amdgpu_dpm_is_legacy_dpm(adev)) return false;
hwmgr = (struct pp_hwmgr *)adev->powerplay.pp_handle;
From: Graham Sider Graham.Sider@amd.com
commit 6040517e4a29d3828160c571681eec9ffe10043f upstream.
MES scheduler and kiq versions are stored in mes.sched_version and mes.kiq_version, respectively, which are read from a register after their queues are initialized. Remove mes.ucode_fw_version and mes.data_fw_version which tried to read this versioning info from the firmware headers (which don't contain this information).
Signed-off-by: Graham Sider Graham.Sider@amd.com Reviewed-by: Jack Xiao Jack.Xiao@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h | 2 -- drivers/gpu/drm/amd/amdgpu/mes_v10_1.c | 4 ---- drivers/gpu/drm/amd/amdgpu/mes_v11_0.c | 4 ---- 3 files changed, 10 deletions(-)
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h @@ -91,14 +91,12 @@ struct amdgpu_mes { struct amdgpu_bo *ucode_fw_obj[AMDGPU_MAX_MES_PIPES]; uint64_t ucode_fw_gpu_addr[AMDGPU_MAX_MES_PIPES]; uint32_t *ucode_fw_ptr[AMDGPU_MAX_MES_PIPES]; - uint32_t ucode_fw_version[AMDGPU_MAX_MES_PIPES]; uint64_t uc_start_addr[AMDGPU_MAX_MES_PIPES];
/* mes ucode data */ struct amdgpu_bo *data_fw_obj[AMDGPU_MAX_MES_PIPES]; uint64_t data_fw_gpu_addr[AMDGPU_MAX_MES_PIPES]; uint32_t *data_fw_ptr[AMDGPU_MAX_MES_PIPES]; - uint32_t data_fw_version[AMDGPU_MAX_MES_PIPES]; uint64_t data_start_addr[AMDGPU_MAX_MES_PIPES];
/* eop gpu obj */ --- a/drivers/gpu/drm/amd/amdgpu/mes_v10_1.c +++ b/drivers/gpu/drm/amd/amdgpu/mes_v10_1.c @@ -415,10 +415,6 @@ static int mes_v10_1_init_microcode(stru
mes_hdr = (const struct mes_firmware_header_v1_0 *) adev->mes.fw[pipe]->data; - adev->mes.ucode_fw_version[pipe] = - le32_to_cpu(mes_hdr->mes_ucode_version); - adev->mes.ucode_fw_version[pipe] = - le32_to_cpu(mes_hdr->mes_ucode_data_version); adev->mes.uc_start_addr[pipe] = le32_to_cpu(mes_hdr->mes_uc_start_addr_lo) | ((uint64_t)(le32_to_cpu(mes_hdr->mes_uc_start_addr_hi)) << 32); --- a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c +++ b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c @@ -484,10 +484,6 @@ static int mes_v11_0_init_microcode(stru
mes_hdr = (const struct mes_firmware_header_v1_0 *) adev->mes.fw[pipe]->data; - adev->mes.ucode_fw_version[pipe] = - le32_to_cpu(mes_hdr->mes_ucode_version); - adev->mes.ucode_fw_version[pipe] = - le32_to_cpu(mes_hdr->mes_ucode_data_version); adev->mes.uc_start_addr[pipe] = le32_to_cpu(mes_hdr->mes_uc_start_addr_lo) | ((uint64_t)(le32_to_cpu(mes_hdr->mes_uc_start_addr_hi)) << 32);
From: Mario Limonciello mario.limonciello@amd.com
commit cc42e76e7de5190a7da5dac9d7b2bbb458e050bf upstream.
Add an early_init phase to MES for fetching and validating microcode from the filesystem.
If MES microcode is required but not available during early init, the firmware framebuffer will have already been released and the screen will freeze.
Move the request for MES microcode into the early_init phase so that if it's not available, early_init will fail.
Reviewed-by: Alex Deucher alexander.deucher@amd.com Reviewed-by: Lijo Lazar lijo.lazar@amd.com Signed-off-by: Mario Limonciello mario.limonciello@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c | 65 +++++++++++++++++++++ drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h | 1 drivers/gpu/drm/amd/amdgpu/mes_v10_1.c | 97 +++++--------------------------- drivers/gpu/drm/amd/amdgpu/mes_v11_0.c | 88 +++++------------------------ 4 files changed, 100 insertions(+), 151 deletions(-)
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c @@ -21,6 +21,8 @@ * */
+#include <linux/firmware.h> + #include "amdgpu_mes.h" #include "amdgpu.h" #include "soc15_common.h" @@ -1423,3 +1425,66 @@ error_pasid: kfree(vm); return 0; } + +int amdgpu_mes_init_microcode(struct amdgpu_device *adev, int pipe) +{ + const struct mes_firmware_header_v1_0 *mes_hdr; + struct amdgpu_firmware_info *info; + char ucode_prefix[30]; + char fw_name[40]; + int r; + + amdgpu_ucode_ip_version_decode(adev, GC_HWIP, ucode_prefix, sizeof(ucode_prefix)); + snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_mes%s.bin", + ucode_prefix, + pipe == AMDGPU_MES_SCHED_PIPE ? "" : "1"); + r = request_firmware(&adev->mes.fw[pipe], fw_name, adev->dev); + if (r) + goto out; + + r = amdgpu_ucode_validate(adev->mes.fw[pipe]); + if (r) + goto out; + + mes_hdr = (const struct mes_firmware_header_v1_0 *) + adev->mes.fw[pipe]->data; + adev->mes.uc_start_addr[pipe] = + le32_to_cpu(mes_hdr->mes_uc_start_addr_lo) | + ((uint64_t)(le32_to_cpu(mes_hdr->mes_uc_start_addr_hi)) << 32); + adev->mes.data_start_addr[pipe] = + le32_to_cpu(mes_hdr->mes_data_start_addr_lo) | + ((uint64_t)(le32_to_cpu(mes_hdr->mes_data_start_addr_hi)) << 32); + + if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) { + int ucode, ucode_data; + + if (pipe == AMDGPU_MES_SCHED_PIPE) { + ucode = AMDGPU_UCODE_ID_CP_MES; + ucode_data = AMDGPU_UCODE_ID_CP_MES_DATA; + } else { + ucode = AMDGPU_UCODE_ID_CP_MES1; + ucode_data = AMDGPU_UCODE_ID_CP_MES1_DATA; + } + + info = &adev->firmware.ucode[ucode]; + info->ucode_id = ucode; + info->fw = adev->mes.fw[pipe]; + adev->firmware.fw_size += + ALIGN(le32_to_cpu(mes_hdr->mes_ucode_size_bytes), + PAGE_SIZE); + + info = &adev->firmware.ucode[ucode_data]; + info->ucode_id = ucode_data; + info->fw = adev->mes.fw[pipe]; + adev->firmware.fw_size += + ALIGN(le32_to_cpu(mes_hdr->mes_ucode_data_size_bytes), + PAGE_SIZE); + } + + return 0; + +out: + release_firmware(adev->mes.fw[pipe]); + adev->mes.fw[pipe] = NULL; + return r; +} --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h @@ -306,6 +306,7 @@ struct amdgpu_mes_funcs {
int amdgpu_mes_ctx_get_offs(struct amdgpu_ring *ring, unsigned int id_offs);
+int amdgpu_mes_init_microcode(struct amdgpu_device *adev, int pipe); int amdgpu_mes_init(struct amdgpu_device *adev); void amdgpu_mes_fini(struct amdgpu_device *adev);
--- a/drivers/gpu/drm/amd/amdgpu/mes_v10_1.c +++ b/drivers/gpu/drm/amd/amdgpu/mes_v10_1.c @@ -375,82 +375,6 @@ static const struct amdgpu_mes_funcs mes .resume_gang = mes_v10_1_resume_gang, };
-static int mes_v10_1_init_microcode(struct amdgpu_device *adev, - enum admgpu_mes_pipe pipe) -{ - const char *chip_name; - char fw_name[30]; - int err; - const struct mes_firmware_header_v1_0 *mes_hdr; - struct amdgpu_firmware_info *info; - - switch (adev->ip_versions[GC_HWIP][0]) { - case IP_VERSION(10, 1, 10): - chip_name = "navi10"; - break; - case IP_VERSION(10, 3, 0): - chip_name = "sienna_cichlid"; - break; - default: - BUG(); - } - - if (pipe == AMDGPU_MES_SCHED_PIPE) - snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_mes.bin", - chip_name); - else - snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_mes1.bin", - chip_name); - - err = request_firmware(&adev->mes.fw[pipe], fw_name, adev->dev); - if (err) - return err; - - err = amdgpu_ucode_validate(adev->mes.fw[pipe]); - if (err) { - release_firmware(adev->mes.fw[pipe]); - adev->mes.fw[pipe] = NULL; - return err; - } - - mes_hdr = (const struct mes_firmware_header_v1_0 *) - adev->mes.fw[pipe]->data; - adev->mes.uc_start_addr[pipe] = - le32_to_cpu(mes_hdr->mes_uc_start_addr_lo) | - ((uint64_t)(le32_to_cpu(mes_hdr->mes_uc_start_addr_hi)) << 32); - adev->mes.data_start_addr[pipe] = - le32_to_cpu(mes_hdr->mes_data_start_addr_lo) | - ((uint64_t)(le32_to_cpu(mes_hdr->mes_data_start_addr_hi)) << 32); - - if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) { - int ucode, ucode_data; - - if (pipe == AMDGPU_MES_SCHED_PIPE) { - ucode = AMDGPU_UCODE_ID_CP_MES; - ucode_data = AMDGPU_UCODE_ID_CP_MES_DATA; - } else { - ucode = AMDGPU_UCODE_ID_CP_MES1; - ucode_data = AMDGPU_UCODE_ID_CP_MES1_DATA; - } - - info = &adev->firmware.ucode[ucode]; - info->ucode_id = ucode; - info->fw = adev->mes.fw[pipe]; - adev->firmware.fw_size += - ALIGN(le32_to_cpu(mes_hdr->mes_ucode_size_bytes), - PAGE_SIZE); - - info = &adev->firmware.ucode[ucode_data]; - info->ucode_id = ucode_data; - info->fw = adev->mes.fw[pipe]; - adev->firmware.fw_size += - ALIGN(le32_to_cpu(mes_hdr->mes_ucode_data_size_bytes), - PAGE_SIZE); - } - - return 0; -} - static void mes_v10_1_free_microcode(struct amdgpu_device *adev, enum admgpu_mes_pipe pipe) { @@ -1015,10 +939,6 @@ static int mes_v10_1_sw_init(void *handl if (!adev->enable_mes_kiq && pipe == AMDGPU_MES_KIQ_PIPE) continue;
- r = mes_v10_1_init_microcode(adev, pipe); - if (r) - return r; - r = mes_v10_1_allocate_eop_buf(adev, pipe); if (r) return r; @@ -1225,6 +1145,22 @@ static int mes_v10_1_resume(void *handle return amdgpu_mes_resume(adev); }
+static int mes_v10_0_early_init(void *handle) +{ + struct amdgpu_device *adev = (struct amdgpu_device *)handle; + int pipe, r; + + for (pipe = 0; pipe < AMDGPU_MAX_MES_PIPES; pipe++) { + if (!adev->enable_mes_kiq && pipe == AMDGPU_MES_KIQ_PIPE) + continue; + r = amdgpu_mes_init_microcode(adev, pipe); + if (r) + return r; + } + + return 0; +} + static int mes_v10_0_late_init(void *handle) { struct amdgpu_device *adev = (struct amdgpu_device *)handle; @@ -1237,6 +1173,7 @@ static int mes_v10_0_late_init(void *han
static const struct amd_ip_funcs mes_v10_1_ip_funcs = { .name = "mes_v10_1", + .early_init = mes_v10_0_early_init, .late_init = mes_v10_0_late_init, .sw_init = mes_v10_1_sw_init, .sw_fini = mes_v10_1_sw_fini, --- a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c +++ b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c @@ -453,73 +453,6 @@ static const struct amdgpu_mes_funcs mes .misc_op = mes_v11_0_misc_op, };
-static int mes_v11_0_init_microcode(struct amdgpu_device *adev, - enum admgpu_mes_pipe pipe) -{ - char fw_name[30]; - char ucode_prefix[30]; - int err; - const struct mes_firmware_header_v1_0 *mes_hdr; - struct amdgpu_firmware_info *info; - - amdgpu_ucode_ip_version_decode(adev, GC_HWIP, ucode_prefix, sizeof(ucode_prefix)); - - if (pipe == AMDGPU_MES_SCHED_PIPE) - snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_mes.bin", - ucode_prefix); - else - snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_mes1.bin", - ucode_prefix); - - err = request_firmware(&adev->mes.fw[pipe], fw_name, adev->dev); - if (err) - return err; - - err = amdgpu_ucode_validate(adev->mes.fw[pipe]); - if (err) { - release_firmware(adev->mes.fw[pipe]); - adev->mes.fw[pipe] = NULL; - return err; - } - - mes_hdr = (const struct mes_firmware_header_v1_0 *) - adev->mes.fw[pipe]->data; - adev->mes.uc_start_addr[pipe] = - le32_to_cpu(mes_hdr->mes_uc_start_addr_lo) | - ((uint64_t)(le32_to_cpu(mes_hdr->mes_uc_start_addr_hi)) << 32); - adev->mes.data_start_addr[pipe] = - le32_to_cpu(mes_hdr->mes_data_start_addr_lo) | - ((uint64_t)(le32_to_cpu(mes_hdr->mes_data_start_addr_hi)) << 32); - - if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) { - int ucode, ucode_data; - - if (pipe == AMDGPU_MES_SCHED_PIPE) { - ucode = AMDGPU_UCODE_ID_CP_MES; - ucode_data = AMDGPU_UCODE_ID_CP_MES_DATA; - } else { - ucode = AMDGPU_UCODE_ID_CP_MES1; - ucode_data = AMDGPU_UCODE_ID_CP_MES1_DATA; - } - - info = &adev->firmware.ucode[ucode]; - info->ucode_id = ucode; - info->fw = adev->mes.fw[pipe]; - adev->firmware.fw_size += - ALIGN(le32_to_cpu(mes_hdr->mes_ucode_size_bytes), - PAGE_SIZE); - - info = &adev->firmware.ucode[ucode_data]; - info->ucode_id = ucode_data; - info->fw = adev->mes.fw[pipe]; - adev->firmware.fw_size += - ALIGN(le32_to_cpu(mes_hdr->mes_ucode_data_size_bytes), - PAGE_SIZE); - } - - return 0; -} - static void mes_v11_0_free_microcode(struct amdgpu_device *adev, enum admgpu_mes_pipe pipe) { @@ -1094,10 +1027,6 @@ static int mes_v11_0_sw_init(void *handl if (!adev->enable_mes_kiq && pipe == AMDGPU_MES_KIQ_PIPE) continue;
- r = mes_v11_0_init_microcode(adev, pipe); - if (r) - return r; - r = mes_v11_0_allocate_eop_buf(adev, pipe); if (r) return r; @@ -1330,6 +1259,22 @@ static int mes_v11_0_resume(void *handle return amdgpu_mes_resume(adev); }
+static int mes_v11_0_early_init(void *handle) +{ + struct amdgpu_device *adev = (struct amdgpu_device *)handle; + int pipe, r; + + for (pipe = 0; pipe < AMDGPU_MAX_MES_PIPES; pipe++) { + if (!adev->enable_mes_kiq && pipe == AMDGPU_MES_KIQ_PIPE) + continue; + r = amdgpu_mes_init_microcode(adev, pipe); + if (r) + return r; + } + + return 0; +} + static int mes_v11_0_late_init(void *handle) { struct amdgpu_device *adev = (struct amdgpu_device *)handle; @@ -1344,6 +1289,7 @@ static int mes_v11_0_late_init(void *han
static const struct amd_ip_funcs mes_v11_0_ip_funcs = { .name = "mes_v11_0", + .early_init = mes_v11_0_early_init, .late_init = mes_v11_0_late_init, .sw_init = mes_v11_0_sw_init, .sw_fini = mes_v11_0_sw_fini,
From: Mario Limonciello mario.limonciello@amd.com
commit 2210af50ae7f4104269dfde7bafbbfbacdbe1a2b upstream.
All microcode runs a basic validation after it has been loaded, and each IP block runs both steps (request and validation) as part of init.
Introduce a wrapper for request_firmware and amdgpu_ucode_validate. This wrapper will also remap any error codes from request_firmware to -ENODEV. This is so that early_init will fail if firmware couldn't be loaded instead of the IP block being disabled.
Reviewed-by: Lijo Lazar lijo.lazar@amd.com Reviewed-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Mario Limonciello mario.limonciello@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c | 36 ++++++++++++++++++++++++++++++ drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h | 3 ++ 2 files changed, 39 insertions(+)
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c @@ -1091,3 +1091,39 @@ void amdgpu_ucode_ip_version_decode(stru
snprintf(ucode_prefix, len, "%s_%d_%d_%d", ip_name, maj, min, rev); } + +/* + * amdgpu_ucode_request - Fetch and validate amdgpu microcode + * + * @adev: amdgpu device + * @fw: pointer to load firmware to + * @fw_name: firmware to load + * + * This is a helper that will use request_firmware and amdgpu_ucode_validate + * to load and run basic validation on firmware. If the load fails, remap + * the error code to -ENODEV, so that early_init functions will fail to load. + */ +int amdgpu_ucode_request(struct amdgpu_device *adev, const struct firmware **fw, + const char *fw_name) +{ + int err = request_firmware(fw, fw_name, adev->dev); + + if (err) + return -ENODEV; + err = amdgpu_ucode_validate(*fw); + if (err) + dev_dbg(adev->dev, "\"%s\" failed to validate\n", fw_name); + + return err; +} + +/* + * amdgpu_ucode_release - Release firmware microcode + * + * @fw: pointer to firmware to release + */ +void amdgpu_ucode_release(const struct firmware **fw) +{ + release_firmware(*fw); + *fw = NULL; +} --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h @@ -543,6 +543,9 @@ void amdgpu_ucode_print_sdma_hdr(const s void amdgpu_ucode_print_psp_hdr(const struct common_firmware_header *hdr); void amdgpu_ucode_print_gpu_info_hdr(const struct common_firmware_header *hdr); int amdgpu_ucode_validate(const struct firmware *fw); +int amdgpu_ucode_request(struct amdgpu_device *adev, const struct firmware **fw, + const char *fw_name); +void amdgpu_ucode_release(const struct firmware **fw); bool amdgpu_ucode_hdr_version(union amdgpu_firmware_header *hdr, uint16_t hdr_major, uint16_t hdr_minor);
From: Mario Limonciello mario.limonciello@amd.com
commit 11e0b0067ec0707e8e598a5f9a547ab618ae7982 upstream.
The `amdgpu_ucode_request` helper will ensure that the return code for missing firmware is -ENODEV so that early_init can fail.
The `amdgpu_ucode_release` helper provides symmetry for releasing firmware.
Reviewed-by: Alex Deucher alexander.deucher@amd.com Reviewed-by: Lijo Lazar lijo.lazar@amd.com Signed-off-by: Mario Limonciello mario.limonciello@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c | 10 ++-------- drivers/gpu/drm/amd/amdgpu/mes_v10_1.c | 10 +--------- drivers/gpu/drm/amd/amdgpu/mes_v11_0.c | 10 +--------- 3 files changed, 4 insertions(+), 26 deletions(-)
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c @@ -1438,11 +1438,7 @@ int amdgpu_mes_init_microcode(struct amd snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_mes%s.bin", ucode_prefix, pipe == AMDGPU_MES_SCHED_PIPE ? "" : "1"); - r = request_firmware(&adev->mes.fw[pipe], fw_name, adev->dev); - if (r) - goto out; - - r = amdgpu_ucode_validate(adev->mes.fw[pipe]); + r = amdgpu_ucode_request(adev, &adev->mes.fw[pipe], fw_name); if (r) goto out;
@@ -1482,9 +1478,7 @@ int amdgpu_mes_init_microcode(struct amd }
return 0; - out: - release_firmware(adev->mes.fw[pipe]); - adev->mes.fw[pipe] = NULL; + amdgpu_ucode_release(&adev->mes.fw[pipe]); return r; } --- a/drivers/gpu/drm/amd/amdgpu/mes_v10_1.c +++ b/drivers/gpu/drm/amd/amdgpu/mes_v10_1.c @@ -375,13 +375,6 @@ static const struct amdgpu_mes_funcs mes .resume_gang = mes_v10_1_resume_gang, };
-static void mes_v10_1_free_microcode(struct amdgpu_device *adev, - enum admgpu_mes_pipe pipe) -{ - release_firmware(adev->mes.fw[pipe]); - adev->mes.fw[pipe] = NULL; -} - static int mes_v10_1_allocate_ucode_buffer(struct amdgpu_device *adev, enum admgpu_mes_pipe pipe) { @@ -975,8 +968,7 @@ static int mes_v10_1_sw_fini(void *handl amdgpu_bo_free_kernel(&adev->mes.eop_gpu_obj[pipe], &adev->mes.eop_gpu_addr[pipe], NULL); - - mes_v10_1_free_microcode(adev, pipe); + amdgpu_ucode_release(&adev->mes.fw[pipe]); }
amdgpu_bo_free_kernel(&adev->gfx.kiq.ring.mqd_obj, --- a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c +++ b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c @@ -453,13 +453,6 @@ static const struct amdgpu_mes_funcs mes .misc_op = mes_v11_0_misc_op, };
-static void mes_v11_0_free_microcode(struct amdgpu_device *adev, - enum admgpu_mes_pipe pipe) -{ - release_firmware(adev->mes.fw[pipe]); - adev->mes.fw[pipe] = NULL; -} - static int mes_v11_0_allocate_ucode_buffer(struct amdgpu_device *adev, enum admgpu_mes_pipe pipe) { @@ -1063,8 +1056,7 @@ static int mes_v11_0_sw_fini(void *handl amdgpu_bo_free_kernel(&adev->mes.eop_gpu_obj[pipe], &adev->mes.eop_gpu_addr[pipe], NULL); - - mes_v11_0_free_microcode(adev, pipe); + amdgpu_ucode_release(&adev->mes.fw[pipe]); }
amdgpu_bo_free_kernel(&adev->gfx.kiq.ring.mqd_obj,
From: Ping Cheng pinglinux@gmail.com
commit 08a46b4190d345544d04ce4fe2e1844b772b8535 upstream.
Some older tablets may not report a physical maximum for the X/Y coordinates. Set a default resolution to prevent it from being left undefined.
Signed-off-by: Ping Cheng ping.cheng@wacom.com Link: https://lore.kernel.org/r/20230409164229.29777-1-ping.cheng@wacom.com Signed-off-by: Benjamin Tissoires benjamin.tissoires@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/hid/wacom_wac.c | 12 ++++++++++-- 1 file changed, 10 insertions(+), 2 deletions(-)
--- a/drivers/hid/wacom_wac.c +++ b/drivers/hid/wacom_wac.c @@ -1895,6 +1895,7 @@ static void wacom_map_usage(struct input int fmax = field->logical_maximum; unsigned int equivalent_usage = wacom_equivalent_usage(usage->hid); int resolution_code = code; + int resolution = hidinput_calc_abs_res(field, resolution_code);
if (equivalent_usage == HID_DG_TWIST) { resolution_code = ABS_RZ; @@ -1915,8 +1916,15 @@ static void wacom_map_usage(struct input switch (type) { case EV_ABS: input_set_abs_params(input, code, fmin, fmax, fuzz, 0); - input_abs_set_res(input, code, - hidinput_calc_abs_res(field, resolution_code)); + + /* older tablet may miss physical usage */ + if ((code == ABS_X || code == ABS_Y) && !resolution) { + resolution = WACOM_INTUOS_RES; + hid_warn(input, + "Wacom usage (%d) missing resolution \n", + code); + } + input_abs_set_res(input, code, resolution); break; case EV_KEY: case EV_MSC:
From: Ping Cheng pinglinux@gmail.com
commit 17d793f3ed53080dab6bbeabfc82de890c901001 upstream.
To fully utilize the BT polling/refresh rate, a few input events are sent together to reduce event delay. This causes an issue with the timestamps generated by input_sync, since all events in the same packet end up with essentially the same timestamp. This patch spreads a time interval across the events by averaging the total time used for sending the packet.
This decision was mainly based on observing the actual time interval between BT polls. The interval does not appear to be constant, due to the network and system environment, so solutions other than averaging do not end up with valid timestamps.
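As a rough, standalone illustration of the averaging arithmetic described above (userspace C with made-up packet times and frame count; not the driver code itself):

/* Standalone sketch of the timestamp-spreading arithmetic described above. */
#include <stdio.h>

int main(void)
{
	long long time_packet_received = 1000000000;	/* ns, made up */
	long long time_prev_packet    =  985000000;	/* ns, made up */
	int number_of_valid_frames = 3;

	/* average the whole packet interval over the valid frames */
	long long time_interval =
		(time_packet_received - time_prev_packet) / number_of_valid_frames;

	for (int i = 0; i < number_of_valid_frames; i++) {
		/* older frames get timestamps further in the past */
		int frames_reversed = number_of_valid_frames - i - 1;
		long long event_timestamp =
			time_packet_received - frames_reversed * time_interval;

		printf("frame %d -> timestamp %lld ns\n", i, event_timestamp);
	}
	return 0;
}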
Signed-off-by: Ping Cheng ping.cheng@wacom.com Reviewed-by: Jason Gerecke jason.gerecke@wacom.com Signed-off-by: Jiri Kosina jkosina@suse.cz Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/hid/wacom_wac.c | 26 ++++++++++++++++++++++++++ drivers/hid/wacom_wac.h | 1 + 2 files changed, 27 insertions(+)
--- a/drivers/hid/wacom_wac.c +++ b/drivers/hid/wacom_wac.c @@ -1308,6 +1308,9 @@ static void wacom_intuos_pro2_bt_pen(str
struct input_dev *pen_input = wacom->pen_input; unsigned char *data = wacom->data; + int number_of_valid_frames = 0; + int time_interval = 15000000; + ktime_t time_packet_received = ktime_get(); int i;
if (wacom->features.type == INTUOSP2_BT || @@ -1328,12 +1331,30 @@ static void wacom_intuos_pro2_bt_pen(str wacom->id[0] |= (wacom->serial[0] >> 32) & 0xFFFFF; }
+ /* number of valid frames */ for (i = 0; i < pen_frames; i++) { unsigned char *frame = &data[i*pen_frame_len + 1]; bool valid = frame[0] & 0x80; + + if (valid) + number_of_valid_frames++; + } + + if (number_of_valid_frames) { + if (wacom->hid_data.time_delayed) + time_interval = ktime_get() - wacom->hid_data.time_delayed; + time_interval /= number_of_valid_frames; + wacom->hid_data.time_delayed = time_packet_received; + } + + for (i = 0; i < number_of_valid_frames; i++) { + unsigned char *frame = &data[i*pen_frame_len + 1]; + bool valid = frame[0] & 0x80; bool prox = frame[0] & 0x40; bool range = frame[0] & 0x20; bool invert = frame[0] & 0x10; + int frames_number_reversed = number_of_valid_frames - i - 1; + int event_timestamp = time_packet_received - frames_number_reversed * time_interval;
if (!valid) continue; @@ -1346,6 +1367,7 @@ static void wacom_intuos_pro2_bt_pen(str wacom->tool[0] = 0; wacom->id[0] = 0; wacom->serial[0] = 0; + wacom->hid_data.time_delayed = 0; return; }
@@ -1382,6 +1404,7 @@ static void wacom_intuos_pro2_bt_pen(str get_unaligned_le16(&frame[11])); } } + if (wacom->tool[0]) { input_report_abs(pen_input, ABS_PRESSURE, get_unaligned_le16(&frame[5])); if (wacom->features.type == INTUOSP2_BT || @@ -1405,6 +1428,9 @@ static void wacom_intuos_pro2_bt_pen(str
wacom->shared->stylus_in_proximity = prox;
+ /* add timestamp to unpack the frames */ + input_set_timestamp(pen_input, event_timestamp); + input_sync(pen_input); } } --- a/drivers/hid/wacom_wac.h +++ b/drivers/hid/wacom_wac.h @@ -324,6 +324,7 @@ struct hid_data { int ps_connected; bool pad_input_event_flag; unsigned short sequence_number; + int time_delayed; };
struct wacom_remote_data {
From: Konstantin Komarov almaz.alexandrovich@paragon-software.com
commit 6827d50b2c430c329af442b64c9176d174f56521 upstream.
Removed unused macro. Changed null pointer checking. Fixed inconsistent indenting.
Signed-off-by: Konstantin Komarov almaz.alexandrovich@paragon-software.com Cc: Rudi Heitbaum rudi@heitbaum.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/ntfs3/bitmap.c | 3 ++- fs/ntfs3/namei.c | 2 +- fs/ntfs3/ntfs.h | 3 --- 3 files changed, 3 insertions(+), 5 deletions(-)
--- a/fs/ntfs3/bitmap.c +++ b/fs/ntfs3/bitmap.c @@ -661,7 +661,8 @@ int wnd_init(struct wnd_bitmap *wnd, str if (!wnd->bits_last) wnd->bits_last = wbits;
- wnd->free_bits = kcalloc(wnd->nwnd, sizeof(u16), GFP_NOFS | __GFP_NOWARN); + wnd->free_bits = + kcalloc(wnd->nwnd, sizeof(u16), GFP_NOFS | __GFP_NOWARN); if (!wnd->free_bits) return -ENOMEM;
--- a/fs/ntfs3/namei.c +++ b/fs/ntfs3/namei.c @@ -91,7 +91,7 @@ static struct dentry *ntfs_lookup(struct * If the MFT record of ntfs inode is not a base record, inode->i_op can be NULL. * This causes null pointer dereference in d_splice_alias(). */ - if (!IS_ERR(inode) && inode->i_op == NULL) { + if (!IS_ERR_OR_NULL(inode) && !inode->i_op) { iput(inode); inode = ERR_PTR(-EINVAL); } --- a/fs/ntfs3/ntfs.h +++ b/fs/ntfs3/ntfs.h @@ -436,9 +436,6 @@ static inline u64 attr_svcn(const struct return attr->non_res ? le64_to_cpu(attr->nres.svcn) : 0; }
-/* The size of resident attribute by its resident size. */ -#define BYTES_PER_RESIDENT(b) (0x18 + (b)) - static_assert(sizeof(struct ATTRIB) == 0x48); static_assert(sizeof(((struct ATTRIB *)NULL)->res) == 0x08); static_assert(sizeof(((struct ATTRIB *)NULL)->nres) == 0x38);
From: Konrad Dybcio konrad.dybcio@linaro.org
commit 3eeca5e5f3100435b06a5b5d86daa3d135a8a4bd upstream.
The adreno_load_gpu() path is guarded by an error check on adreno_load_fw(). This function is responsible for loading Qualcomm-only-signed binaries (e.g. SQE and GMU FW for A6XX), but it does not take the vendor-signed ZAP blob into account.
By embedding the SQE (and GMU, if necessary) firmware into the initrd/kernel, we can trigger an unfortunate path that does not bail out early and proceeds with gpu->hw_init(). That will fail, as the ZAP loader path will not find the firmware, and control returns to adreno_load_gpu().
This error path involves pm_runtime_put_sync(), which then calls idle() instead of suspend(). This is suboptimal, as it means that we are not going through the clean shutdown sequence. With at least A619_holi, this leaves the GPU unable to wake up until it goes through at least one more start-fail-stop cycle. The pm_runtime_put_sync() in the error path does not actually guarantee a suspend, because runtime autosuspend was enabled earlier.
Fix that by using pm_runtime_put_sync_suspend to force a clean shutdown.
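A hedged sketch of the resulting error-path shape; the function and label names here are illustrative stand-ins, not the actual adreno_load_gpu() code, and the comment restates the commit's reasoning:

#include <linux/pm_runtime.h>

/* Stand-in for a hw_init that fails, e.g. because the ZAP firmware is missing. */
static int example_hw_init(struct device *dev)
{
	return -ENOENT;
}

static int example_load(struct device *dev)
{
	int ret;

	pm_runtime_get_sync(dev);

	ret = example_hw_init(dev);
	if (ret)
		goto err_put_rpm;

	return 0;

err_put_rpm:
	/*
	 * With runtime autosuspend enabled earlier, pm_runtime_put_sync()
	 * would only idle the device; pm_runtime_put_sync_suspend() forces
	 * the full suspend path so the GPU is shut down cleanly.
	 */
	pm_runtime_put_sync_suspend(dev);
	return ret;
}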
Test cases: 1. All firmware baked into kernel 2. error loading ZAP fw in initrd -> load from rootfs at DE start
Both succeed on A619_holi (SM6375) and A630 (SDM845).
Fixes: 0d997f95b70f ("drm/msm/adreno: fix runtime PM imbalance at gpu load") Signed-off-by: Konrad Dybcio konrad.dybcio@linaro.org Reviewed-by: Johan Hovold johan+linaro@kernel.org Patchwork: https://patchwork.freedesktop.org/patch/530001/ Link: https://lore.kernel.org/r/20230330231517.2747024-1-konrad.dybcio@linaro.org Signed-off-by: Rob Clark robdclark@chromium.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/msm/adreno/adreno_device.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/gpu/drm/msm/adreno/adreno_device.c +++ b/drivers/gpu/drm/msm/adreno/adreno_device.c @@ -465,7 +465,7 @@ struct msm_gpu *adreno_load_gpu(struct d return gpu;
err_put_rpm: - pm_runtime_put_sync(&pdev->dev); + pm_runtime_put_sync_suspend(&pdev->dev); err_disable_rpm: pm_runtime_disable(&pdev->dev);
From: Jaegeuk Kim jaegeuk@kernel.org
[ Upstream commit 12607c1ba7637e750402f555b6695c50fce77a2b ]
Let's describe it as the read extent cache.
Reviewed-by: Chao Yu chao@kernel.org Signed-off-by: Jaegeuk Kim jaegeuk@kernel.org Stable-dep-of: 043d2d00b443 ("f2fs: factor out victim_entry usage from general rb_tree use") Signed-off-by: Sasha Levin sashal@kernel.org --- fs/f2fs/extent_cache.c | 4 ++-- fs/f2fs/f2fs.h | 10 +++++----- fs/f2fs/inode.c | 2 +- fs/f2fs/node.c | 2 +- fs/f2fs/node.h | 2 +- fs/f2fs/segment.c | 4 ++-- fs/f2fs/super.c | 12 ++++++------ 7 files changed, 18 insertions(+), 18 deletions(-)
diff --git a/fs/f2fs/extent_cache.c b/fs/f2fs/extent_cache.c index 6c9e6f78a3e37..84078eda19ff1 100644 --- a/fs/f2fs/extent_cache.c +++ b/fs/f2fs/extent_cache.c @@ -383,7 +383,7 @@ static void __f2fs_init_extent_tree(struct inode *inode, struct page *ipage) if (!i_ext || !i_ext->len) return;
- get_extent_info(&ei, i_ext); + get_read_extent_info(&ei, i_ext);
write_lock(&et->lock); if (atomic_read(&et->node_cnt)) @@ -711,7 +711,7 @@ unsigned int f2fs_shrink_extent_tree(struct f2fs_sb_info *sbi, int nr_shrink) unsigned int node_cnt = 0, tree_cnt = 0; int remained;
- if (!test_opt(sbi, EXTENT_CACHE)) + if (!test_opt(sbi, READ_EXTENT_CACHE)) return 0;
if (!atomic_read(&sbi->total_zombie_tree)) diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h index 4b44ca1decdd3..f2d1be26d0d05 100644 --- a/fs/f2fs/f2fs.h +++ b/fs/f2fs/f2fs.h @@ -91,7 +91,7 @@ extern const char *f2fs_fault_name[FAULT_MAX]; #define F2FS_MOUNT_FLUSH_MERGE 0x00000400 #define F2FS_MOUNT_NOBARRIER 0x00000800 #define F2FS_MOUNT_FASTBOOT 0x00001000 -#define F2FS_MOUNT_EXTENT_CACHE 0x00002000 +#define F2FS_MOUNT_READ_EXTENT_CACHE 0x00002000 #define F2FS_MOUNT_DATA_FLUSH 0x00008000 #define F2FS_MOUNT_FAULT_INJECTION 0x00010000 #define F2FS_MOUNT_USRQUOTA 0x00080000 @@ -597,7 +597,7 @@ enum { #define F2FS_MIN_EXTENT_LEN 64 /* minimum extent length */
/* number of extent info in extent cache we try to shrink */ -#define EXTENT_CACHE_SHRINK_NUMBER 128 +#define READ_EXTENT_CACHE_SHRINK_NUMBER 128
#define RECOVERY_MAX_RA_BLOCKS BIO_MAX_VECS #define RECOVERY_MIN_RA_BLOCKS 1 @@ -826,7 +826,7 @@ struct f2fs_inode_info { loff_t original_i_size; /* original i_size before atomic write */ };
-static inline void get_extent_info(struct extent_info *ext, +static inline void get_read_extent_info(struct extent_info *ext, struct f2fs_extent *i_ext) { ext->fofs = le32_to_cpu(i_ext->fofs); @@ -834,7 +834,7 @@ static inline void get_extent_info(struct extent_info *ext, ext->len = le32_to_cpu(i_ext->len); }
-static inline void set_raw_extent(struct extent_info *ext, +static inline void set_raw_read_extent(struct extent_info *ext, struct f2fs_extent *i_ext) { i_ext->fofs = cpu_to_le32(ext->fofs); @@ -4404,7 +4404,7 @@ static inline bool f2fs_may_extent_tree(struct inode *inode) { struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
- if (!test_opt(sbi, EXTENT_CACHE) || + if (!test_opt(sbi, READ_EXTENT_CACHE) || is_inode_flag_set(inode, FI_NO_EXTENT) || (is_inode_flag_set(inode, FI_COMPRESSED_FILE) && !f2fs_sb_has_readonly(sbi))) diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c index 229ddc2f7b079..896b0d5e4ee3f 100644 --- a/fs/f2fs/inode.c +++ b/fs/f2fs/inode.c @@ -629,7 +629,7 @@ void f2fs_update_inode(struct inode *inode, struct page *node_page)
if (et) { read_lock(&et->lock); - set_raw_extent(&et->largest, &ri->i_ext); + set_raw_read_extent(&et->largest, &ri->i_ext); read_unlock(&et->lock); } else { memset(&ri->i_ext, 0, sizeof(ri->i_ext)); diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c index b9ee5a1176a07..84b147966080e 100644 --- a/fs/f2fs/node.c +++ b/fs/f2fs/node.c @@ -85,7 +85,7 @@ bool f2fs_available_free_memory(struct f2fs_sb_info *sbi, int type) sizeof(struct ino_entry); mem_size >>= PAGE_SHIFT; res = mem_size < ((avail_ram * nm_i->ram_thresh / 100) >> 1); - } else if (type == EXTENT_CACHE) { + } else if (type == READ_EXTENT_CACHE) { mem_size = (atomic_read(&sbi->total_ext_tree) * sizeof(struct extent_tree) + atomic_read(&sbi->total_ext_node) * diff --git a/fs/f2fs/node.h b/fs/f2fs/node.h index 3c09cae058b0a..0aa48704c77a0 100644 --- a/fs/f2fs/node.h +++ b/fs/f2fs/node.h @@ -146,7 +146,7 @@ enum mem_type { NAT_ENTRIES, /* indicates the cached nat entry */ DIRTY_DENTS, /* indicates dirty dentry pages */ INO_ENTRIES, /* indicates inode entries */ - EXTENT_CACHE, /* indicates extent cache */ + READ_EXTENT_CACHE, /* indicates read extent cache */ DISCARD_CACHE, /* indicates memory of cached discard cmds */ COMPRESS_PAGE, /* indicates memory of cached compressed pages */ BASE_CHECK, /* check kernel status */ diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c index 866335db78793..426559092930d 100644 --- a/fs/f2fs/segment.c +++ b/fs/f2fs/segment.c @@ -452,8 +452,8 @@ void f2fs_balance_fs_bg(struct f2fs_sb_info *sbi, bool from_bg) return;
/* try to shrink extent cache when there is no enough memory */ - if (!f2fs_available_free_memory(sbi, EXTENT_CACHE)) - f2fs_shrink_extent_tree(sbi, EXTENT_CACHE_SHRINK_NUMBER); + if (!f2fs_available_free_memory(sbi, READ_EXTENT_CACHE)) + f2fs_shrink_extent_tree(sbi, READ_EXTENT_CACHE_SHRINK_NUMBER);
/* check the # of cached NAT entries */ if (!f2fs_available_free_memory(sbi, NAT_ENTRIES)) diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c index 5af05411818a5..c46533d65372c 100644 --- a/fs/f2fs/super.c +++ b/fs/f2fs/super.c @@ -810,10 +810,10 @@ static int parse_options(struct super_block *sb, char *options, bool is_remount) set_opt(sbi, FASTBOOT); break; case Opt_extent_cache: - set_opt(sbi, EXTENT_CACHE); + set_opt(sbi, READ_EXTENT_CACHE); break; case Opt_noextent_cache: - clear_opt(sbi, EXTENT_CACHE); + clear_opt(sbi, READ_EXTENT_CACHE); break; case Opt_noinline_data: clear_opt(sbi, INLINE_DATA); @@ -1939,7 +1939,7 @@ static int f2fs_show_options(struct seq_file *seq, struct dentry *root) seq_puts(seq, ",nobarrier"); if (test_opt(sbi, FASTBOOT)) seq_puts(seq, ",fastboot"); - if (test_opt(sbi, EXTENT_CACHE)) + if (test_opt(sbi, READ_EXTENT_CACHE)) seq_puts(seq, ",extent_cache"); else seq_puts(seq, ",noextent_cache"); @@ -2057,7 +2057,7 @@ static void default_options(struct f2fs_sb_info *sbi) set_opt(sbi, INLINE_XATTR); set_opt(sbi, INLINE_DATA); set_opt(sbi, INLINE_DENTRY); - set_opt(sbi, EXTENT_CACHE); + set_opt(sbi, READ_EXTENT_CACHE); set_opt(sbi, NOHEAP); clear_opt(sbi, DISABLE_CHECKPOINT); set_opt(sbi, MERGE_CHECKPOINT); @@ -2198,7 +2198,7 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data) bool need_restart_ckpt = false, need_stop_ckpt = false; bool need_restart_flush = false, need_stop_flush = false; bool need_restart_discard = false, need_stop_discard = false; - bool no_extent_cache = !test_opt(sbi, EXTENT_CACHE); + bool no_read_extent_cache = !test_opt(sbi, READ_EXTENT_CACHE); bool enable_checkpoint = !test_opt(sbi, DISABLE_CHECKPOINT); bool no_io_align = !F2FS_IO_ALIGNED(sbi); bool no_atgc = !test_opt(sbi, ATGC); @@ -2288,7 +2288,7 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data) }
/* disallow enable/disable extent_cache dynamically */ - if (no_extent_cache == !!test_opt(sbi, EXTENT_CACHE)) { + if (no_read_extent_cache == !!test_opt(sbi, READ_EXTENT_CACHE)) { err = -EINVAL; f2fs_warn(sbi, "switch extent_cache option is not allowed"); goto restore_opts;
From: Jaegeuk Kim jaegeuk@kernel.org
[ Upstream commit 3bac20a8f011b8ed4012b43f4f33010432b3c647 ]
No functional change.
Reviewed-by: Chao Yu chao@kernel.org Signed-off-by: Jaegeuk Kim jaegeuk@kernel.org Stable-dep-of: 043d2d00b443 ("f2fs: factor out victim_entry usage from general rb_tree use") Signed-off-by: Sasha Levin sashal@kernel.org --- fs/f2fs/extent_cache.c | 88 +++++++++++++++++++++++++++++++++++++----- fs/f2fs/f2fs.h | 69 +-------------------------------- 2 files changed, 81 insertions(+), 76 deletions(-)
diff --git a/fs/f2fs/extent_cache.c b/fs/f2fs/extent_cache.c index 84078eda19ff1..a626ce0b70a50 100644 --- a/fs/f2fs/extent_cache.c +++ b/fs/f2fs/extent_cache.c @@ -15,6 +15,77 @@ #include "node.h" #include <trace/events/f2fs.h>
+static void __set_extent_info(struct extent_info *ei, + unsigned int fofs, unsigned int len, + block_t blk, bool keep_clen) +{ + ei->fofs = fofs; + ei->blk = blk; + ei->len = len; + + if (keep_clen) + return; + +#ifdef CONFIG_F2FS_FS_COMPRESSION + ei->c_len = 0; +#endif +} + +static bool f2fs_may_extent_tree(struct inode *inode) +{ + struct f2fs_sb_info *sbi = F2FS_I_SB(inode); + + /* + * for recovered files during mount do not create extents + * if shrinker is not registered. + */ + if (list_empty(&sbi->s_list)) + return false; + + if (!test_opt(sbi, READ_EXTENT_CACHE) || + is_inode_flag_set(inode, FI_NO_EXTENT) || + (is_inode_flag_set(inode, FI_COMPRESSED_FILE) && + !f2fs_sb_has_readonly(sbi))) + return false; + + return S_ISREG(inode->i_mode); +} + +static void __try_update_largest_extent(struct extent_tree *et, + struct extent_node *en) +{ + if (en->ei.len <= et->largest.len) + return; + + et->largest = en->ei; + et->largest_updated = true; +} + +static bool __is_extent_mergeable(struct extent_info *back, + struct extent_info *front) +{ +#ifdef CONFIG_F2FS_FS_COMPRESSION + if (back->c_len && back->len != back->c_len) + return false; + if (front->c_len && front->len != front->c_len) + return false; +#endif + return (back->fofs + back->len == front->fofs && + back->blk + back->len == front->blk); +} + +static bool __is_back_mergeable(struct extent_info *cur, + struct extent_info *back) +{ + return __is_extent_mergeable(back, cur); +} + +static bool __is_front_mergeable(struct extent_info *cur, + struct extent_info *front) +{ + return __is_extent_mergeable(cur, front); +} + static struct rb_entry *__lookup_rb_tree_fast(struct rb_entry *cached_re, unsigned int ofs) { @@ -592,16 +663,16 @@ static void f2fs_update_extent_tree_range(struct inode *inode,
if (end < org_end && org_end - end >= F2FS_MIN_EXTENT_LEN) { if (parts) { - set_extent_info(&ei, end, - end - dei.fofs + dei.blk, - org_end - end); + __set_extent_info(&ei, + end, org_end - end, + end - dei.fofs + dei.blk, false); en1 = __insert_extent_tree(sbi, et, &ei, NULL, NULL, true); next_en = en1; } else { - en->ei.fofs = end; - en->ei.blk += end - dei.fofs; - en->ei.len -= end - dei.fofs; + __set_extent_info(&en->ei, + end, en->ei.len - (end - dei.fofs), + en->ei.blk + (end - dei.fofs), true); next_en = en; } parts++; @@ -633,8 +704,7 @@ static void f2fs_update_extent_tree_range(struct inode *inode,
/* 3. update extent in extent cache */ if (blkaddr) { - - set_extent_info(&ei, fofs, blkaddr, len); + __set_extent_info(&ei, fofs, len, blkaddr, false); if (!__try_merge_extent_node(sbi, et, &ei, prev_en, next_en)) __insert_extent_tree(sbi, et, &ei, insert_p, insert_parent, leftmost); @@ -693,7 +763,7 @@ void f2fs_update_extent_tree_range_compressed(struct inode *inode, if (en) goto unlock_out;
- set_extent_info(&ei, fofs, blkaddr, llen); + __set_extent_info(&ei, fofs, llen, blkaddr, true); ei.c_len = c_len;
if (!__try_merge_extent_node(sbi, et, &ei, prev_en, next_en)) diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h index f2d1be26d0d05..076bdf27df547 100644 --- a/fs/f2fs/f2fs.h +++ b/fs/f2fs/f2fs.h @@ -618,7 +618,7 @@ struct rb_entry { struct extent_info { unsigned int fofs; /* start offset in a file */ unsigned int len; /* length of the extent */ - u32 blk; /* start block address of the extent */ + block_t blk; /* start block address of the extent */ #ifdef CONFIG_F2FS_FS_COMPRESSION unsigned int c_len; /* physical extent length of compressed blocks */ #endif @@ -842,17 +842,6 @@ static inline void set_raw_read_extent(struct extent_info *ext, i_ext->len = cpu_to_le32(ext->len); }
-static inline void set_extent_info(struct extent_info *ei, unsigned int fofs, - u32 blk, unsigned int len) -{ - ei->fofs = fofs; - ei->blk = blk; - ei->len = len; -#ifdef CONFIG_F2FS_FS_COMPRESSION - ei->c_len = 0; -#endif -} - static inline bool __is_discard_mergeable(struct discard_info *back, struct discard_info *front, unsigned int max_len) { @@ -872,41 +861,6 @@ static inline bool __is_discard_front_mergeable(struct discard_info *cur, return __is_discard_mergeable(cur, front, max_len); }
-static inline bool __is_extent_mergeable(struct extent_info *back, - struct extent_info *front) -{ -#ifdef CONFIG_F2FS_FS_COMPRESSION - if (back->c_len && back->len != back->c_len) - return false; - if (front->c_len && front->len != front->c_len) - return false; -#endif - return (back->fofs + back->len == front->fofs && - back->blk + back->len == front->blk); -} - -static inline bool __is_back_mergeable(struct extent_info *cur, - struct extent_info *back) -{ - return __is_extent_mergeable(back, cur); -} - -static inline bool __is_front_mergeable(struct extent_info *cur, - struct extent_info *front) -{ - return __is_extent_mergeable(cur, front); -} - -extern void f2fs_mark_inode_dirty_sync(struct inode *inode, bool sync); -static inline void __try_update_largest_extent(struct extent_tree *et, - struct extent_node *en) -{ - if (en->ei.len > et->largest.len) { - et->largest = en->ei; - et->largest_updated = true; - } -} - /* * For free nid management */ @@ -2578,6 +2532,7 @@ static inline block_t __start_sum_addr(struct f2fs_sb_info *sbi) return le32_to_cpu(F2FS_CKPT(sbi)->cp_pack_start_sum); }
+extern void f2fs_mark_inode_dirty_sync(struct inode *inode, bool sync); static inline int inc_valid_node_count(struct f2fs_sb_info *sbi, struct inode *inode, bool is_inode) { @@ -4400,26 +4355,6 @@ F2FS_FEATURE_FUNCS(casefold, CASEFOLD); F2FS_FEATURE_FUNCS(compression, COMPRESSION); F2FS_FEATURE_FUNCS(readonly, RO);
-static inline bool f2fs_may_extent_tree(struct inode *inode) -{ - struct f2fs_sb_info *sbi = F2FS_I_SB(inode); - - if (!test_opt(sbi, READ_EXTENT_CACHE) || - is_inode_flag_set(inode, FI_NO_EXTENT) || - (is_inode_flag_set(inode, FI_COMPRESSED_FILE) && - !f2fs_sb_has_readonly(sbi))) - return false; - - /* - * for recovered files during mount do not create extents - * if shrinker is not registered. - */ - if (list_empty(&sbi->s_list)) - return false; - - return S_ISREG(inode->i_mode); -} - #ifdef CONFIG_BLK_DEV_ZONED static inline bool f2fs_blkz_is_seq(struct f2fs_sb_info *sbi, int devi, block_t blkaddr)
From: Jaegeuk Kim jaegeuk@kernel.org
[ Upstream commit 749d543c0d451fff31e8f7a3e0a031ffcbf1ebb1 ]
Fold __init_extent_tree() into its caller.
Reviewed-by: Chao Yu chao@kernel.org Signed-off-by: Jaegeuk Kim jaegeuk@kernel.org Stable-dep-of: 043d2d00b443 ("f2fs: factor out victim_entry usage from general rb_tree use") Signed-off-by: Sasha Levin sashal@kernel.org --- fs/f2fs/extent_cache.c | 21 +++++---------------- 1 file changed, 5 insertions(+), 16 deletions(-)
diff --git a/fs/f2fs/extent_cache.c b/fs/f2fs/extent_cache.c index a626ce0b70a50..d3c3b1b627c63 100644 --- a/fs/f2fs/extent_cache.c +++ b/fs/f2fs/extent_cache.c @@ -386,21 +386,6 @@ static struct extent_tree *__grab_extent_tree(struct inode *inode) return et; }
-static struct extent_node *__init_extent_tree(struct f2fs_sb_info *sbi, - struct extent_tree *et, struct extent_info *ei) -{ - struct rb_node **p = &et->root.rb_root.rb_node; - struct extent_node *en; - - en = __attach_extent_node(sbi, et, ei, NULL, p, true); - if (!en) - return NULL; - - et->largest = en->ei; - et->cached_en = en; - return en; -} - static unsigned int __free_extent_tree(struct f2fs_sb_info *sbi, struct extent_tree *et) { @@ -460,8 +445,12 @@ static void __f2fs_init_extent_tree(struct inode *inode, struct page *ipage) if (atomic_read(&et->node_cnt)) goto out;
- en = __init_extent_tree(sbi, et, &ei); + en = __attach_extent_node(sbi, et, &ei, NULL, + &et->root.rb_root.rb_node, true); if (en) { + et->largest = en->ei; + et->cached_en = en; + spin_lock(&sbi->extent_lock); list_add_tail(&en->list, &sbi->extent_list); spin_unlock(&sbi->extent_lock);
From: Jaegeuk Kim jaegeuk@kernel.org
[ Upstream commit e7547daccd6a37522f0af74ec4b5a3036f3dd328 ]
This patch prepares the extent_cache code so that additional cache types can be added on top of it.
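A simplified, standalone sketch of the resulting shape (hypothetical field names; only the EX_READ/NR_EXTENT_CACHES enum and the idea of per-type state indexed by it mirror the patch):

/* Simplified sketch: one cache-state struct per extent cache type. */
#include <stdio.h>

enum extent_type {
	EX_READ,		/* the only type after this patch */
	NR_EXTENT_CACHES,	/* more types can be appended later */
};

struct extent_tree_info {
	unsigned long total_ext_tree;	/* stand-ins for the per-type counters */
	unsigned long total_ext_node;
};

struct sb_info {
	/* per-type state replaces the former single set of fields */
	struct extent_tree_info extent_tree[NR_EXTENT_CACHES];
};

int main(void)
{
	struct sb_info sbi = { 0 };
	enum extent_type type = EX_READ;

	/* helpers now take the type and index into the per-type array */
	struct extent_tree_info *eti = &sbi.extent_tree[type];

	eti->total_ext_tree++;
	printf("read extent trees: %lu\n", eti->total_ext_tree);
	return 0;
}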
Reviewed-by: Chao Yu chao@kernel.org Signed-off-by: Jaegeuk Kim jaegeuk@kernel.org Stable-dep-of: 043d2d00b443 ("f2fs: factor out victim_entry usage from general rb_tree use") Signed-off-by: Sasha Levin sashal@kernel.org --- fs/f2fs/data.c | 20 +- fs/f2fs/debug.c | 65 +++-- fs/f2fs/extent_cache.c | 463 +++++++++++++++++++++--------------- fs/f2fs/f2fs.h | 119 +++++---- fs/f2fs/file.c | 8 +- fs/f2fs/gc.c | 4 +- fs/f2fs/inode.c | 6 +- fs/f2fs/node.c | 8 +- fs/f2fs/segment.c | 3 +- fs/f2fs/shrinker.c | 19 +- include/trace/events/f2fs.h | 62 +++-- 11 files changed, 470 insertions(+), 307 deletions(-)
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c index 770a606eb3f6a..de6b056f090b3 100644 --- a/fs/f2fs/data.c +++ b/fs/f2fs/data.c @@ -1134,7 +1134,7 @@ void f2fs_update_data_blkaddr(struct dnode_of_data *dn, block_t blkaddr) { dn->data_blkaddr = blkaddr; f2fs_set_data_blkaddr(dn); - f2fs_update_extent_cache(dn); + f2fs_update_read_extent_cache(dn); }
/* dn->ofs_in_node will be returned with up-to-date last block pointer */ @@ -1203,7 +1203,7 @@ int f2fs_get_block(struct dnode_of_data *dn, pgoff_t index) struct extent_info ei = {0, }; struct inode *inode = dn->inode;
- if (f2fs_lookup_extent_cache(inode, index, &ei)) { + if (f2fs_lookup_read_extent_cache(inode, index, &ei)) { dn->data_blkaddr = ei.blk + index - ei.fofs; return 0; } @@ -1224,7 +1224,7 @@ struct page *f2fs_get_read_data_page(struct inode *inode, pgoff_t index, if (!page) return ERR_PTR(-ENOMEM);
- if (f2fs_lookup_extent_cache(inode, index, &ei)) { + if (f2fs_lookup_read_extent_cache(inode, index, &ei)) { dn.data_blkaddr = ei.blk + index - ei.fofs; if (!f2fs_is_valid_blkaddr(F2FS_I_SB(inode), dn.data_blkaddr, DATA_GENERIC_ENHANCE_READ)) { @@ -1486,7 +1486,7 @@ int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map, pgofs = (pgoff_t)map->m_lblk; end = pgofs + maxblocks;
- if (!create && f2fs_lookup_extent_cache(inode, pgofs, &ei)) { + if (!create && f2fs_lookup_read_extent_cache(inode, pgofs, &ei)) { if (f2fs_lfs_mode(sbi) && flag == F2FS_GET_BLOCK_DIO && map->m_may_create) goto next_dnode; @@ -1696,7 +1696,7 @@ int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map, if (map->m_flags & F2FS_MAP_MAPPED) { unsigned int ofs = start_pgofs - map->m_lblk;
- f2fs_update_extent_cache_range(&dn, + f2fs_update_read_extent_cache_range(&dn, start_pgofs, map->m_pblk + ofs, map->m_len - ofs); } @@ -1741,7 +1741,7 @@ int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map, if (map->m_flags & F2FS_MAP_MAPPED) { unsigned int ofs = start_pgofs - map->m_lblk;
- f2fs_update_extent_cache_range(&dn, + f2fs_update_read_extent_cache_range(&dn, start_pgofs, map->m_pblk + ofs, map->m_len - ofs); } @@ -2202,7 +2202,7 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret, if (f2fs_cluster_is_empty(cc)) goto out;
- if (f2fs_lookup_extent_cache(inode, start_idx, &ei)) + if (f2fs_lookup_read_extent_cache(inode, start_idx, &ei)) from_dnode = false;
if (!from_dnode) @@ -2636,7 +2636,7 @@ int f2fs_do_write_data_page(struct f2fs_io_info *fio) set_new_dnode(&dn, inode, NULL, NULL, 0);
if (need_inplace_update(fio) && - f2fs_lookup_extent_cache(inode, page->index, &ei)) { + f2fs_lookup_read_extent_cache(inode, page->index, &ei)) { fio->old_blkaddr = ei.blk + page->index - ei.fofs;
if (!f2fs_is_valid_blkaddr(fio->sbi, fio->old_blkaddr, @@ -3361,7 +3361,7 @@ static int prepare_write_begin(struct f2fs_sb_info *sbi, } else if (locked) { err = f2fs_get_block(&dn, index); } else { - if (f2fs_lookup_extent_cache(inode, index, &ei)) { + if (f2fs_lookup_read_extent_cache(inode, index, &ei)) { dn.data_blkaddr = ei.blk + index - ei.fofs; } else { /* hole case */ @@ -3402,7 +3402,7 @@ static int __find_data_block(struct inode *inode, pgoff_t index,
set_new_dnode(&dn, inode, ipage, ipage, 0);
- if (f2fs_lookup_extent_cache(inode, index, &ei)) { + if (f2fs_lookup_read_extent_cache(inode, index, &ei)) { dn.data_blkaddr = ei.blk + index - ei.fofs; } else { /* hole case */ diff --git a/fs/f2fs/debug.c b/fs/f2fs/debug.c index a216dcdf69418..a9baa121d829f 100644 --- a/fs/f2fs/debug.c +++ b/fs/f2fs/debug.c @@ -72,15 +72,23 @@ static void update_general_status(struct f2fs_sb_info *sbi) si->main_area_zones = si->main_area_sections / le32_to_cpu(raw_super->secs_per_zone);
- /* validation check of the segment numbers */ + /* general extent cache stats */ + for (i = 0; i < NR_EXTENT_CACHES; i++) { + struct extent_tree_info *eti = &sbi->extent_tree[i]; + + si->hit_cached[i] = atomic64_read(&sbi->read_hit_cached[i]); + si->hit_rbtree[i] = atomic64_read(&sbi->read_hit_rbtree[i]); + si->total_ext[i] = atomic64_read(&sbi->total_hit_ext[i]); + si->hit_total[i] = si->hit_cached[i] + si->hit_rbtree[i]; + si->ext_tree[i] = atomic_read(&eti->total_ext_tree); + si->zombie_tree[i] = atomic_read(&eti->total_zombie_tree); + si->ext_node[i] = atomic_read(&eti->total_ext_node); + } + /* read extent_cache only */ si->hit_largest = atomic64_read(&sbi->read_hit_largest); - si->hit_cached = atomic64_read(&sbi->read_hit_cached); - si->hit_rbtree = atomic64_read(&sbi->read_hit_rbtree); - si->hit_total = si->hit_largest + si->hit_cached + si->hit_rbtree; - si->total_ext = atomic64_read(&sbi->total_hit_ext); - si->ext_tree = atomic_read(&sbi->total_ext_tree); - si->zombie_tree = atomic_read(&sbi->total_zombie_tree); - si->ext_node = atomic_read(&sbi->total_ext_node); + si->hit_total[EX_READ] += si->hit_largest; + + /* validation check of the segment numbers */ si->ndirty_node = get_pages(sbi, F2FS_DIRTY_NODES); si->ndirty_dent = get_pages(sbi, F2FS_DIRTY_DENTS); si->ndirty_meta = get_pages(sbi, F2FS_DIRTY_META); @@ -294,10 +302,16 @@ static void update_mem_info(struct f2fs_sb_info *sbi) sizeof(struct nat_entry_set); for (i = 0; i < MAX_INO_ENTRY; i++) si->cache_mem += sbi->im[i].ino_num * sizeof(struct ino_entry); - si->cache_mem += atomic_read(&sbi->total_ext_tree) * + + for (i = 0; i < NR_EXTENT_CACHES; i++) { + struct extent_tree_info *eti = &sbi->extent_tree[i]; + + si->ext_mem[i] = atomic_read(&eti->total_ext_tree) * sizeof(struct extent_tree); - si->cache_mem += atomic_read(&sbi->total_ext_node) * + si->ext_mem[i] += atomic_read(&eti->total_ext_node) * sizeof(struct extent_node); + si->cache_mem += si->ext_mem[i]; + }
si->page_mem = 0; if (sbi->node_inode) { @@ -490,16 +504,18 @@ static int stat_show(struct seq_file *s, void *v) si->bg_node_blks); seq_printf(s, "BG skip : IO: %u, Other: %u\n", si->io_skip_bggc, si->other_skip_bggc); - seq_puts(s, "\nExtent Cache:\n"); + seq_puts(s, "\nExtent Cache (Read):\n"); seq_printf(s, " - Hit Count: L1-1:%llu L1-2:%llu L2:%llu\n", - si->hit_largest, si->hit_cached, - si->hit_rbtree); + si->hit_largest, si->hit_cached[EX_READ], + si->hit_rbtree[EX_READ]); seq_printf(s, " - Hit Ratio: %llu%% (%llu / %llu)\n", - !si->total_ext ? 0 : - div64_u64(si->hit_total * 100, si->total_ext), - si->hit_total, si->total_ext); + !si->total_ext[EX_READ] ? 0 : + div64_u64(si->hit_total[EX_READ] * 100, + si->total_ext[EX_READ]), + si->hit_total[EX_READ], si->total_ext[EX_READ]); seq_printf(s, " - Inner Struct Count: tree: %d(%d), node: %d\n", - si->ext_tree, si->zombie_tree, si->ext_node); + si->ext_tree[EX_READ], si->zombie_tree[EX_READ], + si->ext_node[EX_READ]); seq_puts(s, "\nBalancing F2FS Async:\n"); seq_printf(s, " - DIO (R: %4d, W: %4d)\n", si->nr_dio_read, si->nr_dio_write); @@ -566,8 +582,10 @@ static int stat_show(struct seq_file *s, void *v) (si->base_mem + si->cache_mem + si->page_mem) >> 10); seq_printf(s, " - static: %llu KB\n", si->base_mem >> 10); - seq_printf(s, " - cached: %llu KB\n", + seq_printf(s, " - cached all: %llu KB\n", si->cache_mem >> 10); + seq_printf(s, " - read extent cache: %llu KB\n", + si->ext_mem[EX_READ] >> 10); seq_printf(s, " - paged : %llu KB\n", si->page_mem >> 10); } @@ -600,10 +618,15 @@ int f2fs_build_stats(struct f2fs_sb_info *sbi) si->sbi = sbi; sbi->stat_info = si;
- atomic64_set(&sbi->total_hit_ext, 0); - atomic64_set(&sbi->read_hit_rbtree, 0); + /* general extent cache stats */ + for (i = 0; i < NR_EXTENT_CACHES; i++) { + atomic64_set(&sbi->total_hit_ext[i], 0); + atomic64_set(&sbi->read_hit_rbtree[i], 0); + atomic64_set(&sbi->read_hit_cached[i], 0); + } + + /* read extent_cache only */ atomic64_set(&sbi->read_hit_largest, 0); - atomic64_set(&sbi->read_hit_cached, 0);
atomic_set(&sbi->inline_xattr, 0); atomic_set(&sbi->inline_inode, 0); diff --git a/fs/f2fs/extent_cache.c b/fs/f2fs/extent_cache.c index d3c3b1b627c63..4217076df1024 100644 --- a/fs/f2fs/extent_cache.c +++ b/fs/f2fs/extent_cache.c @@ -17,21 +17,37 @@
static void __set_extent_info(struct extent_info *ei, unsigned int fofs, unsigned int len, - block_t blk, bool keep_clen) + block_t blk, bool keep_clen, + enum extent_type type) { ei->fofs = fofs; - ei->blk = blk; ei->len = len;
- if (keep_clen) - return; - + if (type == EX_READ) { + ei->blk = blk; + if (keep_clen) + return; #ifdef CONFIG_F2FS_FS_COMPRESSION - ei->c_len = 0; + ei->c_len = 0; #endif + } +} + +static bool __may_read_extent_tree(struct inode *inode) +{ + struct f2fs_sb_info *sbi = F2FS_I_SB(inode); + + if (!test_opt(sbi, READ_EXTENT_CACHE)) + return false; + if (is_inode_flag_set(inode, FI_NO_EXTENT)) + return false; + if (is_inode_flag_set(inode, FI_COMPRESSED_FILE) && + !f2fs_sb_has_readonly(sbi)) + return false; + return S_ISREG(inode->i_mode); }
-static bool f2fs_may_extent_tree(struct inode *inode) +static bool __may_extent_tree(struct inode *inode, enum extent_type type) { struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
@@ -42,18 +58,16 @@ static bool f2fs_may_extent_tree(struct inode *inode) if (list_empty(&sbi->s_list)) return false;
- if (!test_opt(sbi, READ_EXTENT_CACHE) || - is_inode_flag_set(inode, FI_NO_EXTENT) || - (is_inode_flag_set(inode, FI_COMPRESSED_FILE) && - !f2fs_sb_has_readonly(sbi))) - return false; - - return S_ISREG(inode->i_mode); + if (type == EX_READ) + return __may_read_extent_tree(inode); + return false; }
static void __try_update_largest_extent(struct extent_tree *et, struct extent_node *en) { + if (et->type != EX_READ) + return; if (en->ei.len <= et->largest.len) return;
@@ -62,28 +76,31 @@ static void __try_update_largest_extent(struct extent_tree *et, }
static bool __is_extent_mergeable(struct extent_info *back, - struct extent_info *front) + struct extent_info *front, enum extent_type type) { + if (type == EX_READ) { #ifdef CONFIG_F2FS_FS_COMPRESSION - if (back->c_len && back->len != back->c_len) - return false; - if (front->c_len && front->len != front->c_len) - return false; + if (back->c_len && back->len != back->c_len) + return false; + if (front->c_len && front->len != front->c_len) + return false; #endif - return (back->fofs + back->len == front->fofs && - back->blk + back->len == front->blk); + return (back->fofs + back->len == front->fofs && + back->blk + back->len == front->blk); + } + return false; }
static bool __is_back_mergeable(struct extent_info *cur, - struct extent_info *back) + struct extent_info *back, enum extent_type type) { - return __is_extent_mergeable(back, cur); + return __is_extent_mergeable(back, cur, type); }
static bool __is_front_mergeable(struct extent_info *cur, - struct extent_info *front) + struct extent_info *front, enum extent_type type) { - return __is_extent_mergeable(cur, front); + return __is_extent_mergeable(cur, front, type); }
static struct rb_entry *__lookup_rb_tree_fast(struct rb_entry *cached_re, @@ -308,6 +325,7 @@ static struct extent_node *__attach_extent_node(struct f2fs_sb_info *sbi, struct rb_node *parent, struct rb_node **p, bool leftmost) { + struct extent_tree_info *eti = &sbi->extent_tree[et->type]; struct extent_node *en;
en = f2fs_kmem_cache_alloc(extent_node_slab, GFP_ATOMIC, false, sbi); @@ -321,16 +339,18 @@ static struct extent_node *__attach_extent_node(struct f2fs_sb_info *sbi, rb_link_node(&en->rb_node, parent, p); rb_insert_color_cached(&en->rb_node, &et->root, leftmost); atomic_inc(&et->node_cnt); - atomic_inc(&sbi->total_ext_node); + atomic_inc(&eti->total_ext_node); return en; }
static void __detach_extent_node(struct f2fs_sb_info *sbi, struct extent_tree *et, struct extent_node *en) { + struct extent_tree_info *eti = &sbi->extent_tree[et->type]; + rb_erase_cached(&en->rb_node, &et->root); atomic_dec(&et->node_cnt); - atomic_dec(&sbi->total_ext_node); + atomic_dec(&eti->total_ext_node);
if (et->cached_en == en) et->cached_en = NULL; @@ -346,42 +366,47 @@ static void __detach_extent_node(struct f2fs_sb_info *sbi, static void __release_extent_node(struct f2fs_sb_info *sbi, struct extent_tree *et, struct extent_node *en) { - spin_lock(&sbi->extent_lock); + struct extent_tree_info *eti = &sbi->extent_tree[et->type]; + + spin_lock(&eti->extent_lock); f2fs_bug_on(sbi, list_empty(&en->list)); list_del_init(&en->list); - spin_unlock(&sbi->extent_lock); + spin_unlock(&eti->extent_lock);
__detach_extent_node(sbi, et, en); }
-static struct extent_tree *__grab_extent_tree(struct inode *inode) +static struct extent_tree *__grab_extent_tree(struct inode *inode, + enum extent_type type) { struct f2fs_sb_info *sbi = F2FS_I_SB(inode); + struct extent_tree_info *eti = &sbi->extent_tree[type]; struct extent_tree *et; nid_t ino = inode->i_ino;
- mutex_lock(&sbi->extent_tree_lock); - et = radix_tree_lookup(&sbi->extent_tree_root, ino); + mutex_lock(&eti->extent_tree_lock); + et = radix_tree_lookup(&eti->extent_tree_root, ino); if (!et) { et = f2fs_kmem_cache_alloc(extent_tree_slab, GFP_NOFS, true, NULL); - f2fs_radix_tree_insert(&sbi->extent_tree_root, ino, et); + f2fs_radix_tree_insert(&eti->extent_tree_root, ino, et); memset(et, 0, sizeof(struct extent_tree)); et->ino = ino; + et->type = type; et->root = RB_ROOT_CACHED; et->cached_en = NULL; rwlock_init(&et->lock); INIT_LIST_HEAD(&et->list); atomic_set(&et->node_cnt, 0); - atomic_inc(&sbi->total_ext_tree); + atomic_inc(&eti->total_ext_tree); } else { - atomic_dec(&sbi->total_zombie_tree); + atomic_dec(&eti->total_zombie_tree); list_del_init(&et->list); } - mutex_unlock(&sbi->extent_tree_lock); + mutex_unlock(&eti->extent_tree_lock);
/* never died until evict_inode */ - F2FS_I(inode)->extent_tree = et; + F2FS_I(inode)->extent_tree[type] = et;
return et; } @@ -415,35 +440,38 @@ static void __drop_largest_extent(struct extent_tree *et, }
/* return true, if inode page is changed */ -static void __f2fs_init_extent_tree(struct inode *inode, struct page *ipage) +static void __f2fs_init_extent_tree(struct inode *inode, struct page *ipage, + enum extent_type type) { struct f2fs_sb_info *sbi = F2FS_I_SB(inode); + struct extent_tree_info *eti = &sbi->extent_tree[type]; struct f2fs_extent *i_ext = ipage ? &F2FS_INODE(ipage)->i_ext : NULL; struct extent_tree *et; struct extent_node *en; struct extent_info ei;
- if (!f2fs_may_extent_tree(inode)) { - /* drop largest extent */ - if (i_ext && i_ext->len) { + if (!__may_extent_tree(inode, type)) { + /* drop largest read extent */ + if (type == EX_READ && i_ext && i_ext->len) { f2fs_wait_on_page_writeback(ipage, NODE, true, true); i_ext->len = 0; set_page_dirty(ipage); - return; } - return; + goto out; }
- et = __grab_extent_tree(inode); + et = __grab_extent_tree(inode, type);
if (!i_ext || !i_ext->len) - return; + goto out; + + BUG_ON(type != EX_READ);
get_read_extent_info(&ei, i_ext);
write_lock(&et->lock); if (atomic_read(&et->node_cnt)) - goto out; + goto unlock_out;
en = __attach_extent_node(sbi, et, &ei, NULL, &et->root.rb_root.rb_node, true); @@ -451,38 +479,41 @@ static void __f2fs_init_extent_tree(struct inode *inode, struct page *ipage) et->largest = en->ei; et->cached_en = en;
- spin_lock(&sbi->extent_lock); - list_add_tail(&en->list, &sbi->extent_list); - spin_unlock(&sbi->extent_lock); + spin_lock(&eti->extent_lock); + list_add_tail(&en->list, &eti->extent_list); + spin_unlock(&eti->extent_lock); } -out: +unlock_out: write_unlock(&et->lock); +out: + if (type == EX_READ && !F2FS_I(inode)->extent_tree[EX_READ]) + set_inode_flag(inode, FI_NO_EXTENT); }
void f2fs_init_extent_tree(struct inode *inode, struct page *ipage) { - __f2fs_init_extent_tree(inode, ipage); - - if (!F2FS_I(inode)->extent_tree) - set_inode_flag(inode, FI_NO_EXTENT); + /* initialize read cache */ + __f2fs_init_extent_tree(inode, ipage, EX_READ); }
-static bool f2fs_lookup_extent_tree(struct inode *inode, pgoff_t pgofs, - struct extent_info *ei) +static bool __lookup_extent_tree(struct inode *inode, pgoff_t pgofs, + struct extent_info *ei, enum extent_type type) { struct f2fs_sb_info *sbi = F2FS_I_SB(inode); - struct extent_tree *et = F2FS_I(inode)->extent_tree; + struct extent_tree_info *eti = &sbi->extent_tree[type]; + struct extent_tree *et = F2FS_I(inode)->extent_tree[type]; struct extent_node *en; bool ret = false;
if (!et) return false;
- trace_f2fs_lookup_extent_tree_start(inode, pgofs); + trace_f2fs_lookup_extent_tree_start(inode, pgofs, type);
read_lock(&et->lock);
- if (et->largest.fofs <= pgofs && + if (type == EX_READ && + et->largest.fofs <= pgofs && et->largest.fofs + et->largest.len > pgofs) { *ei = et->largest; ret = true; @@ -496,23 +527,24 @@ static bool f2fs_lookup_extent_tree(struct inode *inode, pgoff_t pgofs, goto out;
if (en == et->cached_en) - stat_inc_cached_node_hit(sbi); + stat_inc_cached_node_hit(sbi, type); else - stat_inc_rbtree_node_hit(sbi); + stat_inc_rbtree_node_hit(sbi, type);
*ei = en->ei; - spin_lock(&sbi->extent_lock); + spin_lock(&eti->extent_lock); if (!list_empty(&en->list)) { - list_move_tail(&en->list, &sbi->extent_list); + list_move_tail(&en->list, &eti->extent_list); et->cached_en = en; } - spin_unlock(&sbi->extent_lock); + spin_unlock(&eti->extent_lock); ret = true; out: - stat_inc_total_hit(sbi); + stat_inc_total_hit(sbi, type); read_unlock(&et->lock);
- trace_f2fs_lookup_extent_tree_end(inode, pgofs, ei); + if (type == EX_READ) + trace_f2fs_lookup_read_extent_tree_end(inode, pgofs, ei); return ret; }
@@ -521,18 +553,20 @@ static struct extent_node *__try_merge_extent_node(struct f2fs_sb_info *sbi, struct extent_node *prev_ex, struct extent_node *next_ex) { + struct extent_tree_info *eti = &sbi->extent_tree[et->type]; struct extent_node *en = NULL;
- if (prev_ex && __is_back_mergeable(ei, &prev_ex->ei)) { + if (prev_ex && __is_back_mergeable(ei, &prev_ex->ei, et->type)) { prev_ex->ei.len += ei->len; ei = &prev_ex->ei; en = prev_ex; }
- if (next_ex && __is_front_mergeable(ei, &next_ex->ei)) { + if (next_ex && __is_front_mergeable(ei, &next_ex->ei, et->type)) { next_ex->ei.fofs = ei->fofs; - next_ex->ei.blk = ei->blk; next_ex->ei.len += ei->len; + if (et->type == EX_READ) + next_ex->ei.blk = ei->blk; if (en) __release_extent_node(sbi, et, prev_ex);
@@ -544,12 +578,12 @@ static struct extent_node *__try_merge_extent_node(struct f2fs_sb_info *sbi,
__try_update_largest_extent(et, en);
- spin_lock(&sbi->extent_lock); + spin_lock(&eti->extent_lock); if (!list_empty(&en->list)) { - list_move_tail(&en->list, &sbi->extent_list); + list_move_tail(&en->list, &eti->extent_list); et->cached_en = en; } - spin_unlock(&sbi->extent_lock); + spin_unlock(&eti->extent_lock); return en; }
@@ -559,6 +593,7 @@ static struct extent_node *__insert_extent_tree(struct f2fs_sb_info *sbi, struct rb_node *insert_parent, bool leftmost) { + struct extent_tree_info *eti = &sbi->extent_tree[et->type]; struct rb_node **p; struct rb_node *parent = NULL; struct extent_node *en = NULL; @@ -581,47 +616,50 @@ static struct extent_node *__insert_extent_tree(struct f2fs_sb_info *sbi, __try_update_largest_extent(et, en);
/* update in global extent list */ - spin_lock(&sbi->extent_lock); - list_add_tail(&en->list, &sbi->extent_list); + spin_lock(&eti->extent_lock); + list_add_tail(&en->list, &eti->extent_list); et->cached_en = en; - spin_unlock(&sbi->extent_lock); + spin_unlock(&eti->extent_lock); return en; }
-static void f2fs_update_extent_tree_range(struct inode *inode, - pgoff_t fofs, block_t blkaddr, unsigned int len) +static void __update_extent_tree_range(struct inode *inode, + struct extent_info *tei, enum extent_type type) { struct f2fs_sb_info *sbi = F2FS_I_SB(inode); - struct extent_tree *et = F2FS_I(inode)->extent_tree; + struct extent_tree *et = F2FS_I(inode)->extent_tree[type]; struct extent_node *en = NULL, *en1 = NULL; struct extent_node *prev_en = NULL, *next_en = NULL; struct extent_info ei, dei, prev; struct rb_node **insert_p = NULL, *insert_parent = NULL; + unsigned int fofs = tei->fofs, len = tei->len; unsigned int end = fofs + len; - unsigned int pos = (unsigned int)fofs; bool updated = false; bool leftmost = false;
if (!et) return;
- trace_f2fs_update_extent_tree_range(inode, fofs, blkaddr, len, 0); - + if (type == EX_READ) + trace_f2fs_update_read_extent_tree_range(inode, fofs, len, + tei->blk, 0); write_lock(&et->lock);
- if (is_inode_flag_set(inode, FI_NO_EXTENT)) { - write_unlock(&et->lock); - return; - } + if (type == EX_READ) { + if (is_inode_flag_set(inode, FI_NO_EXTENT)) { + write_unlock(&et->lock); + return; + }
- prev = et->largest; - dei.len = 0; + prev = et->largest; + dei.len = 0;
- /* - * drop largest extent before lookup, in case it's already - * been shrunk from extent tree - */ - __drop_largest_extent(et, fofs, len); + /* + * drop largest extent before lookup, in case it's already + * been shrunk from extent tree + */ + __drop_largest_extent(et, fofs, len); + }
/* 1. lookup first extent node in range [fofs, fofs + len - 1] */ en = (struct extent_node *)f2fs_lookup_rb_tree_ret(&et->root, @@ -642,26 +680,30 @@ static void f2fs_update_extent_tree_range(struct inode *inode,
dei = en->ei; org_end = dei.fofs + dei.len; - f2fs_bug_on(sbi, pos >= org_end); + f2fs_bug_on(sbi, fofs >= org_end);
- if (pos > dei.fofs && pos - dei.fofs >= F2FS_MIN_EXTENT_LEN) { - en->ei.len = pos - en->ei.fofs; + if (fofs > dei.fofs && (type != EX_READ || + fofs - dei.fofs >= F2FS_MIN_EXTENT_LEN)) { + en->ei.len = fofs - en->ei.fofs; prev_en = en; parts = 1; }
- if (end < org_end && org_end - end >= F2FS_MIN_EXTENT_LEN) { + if (end < org_end && (type != EX_READ || + org_end - end >= F2FS_MIN_EXTENT_LEN)) { if (parts) { __set_extent_info(&ei, end, org_end - end, - end - dei.fofs + dei.blk, false); + end - dei.fofs + dei.blk, false, + type); en1 = __insert_extent_tree(sbi, et, &ei, NULL, NULL, true); next_en = en1; } else { __set_extent_info(&en->ei, end, en->ei.len - (end - dei.fofs), - en->ei.blk + (end - dei.fofs), true); + en->ei.blk + (end - dei.fofs), true, + type); next_en = en; } parts++; @@ -691,9 +733,11 @@ static void f2fs_update_extent_tree_range(struct inode *inode, en = next_en; }
- /* 3. update extent in extent cache */ - if (blkaddr) { - __set_extent_info(&ei, fofs, len, blkaddr, false); + /* 3. update extent in read extent cache */ + BUG_ON(type != EX_READ); + + if (tei->blk) { + __set_extent_info(&ei, fofs, len, tei->blk, false, EX_READ); if (!__try_merge_extent_node(sbi, et, &ei, prev_en, next_en)) __insert_extent_tree(sbi, et, &ei, insert_p, insert_parent, leftmost); @@ -723,19 +767,20 @@ static void f2fs_update_extent_tree_range(struct inode *inode, }
#ifdef CONFIG_F2FS_FS_COMPRESSION -void f2fs_update_extent_tree_range_compressed(struct inode *inode, +void f2fs_update_read_extent_tree_range_compressed(struct inode *inode, pgoff_t fofs, block_t blkaddr, unsigned int llen, unsigned int c_len) { struct f2fs_sb_info *sbi = F2FS_I_SB(inode); - struct extent_tree *et = F2FS_I(inode)->extent_tree; + struct extent_tree *et = F2FS_I(inode)->extent_tree[EX_READ]; struct extent_node *en = NULL; struct extent_node *prev_en = NULL, *next_en = NULL; struct extent_info ei; struct rb_node **insert_p = NULL, *insert_parent = NULL; bool leftmost = false;
- trace_f2fs_update_extent_tree_range(inode, fofs, blkaddr, llen, c_len); + trace_f2fs_update_read_extent_tree_range(inode, fofs, llen, + blkaddr, c_len);
/* it is safe here to check FI_NO_EXTENT w/o et->lock in ro image */ if (is_inode_flag_set(inode, FI_NO_EXTENT)) @@ -752,7 +797,7 @@ void f2fs_update_extent_tree_range_compressed(struct inode *inode, if (en) goto unlock_out;
- __set_extent_info(&ei, fofs, llen, blkaddr, true); + __set_extent_info(&ei, fofs, llen, blkaddr, true, EX_READ); ei.c_len = c_len;
if (!__try_merge_extent_node(sbi, et, &ei, prev_en, next_en)) @@ -763,24 +808,43 @@ void f2fs_update_extent_tree_range_compressed(struct inode *inode, } #endif
-unsigned int f2fs_shrink_extent_tree(struct f2fs_sb_info *sbi, int nr_shrink) +static void __update_extent_cache(struct dnode_of_data *dn, enum extent_type type) { + struct extent_info ei; + + if (!__may_extent_tree(dn->inode, type)) + return; + + ei.fofs = f2fs_start_bidx_of_node(ofs_of_node(dn->node_page), dn->inode) + + dn->ofs_in_node; + ei.len = 1; + + if (type == EX_READ) { + if (dn->data_blkaddr == NEW_ADDR) + ei.blk = NULL_ADDR; + else + ei.blk = dn->data_blkaddr; + } + __update_extent_tree_range(dn->inode, &ei, type); +} + +static unsigned int __shrink_extent_tree(struct f2fs_sb_info *sbi, int nr_shrink, + enum extent_type type) +{ + struct extent_tree_info *eti = &sbi->extent_tree[type]; struct extent_tree *et, *next; struct extent_node *en; unsigned int node_cnt = 0, tree_cnt = 0; int remained;
- if (!test_opt(sbi, READ_EXTENT_CACHE)) - return 0; - - if (!atomic_read(&sbi->total_zombie_tree)) + if (!atomic_read(&eti->total_zombie_tree)) goto free_node;
- if (!mutex_trylock(&sbi->extent_tree_lock)) + if (!mutex_trylock(&eti->extent_tree_lock)) goto out;
/* 1. remove unreferenced extent tree */ - list_for_each_entry_safe(et, next, &sbi->zombie_list, list) { + list_for_each_entry_safe(et, next, &eti->zombie_list, list) { if (atomic_read(&et->node_cnt)) { write_lock(&et->lock); node_cnt += __free_extent_tree(sbi, et); @@ -788,61 +852,100 @@ unsigned int f2fs_shrink_extent_tree(struct f2fs_sb_info *sbi, int nr_shrink) } f2fs_bug_on(sbi, atomic_read(&et->node_cnt)); list_del_init(&et->list); - radix_tree_delete(&sbi->extent_tree_root, et->ino); + radix_tree_delete(&eti->extent_tree_root, et->ino); kmem_cache_free(extent_tree_slab, et); - atomic_dec(&sbi->total_ext_tree); - atomic_dec(&sbi->total_zombie_tree); + atomic_dec(&eti->total_ext_tree); + atomic_dec(&eti->total_zombie_tree); tree_cnt++;
if (node_cnt + tree_cnt >= nr_shrink) goto unlock_out; cond_resched(); } - mutex_unlock(&sbi->extent_tree_lock); + mutex_unlock(&eti->extent_tree_lock);
free_node: /* 2. remove LRU extent entries */ - if (!mutex_trylock(&sbi->extent_tree_lock)) + if (!mutex_trylock(&eti->extent_tree_lock)) goto out;
remained = nr_shrink - (node_cnt + tree_cnt);
- spin_lock(&sbi->extent_lock); + spin_lock(&eti->extent_lock); for (; remained > 0; remained--) { - if (list_empty(&sbi->extent_list)) + if (list_empty(&eti->extent_list)) break; - en = list_first_entry(&sbi->extent_list, + en = list_first_entry(&eti->extent_list, struct extent_node, list); et = en->et; if (!write_trylock(&et->lock)) { /* refresh this extent node's position in extent list */ - list_move_tail(&en->list, &sbi->extent_list); + list_move_tail(&en->list, &eti->extent_list); continue; }
list_del_init(&en->list); - spin_unlock(&sbi->extent_lock); + spin_unlock(&eti->extent_lock);
__detach_extent_node(sbi, et, en);
write_unlock(&et->lock); node_cnt++; - spin_lock(&sbi->extent_lock); + spin_lock(&eti->extent_lock); } - spin_unlock(&sbi->extent_lock); + spin_unlock(&eti->extent_lock);
unlock_out: - mutex_unlock(&sbi->extent_tree_lock); + mutex_unlock(&eti->extent_tree_lock); out: - trace_f2fs_shrink_extent_tree(sbi, node_cnt, tree_cnt); + trace_f2fs_shrink_extent_tree(sbi, node_cnt, tree_cnt, type);
return node_cnt + tree_cnt; }
-unsigned int f2fs_destroy_extent_node(struct inode *inode) +/* read extent cache operations */ +bool f2fs_lookup_read_extent_cache(struct inode *inode, pgoff_t pgofs, + struct extent_info *ei) +{ + if (!__may_extent_tree(inode, EX_READ)) + return false; + + return __lookup_extent_tree(inode, pgofs, ei, EX_READ); +} + +void f2fs_update_read_extent_cache(struct dnode_of_data *dn) +{ + return __update_extent_cache(dn, EX_READ); +} + +void f2fs_update_read_extent_cache_range(struct dnode_of_data *dn, + pgoff_t fofs, block_t blkaddr, unsigned int len) +{ + struct extent_info ei = { + .fofs = fofs, + .len = len, + .blk = blkaddr, + }; + + if (!__may_extent_tree(dn->inode, EX_READ)) + return; + + __update_extent_tree_range(dn->inode, &ei, EX_READ); +} + +unsigned int f2fs_shrink_read_extent_tree(struct f2fs_sb_info *sbi, int nr_shrink) +{ + if (!test_opt(sbi, READ_EXTENT_CACHE)) + return 0; + + return __shrink_extent_tree(sbi, nr_shrink, EX_READ); +} + +static unsigned int __destroy_extent_node(struct inode *inode, + enum extent_type type) { struct f2fs_sb_info *sbi = F2FS_I_SB(inode); - struct extent_tree *et = F2FS_I(inode)->extent_tree; + struct extent_tree *et = F2FS_I(inode)->extent_tree[type]; unsigned int node_cnt = 0;
if (!et || !atomic_read(&et->node_cnt)) @@ -855,31 +958,44 @@ unsigned int f2fs_destroy_extent_node(struct inode *inode) return node_cnt; }
-void f2fs_drop_extent_tree(struct inode *inode) +void f2fs_destroy_extent_node(struct inode *inode) +{ + __destroy_extent_node(inode, EX_READ); +} + +static void __drop_extent_tree(struct inode *inode, enum extent_type type) { struct f2fs_sb_info *sbi = F2FS_I_SB(inode); - struct extent_tree *et = F2FS_I(inode)->extent_tree; + struct extent_tree *et = F2FS_I(inode)->extent_tree[type]; bool updated = false;
- if (!f2fs_may_extent_tree(inode)) + if (!__may_extent_tree(inode, type)) return;
write_lock(&et->lock); - set_inode_flag(inode, FI_NO_EXTENT); __free_extent_tree(sbi, et); - if (et->largest.len) { - et->largest.len = 0; - updated = true; + if (type == EX_READ) { + set_inode_flag(inode, FI_NO_EXTENT); + if (et->largest.len) { + et->largest.len = 0; + updated = true; + } } write_unlock(&et->lock); if (updated) f2fs_mark_inode_dirty_sync(inode, true); }
-void f2fs_destroy_extent_tree(struct inode *inode) +void f2fs_drop_extent_tree(struct inode *inode) +{ + __drop_extent_tree(inode, EX_READ); +} + +static void __destroy_extent_tree(struct inode *inode, enum extent_type type) { struct f2fs_sb_info *sbi = F2FS_I_SB(inode); - struct extent_tree *et = F2FS_I(inode)->extent_tree; + struct extent_tree_info *eti = &sbi->extent_tree[type]; + struct extent_tree *et = F2FS_I(inode)->extent_tree[type]; unsigned int node_cnt = 0;
if (!et) @@ -887,76 +1003,49 @@ void f2fs_destroy_extent_tree(struct inode *inode)
if (inode->i_nlink && !is_bad_inode(inode) && atomic_read(&et->node_cnt)) { - mutex_lock(&sbi->extent_tree_lock); - list_add_tail(&et->list, &sbi->zombie_list); - atomic_inc(&sbi->total_zombie_tree); - mutex_unlock(&sbi->extent_tree_lock); + mutex_lock(&eti->extent_tree_lock); + list_add_tail(&et->list, &eti->zombie_list); + atomic_inc(&eti->total_zombie_tree); + mutex_unlock(&eti->extent_tree_lock); return; }
/* free all extent info belong to this extent tree */ - node_cnt = f2fs_destroy_extent_node(inode); + node_cnt = __destroy_extent_node(inode, type);
/* delete extent tree entry in radix tree */ - mutex_lock(&sbi->extent_tree_lock); + mutex_lock(&eti->extent_tree_lock); f2fs_bug_on(sbi, atomic_read(&et->node_cnt)); - radix_tree_delete(&sbi->extent_tree_root, inode->i_ino); + radix_tree_delete(&eti->extent_tree_root, inode->i_ino); kmem_cache_free(extent_tree_slab, et); - atomic_dec(&sbi->total_ext_tree); - mutex_unlock(&sbi->extent_tree_lock); + atomic_dec(&eti->total_ext_tree); + mutex_unlock(&eti->extent_tree_lock);
- F2FS_I(inode)->extent_tree = NULL; + F2FS_I(inode)->extent_tree[type] = NULL;
- trace_f2fs_destroy_extent_tree(inode, node_cnt); + trace_f2fs_destroy_extent_tree(inode, node_cnt, type); }
-bool f2fs_lookup_extent_cache(struct inode *inode, pgoff_t pgofs, - struct extent_info *ei) -{ - if (!f2fs_may_extent_tree(inode)) - return false; - - return f2fs_lookup_extent_tree(inode, pgofs, ei); -} - -void f2fs_update_extent_cache(struct dnode_of_data *dn) +void f2fs_destroy_extent_tree(struct inode *inode) { - pgoff_t fofs; - block_t blkaddr; - - if (!f2fs_may_extent_tree(dn->inode)) - return; - - if (dn->data_blkaddr == NEW_ADDR) - blkaddr = NULL_ADDR; - else - blkaddr = dn->data_blkaddr; - - fofs = f2fs_start_bidx_of_node(ofs_of_node(dn->node_page), dn->inode) + - dn->ofs_in_node; - f2fs_update_extent_tree_range(dn->inode, fofs, blkaddr, 1); + __destroy_extent_tree(inode, EX_READ); }
-void f2fs_update_extent_cache_range(struct dnode_of_data *dn, - pgoff_t fofs, block_t blkaddr, unsigned int len) - +static void __init_extent_tree_info(struct extent_tree_info *eti) { - if (!f2fs_may_extent_tree(dn->inode)) - return; - - f2fs_update_extent_tree_range(dn->inode, fofs, blkaddr, len); + INIT_RADIX_TREE(&eti->extent_tree_root, GFP_NOIO); + mutex_init(&eti->extent_tree_lock); + INIT_LIST_HEAD(&eti->extent_list); + spin_lock_init(&eti->extent_lock); + atomic_set(&eti->total_ext_tree, 0); + INIT_LIST_HEAD(&eti->zombie_list); + atomic_set(&eti->total_zombie_tree, 0); + atomic_set(&eti->total_ext_node, 0); }
void f2fs_init_extent_cache_info(struct f2fs_sb_info *sbi) { - INIT_RADIX_TREE(&sbi->extent_tree_root, GFP_NOIO); - mutex_init(&sbi->extent_tree_lock); - INIT_LIST_HEAD(&sbi->extent_list); - spin_lock_init(&sbi->extent_lock); - atomic_set(&sbi->total_ext_tree, 0); - INIT_LIST_HEAD(&sbi->zombie_list); - atomic_set(&sbi->total_zombie_tree, 0); - atomic_set(&sbi->total_ext_node, 0); + __init_extent_tree_info(&sbi->extent_tree[EX_READ]); }
int __init f2fs_create_extent_cache(void) diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h index 076bdf27df547..cf45af3a44a7a 100644 --- a/fs/f2fs/f2fs.h +++ b/fs/f2fs/f2fs.h @@ -593,16 +593,22 @@ enum { /* dirty segments threshold for triggering CP */ #define DEFAULT_DIRTY_THRESHOLD 4
+#define RECOVERY_MAX_RA_BLOCKS BIO_MAX_VECS +#define RECOVERY_MIN_RA_BLOCKS 1 + +#define F2FS_ONSTACK_PAGES 16 /* nr of onstack pages */ + /* for in-memory extent cache entry */ #define F2FS_MIN_EXTENT_LEN 64 /* minimum extent length */
/* number of extent info in extent cache we try to shrink */ #define READ_EXTENT_CACHE_SHRINK_NUMBER 128
-#define RECOVERY_MAX_RA_BLOCKS BIO_MAX_VECS -#define RECOVERY_MIN_RA_BLOCKS 1 - -#define F2FS_ONSTACK_PAGES 16 /* nr of onstack pages */ +/* extent cache type */ +enum extent_type { + EX_READ, + NR_EXTENT_CACHES, +};
struct rb_entry { struct rb_node rb_node; /* rb node located in rb-tree */ @@ -618,10 +624,17 @@ struct rb_entry { struct extent_info { unsigned int fofs; /* start offset in a file */ unsigned int len; /* length of the extent */ - block_t blk; /* start block address of the extent */ + union { + /* read extent_cache */ + struct { + /* start block address of the extent */ + block_t blk; #ifdef CONFIG_F2FS_FS_COMPRESSION - unsigned int c_len; /* physical extent length of compressed blocks */ + /* physical extent length of compressed blocks */ + unsigned int c_len; #endif + }; + }; };
struct extent_node { @@ -633,13 +646,25 @@ struct extent_node {
struct extent_tree { nid_t ino; /* inode number */ + enum extent_type type; /* keep the extent tree type */ struct rb_root_cached root; /* root of extent info rb-tree */ struct extent_node *cached_en; /* recently accessed extent node */ - struct extent_info largest; /* largested extent info */ struct list_head list; /* to be used by sbi->zombie_list */ rwlock_t lock; /* protect extent info rb-tree */ atomic_t node_cnt; /* # of extent node in rb-tree*/ bool largest_updated; /* largest extent updated */ + struct extent_info largest; /* largest cached extent for EX_READ */ +}; + +struct extent_tree_info { + struct radix_tree_root extent_tree_root;/* cache extent cache entries */ + struct mutex extent_tree_lock; /* locking extent radix tree */ + struct list_head extent_list; /* lru list for shrinker */ + spinlock_t extent_lock; /* locking extent lru list */ + atomic_t total_ext_tree; /* extent tree count */ + struct list_head zombie_list; /* extent zombie tree list */ + atomic_t total_zombie_tree; /* extent zombie tree count */ + atomic_t total_ext_node; /* extent info count */ };
/* @@ -801,7 +826,8 @@ struct f2fs_inode_info { struct list_head dirty_list; /* dirty list for dirs and files */ struct list_head gdirty_list; /* linked in global dirty list */ struct task_struct *atomic_write_task; /* store atomic write task */ - struct extent_tree *extent_tree; /* cached extent_tree entry */ + struct extent_tree *extent_tree[NR_EXTENT_CACHES]; + /* cached extent_tree entry */ struct inode *cow_inode; /* copy-on-write inode for atomic write */
/* avoid racing between foreground op and gc */ @@ -1624,14 +1650,7 @@ struct f2fs_sb_info { struct mutex flush_lock; /* for flush exclusion */
/* for extent tree cache */ - struct radix_tree_root extent_tree_root;/* cache extent cache entries */ - struct mutex extent_tree_lock; /* locking extent radix tree */ - struct list_head extent_list; /* lru list for shrinker */ - spinlock_t extent_lock; /* locking extent lru list */ - atomic_t total_ext_tree; /* extent tree count */ - struct list_head zombie_list; /* extent zombie tree list */ - atomic_t total_zombie_tree; /* extent zombie tree count */ - atomic_t total_ext_node; /* extent info count */ + struct extent_tree_info extent_tree[NR_EXTENT_CACHES];
/* basic filesystem units */ unsigned int log_sectors_per_block; /* log2 sectors per block */ @@ -1715,10 +1734,14 @@ struct f2fs_sb_info { unsigned int segment_count[2]; /* # of allocated segments */ unsigned int block_count[2]; /* # of allocated blocks */ atomic_t inplace_count; /* # of inplace update */ - atomic64_t total_hit_ext; /* # of lookup extent cache */ - atomic64_t read_hit_rbtree; /* # of hit rbtree extent node */ - atomic64_t read_hit_largest; /* # of hit largest extent node */ - atomic64_t read_hit_cached; /* # of hit cached extent node */ + /* # of lookup extent cache */ + atomic64_t total_hit_ext[NR_EXTENT_CACHES]; + /* # of hit rbtree extent node */ + atomic64_t read_hit_rbtree[NR_EXTENT_CACHES]; + /* # of hit cached extent node */ + atomic64_t read_hit_cached[NR_EXTENT_CACHES]; + /* # of hit largest extent node in read extent cache */ + atomic64_t read_hit_largest; atomic_t inline_xattr; /* # of inline_xattr inodes */ atomic_t inline_inode; /* # of inline_data inodes */ atomic_t inline_dir; /* # of inline_dentry inodes */ @@ -3820,9 +3843,17 @@ struct f2fs_stat_info { struct f2fs_sb_info *sbi; int all_area_segs, sit_area_segs, nat_area_segs, ssa_area_segs; int main_area_segs, main_area_sections, main_area_zones; - unsigned long long hit_largest, hit_cached, hit_rbtree; - unsigned long long hit_total, total_ext; - int ext_tree, zombie_tree, ext_node; + unsigned long long hit_cached[NR_EXTENT_CACHES]; + unsigned long long hit_rbtree[NR_EXTENT_CACHES]; + unsigned long long total_ext[NR_EXTENT_CACHES]; + unsigned long long hit_total[NR_EXTENT_CACHES]; + int ext_tree[NR_EXTENT_CACHES]; + int zombie_tree[NR_EXTENT_CACHES]; + int ext_node[NR_EXTENT_CACHES]; + /* to count memory footprint */ + unsigned long long ext_mem[NR_EXTENT_CACHES]; + /* for read extent cache */ + unsigned long long hit_largest; int ndirty_node, ndirty_dent, ndirty_meta, ndirty_imeta; int ndirty_data, ndirty_qdata; unsigned int ndirty_dirs, ndirty_files, nquota_files, ndirty_all; @@ -3881,10 +3912,10 @@ static inline struct f2fs_stat_info *F2FS_STAT(struct f2fs_sb_info *sbi) #define stat_other_skip_bggc_count(sbi) ((sbi)->other_skip_bggc++) #define stat_inc_dirty_inode(sbi, type) ((sbi)->ndirty_inode[type]++) #define stat_dec_dirty_inode(sbi, type) ((sbi)->ndirty_inode[type]--) -#define stat_inc_total_hit(sbi) (atomic64_inc(&(sbi)->total_hit_ext)) -#define stat_inc_rbtree_node_hit(sbi) (atomic64_inc(&(sbi)->read_hit_rbtree)) +#define stat_inc_total_hit(sbi, type) (atomic64_inc(&(sbi)->total_hit_ext[type])) +#define stat_inc_rbtree_node_hit(sbi, type) (atomic64_inc(&(sbi)->read_hit_rbtree[type])) #define stat_inc_largest_node_hit(sbi) (atomic64_inc(&(sbi)->read_hit_largest)) -#define stat_inc_cached_node_hit(sbi) (atomic64_inc(&(sbi)->read_hit_cached)) +#define stat_inc_cached_node_hit(sbi, type) (atomic64_inc(&(sbi)->read_hit_cached[type])) #define stat_inc_inline_xattr(inode) \ do { \ if (f2fs_has_inline_xattr(inode)) \ @@ -4007,10 +4038,10 @@ void f2fs_update_sit_info(struct f2fs_sb_info *sbi); #define stat_other_skip_bggc_count(sbi) do { } while (0) #define stat_inc_dirty_inode(sbi, type) do { } while (0) #define stat_dec_dirty_inode(sbi, type) do { } while (0) -#define stat_inc_total_hit(sbi) do { } while (0) -#define stat_inc_rbtree_node_hit(sbi) do { } while (0) +#define stat_inc_total_hit(sbi, type) do { } while (0) +#define stat_inc_rbtree_node_hit(sbi, type) do { } while (0) #define stat_inc_largest_node_hit(sbi) do { } while (0) -#define stat_inc_cached_node_hit(sbi) do { } while (0) +#define 
stat_inc_cached_node_hit(sbi, type) do { } while (0) #define stat_inc_inline_xattr(inode) do { } while (0) #define stat_dec_inline_xattr(inode) do { } while (0) #define stat_inc_inline_inode(inode) do { } while (0) @@ -4116,20 +4147,23 @@ struct rb_entry *f2fs_lookup_rb_tree_ret(struct rb_root_cached *root, bool force, bool *leftmost); bool f2fs_check_rb_tree_consistence(struct f2fs_sb_info *sbi, struct rb_root_cached *root, bool check_key); -unsigned int f2fs_shrink_extent_tree(struct f2fs_sb_info *sbi, int nr_shrink); void f2fs_init_extent_tree(struct inode *inode, struct page *ipage); void f2fs_drop_extent_tree(struct inode *inode); -unsigned int f2fs_destroy_extent_node(struct inode *inode); +void f2fs_destroy_extent_node(struct inode *inode); void f2fs_destroy_extent_tree(struct inode *inode); -bool f2fs_lookup_extent_cache(struct inode *inode, pgoff_t pgofs, - struct extent_info *ei); -void f2fs_update_extent_cache(struct dnode_of_data *dn); -void f2fs_update_extent_cache_range(struct dnode_of_data *dn, - pgoff_t fofs, block_t blkaddr, unsigned int len); void f2fs_init_extent_cache_info(struct f2fs_sb_info *sbi); int __init f2fs_create_extent_cache(void); void f2fs_destroy_extent_cache(void);
+/* read extent cache ops */ +bool f2fs_lookup_read_extent_cache(struct inode *inode, pgoff_t pgofs, + struct extent_info *ei); +void f2fs_update_read_extent_cache(struct dnode_of_data *dn); +void f2fs_update_read_extent_cache_range(struct dnode_of_data *dn, + pgoff_t fofs, block_t blkaddr, unsigned int len); +unsigned int f2fs_shrink_read_extent_tree(struct f2fs_sb_info *sbi, + int nr_shrink); + /* * sysfs.c */ @@ -4199,9 +4233,9 @@ int f2fs_write_multi_pages(struct compress_ctx *cc, struct writeback_control *wbc, enum iostat_type io_type); int f2fs_is_compressed_cluster(struct inode *inode, pgoff_t index); -void f2fs_update_extent_tree_range_compressed(struct inode *inode, - pgoff_t fofs, block_t blkaddr, unsigned int llen, - unsigned int c_len); +void f2fs_update_read_extent_tree_range_compressed(struct inode *inode, + pgoff_t fofs, block_t blkaddr, + unsigned int llen, unsigned int c_len); int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret, unsigned nr_pages, sector_t *last_block_in_bio, bool is_readahead, bool for_write); @@ -4282,9 +4316,10 @@ static inline bool f2fs_load_compressed_page(struct f2fs_sb_info *sbi, static inline void f2fs_invalidate_compress_pages(struct f2fs_sb_info *sbi, nid_t ino) { } #define inc_compr_inode_stat(inode) do { } while (0) -static inline void f2fs_update_extent_tree_range_compressed(struct inode *inode, - pgoff_t fofs, block_t blkaddr, unsigned int llen, - unsigned int c_len) { } +static inline void f2fs_update_read_extent_tree_range_compressed( + struct inode *inode, + pgoff_t fofs, block_t blkaddr, + unsigned int llen, unsigned int c_len) { } #endif
static inline int set_compress_context(struct inode *inode) diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c index bf37983304a33..dbad2db68f1bc 100644 --- a/fs/f2fs/file.c +++ b/fs/f2fs/file.c @@ -618,7 +618,7 @@ void f2fs_truncate_data_blocks_range(struct dnode_of_data *dn, int count) */ fofs = f2fs_start_bidx_of_node(ofs_of_node(dn->node_page), dn->inode) + ofs; - f2fs_update_extent_cache_range(dn, fofs, 0, len); + f2fs_update_read_extent_cache_range(dn, fofs, 0, len); dec_valid_block_count(sbi, dn->inode, nr_free); } dn->ofs_in_node = ofs; @@ -1496,7 +1496,7 @@ static int f2fs_do_zero_range(struct dnode_of_data *dn, pgoff_t start, f2fs_set_data_blkaddr(dn); }
- f2fs_update_extent_cache_range(dn, start, 0, index - start); + f2fs_update_read_extent_cache_range(dn, start, 0, index - start);
return ret; } @@ -2573,7 +2573,7 @@ static int f2fs_defragment_range(struct f2fs_sb_info *sbi, struct f2fs_map_blocks map = { .m_next_extent = NULL, .m_seg_type = NO_CHECK_TYPE, .m_may_create = false }; - struct extent_info ei = {0, 0, 0}; + struct extent_info ei = {0, }; pgoff_t pg_start, pg_end, next_pgofs; unsigned int blk_per_seg = sbi->blocks_per_seg; unsigned int total = 0, sec_num; @@ -2605,7 +2605,7 @@ static int f2fs_defragment_range(struct f2fs_sb_info *sbi, * lookup mapping info in extent cache, skip defragmenting if physical * block addresses are continuous. */ - if (f2fs_lookup_extent_cache(inode, pg_start, &ei)) { + if (f2fs_lookup_read_extent_cache(inode, pg_start, &ei)) { if (ei.fofs + ei.len >= pg_end) goto out; } diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c index aa928d1c81597..543de12bf88c2 100644 --- a/fs/f2fs/gc.c +++ b/fs/f2fs/gc.c @@ -1147,7 +1147,7 @@ static int ra_data_block(struct inode *inode, pgoff_t index) struct address_space *mapping = inode->i_mapping; struct dnode_of_data dn; struct page *page; - struct extent_info ei = {0, 0, 0}; + struct extent_info ei = {0, }; struct f2fs_io_info fio = { .sbi = sbi, .ino = inode->i_ino, @@ -1165,7 +1165,7 @@ static int ra_data_block(struct inode *inode, pgoff_t index) if (!page) return -ENOMEM;
- if (f2fs_lookup_extent_cache(inode, index, &ei)) { + if (f2fs_lookup_read_extent_cache(inode, index, &ei)) { dn.data_blkaddr = ei.blk + index - ei.fofs; if (unlikely(!f2fs_is_valid_blkaddr(sbi, dn.data_blkaddr, DATA_GENERIC_ENHANCE_READ))) { diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c index 896b0d5e4ee3f..7bfe29626024d 100644 --- a/fs/f2fs/inode.c +++ b/fs/f2fs/inode.c @@ -262,8 +262,8 @@ static bool sanity_check_inode(struct inode *inode, struct page *node_page) return false; }
- if (fi->extent_tree) { - struct extent_info *ei = &fi->extent_tree->largest; + if (fi->extent_tree[EX_READ]) { + struct extent_info *ei = &fi->extent_tree[EX_READ]->largest;
if (ei->len && (!f2fs_is_valid_blkaddr(sbi, ei->blk, @@ -607,7 +607,7 @@ struct inode *f2fs_iget_retry(struct super_block *sb, unsigned long ino) void f2fs_update_inode(struct inode *inode, struct page *node_page) { struct f2fs_inode *ri; - struct extent_tree *et = F2FS_I(inode)->extent_tree; + struct extent_tree *et = F2FS_I(inode)->extent_tree[EX_READ];
f2fs_wait_on_page_writeback(node_page, NODE, true, true); set_page_dirty(node_page); diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c index 84b147966080e..07419c3e42a52 100644 --- a/fs/f2fs/node.c +++ b/fs/f2fs/node.c @@ -86,9 +86,11 @@ bool f2fs_available_free_memory(struct f2fs_sb_info *sbi, int type) mem_size >>= PAGE_SHIFT; res = mem_size < ((avail_ram * nm_i->ram_thresh / 100) >> 1); } else if (type == READ_EXTENT_CACHE) { - mem_size = (atomic_read(&sbi->total_ext_tree) * + struct extent_tree_info *eti = &sbi->extent_tree[EX_READ]; + + mem_size = (atomic_read(&eti->total_ext_tree) * sizeof(struct extent_tree) + - atomic_read(&sbi->total_ext_node) * + atomic_read(&eti->total_ext_node) * sizeof(struct extent_node)) >> PAGE_SHIFT; res = mem_size < ((avail_ram * nm_i->ram_thresh / 100) >> 1); } else if (type == DISCARD_CACHE) { @@ -859,7 +861,7 @@ int f2fs_get_dnode_of_data(struct dnode_of_data *dn, pgoff_t index, int mode) blkaddr = data_blkaddr(dn->inode, dn->node_page, dn->ofs_in_node + 1);
- f2fs_update_extent_tree_range_compressed(dn->inode, + f2fs_update_read_extent_tree_range_compressed(dn->inode, index, blkaddr, F2FS_I(dn->inode)->i_cluster_size, c_len); diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c index 426559092930d..7d5bea9d92641 100644 --- a/fs/f2fs/segment.c +++ b/fs/f2fs/segment.c @@ -453,7 +453,8 @@ void f2fs_balance_fs_bg(struct f2fs_sb_info *sbi, bool from_bg)
/* try to shrink extent cache when there is no enough memory */ if (!f2fs_available_free_memory(sbi, READ_EXTENT_CACHE)) - f2fs_shrink_extent_tree(sbi, READ_EXTENT_CACHE_SHRINK_NUMBER); + f2fs_shrink_read_extent_tree(sbi, + READ_EXTENT_CACHE_SHRINK_NUMBER);
/* check the # of cached NAT entries */ if (!f2fs_available_free_memory(sbi, NAT_ENTRIES)) diff --git a/fs/f2fs/shrinker.c b/fs/f2fs/shrinker.c index dd3c3c7a90ec8..33c490e69ae30 100644 --- a/fs/f2fs/shrinker.c +++ b/fs/f2fs/shrinker.c @@ -28,10 +28,13 @@ static unsigned long __count_free_nids(struct f2fs_sb_info *sbi) return count > 0 ? count : 0; }
-static unsigned long __count_extent_cache(struct f2fs_sb_info *sbi) +static unsigned long __count_extent_cache(struct f2fs_sb_info *sbi, + enum extent_type type) { - return atomic_read(&sbi->total_zombie_tree) + - atomic_read(&sbi->total_ext_node); + struct extent_tree_info *eti = &sbi->extent_tree[type]; + + return atomic_read(&eti->total_zombie_tree) + + atomic_read(&eti->total_ext_node); }
unsigned long f2fs_shrink_count(struct shrinker *shrink, @@ -53,8 +56,8 @@ unsigned long f2fs_shrink_count(struct shrinker *shrink, } spin_unlock(&f2fs_list_lock);
- /* count extent cache entries */ - count += __count_extent_cache(sbi); + /* count read extent cache entries */ + count += __count_extent_cache(sbi, EX_READ);
/* count clean nat cache entries */ count += __count_nat_entries(sbi); @@ -99,8 +102,8 @@ unsigned long f2fs_shrink_scan(struct shrinker *shrink,
sbi->shrinker_run_no = run_no;
- /* shrink extent cache entries */ - freed += f2fs_shrink_extent_tree(sbi, nr >> 1); + /* shrink read extent cache entries */ + freed += f2fs_shrink_read_extent_tree(sbi, nr >> 1);
/* shrink clean nat cache entries */ if (freed < nr) @@ -130,7 +133,7 @@ void f2fs_join_shrinker(struct f2fs_sb_info *sbi)
void f2fs_leave_shrinker(struct f2fs_sb_info *sbi) { - f2fs_shrink_extent_tree(sbi, __count_extent_cache(sbi)); + f2fs_shrink_read_extent_tree(sbi, __count_extent_cache(sbi, EX_READ));
spin_lock(&f2fs_list_lock); list_del_init(&sbi->s_list); diff --git a/include/trace/events/f2fs.h b/include/trace/events/f2fs.h index eb53e96b7a29c..5f58684f6107a 100644 --- a/include/trace/events/f2fs.h +++ b/include/trace/events/f2fs.h @@ -48,6 +48,7 @@ TRACE_DEFINE_ENUM(CP_DISCARD); TRACE_DEFINE_ENUM(CP_TRIMMED); TRACE_DEFINE_ENUM(CP_PAUSE); TRACE_DEFINE_ENUM(CP_RESIZE); +TRACE_DEFINE_ENUM(EX_READ);
#define show_block_type(type) \ __print_symbolic(type, \ @@ -1559,28 +1560,31 @@ TRACE_EVENT(f2fs_issue_flush,
TRACE_EVENT(f2fs_lookup_extent_tree_start,
- TP_PROTO(struct inode *inode, unsigned int pgofs), + TP_PROTO(struct inode *inode, unsigned int pgofs, enum extent_type type),
- TP_ARGS(inode, pgofs), + TP_ARGS(inode, pgofs, type),
TP_STRUCT__entry( __field(dev_t, dev) __field(ino_t, ino) __field(unsigned int, pgofs) + __field(enum extent_type, type) ),
TP_fast_assign( __entry->dev = inode->i_sb->s_dev; __entry->ino = inode->i_ino; __entry->pgofs = pgofs; + __entry->type = type; ),
- TP_printk("dev = (%d,%d), ino = %lu, pgofs = %u", + TP_printk("dev = (%d,%d), ino = %lu, pgofs = %u, type = %s", show_dev_ino(__entry), - __entry->pgofs) + __entry->pgofs, + __entry->type == EX_READ ? "Read" : "N/A") );
-TRACE_EVENT_CONDITION(f2fs_lookup_extent_tree_end, +TRACE_EVENT_CONDITION(f2fs_lookup_read_extent_tree_end,
TP_PROTO(struct inode *inode, unsigned int pgofs, struct extent_info *ei), @@ -1594,8 +1598,8 @@ TRACE_EVENT_CONDITION(f2fs_lookup_extent_tree_end, __field(ino_t, ino) __field(unsigned int, pgofs) __field(unsigned int, fofs) - __field(u32, blk) __field(unsigned int, len) + __field(u32, blk) ),
TP_fast_assign( @@ -1603,26 +1607,26 @@ TRACE_EVENT_CONDITION(f2fs_lookup_extent_tree_end, __entry->ino = inode->i_ino; __entry->pgofs = pgofs; __entry->fofs = ei->fofs; - __entry->blk = ei->blk; __entry->len = ei->len; + __entry->blk = ei->blk; ),
TP_printk("dev = (%d,%d), ino = %lu, pgofs = %u, " - "ext_info(fofs: %u, blk: %u, len: %u)", + "read_ext_info(fofs: %u, len: %u, blk: %u)", show_dev_ino(__entry), __entry->pgofs, __entry->fofs, - __entry->blk, - __entry->len) + __entry->len, + __entry->blk) );
-TRACE_EVENT(f2fs_update_extent_tree_range, +TRACE_EVENT(f2fs_update_read_extent_tree_range,
- TP_PROTO(struct inode *inode, unsigned int pgofs, block_t blkaddr, - unsigned int len, + TP_PROTO(struct inode *inode, unsigned int pgofs, unsigned int len, + block_t blkaddr, unsigned int c_len),
- TP_ARGS(inode, pgofs, blkaddr, len, c_len), + TP_ARGS(inode, pgofs, len, blkaddr, c_len),
TP_STRUCT__entry( __field(dev_t, dev) @@ -1637,67 +1641,73 @@ TRACE_EVENT(f2fs_update_extent_tree_range, __entry->dev = inode->i_sb->s_dev; __entry->ino = inode->i_ino; __entry->pgofs = pgofs; - __entry->blk = blkaddr; __entry->len = len; + __entry->blk = blkaddr; __entry->c_len = c_len; ),
TP_printk("dev = (%d,%d), ino = %lu, pgofs = %u, " - "blkaddr = %u, len = %u, " - "c_len = %u", + "len = %u, blkaddr = %u, c_len = %u", show_dev_ino(__entry), __entry->pgofs, - __entry->blk, __entry->len, + __entry->blk, __entry->c_len) );
TRACE_EVENT(f2fs_shrink_extent_tree,
TP_PROTO(struct f2fs_sb_info *sbi, unsigned int node_cnt, - unsigned int tree_cnt), + unsigned int tree_cnt, enum extent_type type),
- TP_ARGS(sbi, node_cnt, tree_cnt), + TP_ARGS(sbi, node_cnt, tree_cnt, type),
TP_STRUCT__entry( __field(dev_t, dev) __field(unsigned int, node_cnt) __field(unsigned int, tree_cnt) + __field(enum extent_type, type) ),
TP_fast_assign( __entry->dev = sbi->sb->s_dev; __entry->node_cnt = node_cnt; __entry->tree_cnt = tree_cnt; + __entry->type = type; ),
- TP_printk("dev = (%d,%d), shrunk: node_cnt = %u, tree_cnt = %u", + TP_printk("dev = (%d,%d), shrunk: node_cnt = %u, tree_cnt = %u, type = %s", show_dev(__entry->dev), __entry->node_cnt, - __entry->tree_cnt) + __entry->tree_cnt, + __entry->type == EX_READ ? "Read" : "N/A") );
TRACE_EVENT(f2fs_destroy_extent_tree,
- TP_PROTO(struct inode *inode, unsigned int node_cnt), + TP_PROTO(struct inode *inode, unsigned int node_cnt, + enum extent_type type),
- TP_ARGS(inode, node_cnt), + TP_ARGS(inode, node_cnt, type),
TP_STRUCT__entry( __field(dev_t, dev) __field(ino_t, ino) __field(unsigned int, node_cnt) + __field(enum extent_type, type) ),
TP_fast_assign( __entry->dev = inode->i_sb->s_dev; __entry->ino = inode->i_ino; __entry->node_cnt = node_cnt; + __entry->type = type; ),
- TP_printk("dev = (%d,%d), ino = %lu, destroyed: node_cnt = %u", + TP_printk("dev = (%d,%d), ino = %lu, destroyed: node_cnt = %u, type = %s", show_dev_ino(__entry), - __entry->node_cnt) + __entry->node_cnt, + __entry->type == EX_READ ? "Read" : "N/A") );
DECLARE_EVENT_CLASS(f2fs_sync_dirty_inodes,
From: Jaegeuk Kim jaegeuk@kernel.org
[ Upstream commit 72840cccc0a1a0a0dc1bb27b669a9111be6d0f6a ]
Let's allocate the extent_cache by default to remove the runtime complexity.
Signed-off-by: Jaegeuk Kim jaegeuk@kernel.org Stable-dep-of: 043d2d00b443 ("f2fs: factor out victim_entry usage from general rb_tree use") Signed-off-by: Sasha Levin sashal@kernel.org --- fs/f2fs/extent_cache.c | 38 +++++++++++++++++++------------------- fs/f2fs/f2fs.h | 3 ++- fs/f2fs/inode.c | 6 ++++-- fs/f2fs/namei.c | 4 ++-- 4 files changed, 27 insertions(+), 24 deletions(-)
diff --git a/fs/f2fs/extent_cache.c b/fs/f2fs/extent_cache.c index 4217076df1024..794a8134687ae 100644 --- a/fs/f2fs/extent_cache.c +++ b/fs/f2fs/extent_cache.c @@ -47,20 +47,23 @@ static bool __may_read_extent_tree(struct inode *inode) return S_ISREG(inode->i_mode); }
-static bool __may_extent_tree(struct inode *inode, enum extent_type type) +static bool __init_may_extent_tree(struct inode *inode, enum extent_type type) { - struct f2fs_sb_info *sbi = F2FS_I_SB(inode); + if (type == EX_READ) + return __may_read_extent_tree(inode); + return false; +}
+static bool __may_extent_tree(struct inode *inode, enum extent_type type) +{ /* * for recovered files during mount do not create extents * if shrinker is not registered. */ - if (list_empty(&sbi->s_list)) + if (list_empty(&F2FS_I_SB(inode)->s_list)) return false;
- if (type == EX_READ) - return __may_read_extent_tree(inode); - return false; + return __init_may_extent_tree(inode, type); }
static void __try_update_largest_extent(struct extent_tree *et, @@ -439,20 +442,18 @@ static void __drop_largest_extent(struct extent_tree *et, } }
-/* return true, if inode page is changed */ -static void __f2fs_init_extent_tree(struct inode *inode, struct page *ipage, - enum extent_type type) +void f2fs_init_read_extent_tree(struct inode *inode, struct page *ipage) { struct f2fs_sb_info *sbi = F2FS_I_SB(inode); - struct extent_tree_info *eti = &sbi->extent_tree[type]; - struct f2fs_extent *i_ext = ipage ? &F2FS_INODE(ipage)->i_ext : NULL; + struct extent_tree_info *eti = &sbi->extent_tree[EX_READ]; + struct f2fs_extent *i_ext = &F2FS_INODE(ipage)->i_ext; struct extent_tree *et; struct extent_node *en; struct extent_info ei;
- if (!__may_extent_tree(inode, type)) { + if (!__may_extent_tree(inode, EX_READ)) { /* drop largest read extent */ - if (type == EX_READ && i_ext && i_ext->len) { + if (i_ext && i_ext->len) { f2fs_wait_on_page_writeback(ipage, NODE, true, true); i_ext->len = 0; set_page_dirty(ipage); @@ -460,13 +461,11 @@ static void __f2fs_init_extent_tree(struct inode *inode, struct page *ipage, goto out; }
- et = __grab_extent_tree(inode, type); + et = __grab_extent_tree(inode, EX_READ);
if (!i_ext || !i_ext->len) goto out;
- BUG_ON(type != EX_READ); - get_read_extent_info(&ei, i_ext);
write_lock(&et->lock); @@ -486,14 +485,15 @@ static void __f2fs_init_extent_tree(struct inode *inode, struct page *ipage, unlock_out: write_unlock(&et->lock); out: - if (type == EX_READ && !F2FS_I(inode)->extent_tree[EX_READ]) + if (!F2FS_I(inode)->extent_tree[EX_READ]) set_inode_flag(inode, FI_NO_EXTENT); }
-void f2fs_init_extent_tree(struct inode *inode, struct page *ipage) +void f2fs_init_extent_tree(struct inode *inode) { /* initialize read cache */ - __f2fs_init_extent_tree(inode, ipage, EX_READ); + if (__init_may_extent_tree(inode, EX_READ)) + __grab_extent_tree(inode, EX_READ); }
static bool __lookup_extent_tree(struct inode *inode, pgoff_t pgofs, diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h index cf45af3a44a7a..3fc9d98112166 100644 --- a/fs/f2fs/f2fs.h +++ b/fs/f2fs/f2fs.h @@ -4147,7 +4147,7 @@ struct rb_entry *f2fs_lookup_rb_tree_ret(struct rb_root_cached *root, bool force, bool *leftmost); bool f2fs_check_rb_tree_consistence(struct f2fs_sb_info *sbi, struct rb_root_cached *root, bool check_key); -void f2fs_init_extent_tree(struct inode *inode, struct page *ipage); +void f2fs_init_extent_tree(struct inode *inode); void f2fs_drop_extent_tree(struct inode *inode); void f2fs_destroy_extent_node(struct inode *inode); void f2fs_destroy_extent_tree(struct inode *inode); @@ -4156,6 +4156,7 @@ int __init f2fs_create_extent_cache(void); void f2fs_destroy_extent_cache(void);
/* read extent cache ops */ +void f2fs_init_read_extent_tree(struct inode *inode, struct page *ipage); bool f2fs_lookup_read_extent_cache(struct inode *inode, pgoff_t pgofs, struct extent_info *ei); void f2fs_update_read_extent_cache(struct dnode_of_data *dn); diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c index 7bfe29626024d..2bda4e73fc1ce 100644 --- a/fs/f2fs/inode.c +++ b/fs/f2fs/inode.c @@ -392,8 +392,6 @@ static int do_read_inode(struct inode *inode) fi->i_pino = le32_to_cpu(ri->i_pino); fi->i_dir_level = ri->i_dir_level;
- f2fs_init_extent_tree(inode, node_page); - get_inline_info(inode, ri);
fi->i_extra_isize = f2fs_has_extra_attr(inode) ? @@ -479,6 +477,10 @@ static int do_read_inode(struct inode *inode) }
init_idisk_time(inode); + + /* Need all the flag bits */ + f2fs_init_read_extent_tree(inode, node_page); + f2fs_put_page(node_page, 1);
stat_inc_inline_xattr(inode); diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c index 51d0030bddb27..d879a295b688e 100644 --- a/fs/f2fs/namei.c +++ b/fs/f2fs/namei.c @@ -258,8 +258,6 @@ static struct inode *f2fs_new_inode(struct user_namespace *mnt_userns, } F2FS_I(inode)->i_inline_xattr_size = xattr_size;
- f2fs_init_extent_tree(inode, NULL); - F2FS_I(inode)->i_flags = f2fs_mask_flags(mode, F2FS_I(dir)->i_flags & F2FS_FL_INHERITED);
@@ -282,6 +280,8 @@ static struct inode *f2fs_new_inode(struct user_namespace *mnt_userns,
f2fs_set_inode_flags(inode);
+ f2fs_init_extent_tree(inode); + trace_f2fs_new_inode(inode, 0); return inode;
From: Jaegeuk Kim jaegeuk@kernel.org
[ Upstream commit 043d2d00b44310f84c0593c63e51fae88c829cdd ]
Let's reduce the complexity of mixed use of rb_tree in victim_entry from extent_cache and discard_cmd.
This should fix an arm32 memory alignment issue caused by the shared rb_entry.
[struct victim_entry]                   [struct rb_entry]
[0] struct rb_node rb_node;             [0] struct rb_node rb_node;
                                        union {
                                                struct {
                                                        unsigned int ofs;
                                                        unsigned int len;
                                                };
[16] unsigned long long mtime;          [12] unsigned long long key;
                                        } __packed;
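To make the mismatch concrete, here is a minimal userspace sketch (not part of the patch): struct rb_node is stood in for by three pointers, and the fake_rb_node, rb_entry_like and victim_entry_like names are invented for illustration only.

/* Minimal userspace sketch of the arm32 layout mismatch; not kernel code. */
#include <stdio.h>
#include <stddef.h>

struct fake_rb_node { void *a, *b, *c; };	/* 12 bytes on a 32-bit build */

struct rb_entry_like {				/* stands in for the shared rb_entry */
	struct fake_rb_node rb_node;
	union {
		struct {
			unsigned int ofs;
			unsigned int len;
		};
		unsigned long long key;		/* packed union: offset 12 when rb_node is 12 bytes */
	} __attribute__((packed));
};

struct victim_entry_like {			/* stands in for victim_entry */
	struct fake_rb_node rb_node;
	unsigned long long mtime;		/* naturally 8-byte aligned: offset 16 on arm32 */
	unsigned int segno;
};

int main(void)
{
	/* On arm32 (EABI) these print 12 and 16, so treating a victim_entry
	 * as an rb_entry reads the 64-bit key/mtime from the wrong offset. */
	printf("rb_entry key offset:       %zu\n", offsetof(struct rb_entry_like, key));
	printf("victim_entry mtime offset: %zu\n", offsetof(struct victim_entry_like, mtime));
	return 0;
}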
Cc: stable@vger.kernel.org Fixes: 093749e296e2 ("f2fs: support age threshold based garbage collection") Reviewed-by: Chao Yu chao@kernel.org Signed-off-by: Jaegeuk Kim jaegeuk@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- fs/f2fs/extent_cache.c | 36 +---------- fs/f2fs/f2fs.h | 15 +---- fs/f2fs/gc.c | 139 +++++++++++++++++++++++++---------------- fs/f2fs/gc.h | 14 +---- fs/f2fs/segment.c | 4 +- 5 files changed, 93 insertions(+), 115 deletions(-)
diff --git a/fs/f2fs/extent_cache.c b/fs/f2fs/extent_cache.c index 794a8134687ae..cb8fb2fdfce2a 100644 --- a/fs/f2fs/extent_cache.c +++ b/fs/f2fs/extent_cache.c @@ -149,29 +149,6 @@ struct rb_entry *f2fs_lookup_rb_tree(struct rb_root_cached *root, return re; }
-struct rb_node **f2fs_lookup_rb_tree_ext(struct f2fs_sb_info *sbi, - struct rb_root_cached *root, - struct rb_node **parent, - unsigned long long key, bool *leftmost) -{ - struct rb_node **p = &root->rb_root.rb_node; - struct rb_entry *re; - - while (*p) { - *parent = *p; - re = rb_entry(*parent, struct rb_entry, rb_node); - - if (key < re->key) { - p = &(*p)->rb_left; - } else { - p = &(*p)->rb_right; - *leftmost = false; - } - } - - return p; -} - struct rb_node **f2fs_lookup_rb_tree_for_insert(struct f2fs_sb_info *sbi, struct rb_root_cached *root, struct rb_node **parent, @@ -280,7 +257,7 @@ struct rb_entry *f2fs_lookup_rb_tree_ret(struct rb_root_cached *root, }
bool f2fs_check_rb_tree_consistence(struct f2fs_sb_info *sbi, - struct rb_root_cached *root, bool check_key) + struct rb_root_cached *root) { #ifdef CONFIG_F2FS_CHECK_FS struct rb_node *cur = rb_first_cached(root), *next; @@ -297,23 +274,12 @@ bool f2fs_check_rb_tree_consistence(struct f2fs_sb_info *sbi, cur_re = rb_entry(cur, struct rb_entry, rb_node); next_re = rb_entry(next, struct rb_entry, rb_node);
- if (check_key) { - if (cur_re->key > next_re->key) { - f2fs_info(sbi, "inconsistent rbtree, " - "cur(%llu) next(%llu)", - cur_re->key, next_re->key); - return false; - } - goto next; - } - if (cur_re->ofs + cur_re->len > next_re->ofs) { f2fs_info(sbi, "inconsistent rbtree, cur(%u, %u) next(%u, %u)", cur_re->ofs, cur_re->len, next_re->ofs, next_re->len); return false; } -next: cur = next; } #endif diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h index 3fc9d98112166..6fb08b3520a68 100644 --- a/fs/f2fs/f2fs.h +++ b/fs/f2fs/f2fs.h @@ -612,13 +612,8 @@ enum extent_type {
struct rb_entry { struct rb_node rb_node; /* rb node located in rb-tree */ - union { - struct { - unsigned int ofs; /* start offset of the entry */ - unsigned int len; /* length of the entry */ - }; - unsigned long long key; /* 64-bits key */ - } __packed; + unsigned int ofs; /* start offset of the entry */ + unsigned int len; /* length of the entry */ };
struct extent_info { @@ -4132,10 +4127,6 @@ void f2fs_leave_shrinker(struct f2fs_sb_info *sbi); */ struct rb_entry *f2fs_lookup_rb_tree(struct rb_root_cached *root, struct rb_entry *cached_re, unsigned int ofs); -struct rb_node **f2fs_lookup_rb_tree_ext(struct f2fs_sb_info *sbi, - struct rb_root_cached *root, - struct rb_node **parent, - unsigned long long key, bool *left_most); struct rb_node **f2fs_lookup_rb_tree_for_insert(struct f2fs_sb_info *sbi, struct rb_root_cached *root, struct rb_node **parent, @@ -4146,7 +4137,7 @@ struct rb_entry *f2fs_lookup_rb_tree_ret(struct rb_root_cached *root, struct rb_node ***insert_p, struct rb_node **insert_parent, bool force, bool *leftmost); bool f2fs_check_rb_tree_consistence(struct f2fs_sb_info *sbi, - struct rb_root_cached *root, bool check_key); + struct rb_root_cached *root); void f2fs_init_extent_tree(struct inode *inode); void f2fs_drop_extent_tree(struct inode *inode); void f2fs_destroy_extent_node(struct inode *inode); diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c index 543de12bf88c2..5cd19fdc10596 100644 --- a/fs/f2fs/gc.c +++ b/fs/f2fs/gc.c @@ -389,40 +389,95 @@ static unsigned int count_bits(const unsigned long *addr, return sum; }
-static struct victim_entry *attach_victim_entry(struct f2fs_sb_info *sbi, - unsigned long long mtime, unsigned int segno, - struct rb_node *parent, struct rb_node **p, - bool left_most) +static bool f2fs_check_victim_tree(struct f2fs_sb_info *sbi, + struct rb_root_cached *root) +{ +#ifdef CONFIG_F2FS_CHECK_FS + struct rb_node *cur = rb_first_cached(root), *next; + struct victim_entry *cur_ve, *next_ve; + + while (cur) { + next = rb_next(cur); + if (!next) + return true; + + cur_ve = rb_entry(cur, struct victim_entry, rb_node); + next_ve = rb_entry(next, struct victim_entry, rb_node); + + if (cur_ve->mtime > next_ve->mtime) { + f2fs_info(sbi, "broken victim_rbtree, " + "cur_mtime(%llu) next_mtime(%llu)", + cur_ve->mtime, next_ve->mtime); + return false; + } + cur = next; + } +#endif + return true; +} + +static struct victim_entry *__lookup_victim_entry(struct f2fs_sb_info *sbi, + unsigned long long mtime) +{ + struct atgc_management *am = &sbi->am; + struct rb_node *node = am->root.rb_root.rb_node; + struct victim_entry *ve = NULL; + + while (node) { + ve = rb_entry(node, struct victim_entry, rb_node); + + if (mtime < ve->mtime) + node = node->rb_left; + else + node = node->rb_right; + } + return ve; +} + +static struct victim_entry *__create_victim_entry(struct f2fs_sb_info *sbi, + unsigned long long mtime, unsigned int segno) { struct atgc_management *am = &sbi->am; struct victim_entry *ve;
- ve = f2fs_kmem_cache_alloc(victim_entry_slab, - GFP_NOFS, true, NULL); + ve = f2fs_kmem_cache_alloc(victim_entry_slab, GFP_NOFS, true, NULL);
ve->mtime = mtime; ve->segno = segno;
- rb_link_node(&ve->rb_node, parent, p); - rb_insert_color_cached(&ve->rb_node, &am->root, left_most); - list_add_tail(&ve->list, &am->victim_list); - am->victim_count++;
return ve; }
-static void insert_victim_entry(struct f2fs_sb_info *sbi, +static void __insert_victim_entry(struct f2fs_sb_info *sbi, unsigned long long mtime, unsigned int segno) { struct atgc_management *am = &sbi->am; - struct rb_node **p; + struct rb_root_cached *root = &am->root; + struct rb_node **p = &root->rb_root.rb_node; struct rb_node *parent = NULL; + struct victim_entry *ve; bool left_most = true;
- p = f2fs_lookup_rb_tree_ext(sbi, &am->root, &parent, mtime, &left_most); - attach_victim_entry(sbi, mtime, segno, parent, p, left_most); + /* look up rb tree to find parent node */ + while (*p) { + parent = *p; + ve = rb_entry(parent, struct victim_entry, rb_node); + + if (mtime < ve->mtime) { + p = &(*p)->rb_left; + } else { + p = &(*p)->rb_right; + left_most = false; + } + } + + ve = __create_victim_entry(sbi, mtime, segno); + + rb_link_node(&ve->rb_node, parent, p); + rb_insert_color_cached(&ve->rb_node, root, left_most); }
static void add_victim_entry(struct f2fs_sb_info *sbi, @@ -458,19 +513,7 @@ static void add_victim_entry(struct f2fs_sb_info *sbi, if (sit_i->dirty_max_mtime - mtime < p->age_threshold) return;
- insert_victim_entry(sbi, mtime, segno); -} - -static struct rb_node *lookup_central_victim(struct f2fs_sb_info *sbi, - struct victim_sel_policy *p) -{ - struct atgc_management *am = &sbi->am; - struct rb_node *parent = NULL; - bool left_most; - - f2fs_lookup_rb_tree_ext(sbi, &am->root, &parent, p->age, &left_most); - - return parent; + __insert_victim_entry(sbi, mtime, segno); }
static void atgc_lookup_victim(struct f2fs_sb_info *sbi, @@ -480,7 +523,6 @@ static void atgc_lookup_victim(struct f2fs_sb_info *sbi, struct atgc_management *am = &sbi->am; struct rb_root_cached *root = &am->root; struct rb_node *node; - struct rb_entry *re; struct victim_entry *ve; unsigned long long total_time; unsigned long long age, u, accu; @@ -507,12 +549,10 @@ static void atgc_lookup_victim(struct f2fs_sb_info *sbi,
node = rb_first_cached(root); next: - re = rb_entry_safe(node, struct rb_entry, rb_node); - if (!re) + ve = rb_entry_safe(node, struct victim_entry, rb_node); + if (!ve) return;
- ve = (struct victim_entry *)re; - if (ve->mtime >= max_mtime || ve->mtime < min_mtime) goto skip;
@@ -554,8 +594,6 @@ static void atssr_lookup_victim(struct f2fs_sb_info *sbi, { struct sit_info *sit_i = SIT_I(sbi); struct atgc_management *am = &sbi->am; - struct rb_node *node; - struct rb_entry *re; struct victim_entry *ve; unsigned long long age; unsigned long long max_mtime = sit_i->dirty_max_mtime; @@ -565,25 +603,22 @@ static void atssr_lookup_victim(struct f2fs_sb_info *sbi, unsigned int dirty_threshold = max(am->max_candidate_count, am->candidate_ratio * am->victim_count / 100); - unsigned int cost; - unsigned int iter = 0; + unsigned int cost, iter; int stage = 0;
if (max_mtime < min_mtime) return; max_mtime += 1; next_stage: - node = lookup_central_victim(sbi, p); + iter = 0; + ve = __lookup_victim_entry(sbi, p->age); next_node: - re = rb_entry_safe(node, struct rb_entry, rb_node); - if (!re) { - if (stage == 0) - goto skip_stage; + if (!ve) { + if (stage++ == 0) + goto next_stage; return; }
- ve = (struct victim_entry *)re; - if (ve->mtime >= max_mtime || ve->mtime < min_mtime) goto skip_node;
@@ -609,24 +644,20 @@ static void atssr_lookup_victim(struct f2fs_sb_info *sbi, } skip_node: if (iter < dirty_threshold) { - if (stage == 0) - node = rb_prev(node); - else if (stage == 1) - node = rb_next(node); + ve = rb_entry(stage == 0 ? rb_prev(&ve->rb_node) : + rb_next(&ve->rb_node), + struct victim_entry, rb_node); goto next_node; } -skip_stage: - if (stage < 1) { - stage++; - iter = 0; + + if (stage++ == 0) goto next_stage; - } } + static void lookup_victim_by_age(struct f2fs_sb_info *sbi, struct victim_sel_policy *p) { - f2fs_bug_on(sbi, !f2fs_check_rb_tree_consistence(sbi, - &sbi->am.root, true)); + f2fs_bug_on(sbi, !f2fs_check_victim_tree(sbi, &sbi->am.root));
if (p->gc_mode == GC_AT) atgc_lookup_victim(sbi, p); diff --git a/fs/f2fs/gc.h b/fs/f2fs/gc.h index 19b956c2d697a..ca84024b9c9e7 100644 --- a/fs/f2fs/gc.h +++ b/fs/f2fs/gc.h @@ -55,20 +55,10 @@ struct gc_inode_list { struct radix_tree_root iroot; };
-struct victim_info { - unsigned long long mtime; /* mtime of section */ - unsigned int segno; /* section No. */ -}; - struct victim_entry { struct rb_node rb_node; /* rb node located in rb-tree */ - union { - struct { - unsigned long long mtime; /* mtime of section */ - unsigned int segno; /* segment No. */ - }; - struct victim_info vi; /* victim info */ - }; + unsigned long long mtime; /* mtime of section */ + unsigned int segno; /* segment No. */ struct list_head list; };
diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c index 7d5bea9d92641..cbbf95b995414 100644 --- a/fs/f2fs/segment.c +++ b/fs/f2fs/segment.c @@ -1474,7 +1474,7 @@ static int __issue_discard_cmd(struct f2fs_sb_info *sbi, goto next; if (unlikely(dcc->rbtree_check)) f2fs_bug_on(sbi, !f2fs_check_rb_tree_consistence(sbi, - &dcc->root, false)); + &dcc->root)); blk_start_plug(&plug); list_for_each_entry_safe(dc, tmp, pend_list, list) { f2fs_bug_on(sbi, dc->state != D_PREP); @@ -3002,7 +3002,7 @@ static unsigned int __issue_discard_cmd_range(struct f2fs_sb_info *sbi, mutex_lock(&dcc->cmd_lock); if (unlikely(dcc->rbtree_check)) f2fs_bug_on(sbi, !f2fs_check_rb_tree_consistence(sbi, - &dcc->root, false)); + &dcc->root));
dc = (struct discard_cmd *)f2fs_lookup_rb_tree_ret(&dcc->root, NULL, start,
From: Rob Clark robdclark@chromium.org
[ Upstream commit cade05b2a88558847984287dd389fae0c7de31d6 ]
The _HI reg always follows the _LO reg, so there is no need to pass these offsets separately.
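For illustration, this is what the change looks like at a typical call site (taken from the a4xx hunk below); the helper now derives the _HI offset as reg + 1 internally:

	/* before: both halves of the register pair passed explicitly */
	*value = gpu_read64(gpu, REG_A4XX_RBBM_PERFCTR_CP_0_LO,
			    REG_A4XX_RBBM_PERFCTR_CP_0_HI);

	/* after: only the _LO offset is passed; the helper reads reg and reg + 1 */
	*value = gpu_read64(gpu, REG_A4XX_RBBM_PERFCTR_CP_0_LO);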
Signed-off-by: Rob Clark robdclark@chromium.org Reviewed-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org Reviewed-by: Akhil P Oommen quic_akhilpo@quicinc.com Patchwork: https://patchwork.freedesktop.org/patch/511581/ Link: https://lore.kernel.org/r/20221114193049.1533391-2-robdclark@gmail.com Stable-dep-of: ca090c837b43 ("drm/msm: fix missing wq allocation error handling") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/msm/adreno/a4xx_gpu.c | 3 +-- drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 27 ++++++++------------- drivers/gpu/drm/msm/adreno/a5xx_preempt.c | 4 +-- drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 24 ++++++------------ drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c | 3 +-- drivers/gpu/drm/msm/msm_gpu.h | 12 ++++----- 6 files changed, 27 insertions(+), 46 deletions(-)
diff --git a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c index 7cb8d9849c073..a10feb8a4194a 100644 --- a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c @@ -606,8 +606,7 @@ static int a4xx_pm_suspend(struct msm_gpu *gpu) {
static int a4xx_get_timestamp(struct msm_gpu *gpu, uint64_t *value) { - *value = gpu_read64(gpu, REG_A4XX_RBBM_PERFCTR_CP_0_LO, - REG_A4XX_RBBM_PERFCTR_CP_0_HI); + *value = gpu_read64(gpu, REG_A4XX_RBBM_PERFCTR_CP_0_LO);
return 0; } diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c index 02ff306f96f42..24feae285ccd6 100644 --- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c @@ -605,11 +605,9 @@ static int a5xx_ucode_init(struct msm_gpu *gpu) a5xx_ucode_check_version(a5xx_gpu, a5xx_gpu->pfp_bo); }
- gpu_write64(gpu, REG_A5XX_CP_ME_INSTR_BASE_LO, - REG_A5XX_CP_ME_INSTR_BASE_HI, a5xx_gpu->pm4_iova); + gpu_write64(gpu, REG_A5XX_CP_ME_INSTR_BASE_LO, a5xx_gpu->pm4_iova);
- gpu_write64(gpu, REG_A5XX_CP_PFP_INSTR_BASE_LO, - REG_A5XX_CP_PFP_INSTR_BASE_HI, a5xx_gpu->pfp_iova); + gpu_write64(gpu, REG_A5XX_CP_PFP_INSTR_BASE_LO, a5xx_gpu->pfp_iova);
return 0; } @@ -868,8 +866,7 @@ static int a5xx_hw_init(struct msm_gpu *gpu) * memory rendering at this point in time and we don't want to block off * part of the virtual memory space. */ - gpu_write64(gpu, REG_A5XX_RBBM_SECVID_TSB_TRUSTED_BASE_LO, - REG_A5XX_RBBM_SECVID_TSB_TRUSTED_BASE_HI, 0x00000000); + gpu_write64(gpu, REG_A5XX_RBBM_SECVID_TSB_TRUSTED_BASE_LO, 0x00000000); gpu_write(gpu, REG_A5XX_RBBM_SECVID_TSB_TRUSTED_SIZE, 0x00000000);
/* Put the GPU into 64 bit by default */ @@ -908,8 +905,7 @@ static int a5xx_hw_init(struct msm_gpu *gpu) return ret;
/* Set the ringbuffer address */ - gpu_write64(gpu, REG_A5XX_CP_RB_BASE, REG_A5XX_CP_RB_BASE_HI, - gpu->rb[0]->iova); + gpu_write64(gpu, REG_A5XX_CP_RB_BASE, gpu->rb[0]->iova);
/* * If the microcode supports the WHERE_AM_I opcode then we can use that @@ -936,7 +932,7 @@ static int a5xx_hw_init(struct msm_gpu *gpu) }
gpu_write64(gpu, REG_A5XX_CP_RB_RPTR_ADDR, - REG_A5XX_CP_RB_RPTR_ADDR_HI, shadowptr(a5xx_gpu, gpu->rb[0])); + shadowptr(a5xx_gpu, gpu->rb[0])); } else if (gpu->nr_rings > 1) { /* Disable preemption if WHERE_AM_I isn't available */ a5xx_preempt_fini(gpu); @@ -1239,9 +1235,9 @@ static void a5xx_fault_detect_irq(struct msm_gpu *gpu) gpu_read(gpu, REG_A5XX_RBBM_STATUS), gpu_read(gpu, REG_A5XX_CP_RB_RPTR), gpu_read(gpu, REG_A5XX_CP_RB_WPTR), - gpu_read64(gpu, REG_A5XX_CP_IB1_BASE, REG_A5XX_CP_IB1_BASE_HI), + gpu_read64(gpu, REG_A5XX_CP_IB1_BASE), gpu_read(gpu, REG_A5XX_CP_IB1_BUFSZ), - gpu_read64(gpu, REG_A5XX_CP_IB2_BASE, REG_A5XX_CP_IB2_BASE_HI), + gpu_read64(gpu, REG_A5XX_CP_IB2_BASE), gpu_read(gpu, REG_A5XX_CP_IB2_BUFSZ));
/* Turn off the hangcheck timer to keep it from bothering us */ @@ -1427,8 +1423,7 @@ static int a5xx_pm_suspend(struct msm_gpu *gpu)
static int a5xx_get_timestamp(struct msm_gpu *gpu, uint64_t *value) { - *value = gpu_read64(gpu, REG_A5XX_RBBM_ALWAYSON_COUNTER_LO, - REG_A5XX_RBBM_ALWAYSON_COUNTER_HI); + *value = gpu_read64(gpu, REG_A5XX_RBBM_ALWAYSON_COUNTER_LO);
return 0; } @@ -1465,8 +1460,7 @@ static int a5xx_crashdumper_run(struct msm_gpu *gpu, if (IS_ERR_OR_NULL(dumper->ptr)) return -EINVAL;
- gpu_write64(gpu, REG_A5XX_CP_CRASH_SCRIPT_BASE_LO, - REG_A5XX_CP_CRASH_SCRIPT_BASE_HI, dumper->iova); + gpu_write64(gpu, REG_A5XX_CP_CRASH_SCRIPT_BASE_LO, dumper->iova);
gpu_write(gpu, REG_A5XX_CP_CRASH_DUMP_CNTL, 1);
@@ -1666,8 +1660,7 @@ static u64 a5xx_gpu_busy(struct msm_gpu *gpu, unsigned long *out_sample_rate) { u64 busy_cycles;
- busy_cycles = gpu_read64(gpu, REG_A5XX_RBBM_PERFCTR_RBBM_0_LO, - REG_A5XX_RBBM_PERFCTR_RBBM_0_HI); + busy_cycles = gpu_read64(gpu, REG_A5XX_RBBM_PERFCTR_RBBM_0_LO); *out_sample_rate = clk_get_rate(gpu->core_clk);
return busy_cycles; diff --git a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c index e0eef47dae632..f58dd564d122b 100644 --- a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c +++ b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c @@ -137,7 +137,6 @@ void a5xx_preempt_trigger(struct msm_gpu *gpu)
/* Set the address of the incoming preemption record */ gpu_write64(gpu, REG_A5XX_CP_CONTEXT_SWITCH_RESTORE_ADDR_LO, - REG_A5XX_CP_CONTEXT_SWITCH_RESTORE_ADDR_HI, a5xx_gpu->preempt_iova[ring->id]);
a5xx_gpu->next_ring = ring; @@ -212,8 +211,7 @@ void a5xx_preempt_hw_init(struct msm_gpu *gpu) }
/* Write a 0 to signal that we aren't switching pagetables */ - gpu_write64(gpu, REG_A5XX_CP_CONTEXT_SWITCH_SMMU_INFO_LO, - REG_A5XX_CP_CONTEXT_SWITCH_SMMU_INFO_HI, 0); + gpu_write64(gpu, REG_A5XX_CP_CONTEXT_SWITCH_SMMU_INFO_LO, 0);
/* Reset the preemption state */ set_preempt_state(a5xx_gpu, PREEMPT_NONE); diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c index 9d7fc44c1e2a9..dc53466864b05 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c @@ -247,8 +247,7 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit) OUT_RING(ring, submit->seqno);
trace_msm_gpu_submit_flush(submit, - gpu_read64(gpu, REG_A6XX_CP_ALWAYS_ON_COUNTER_LO, - REG_A6XX_CP_ALWAYS_ON_COUNTER_HI)); + gpu_read64(gpu, REG_A6XX_CP_ALWAYS_ON_COUNTER_LO));
a6xx_flush(gpu, ring); } @@ -947,8 +946,7 @@ static int a6xx_ucode_init(struct msm_gpu *gpu) } }
- gpu_write64(gpu, REG_A6XX_CP_SQE_INSTR_BASE, - REG_A6XX_CP_SQE_INSTR_BASE+1, a6xx_gpu->sqe_iova); + gpu_write64(gpu, REG_A6XX_CP_SQE_INSTR_BASE, a6xx_gpu->sqe_iova);
return 0; } @@ -999,8 +997,7 @@ static int hw_init(struct msm_gpu *gpu) * memory rendering at this point in time and we don't want to block off * part of the virtual memory space. */ - gpu_write64(gpu, REG_A6XX_RBBM_SECVID_TSB_TRUSTED_BASE_LO, - REG_A6XX_RBBM_SECVID_TSB_TRUSTED_BASE_HI, 0x00000000); + gpu_write64(gpu, REG_A6XX_RBBM_SECVID_TSB_TRUSTED_BASE_LO, 0x00000000); gpu_write(gpu, REG_A6XX_RBBM_SECVID_TSB_TRUSTED_SIZE, 0x00000000);
/* Turn on 64 bit addressing for all blocks */ @@ -1049,11 +1046,9 @@ static int hw_init(struct msm_gpu *gpu)
if (!adreno_is_a650_family(adreno_gpu)) { /* Set the GMEM VA range [0x100000:0x100000 + gpu->gmem - 1] */ - gpu_write64(gpu, REG_A6XX_UCHE_GMEM_RANGE_MIN_LO, - REG_A6XX_UCHE_GMEM_RANGE_MIN_HI, 0x00100000); + gpu_write64(gpu, REG_A6XX_UCHE_GMEM_RANGE_MIN_LO, 0x00100000);
gpu_write64(gpu, REG_A6XX_UCHE_GMEM_RANGE_MAX_LO, - REG_A6XX_UCHE_GMEM_RANGE_MAX_HI, 0x00100000 + adreno_gpu->gmem - 1); }
@@ -1145,8 +1140,7 @@ static int hw_init(struct msm_gpu *gpu) goto out;
/* Set the ringbuffer address */ - gpu_write64(gpu, REG_A6XX_CP_RB_BASE, REG_A6XX_CP_RB_BASE_HI, - gpu->rb[0]->iova); + gpu_write64(gpu, REG_A6XX_CP_RB_BASE, gpu->rb[0]->iova);
/* Targets that support extended APRIV can use the RPTR shadow from * hardware but all the other ones need to disable the feature. Targets @@ -1178,7 +1172,6 @@ static int hw_init(struct msm_gpu *gpu) }
gpu_write64(gpu, REG_A6XX_CP_RB_RPTR_ADDR_LO, - REG_A6XX_CP_RB_RPTR_ADDR_HI, shadowptr(a6xx_gpu, gpu->rb[0])); }
@@ -1506,9 +1499,9 @@ static void a6xx_fault_detect_irq(struct msm_gpu *gpu) gpu_read(gpu, REG_A6XX_RBBM_STATUS), gpu_read(gpu, REG_A6XX_CP_RB_RPTR), gpu_read(gpu, REG_A6XX_CP_RB_WPTR), - gpu_read64(gpu, REG_A6XX_CP_IB1_BASE, REG_A6XX_CP_IB1_BASE_HI), + gpu_read64(gpu, REG_A6XX_CP_IB1_BASE), gpu_read(gpu, REG_A6XX_CP_IB1_REM_SIZE), - gpu_read64(gpu, REG_A6XX_CP_IB2_BASE, REG_A6XX_CP_IB2_BASE_HI), + gpu_read64(gpu, REG_A6XX_CP_IB2_BASE), gpu_read(gpu, REG_A6XX_CP_IB2_REM_SIZE));
/* Turn off the hangcheck timer to keep it from bothering us */ @@ -1719,8 +1712,7 @@ static int a6xx_get_timestamp(struct msm_gpu *gpu, uint64_t *value) /* Force the GPU power on so we can read this register */ a6xx_gmu_set_oob(&a6xx_gpu->gmu, GMU_OOB_PERFCOUNTER_SET);
- *value = gpu_read64(gpu, REG_A6XX_CP_ALWAYS_ON_COUNTER_LO, - REG_A6XX_CP_ALWAYS_ON_COUNTER_HI); + *value = gpu_read64(gpu, REG_A6XX_CP_ALWAYS_ON_COUNTER_LO);
a6xx_gmu_clear_oob(&a6xx_gpu->gmu, GMU_OOB_PERFCOUNTER_SET);
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c index a5c3d1ed255a6..a023d5f962dce 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c @@ -147,8 +147,7 @@ static int a6xx_crashdumper_run(struct msm_gpu *gpu, /* Make sure all pending memory writes are posted */ wmb();
- gpu_write64(gpu, REG_A6XX_CP_CRASH_SCRIPT_BASE_LO, - REG_A6XX_CP_CRASH_SCRIPT_BASE_HI, dumper->iova); + gpu_write64(gpu, REG_A6XX_CP_CRASH_SCRIPT_BASE_LO, dumper->iova);
gpu_write(gpu, REG_A6XX_CP_CRASH_DUMP_CNTL, 1);
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index a89bfdc3d7f90..7a36e0784f067 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -548,7 +548,7 @@ static inline void gpu_rmw(struct msm_gpu *gpu, u32 reg, u32 mask, u32 or) msm_rmw(gpu->mmio + (reg << 2), mask, or); }
-static inline u64 gpu_read64(struct msm_gpu *gpu, u32 lo, u32 hi) +static inline u64 gpu_read64(struct msm_gpu *gpu, u32 reg) { u64 val;
@@ -566,17 +566,17 @@ static inline u64 gpu_read64(struct msm_gpu *gpu, u32 lo, u32 hi) * when the lo is read, so make sure to read the lo first to trigger * that */ - val = (u64) msm_readl(gpu->mmio + (lo << 2)); - val |= ((u64) msm_readl(gpu->mmio + (hi << 2)) << 32); + val = (u64) msm_readl(gpu->mmio + (reg << 2)); + val |= ((u64) msm_readl(gpu->mmio + ((reg + 1) << 2)) << 32);
return val; }
-static inline void gpu_write64(struct msm_gpu *gpu, u32 lo, u32 hi, u64 val) +static inline void gpu_write64(struct msm_gpu *gpu, u32 reg, u64 val) { /* Why not a writeq here? Read the screed above */ - msm_writel(lower_32_bits(val), gpu->mmio + (lo << 2)); - msm_writel(upper_32_bits(val), gpu->mmio + (hi << 2)); + msm_writel(lower_32_bits(val), gpu->mmio + (reg << 2)); + msm_writel(upper_32_bits(val), gpu->mmio + ((reg + 1) << 2)); }
int msm_gpu_pm_suspend(struct msm_gpu *gpu);
From: Rob Clark robdclark@chromium.org
[ Upstream commit d73b1d02de0858b96f743e1e8b767fb092ae4c1b ]
If the hangcheck timer expires, check if the fw's position in the cmdstream has advanced (changed) since the last timer expiration, and allow it up to three additional "extensions" to its allotted time. The intention is to continue to catch "shader stuck in a loop" type hangs quickly, but to allow more time for things that are actually making forward progress.
Because we need to sample the CP state twice to detect a lack of progress, this also cuts the timer's duration in half.
v2: Fix typo (REG_A6XX_CP_CSQ_IB2_STAT), add comment v3: Only halve hangcheck timer duration for generations which support progress detection (hdanton); removed unused a5xx progress (without knowing how to adjust for data buffered in ROQ it is too likely to report a false negative) v4: Comment updates to better describe the total hangcheck duration when progress detection is applied
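As a rough illustration of the resulting timing, here is a standalone sketch (not kernel code); the 500 ms default period, the halving, and the three-retry limit come from this patch, while HANGCHECK_PERIOD_MS and PROGRESS_RETRIES are stand-ins for the real DRM_MSM_HANGCHECK_* constants and the arithmetic is only approximate:

#include <stdbool.h>
#include <stdio.h>

#define HANGCHECK_PERIOD_MS	(500 / 2)	/* default period, halved when progress detection exists */
#define PROGRESS_RETRIES	3		/* mirrors DRM_MSM_HANGCHECK_PROGRESS_RETRIES */

int main(void)
{
	bool cp_advancing = true;	/* CP position keeps moving, but no fence retires */
	int retries = 0, elapsed_ms = 0;

	for (;;) {
		elapsed_ms += HANGCHECK_PERIOD_MS;
		if (cp_advancing && retries < PROGRESS_RETRIES) {
			retries++;	/* grant one "extension" */
			continue;
		}
		break;			/* hangcheck would declare a lockup here */
	}

	/* prints ~1000 ms: three extensions plus the final expiration */
	printf("declared hung after roughly %d ms without a retired fence\n", elapsed_ms);
	return 0;
}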
Reviewed-by: Chia-I Wu olvaffe@gmail.com Tested-by: Chia-I Wu olvaffe@gmail.com # dEQP-GLES2.functional.flush_finish.wait Signed-off-by: Rob Clark robdclark@chromium.org Reviewed-by: Akhil P Oommen quic_akhilpo@quicinc.com Patchwork: https://patchwork.freedesktop.org/patch/511584/ Link: https://lore.kernel.org/r/20221114193049.1533391-3-robdclark@gmail.com Stable-dep-of: ca090c837b43 ("drm/msm: fix missing wq allocation error handling") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 34 +++++++++++++++++++++++++++ drivers/gpu/drm/msm/msm_drv.c | 1 - drivers/gpu/drm/msm/msm_drv.h | 8 ++++++- drivers/gpu/drm/msm/msm_gpu.c | 31 +++++++++++++++++++++++- drivers/gpu/drm/msm/msm_gpu.h | 10 ++++++++ drivers/gpu/drm/msm/msm_ringbuffer.h | 28 ++++++++++++++++++++++ 6 files changed, 109 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c index dc53466864b05..95e73eddc5e91 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c @@ -1850,6 +1850,39 @@ static uint32_t a6xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring) return ring->memptrs->rptr = gpu_read(gpu, REG_A6XX_CP_RB_RPTR); }
+static bool a6xx_progress(struct msm_gpu *gpu, struct msm_ringbuffer *ring) +{ + struct msm_cp_state cp_state = { + .ib1_base = gpu_read64(gpu, REG_A6XX_CP_IB1_BASE), + .ib2_base = gpu_read64(gpu, REG_A6XX_CP_IB2_BASE), + .ib1_rem = gpu_read(gpu, REG_A6XX_CP_IB1_REM_SIZE), + .ib2_rem = gpu_read(gpu, REG_A6XX_CP_IB2_REM_SIZE), + }; + bool progress; + + /* + * Adjust the remaining data to account for what has already been + * fetched from memory, but not yet consumed by the SQE. + * + * This is not *technically* correct, the amount buffered could + * exceed the IB size due to hw prefetching ahead, but: + * + * (1) We aren't trying to find the exact position, just whether + * progress has been made + * (2) The CP_REG_TO_MEM at the end of a submit should be enough + * to prevent prefetching into an unrelated submit. (And + * either way, at some point the ROQ will be full.) + */ + cp_state.ib1_rem += gpu_read(gpu, REG_A6XX_CP_CSQ_IB1_STAT) >> 16; + cp_state.ib2_rem += gpu_read(gpu, REG_A6XX_CP_CSQ_IB2_STAT) >> 16; + + progress = !!memcmp(&cp_state, &ring->last_cp_state, sizeof(cp_state)); + + ring->last_cp_state = cp_state; + + return progress; +} + static u32 a618_get_speed_bin(u32 fuse) { if (fuse == 0) @@ -1966,6 +1999,7 @@ static const struct adreno_gpu_funcs funcs = { .create_address_space = a6xx_create_address_space, .create_private_address_space = a6xx_create_private_address_space, .get_rptr = a6xx_get_rptr, + .progress = a6xx_progress, }, .get_timestamp = a6xx_get_timestamp, }; diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index 69bc2e5b4aaeb..5eeb4655fbf17 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -433,7 +433,6 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv) priv->dev = ddev;
priv->wq = alloc_ordered_workqueue("msm", 0); - priv->hangcheck_period = DRM_MSM_HANGCHECK_DEFAULT_PERIOD;
INIT_LIST_HEAD(&priv->objects); mutex_init(&priv->obj_lock); diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index b2ea262296a4f..d4e0ef608950e 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -224,7 +224,13 @@ struct msm_drm_private {
struct drm_atomic_state *pm_state;
- /* For hang detection, in ms */ + /** + * hangcheck_period: For hang detection, in ms + * + * Note that in practice, a submit/job will get at least two hangcheck + * periods, due to checking for progress being implemented as simply + * "have the CP position registers changed since last time?" + */ unsigned int hangcheck_period;
/** diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index 4f495eecc34ba..3802495003258 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -494,6 +494,21 @@ static void hangcheck_timer_reset(struct msm_gpu *gpu) round_jiffies_up(jiffies + msecs_to_jiffies(priv->hangcheck_period))); }
+static bool made_progress(struct msm_gpu *gpu, struct msm_ringbuffer *ring) +{ + if (ring->hangcheck_progress_retries >= DRM_MSM_HANGCHECK_PROGRESS_RETRIES) + return false; + + if (!gpu->funcs->progress) + return false; + + if (!gpu->funcs->progress(gpu, ring)) + return false; + + ring->hangcheck_progress_retries++; + return true; +} + static void hangcheck_handler(struct timer_list *t) { struct msm_gpu *gpu = from_timer(gpu, t, hangcheck_timer); @@ -504,9 +519,12 @@ static void hangcheck_handler(struct timer_list *t) if (fence != ring->hangcheck_fence) { /* some progress has been made.. ya! */ ring->hangcheck_fence = fence; - } else if (fence_before(fence, ring->fctx->last_fence)) { + ring->hangcheck_progress_retries = 0; + } else if (fence_before(fence, ring->fctx->last_fence) && + !made_progress(gpu, ring)) { /* no progress and not done.. hung! */ ring->hangcheck_fence = fence; + ring->hangcheck_progress_retries = 0; DRM_DEV_ERROR(dev->dev, "%s: hangcheck detected gpu lockup rb %d!\n", gpu->name, ring->id); DRM_DEV_ERROR(dev->dev, "%s: completed fence: %u\n", @@ -832,6 +850,7 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev, struct msm_gpu *gpu, const struct msm_gpu_funcs *funcs, const char *name, struct msm_gpu_config *config) { + struct msm_drm_private *priv = drm->dev_private; int i, ret, nr_rings = config->nr_rings; void *memptrs; uint64_t memptrs_iova; @@ -859,6 +878,16 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev, kthread_init_work(&gpu->recover_work, recover_worker); kthread_init_work(&gpu->fault_work, fault_worker);
+ priv->hangcheck_period = DRM_MSM_HANGCHECK_DEFAULT_PERIOD; + + /* + * If progress detection is supported, halve the hangcheck timer + * duration, as it takes two iterations of the hangcheck handler + * to detect a hang. + */ + if (funcs->progress) + priv->hangcheck_period /= 2; + timer_setup(&gpu->hangcheck_timer, hangcheck_handler, 0);
spin_lock_init(&gpu->perf_lock); diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index 7a36e0784f067..732295e256834 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -78,6 +78,15 @@ struct msm_gpu_funcs { struct msm_gem_address_space *(*create_private_address_space) (struct msm_gpu *gpu); uint32_t (*get_rptr)(struct msm_gpu *gpu, struct msm_ringbuffer *ring); + + /** + * progress: Has the GPU made progress? + * + * Return true if GPU position in cmdstream has advanced (or changed) + * since the last call. To avoid false negatives, this should account + * for cmdstream that is buffered in this FIFO upstream of the CP fw. + */ + bool (*progress)(struct msm_gpu *gpu, struct msm_ringbuffer *ring); };
/* Additional state for iommu faults: */ @@ -237,6 +246,7 @@ struct msm_gpu { #define DRM_MSM_INACTIVE_PERIOD 66 /* in ms (roughly four frames) */
#define DRM_MSM_HANGCHECK_DEFAULT_PERIOD 500 /* in ms */ +#define DRM_MSM_HANGCHECK_PROGRESS_RETRIES 3 struct timer_list hangcheck_timer;
/* Fault info for most recent iova fault: */ diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.h b/drivers/gpu/drm/msm/msm_ringbuffer.h index 2a5045abe46e8..698b333abccd6 100644 --- a/drivers/gpu/drm/msm/msm_ringbuffer.h +++ b/drivers/gpu/drm/msm/msm_ringbuffer.h @@ -35,6 +35,11 @@ struct msm_rbmemptrs { volatile u64 ttbr0; };
+struct msm_cp_state { + uint64_t ib1_base, ib2_base; + uint32_t ib1_rem, ib2_rem; +}; + struct msm_ringbuffer { struct msm_gpu *gpu; int id; @@ -64,6 +69,29 @@ struct msm_ringbuffer { uint64_t memptrs_iova; struct msm_fence_context *fctx;
+ /** + * hangcheck_progress_retries: + * + * The number of extra hangcheck duration cycles that we have given + * due to it appearing that the GPU is making forward progress. + * + * For GPU generations which support progress detection (see. + * msm_gpu_funcs::progress()), if the GPU appears to be making progress + * (ie. the CP has advanced in the command stream, we'll allow up to + * DRM_MSM_HANGCHECK_PROGRESS_RETRIES expirations of the hangcheck timer + * before killing the job. But to detect progress we need two sample + * points, so the duration of the hangcheck timer is halved. In other + * words we'll let the submit run for up to: + * + * (DRM_MSM_HANGCHECK_DEFAULT_PERIOD / 2) * (DRM_MSM_HANGCHECK_PROGRESS_RETRIES + 1) + */ + int hangcheck_progress_retries; + + /** + * last_cp_state: The state of the CP at the last call to gpu->progress() + */ + struct msm_cp_state last_cp_state; + /* * preempt_lock protects preemption and serializes wptr updates against * preemption. Can be aquired from irq context.
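For reference, plugging in the defaults from the hunks above: with DRM_MSM_HANGCHECK_DEFAULT_PERIOD = 500 ms and DRM_MSM_HANGCHECK_PROGRESS_RETRIES = 3, a submit that keeps appearing to make progress can run for up to

	(500 ms / 2) * (3 + 1) = 1000 ms

before the hangcheck handler declares a lockup, while a submit whose CP state never changes is still caught after roughly two of the halved periods, i.e. about the original 500 ms window.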
From: Johan Hovold johan+linaro@kernel.org
[ Upstream commit ca090c837b430752038b24e56dd182010d77f6f6 ]
Add the missing sanity check to handle workqueue allocation failures.
Fixes: c8afe684c95c ("drm/msm: basic KMS driver for snapdragon") Cc: stable@vger.kernel.org # 3.12 Cc: Rob Clark robdclark@gmail.com Signed-off-by: Johan Hovold johan+linaro@kernel.org Reviewed-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org Patchwork: https://patchwork.freedesktop.org/patch/525102/ Link: https://lore.kernel.org/r/20230306100722.28485-8-johan+linaro@kernel.org Signed-off-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/msm/msm_drv.c | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index 5eeb4655fbf17..ac3d1d492a48c 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -433,6 +433,10 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv) priv->dev = ddev;
priv->wq = alloc_ordered_workqueue("msm", 0); + if (!priv->wq) { + ret = -ENOMEM; + goto err_put_dev; + }
INIT_LIST_HEAD(&priv->objects); mutex_init(&priv->obj_lock);
From: Huacai Chen chenhuacai@loongson.cn
[ Upstream commit 3d12938dbc048ecb193fec69898d95f6b4813a4b ]
1. Adjust acpi_cascade_irqdomain_init() to return an error code and check its return value. 2. Combine unnecessarily short lines into single longer lines.
Signed-off-by: Huacai Chen chenhuacai@loongson.cn Signed-off-by: Marc Zyngier maz@kernel.org Link: https://lore.kernel.org/r/20221020142514.1725514-1-chenhuacai@loongson.cn Stable-dep-of: 64cc451e45e1 ("irqchip/loongson-eiointc: Fix incorrect use of acpi_get_vec_parent") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/irqchip/irq-loongarch-cpu.c | 30 +++++++++++++++----------- drivers/irqchip/irq-loongson-eiointc.c | 30 +++++++++++++++----------- drivers/irqchip/irq-loongson-pch-pic.c | 15 +++++++------ 3 files changed, 44 insertions(+), 31 deletions(-)
diff --git a/drivers/irqchip/irq-loongarch-cpu.c b/drivers/irqchip/irq-loongarch-cpu.c index 741612ba6a520..fdec3e9cfacfb 100644 --- a/drivers/irqchip/irq-loongarch-cpu.c +++ b/drivers/irqchip/irq-loongarch-cpu.c @@ -92,18 +92,16 @@ static const struct irq_domain_ops loongarch_cpu_intc_irq_domain_ops = { .xlate = irq_domain_xlate_onecell, };
-static int __init -liointc_parse_madt(union acpi_subtable_headers *header, - const unsigned long end) +static int __init liointc_parse_madt(union acpi_subtable_headers *header, + const unsigned long end) { struct acpi_madt_lio_pic *liointc_entry = (struct acpi_madt_lio_pic *)header;
return liointc_acpi_init(irq_domain, liointc_entry); }
-static int __init -eiointc_parse_madt(union acpi_subtable_headers *header, - const unsigned long end) +static int __init eiointc_parse_madt(union acpi_subtable_headers *header, + const unsigned long end) { struct acpi_madt_eio_pic *eiointc_entry = (struct acpi_madt_eio_pic *)header;
@@ -112,16 +110,24 @@ eiointc_parse_madt(union acpi_subtable_headers *header,
static int __init acpi_cascade_irqdomain_init(void) { - acpi_table_parse_madt(ACPI_MADT_TYPE_LIO_PIC, - liointc_parse_madt, 0); - acpi_table_parse_madt(ACPI_MADT_TYPE_EIO_PIC, - eiointc_parse_madt, 0); + int r; + + r = acpi_table_parse_madt(ACPI_MADT_TYPE_LIO_PIC, liointc_parse_madt, 0); + if (r < 0) + return r; + + r = acpi_table_parse_madt(ACPI_MADT_TYPE_EIO_PIC, eiointc_parse_madt, 0); + if (r < 0) + return r; + return 0; }
static int __init cpuintc_acpi_init(union acpi_subtable_headers *header, const unsigned long end) { + int ret; + if (irq_domain) return 0;
@@ -139,9 +145,9 @@ static int __init cpuintc_acpi_init(union acpi_subtable_headers *header, set_handle_irq(&handle_cpu_irq); acpi_set_irq_model(ACPI_IRQ_MODEL_LPIC, lpic_get_gsi_domain_id); acpi_set_gsi_to_irq_fallback(lpic_gsi_to_irq); - acpi_cascade_irqdomain_init(); + ret = acpi_cascade_irqdomain_init();
- return 0; + return ret; }
IRQCHIP_ACPI_DECLARE(cpuintc_v1, ACPI_MADT_TYPE_CORE_PIC, diff --git a/drivers/irqchip/irq-loongson-eiointc.c b/drivers/irqchip/irq-loongson-eiointc.c index ab49cd15699f4..7e601a5f7d9f8 100644 --- a/drivers/irqchip/irq-loongson-eiointc.c +++ b/drivers/irqchip/irq-loongson-eiointc.c @@ -301,9 +301,8 @@ static struct irq_domain *acpi_get_vec_parent(int node, struct acpi_vector_group return NULL; }
-static int __init -pch_pic_parse_madt(union acpi_subtable_headers *header, - const unsigned long end) +static int __init pch_pic_parse_madt(union acpi_subtable_headers *header, + const unsigned long end) { struct acpi_madt_bio_pic *pchpic_entry = (struct acpi_madt_bio_pic *)header; unsigned int node = (pchpic_entry->address >> 44) & 0xf; @@ -315,9 +314,8 @@ pch_pic_parse_madt(union acpi_subtable_headers *header, return 0; }
-static int __init -pch_msi_parse_madt(union acpi_subtable_headers *header, - const unsigned long end) +static int __init pch_msi_parse_madt(union acpi_subtable_headers *header, + const unsigned long end) { struct acpi_madt_msi_pic *pchmsi_entry = (struct acpi_madt_msi_pic *)header; struct irq_domain *parent = acpi_get_vec_parent(eiointc_priv[nr_pics - 1]->node, msi_group); @@ -330,17 +328,23 @@ pch_msi_parse_madt(union acpi_subtable_headers *header,
static int __init acpi_cascade_irqdomain_init(void) { - acpi_table_parse_madt(ACPI_MADT_TYPE_BIO_PIC, - pch_pic_parse_madt, 0); - acpi_table_parse_madt(ACPI_MADT_TYPE_MSI_PIC, - pch_msi_parse_madt, 1); + int r; + + r = acpi_table_parse_madt(ACPI_MADT_TYPE_BIO_PIC, pch_pic_parse_madt, 0); + if (r < 0) + return r; + + r = acpi_table_parse_madt(ACPI_MADT_TYPE_MSI_PIC, pch_msi_parse_madt, 1); + if (r < 0) + return r; + return 0; }
int __init eiointc_acpi_init(struct irq_domain *parent, struct acpi_madt_eio_pic *acpi_eiointc) { - int i, parent_irq; + int i, ret, parent_irq; unsigned long node_map; struct eiointc_priv *priv;
@@ -386,9 +390,9 @@ int __init eiointc_acpi_init(struct irq_domain *parent,
acpi_set_vec_parent(acpi_eiointc->node, priv->eiointc_domain, pch_group); acpi_set_vec_parent(acpi_eiointc->node, priv->eiointc_domain, msi_group); - acpi_cascade_irqdomain_init(); + ret = acpi_cascade_irqdomain_init();
- return 0; + return ret;
out_free_handle: irq_domain_free_fwnode(priv->domain_handle); diff --git a/drivers/irqchip/irq-loongson-pch-pic.c b/drivers/irqchip/irq-loongson-pch-pic.c index 9d2250e277581..679e2b68e6e9d 100644 --- a/drivers/irqchip/irq-loongson-pch-pic.c +++ b/drivers/irqchip/irq-loongson-pch-pic.c @@ -328,9 +328,8 @@ int find_pch_pic(u32 gsi) return -1; }
-static int __init -pch_lpc_parse_madt(union acpi_subtable_headers *header, - const unsigned long end) +static int __init pch_lpc_parse_madt(union acpi_subtable_headers *header, + const unsigned long end) { struct acpi_madt_lpc_pic *pchlpc_entry = (struct acpi_madt_lpc_pic *)header;
@@ -339,8 +338,12 @@ pch_lpc_parse_madt(union acpi_subtable_headers *header,
static int __init acpi_cascade_irqdomain_init(void) { - acpi_table_parse_madt(ACPI_MADT_TYPE_LPC_PIC, - pch_lpc_parse_madt, 0); + int r; + + r = acpi_table_parse_madt(ACPI_MADT_TYPE_LPC_PIC, pch_lpc_parse_madt, 0); + if (r < 0) + return r; + return 0; }
@@ -370,7 +373,7 @@ int __init pch_pic_acpi_init(struct irq_domain *parent, }
if (acpi_pchpic->id == 0) - acpi_cascade_irqdomain_init(); + ret = acpi_cascade_irqdomain_init();
return ret; }
From: Jianmin Lv lvjianmin@loongson.cn
[ Upstream commit 64cc451e45e146b2140211b4f45f278b93b24ac0 ]
In eiointc_acpi_init(), an *eiointc* node is passed into acpi_get_vec_parent() where a *NUMA* node is required (on some chips, like the 3C5000L, a *NUMA* node is the same as an *eiointc* node, but on others, like the 3C5000, a *NUMA* node contains 4 *eiointc* nodes). Since the node field in struct acpi_vector_group is essentially a *NUMA* node, no parent will be matched for the passed *eiointc* node. So the patch adjusts the code to use the *NUMA* node for the node parameter of acpi_set_vec_parent()/acpi_get_vec_parent().
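In other words, before calling acpi_set_vec_parent()/acpi_get_vec_parent(), the eiointc node index from the MADT entry has to be translated to a *NUMA* node first: with cpu_has_flatmode set this is cpu_to_node(eiointc_node * CORES_PER_EIO_NODE) (eiointc_node standing in for acpi_eiointc->node or eiointc_priv[nr_pics - 1]->node in the actual hunks below), otherwise the eiointc node index is already the *NUMA* node.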
Cc: stable@vger.kernel.org Signed-off-by: Jianmin Lv lvjianmin@loongson.cn Signed-off-by: Marc Zyngier maz@kernel.org Link: https://lore.kernel.org/r/20230407083453.6305-3-lvjianmin@loongson.cn Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/irqchip/irq-loongson-eiointc.c | 22 ++++++++++++++++------ 1 file changed, 16 insertions(+), 6 deletions(-)
diff --git a/drivers/irqchip/irq-loongson-eiointc.c b/drivers/irqchip/irq-loongson-eiointc.c index 7e601a5f7d9f8..768ed36f5f663 100644 --- a/drivers/irqchip/irq-loongson-eiointc.c +++ b/drivers/irqchip/irq-loongson-eiointc.c @@ -279,9 +279,6 @@ static void acpi_set_vec_parent(int node, struct irq_domain *parent, struct acpi { int i;
- if (cpu_has_flatmode) - node = cpu_to_node(node * CORES_PER_EIO_NODE); - for (i = 0; i < MAX_IO_PICS; i++) { if (node == vec_group[i].node) { vec_group[i].parent = parent; @@ -317,8 +314,16 @@ static int __init pch_pic_parse_madt(union acpi_subtable_headers *header, static int __init pch_msi_parse_madt(union acpi_subtable_headers *header, const unsigned long end) { + struct irq_domain *parent; struct acpi_madt_msi_pic *pchmsi_entry = (struct acpi_madt_msi_pic *)header; - struct irq_domain *parent = acpi_get_vec_parent(eiointc_priv[nr_pics - 1]->node, msi_group); + int node; + + if (cpu_has_flatmode) + node = cpu_to_node(eiointc_priv[nr_pics - 1]->node * CORES_PER_EIO_NODE); + else + node = eiointc_priv[nr_pics - 1]->node; + + parent = acpi_get_vec_parent(node, msi_group);
if (parent) return pch_msi_acpi_init(parent, pchmsi_entry); @@ -347,6 +352,7 @@ int __init eiointc_acpi_init(struct irq_domain *parent, int i, ret, parent_irq; unsigned long node_map; struct eiointc_priv *priv; + int node;
priv = kzalloc(sizeof(*priv), GFP_KERNEL); if (!priv) @@ -388,8 +394,12 @@ int __init eiointc_acpi_init(struct irq_domain *parent, "irqchip/loongarch/intc:starting", eiointc_router_init, NULL);
- acpi_set_vec_parent(acpi_eiointc->node, priv->eiointc_domain, pch_group); - acpi_set_vec_parent(acpi_eiointc->node, priv->eiointc_domain, msi_group); + if (cpu_has_flatmode) + node = cpu_to_node(acpi_eiointc->node * CORES_PER_EIO_NODE); + else + node = acpi_eiointc->node; + acpi_set_vec_parent(node, priv->eiointc_domain, pch_group); + acpi_set_vec_parent(node, priv->eiointc_domain, msi_group); ret = acpi_cascade_irqdomain_init();
return ret;
From: Jianmin Lv lvjianmin@loongson.cn
[ Upstream commit bdd60211eebb43ba1c4c14704965f4d4b628b931 ]
When adding suspend/resume support for loongson-eiointc, the syscore_ops is registered twice on dual-bridge machines, where there are two eiointc IRQ domains. Repeated registration of the same syscore_ops corrupts syscore_ops_list. Also, cpuhp_setup_state_nocalls() only needs to be called once. The patch corrects both issues.
Fixes: a90335c2dfb4 ("irqchip/loongson-eiointc: Add suspend/resume support") Cc: stable@vger.kernel.org Signed-off-by: Jianmin Lv lvjianmin@loongson.cn Signed-off-by: Marc Zyngier maz@kernel.org Link: https://lore.kernel.org/r/20230407083453.6305-4-lvjianmin@loongson.cn Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/irqchip/irq-loongson-eiointc.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/irqchip/irq-loongson-eiointc.c b/drivers/irqchip/irq-loongson-eiointc.c index 768ed36f5f663..ac04aeaa2d308 100644 --- a/drivers/irqchip/irq-loongson-eiointc.c +++ b/drivers/irqchip/irq-loongson-eiointc.c @@ -390,9 +390,11 @@ int __init eiointc_acpi_init(struct irq_domain *parent, parent_irq = irq_create_mapping(parent, acpi_eiointc->cascade); irq_set_chained_handler_and_data(parent_irq, eiointc_irq_dispatch, priv);
- cpuhp_setup_state_nocalls(CPUHP_AP_IRQ_LOONGARCH_STARTING, + if (nr_pics == 1) { + cpuhp_setup_state_nocalls(CPUHP_AP_IRQ_LOONGARCH_STARTING, "irqchip/loongarch/intc:starting", eiointc_router_init, NULL); + }
if (cpu_has_flatmode) node = cpu_to_node(acpi_eiointc->node * CORES_PER_EIO_NODE);
From: Sascha Hauer s.hauer@pengutronix.de
[ Upstream commit 14705f969d98187a1cc2682e0c9bd2e230b8098f ]
On my RTW8821CU chipset rfe_option reads as 0x22. Looking at the vendor driver suggests that the field width of rfe_option is 5 bit, so rfe_option should be masked with 0x1f.
Without this, the rfe_option comparisons with 2 further down in the driver evaluate as false when they should really evaluate as true. The effect is that 2G channels do not work.
rfe_option is also used as an array index into rtw8821c_rfe_defs[]. rtw8821c_rfe_defs[34] (0x22) was added as part of adding USB support, likely because rfe_option reads as 0x22. As this now becomes 0x2, rtw8821c_rfe_defs[34] is no longer used and can be removed.
Note that this might not be the whole truth. In the vendor driver there are indeed places where the unmasked rfe_option value is used. However, the driver has several places where rfe_option is tested with the pattern if (rfe_option == 2 || rfe_option == 0x22) or if (rfe_option == 4 || rfe_option == 0x24), so that rfe_option BIT(5) has no influence on the code path taken. We therefore mask BIT(5) out from rfe_option entirely until this assumption is proved wrong by some chip variant we do not know yet.
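Concretely, for the chipset above:

	0x22 & 0x1f = 0x02

so after masking, the driver takes the rfe_option == 2 paths and indexes rtw8821c_rfe_defs[2] instead of rtw8821c_rfe_defs[34].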
Signed-off-by: Sascha Hauer s.hauer@pengutronix.de Tested-by: Alexandru gagniuc mr.nuke.me@gmail.com Tested-by: Larry Finger Larry.Finger@lwfinger.net Tested-by: ValdikSS iam@valdikss.org.ru Cc: stable@vger.kernel.org Reviewed-by: Ping-Ke Shih pkshih@realtek.com Signed-off-by: Kalle Valo kvalo@kernel.org Link: https://lore.kernel.org/r/20230417140358.2240429-3-s.hauer@pengutronix.de Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/realtek/rtw88/rtw8821c.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/wireless/realtek/rtw88/rtw8821c.c b/drivers/net/wireless/realtek/rtw88/rtw8821c.c index 9afdc5ce86b43..609a2b86330d8 100644 --- a/drivers/net/wireless/realtek/rtw88/rtw8821c.c +++ b/drivers/net/wireless/realtek/rtw88/rtw8821c.c @@ -41,7 +41,7 @@ static int rtw8821c_read_efuse(struct rtw_dev *rtwdev, u8 *log_map)
map = (struct rtw8821c_efuse *)log_map;
- efuse->rfe_option = map->rfe_option; + efuse->rfe_option = map->rfe_option & 0x1f; efuse->rf_board_option = map->rf_board_option; efuse->crystal_cap = map->xtal_k; efuse->pa_type_2g = map->pa_type;
From: Animesh Manna animesh.manna@intel.com
[ Upstream commit f840834a8b60ffd305f03a53007605ba4dfbbc4b ]
The max source and destination limits for scalers in MTL have changed. Use the new values accordingly.
Signed-off-by: José Roberto de Souza jose.souza@intel.com Signed-off-by: Animesh Manna animesh.manna@intel.com Signed-off-by: Luca Coelho luciano.coelho@intel.com Reviewed-by: Stanislav Lisovskiy stanislav.lisovskiy@intel.com Signed-off-by: Radhakrishna Sripada radhakrishna.sripada@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20221223130509.43245-3-luciano... Stable-dep-of: d944eafed618 ("drm/i915: Check pipe source size when using skl+ scalers") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/i915/display/skl_scaler.c | 40 ++++++++++++++++++----- 1 file changed, 32 insertions(+), 8 deletions(-)
diff --git a/drivers/gpu/drm/i915/display/skl_scaler.c b/drivers/gpu/drm/i915/display/skl_scaler.c index 4092679be21ec..e6ec5ed0d00ec 100644 --- a/drivers/gpu/drm/i915/display/skl_scaler.c +++ b/drivers/gpu/drm/i915/display/skl_scaler.c @@ -85,6 +85,10 @@ static u16 skl_scaler_calc_phase(int sub, int scale, bool chroma_cosited) #define ICL_MAX_SRC_H 4096 #define ICL_MAX_DST_W 5120 #define ICL_MAX_DST_H 4096 +#define MTL_MAX_SRC_W 4096 +#define MTL_MAX_SRC_H 8192 +#define MTL_MAX_DST_W 8192 +#define MTL_MAX_DST_H 8192 #define SKL_MIN_YUV_420_SRC_W 16 #define SKL_MIN_YUV_420_SRC_H 16
@@ -101,6 +105,8 @@ skl_update_scaler(struct intel_crtc_state *crtc_state, bool force_detach, struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); const struct drm_display_mode *adjusted_mode = &crtc_state->hw.adjusted_mode; + int min_src_w, min_src_h, min_dst_w, min_dst_h; + int max_src_w, max_src_h, max_dst_w, max_dst_h;
/* * Src coordinates are already rotated by 270 degrees for @@ -155,15 +161,33 @@ skl_update_scaler(struct intel_crtc_state *crtc_state, bool force_detach, return -EINVAL; }
+ min_src_w = SKL_MIN_SRC_W; + min_src_h = SKL_MIN_SRC_H; + min_dst_w = SKL_MIN_DST_W; + min_dst_h = SKL_MIN_DST_H; + + if (DISPLAY_VER(dev_priv) < 11) { + max_src_w = SKL_MAX_SRC_W; + max_src_h = SKL_MAX_SRC_H; + max_dst_w = SKL_MAX_DST_W; + max_dst_h = SKL_MAX_DST_H; + } else if (DISPLAY_VER(dev_priv) < 14) { + max_src_w = ICL_MAX_SRC_W; + max_src_h = ICL_MAX_SRC_H; + max_dst_w = ICL_MAX_DST_W; + max_dst_h = ICL_MAX_DST_H; + } else { + max_src_w = MTL_MAX_SRC_W; + max_src_h = MTL_MAX_SRC_H; + max_dst_w = MTL_MAX_DST_W; + max_dst_h = MTL_MAX_DST_H; + } + /* range checks */ - if (src_w < SKL_MIN_SRC_W || src_h < SKL_MIN_SRC_H || - dst_w < SKL_MIN_DST_W || dst_h < SKL_MIN_DST_H || - (DISPLAY_VER(dev_priv) >= 11 && - (src_w > ICL_MAX_SRC_W || src_h > ICL_MAX_SRC_H || - dst_w > ICL_MAX_DST_W || dst_h > ICL_MAX_DST_H)) || - (DISPLAY_VER(dev_priv) < 11 && - (src_w > SKL_MAX_SRC_W || src_h > SKL_MAX_SRC_H || - dst_w > SKL_MAX_DST_W || dst_h > SKL_MAX_DST_H))) { + if (src_w < min_src_w || src_h < min_src_h || + dst_w < min_dst_w || dst_h < min_dst_h || + src_w > max_src_w || src_h > max_src_h || + dst_w > max_dst_w || dst_h > max_dst_h) { drm_dbg_kms(&dev_priv->drm, "scaler_user index %u.%u: src %ux%u dst %ux%u " "size is out of scaler range\n",
On Mon, 2023-05-15 at 18:27 +0200, Greg Kroah-Hartman wrote:
From: Animesh Manna animesh.manna@intel.com
[ Upstream commit f840834a8b60ffd305f03a53007605ba4dfbbc4b ]
The max source and destination limits for scalers in MTL have changed. Use the new values accordingly.
Signed-off-by: José Roberto de Souza jose.souza@intel.com Signed-off-by: Animesh Manna animesh.manna@intel.com Signed-off-by: Luca Coelho luciano.coelho@intel.com Reviewed-by: Stanislav Lisovskiy stanislav.lisovskiy@intel.com Signed-off-by: Radhakrishna Sripada radhakrishna.sripada@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20221223130509.43245-3-luciano... Stable-dep-of: d944eafed618 ("drm/i915: Check pipe source size when using skl+ scalers") Signed-off-by: Sasha Levin sashal@kernel.org
This patch is only relevant for MTL, which is not fully supported on v6.1 yet, so I don't think we need to cherry-pick it to v6.1.
-- Cheers, Luca.
On Tue, May 16, 2023 at 07:13:05AM +0000, Coelho, Luciano wrote:
On Mon, 2023-05-15 at 18:27 +0200, Greg Kroah-Hartman wrote:
From: Animesh Manna animesh.manna@intel.com
[ Upstream commit f840834a8b60ffd305f03a53007605ba4dfbbc4b ]
The max source and destination limits for scalers in MTL have changed. Use the new values accordingly.
Signed-off-by: José Roberto de Souza jose.souza@intel.com Signed-off-by: Animesh Manna animesh.manna@intel.com Signed-off-by: Luca Coelho luciano.coelho@intel.com Reviewed-by: Stanislav Lisovskiy stanislav.lisovskiy@intel.com Signed-off-by: Radhakrishna Sripada radhakrishna.sripada@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20221223130509.43245-3-luciano... Stable-dep-of: d944eafed618 ("drm/i915: Check pipe source size when using skl+ scalers") Signed-off-by: Sasha Levin sashal@kernel.org
This patch is only relevant for MTL, which is not fully supported on v6.1 yet, so I don't think we need to cherry-pick it to v6.1.
Again, this is a dependency for a different fix, which I think is relevant for 6.1.y, right?
thanks,
greg k-h
On Tue, 2023-05-16 at 09:20 +0200, gregkh@linuxfoundation.org wrote:
On Tue, May 16, 2023 at 07:13:05AM +0000, Coelho, Luciano wrote:
On Mon, 2023-05-15 at 18:27 +0200, Greg Kroah-Hartman wrote:
From: Animesh Manna animesh.manna@intel.com
[ Upstream commit f840834a8b60ffd305f03a53007605ba4dfbbc4b ]
The max source and destination limits for scalers in MTL have changed. Use the new values accordingly.
Signed-off-by: José Roberto de Souza jose.souza@intel.com Signed-off-by: Animesh Manna animesh.manna@intel.com Signed-off-by: Luca Coelho luciano.coelho@intel.com Reviewed-by: Stanislav Lisovskiy stanislav.lisovskiy@intel.com Signed-off-by: Radhakrishna Sripada radhakrishna.sripada@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20221223130509.43245-3-luciano... Stable-dep-of: d944eafed618 ("drm/i915: Check pipe source size when using skl+ scalers") Signed-off-by: Sasha Levin sashal@kernel.org
This patch is only relevant for MTL, which is not fully supported on v6.1 yet, so I don't think we need to cherry-pick it to v6.1.
Again, this is a dependency for a different fix, which I think is relevant for 6.1.y, right?
Yes, again you're right. I missed the dependency link and the patch that depends on this one should indeed be applied on v6.1 as well.
Sorry for the noise.
-- Cheers, Luca.
From: Ville Syrjälä ville.syrjala@linux.intel.com
[ Upstream commit d944eafed618a8507270b324ad9d5405bb7f0b3e ]
The skl+ scalers only sample 12 bits of PIPESRC so we can't do any plane scaling at all when the pipe source size is >4k.
Make sure the pipe source size is also below the scaler's src size limits. Might not be 100% accurate, but should at least be safe. We can refine the limits later if we discover that recent hw is less restricted.
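(With 12 bits, the largest pipe source dimension the scaler hardware can represent is 2^12 = 4096 pixels, which is where the >4k cutoff above comes from.)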
Cc: stable@vger.kernel.org Tested-by: Ross Zwisler zwisler@google.com Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/8357 Signed-off-by: Ville Syrjälä ville.syrjala@linux.intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20230418175528.13117-2-ville.s... Reviewed-by: Jani Nikula jani.nikula@intel.com (cherry picked from commit 691248d4135fe3fae64b4ee0676bc96a7fd6950c) Signed-off-by: Joonas Lahtinen joonas.lahtinen@linux.intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/i915/display/skl_scaler.c | 17 +++++++++++++++++ 1 file changed, 17 insertions(+)
diff --git a/drivers/gpu/drm/i915/display/skl_scaler.c b/drivers/gpu/drm/i915/display/skl_scaler.c index e6ec5ed0d00ec..90f42f63128ec 100644 --- a/drivers/gpu/drm/i915/display/skl_scaler.c +++ b/drivers/gpu/drm/i915/display/skl_scaler.c @@ -105,6 +105,8 @@ skl_update_scaler(struct intel_crtc_state *crtc_state, bool force_detach, struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); const struct drm_display_mode *adjusted_mode = &crtc_state->hw.adjusted_mode; + int pipe_src_w = drm_rect_width(&crtc_state->pipe_src); + int pipe_src_h = drm_rect_height(&crtc_state->pipe_src); int min_src_w, min_src_h, min_dst_w, min_dst_h; int max_src_w, max_src_h, max_dst_w, max_dst_h;
@@ -196,6 +198,21 @@ skl_update_scaler(struct intel_crtc_state *crtc_state, bool force_detach, return -EINVAL; }
+ /* + * The pipe scaler does not use all the bits of PIPESRC, at least + * on the earlier platforms. So even when we're scaling a plane + * the *pipe* source size must not be too large. For simplicity + * we assume the limits match the scaler source size limits. Might + * not be 100% accurate on all platforms, but good enough for now. + */ + if (pipe_src_w > max_src_w || pipe_src_h > max_src_h) { + drm_dbg_kms(&dev_priv->drm, + "scaler_user index %u.%u: pipe src size %ux%u " + "is out of scaler range\n", + crtc->pipe, scaler_user, pipe_src_w, pipe_src_h); + return -EINVAL; + } + /* mark this plane as a scaler user in crtc_state */ scaler_state->scaler_users |= (1 << scaler_user); drm_dbg_kms(&dev_priv->drm, "scaler_user index %u.%u: "
From: Ian Chen ian.chen@amd.com
[ Upstream commit bd829d5707730072fecc3267016a675a4789905b ]
We split out PSR config from "global" to "per-panel" config settings.
Tested-by: Mark Broadworth mark.broadworth@amd.com Reviewed-by: Robin Chen robin.chen@amd.com Acked-by: Rodrigo Siqueira Rodrigo.Siqueira@amd.com Signed-off-by: Ian Chen ian.chen@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Stable-dep-of: d893f39320e1 ("drm/amd/display: Lowering min Z8 residency time") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/display/dc/dc.h | 1 - drivers/gpu/drm/amd/display/dc/dc_link.h | 14 +++++++++++--- .../gpu/drm/amd/display/dc/dcn21/dcn21_resource.c | 5 ++++- .../gpu/drm/amd/display/dc/dcn30/dcn30_resource.c | 15 +++++++++++++-- .../drm/amd/display/dc/dcn302/dcn302_resource.c | 14 +++++++++++++- .../drm/amd/display/dc/dcn303/dcn303_resource.c | 13 ++++++++++++- .../gpu/drm/amd/display/dc/dcn31/dcn31_resource.c | 4 ++++ .../drm/amd/display/dc/dcn314/dcn314_resource.c | 4 ++++ .../drm/amd/display/dc/dcn315/dcn315_resource.c | 4 ++++ .../drm/amd/display/dc/dcn316/dcn316_resource.c | 4 ++++ .../gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c | 2 +- 11 files changed, 70 insertions(+), 10 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h index 0598465fd1a1b..8757d7ff8ff62 100644 --- a/drivers/gpu/drm/amd/display/dc/dc.h +++ b/drivers/gpu/drm/amd/display/dc/dc.h @@ -764,7 +764,6 @@ struct dc_debug_options { bool disable_mem_low_power; bool pstate_enabled; bool disable_dmcu; - bool disable_psr; bool force_abm_enable; bool disable_stereo_support; bool vsr_support; diff --git a/drivers/gpu/drm/amd/display/dc/dc_link.h b/drivers/gpu/drm/amd/display/dc/dc_link.h index caf0c7af2d0b9..17f080f8af6cd 100644 --- a/drivers/gpu/drm/amd/display/dc/dc_link.h +++ b/drivers/gpu/drm/amd/display/dc/dc_link.h @@ -117,7 +117,7 @@ struct psr_settings { * Add a struct dc_panel_config under dc_link */ struct dc_panel_config { - // extra panel power sequence parameters + /* extra panel power sequence parameters */ struct pps { unsigned int extra_t3_ms; unsigned int extra_t7_ms; @@ -127,13 +127,21 @@ struct dc_panel_config { unsigned int extra_t12_ms; unsigned int extra_post_OUI_ms; } pps; - // ABM + /* PSR */ + struct psr { + bool disable_psr; + bool disallow_psrsu; + bool rc_disable; + bool rc_allow_static_screen; + bool rc_allow_fullscreen_VPB; + } psr; + /* ABM */ struct varib { unsigned int varibright_feature_enable; unsigned int def_varibright_level; unsigned int abm_config_setting; } varib; - // edp DSC + /* edp DSC */ struct dsc { bool disable_dsc_edp; unsigned int force_dsc_edp_policy; diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c index 887081472c0d8..ce6c70e25703d 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c +++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c @@ -671,12 +671,15 @@ static const struct dc_debug_options debug_defaults_diags = { .disable_pplib_wm_range = true, .disable_stutter = true, .disable_48mhz_pwrdwn = true, - .disable_psr = true, .enable_tri_buf = true, .use_max_lb = true };
static const struct dc_panel_config panel_config_defaults = { + .psr = { + .disable_psr = false, + .disallow_psrsu = false, + }, .ilr = { .optimize_edp_link_rate = true, }, diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c index e958f838c8041..5a8d1a0513149 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c +++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c @@ -723,7 +723,6 @@ static const struct dc_debug_options debug_defaults_drv = { .underflow_assert_delay_us = 0xFFFFFFFF, .dwb_fi_phase = -1, // -1 = disable, .dmub_command_table = true, - .disable_psr = false, .use_max_lb = true, .exit_idle_opt_for_cursor_updates = true }; @@ -742,11 +741,17 @@ static const struct dc_debug_options debug_defaults_diags = { .scl_reset_length10 = true, .dwb_fi_phase = -1, // -1 = disable .dmub_command_table = true, - .disable_psr = true, .enable_tri_buf = true, .use_max_lb = true };
+static const struct dc_panel_config panel_config_defaults = { + .psr = { + .disable_psr = false, + .disallow_psrsu = false, + }, +}; + static void dcn30_dpp_destroy(struct dpp **dpp) { kfree(TO_DCN20_DPP(*dpp)); @@ -2214,6 +2219,11 @@ void dcn30_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params } }
+static void dcn30_get_panel_config_defaults(struct dc_panel_config *panel_config) +{ + *panel_config = panel_config_defaults; +} + static const struct resource_funcs dcn30_res_pool_funcs = { .destroy = dcn30_destroy_resource_pool, .link_enc_create = dcn30_link_encoder_create, @@ -2233,6 +2243,7 @@ static const struct resource_funcs dcn30_res_pool_funcs = { .release_post_bldn_3dlut = dcn30_release_post_bldn_3dlut, .update_bw_bounding_box = dcn30_update_bw_bounding_box, .patch_unknown_plane_state = dcn20_patch_unknown_plane_state, + .get_panel_config_defaults = dcn30_get_panel_config_defaults, };
#define CTX ctx diff --git a/drivers/gpu/drm/amd/display/dc/dcn302/dcn302_resource.c b/drivers/gpu/drm/amd/display/dc/dcn302/dcn302_resource.c index b925b6ddde5a3..d3945876aceda 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn302/dcn302_resource.c +++ b/drivers/gpu/drm/amd/display/dc/dcn302/dcn302_resource.c @@ -112,10 +112,16 @@ static const struct dc_debug_options debug_defaults_diags = { .dwb_fi_phase = -1, // -1 = disable .dmub_command_table = true, .enable_tri_buf = true, - .disable_psr = true, .use_max_lb = true };
+static const struct dc_panel_config panel_config_defaults = { + .psr = { + .disable_psr = false, + .disallow_psrsu = false, + }, +}; + enum dcn302_clk_src_array_id { DCN302_CLK_SRC_PLL0, DCN302_CLK_SRC_PLL1, @@ -1132,6 +1138,11 @@ void dcn302_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_param DC_FP_END(); }
+static void dcn302_get_panel_config_defaults(struct dc_panel_config *panel_config) +{ + *panel_config = panel_config_defaults; +} + static struct resource_funcs dcn302_res_pool_funcs = { .destroy = dcn302_destroy_resource_pool, .link_enc_create = dcn302_link_encoder_create, @@ -1151,6 +1162,7 @@ static struct resource_funcs dcn302_res_pool_funcs = { .release_post_bldn_3dlut = dcn30_release_post_bldn_3dlut, .update_bw_bounding_box = dcn302_update_bw_bounding_box, .patch_unknown_plane_state = dcn20_patch_unknown_plane_state, + .get_panel_config_defaults = dcn302_get_panel_config_defaults, };
static struct dc_cap_funcs cap_funcs = { diff --git a/drivers/gpu/drm/amd/display/dc/dcn303/dcn303_resource.c b/drivers/gpu/drm/amd/display/dc/dcn303/dcn303_resource.c index 527d5c9028785..7e7f18bef0986 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn303/dcn303_resource.c +++ b/drivers/gpu/drm/amd/display/dc/dcn303/dcn303_resource.c @@ -96,7 +96,13 @@ static const struct dc_debug_options debug_defaults_diags = { .dwb_fi_phase = -1, // -1 = disable .dmub_command_table = true, .enable_tri_buf = true, - .disable_psr = true, +}; + +static const struct dc_panel_config panel_config_defaults = { + .psr = { + .disable_psr = false, + .disallow_psrsu = false, + }, };
enum dcn303_clk_src_array_id { @@ -1055,6 +1061,10 @@ static void dcn303_destroy_resource_pool(struct resource_pool **pool) *pool = NULL; }
+static void dcn303_get_panel_config_defaults(struct dc_panel_config *panel_config) +{ + *panel_config = panel_config_defaults; +}
void dcn303_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params) { @@ -1082,6 +1092,7 @@ static struct resource_funcs dcn303_res_pool_funcs = { .release_post_bldn_3dlut = dcn30_release_post_bldn_3dlut, .update_bw_bounding_box = dcn303_update_bw_bounding_box, .patch_unknown_plane_state = dcn20_patch_unknown_plane_state, + .get_panel_config_defaults = dcn303_get_panel_config_defaults, };
static struct dc_cap_funcs cap_funcs = { diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c index d825f11b4feaa..d3f76512841b4 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c +++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c @@ -911,6 +911,10 @@ static const struct dc_debug_options debug_defaults_diags = { };
static const struct dc_panel_config panel_config_defaults = { + .psr = { + .disable_psr = false, + .disallow_psrsu = false, + }, .ilr = { .optimize_edp_link_rate = true, }, diff --git a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c index ffaa4e5b3fca0..94a90c8f3abbe 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c +++ b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c @@ -940,6 +940,10 @@ static const struct dc_debug_options debug_defaults_diags = { };
static const struct dc_panel_config panel_config_defaults = { + .psr = { + .disable_psr = false, + .disallow_psrsu = false, + }, .ilr = { .optimize_edp_link_rate = true, }, diff --git a/drivers/gpu/drm/amd/display/dc/dcn315/dcn315_resource.c b/drivers/gpu/drm/amd/display/dc/dcn315/dcn315_resource.c index 58746c437554f..31cbc5762eab3 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn315/dcn315_resource.c +++ b/drivers/gpu/drm/amd/display/dc/dcn315/dcn315_resource.c @@ -907,6 +907,10 @@ static const struct dc_debug_options debug_defaults_diags = { };
static const struct dc_panel_config panel_config_defaults = { + .psr = { + .disable_psr = false, + .disallow_psrsu = false, + }, .ilr = { .optimize_edp_link_rate = true, }, diff --git a/drivers/gpu/drm/amd/display/dc/dcn316/dcn316_resource.c b/drivers/gpu/drm/amd/display/dc/dcn316/dcn316_resource.c index 6b40a11ac83a9..af3eddc0cf32e 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn316/dcn316_resource.c +++ b/drivers/gpu/drm/amd/display/dc/dcn316/dcn316_resource.c @@ -906,6 +906,10 @@ static const struct dc_debug_options debug_defaults_diags = { };
static const struct dc_panel_config panel_config_defaults = { + .psr = { + .disable_psr = false, + .disallow_psrsu = false, + }, .ilr = { .optimize_edp_link_rate = true, }, diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c index 45db40c41882c..602e885ed52c4 100644 --- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c +++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c @@ -989,7 +989,7 @@ static enum dcn_zstate_support_state decide_zstate_support(struct dc *dc, struc
if (context->bw_ctx.dml.vba.StutterPeriod > 5000.0 || optimized_min_dst_y_next_start_us > 5000) return DCN_ZSTATE_SUPPORT_ALLOW; - else if (link->psr_settings.psr_version == DC_PSR_VERSION_1 && !dc->debug.disable_psr) + else if (link->psr_settings.psr_version == DC_PSR_VERSION_1 && !link->panel_config.psr.disable_psr) return DCN_ZSTATE_SUPPORT_ALLOW_Z10_ONLY; else return DCN_ZSTATE_SUPPORT_DISALLOW;
From: Nicholas Kazlauskas nicholas.kazlauskas@amd.com
[ Upstream commit 80676936805e46c79c38008e5142a77a1b2f2dc7 ]
[Why] Even if we block Z9 based on the crossover threshold, it's still possible to allow Z8.
[How] There's support for this on DCN314, so update the support types to include z8-only and z8_z10-only states.
Update the decide_zstate_support function to allow for specifying these modes based on the Z8 threshold.
DCN31 has z-state support disabled, but still update the legacy code to map z8_only to disallow and z10_z8_only to z10_only so that the effective support stays the same.
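In other words, on DCN31 the new states collapse back onto the existing behaviour (see the dcn31_smu.c hunk below):

	DCN_ZSTATE_SUPPORT_ALLOW_Z8_ONLY     -> treated as DISALLOW
	DCN_ZSTATE_SUPPORT_ALLOW_Z8_Z10_ONLY -> treated as ALLOW_Z10_ONLY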
Reviewed-by: Jun Lei Jun.Lei@amd.com Acked-by: Brian Chang Brian.Chang@amd.com Signed-off-by: Nicholas Kazlauskas nicholas.kazlauskas@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Stable-dep-of: d893f39320e1 ("drm/amd/display: Lowering min Z8 residency time") Signed-off-by: Sasha Levin sashal@kernel.org --- .../gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_smu.c | 4 ++-- .../drm/amd/display/dc/clk_mgr/dcn314/dcn314_smu.c | 12 ++++++++++-- drivers/gpu/drm/amd/display/dc/dc.h | 2 ++ drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c | 12 +++++++++--- 4 files changed, 23 insertions(+), 7 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_smu.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_smu.c index 090b2c02aee17..0827c7df28557 100644 --- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_smu.c +++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_smu.c @@ -333,8 +333,8 @@ void dcn31_smu_set_zstate_support(struct clk_mgr_internal *clk_mgr, enum dcn_zst (support == DCN_ZSTATE_SUPPORT_ALLOW_Z10_ONLY)) support = DCN_ZSTATE_SUPPORT_DISALLOW;
- - if (support == DCN_ZSTATE_SUPPORT_ALLOW_Z10_ONLY) + if (support == DCN_ZSTATE_SUPPORT_ALLOW_Z10_ONLY || + support == DCN_ZSTATE_SUPPORT_ALLOW_Z8_Z10_ONLY) param = 1; else param = 0; diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_smu.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_smu.c index aa264c600408d..0765334f08259 100644 --- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_smu.c +++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_smu.c @@ -349,8 +349,6 @@ void dcn314_smu_set_zstate_support(struct clk_mgr_internal *clk_mgr, enum dcn_zs if (!clk_mgr->smu_present) return;
- // Arg[15:0] = 8/9/0 for Z8/Z9/disallow -> existing bits - // Arg[16] = Disallow Z9 -> new bit switch (support) {
case DCN_ZSTATE_SUPPORT_ALLOW: @@ -369,6 +367,16 @@ void dcn314_smu_set_zstate_support(struct clk_mgr_internal *clk_mgr, enum dcn_zs param = (1 << 10); break;
+ case DCN_ZSTATE_SUPPORT_ALLOW_Z8_Z10_ONLY: + msg_id = VBIOSSMC_MSG_AllowZstatesEntry; + param = (1 << 10) | (1 << 8); + break; + + case DCN_ZSTATE_SUPPORT_ALLOW_Z8_ONLY: + msg_id = VBIOSSMC_MSG_AllowZstatesEntry; + param = (1 << 8); + break; + default: //DCN_ZSTATE_SUPPORT_UNKNOWN msg_id = VBIOSSMC_MSG_AllowZstatesEntry; param = 0; diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h index 8757d7ff8ff62..6d64d3b0dc211 100644 --- a/drivers/gpu/drm/amd/display/dc/dc.h +++ b/drivers/gpu/drm/amd/display/dc/dc.h @@ -491,6 +491,8 @@ enum dcn_pwr_state { enum dcn_zstate_support_state { DCN_ZSTATE_SUPPORT_UNKNOWN, DCN_ZSTATE_SUPPORT_ALLOW, + DCN_ZSTATE_SUPPORT_ALLOW_Z8_ONLY, + DCN_ZSTATE_SUPPORT_ALLOW_Z8_Z10_ONLY, DCN_ZSTATE_SUPPORT_ALLOW_Z10_ONLY, DCN_ZSTATE_SUPPORT_DISALLOW, }; diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c index 602e885ed52c4..feef0a75878f9 100644 --- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c +++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c @@ -949,6 +949,7 @@ static enum dcn_zstate_support_state decide_zstate_support(struct dc *dc, struc int plane_count; int i; unsigned int optimized_min_dst_y_next_start_us; + bool allow_z8 = context->bw_ctx.dml.vba.StutterPeriod > 1000.0;
plane_count = 0; optimized_min_dst_y_next_start_us = 0; @@ -963,6 +964,8 @@ static enum dcn_zstate_support_state decide_zstate_support(struct dc *dc, struc * 2. single eDP, on link 0, 1 plane and stutter period > 5ms * Z10 only cases: * 1. single eDP, on link 0, 1 plane and stutter period >= 5ms + * Z8 cases: + * 1. stutter period sufficient * Zstate not allowed cases: * 1. Everything else */ @@ -990,11 +993,14 @@ static enum dcn_zstate_support_state decide_zstate_support(struct dc *dc, struc if (context->bw_ctx.dml.vba.StutterPeriod > 5000.0 || optimized_min_dst_y_next_start_us > 5000) return DCN_ZSTATE_SUPPORT_ALLOW; else if (link->psr_settings.psr_version == DC_PSR_VERSION_1 && !link->panel_config.psr.disable_psr) - return DCN_ZSTATE_SUPPORT_ALLOW_Z10_ONLY; + return allow_z8 ? DCN_ZSTATE_SUPPORT_ALLOW_Z8_Z10_ONLY : DCN_ZSTATE_SUPPORT_ALLOW_Z10_ONLY; else - return DCN_ZSTATE_SUPPORT_DISALLOW; - } else + return allow_z8 ? DCN_ZSTATE_SUPPORT_ALLOW_Z8_ONLY : DCN_ZSTATE_SUPPORT_DISALLOW; + } else if (allow_z8) { + return DCN_ZSTATE_SUPPORT_ALLOW_Z8_ONLY; + } else { return DCN_ZSTATE_SUPPORT_DISALLOW; + } }
void dcn20_calculate_dlg_params(
From: Nicholas Kazlauskas nicholas.kazlauskas@amd.com
[ Upstream commit 00812bfc7bcb02faf127ee05f6ac27a5581eb701 ]
[Why] It's currently tied to Z10 support, and is required for Z10, but we can still support Z10 display off without PSR.
We currently need to skip the PSR CRTC disable to prevent stuttering and underflow from occurring during PSR-SU.
[How] Add a debug option to allow specifying this separately.
Reviewed-by: Robin Chen robin.chen@amd.com Acked-by: Stylon Wang stylon.wang@amd.com Signed-off-by: Nicholas Kazlauskas nicholas.kazlauskas@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Stable-dep-of: d893f39320e1 ("drm/amd/display: Lowering min Z8 residency time") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/display/dc/core/dc_link.c | 2 +- drivers/gpu/drm/amd/display/dc/dc.h | 1 + drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c | 1 + 3 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c index bf7fcd268cb47..6299130663a3d 100644 --- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c +++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c @@ -3381,7 +3381,7 @@ bool dc_link_setup_psr(struct dc_link *link, case FAMILY_YELLOW_CARP: case AMDGPU_FAMILY_GC_10_3_6: case AMDGPU_FAMILY_GC_11_0_1: - if (dc->debug.disable_z10) + if (dc->debug.disable_z10 || dc->debug.psr_skip_crtc_disable) psr_context->psr_level.bits.SKIP_CRTC_DISABLE = true; break; default: diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h index 6d64d3b0dc211..e038a180b941d 100644 --- a/drivers/gpu/drm/amd/display/dc/dc.h +++ b/drivers/gpu/drm/amd/display/dc/dc.h @@ -829,6 +829,7 @@ struct dc_debug_options { int crb_alloc_policy_min_disp_count; bool disable_z10; bool enable_z9_disable_interface; + bool psr_skip_crtc_disable; union dpia_debug_options dpia_debug; bool disable_fixed_vs_aux_timeout_wa; bool force_disable_subvp; diff --git a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c index 94a90c8f3abbe..58931df853f1e 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c +++ b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c @@ -884,6 +884,7 @@ static const struct dc_plane_cap plane_cap = { static const struct dc_debug_options debug_defaults_drv = { .disable_z10 = false, .enable_z9_disable_interface = true, + .psr_skip_crtc_disable = true, .disable_dmcu = true, .force_abm_enable = false, .timing_trace = false,
From: Nicholas Kazlauskas nicholas.kazlauskas@amd.com
[ Upstream commit 73dd4ca4b5a01235607231839bd351bbef75a1d2 ]
[Why] It's not supported in multi-display configurations, but it is supported in a 2nd-eDP-screen-only configuration.
[How] Remove multi-display support and restrict the number of planes for all z-state support, but still allow Z8 if we're not using PWRSEQ0.
Reviewed-by: Charlene Liu Charlene.Liu@amd.com Acked-by: Alex Hung alex.hung@amd.com Signed-off-by: Nicholas Kazlauskas nicholas.kazlauskas@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Stable-dep-of: d893f39320e1 ("drm/amd/display: Lowering min Z8 residency time") Signed-off-by: Sasha Levin sashal@kernel.org --- .../gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c index feef0a75878f9..6a7bcba4a7dad 100644 --- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c +++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c @@ -949,7 +949,6 @@ static enum dcn_zstate_support_state decide_zstate_support(struct dc *dc, struc int plane_count; int i; unsigned int optimized_min_dst_y_next_start_us; - bool allow_z8 = context->bw_ctx.dml.vba.StutterPeriod > 1000.0;
plane_count = 0; optimized_min_dst_y_next_start_us = 0; @@ -974,6 +973,8 @@ static enum dcn_zstate_support_state decide_zstate_support(struct dc *dc, struc else if (context->stream_count == 1 && context->streams[0]->signal == SIGNAL_TYPE_EDP) { struct dc_link *link = context->streams[0]->sink->link; struct dc_stream_status *stream_status = &context->stream_status[0]; + bool allow_z8 = context->bw_ctx.dml.vba.StutterPeriod > 1000.0; + bool is_pwrseq0 = link->link_index == 0;
if (dc_extended_blank_supported(dc)) { for (i = 0; i < dc->res_pool->pipe_count; i++) { @@ -986,18 +987,17 @@ static enum dcn_zstate_support_state decide_zstate_support(struct dc *dc, struc } } } - /* zstate only supported on PWRSEQ0 and when there's <2 planes*/ - if (link->link_index != 0 || stream_status->plane_count > 1) + + /* Don't support multi-plane configurations */ + if (stream_status->plane_count > 1) return DCN_ZSTATE_SUPPORT_DISALLOW;
- if (context->bw_ctx.dml.vba.StutterPeriod > 5000.0 || optimized_min_dst_y_next_start_us > 5000) + if (is_pwrseq0 && (context->bw_ctx.dml.vba.StutterPeriod > 5000.0 || optimized_min_dst_y_next_start_us > 5000)) return DCN_ZSTATE_SUPPORT_ALLOW; - else if (link->psr_settings.psr_version == DC_PSR_VERSION_1 && !link->panel_config.psr.disable_psr) + else if (is_pwrseq0 && link->psr_settings.psr_version == DC_PSR_VERSION_1 && !link->panel_config.psr.disable_psr) return allow_z8 ? DCN_ZSTATE_SUPPORT_ALLOW_Z8_Z10_ONLY : DCN_ZSTATE_SUPPORT_ALLOW_Z10_ONLY; else return allow_z8 ? DCN_ZSTATE_SUPPORT_ALLOW_Z8_ONLY : DCN_ZSTATE_SUPPORT_DISALLOW; - } else if (allow_z8) { - return DCN_ZSTATE_SUPPORT_ALLOW_Z8_ONLY; } else { return DCN_ZSTATE_SUPPORT_DISALLOW; }
From: Nicholas Kazlauskas nicholas.kazlauskas@amd.com
[ Upstream commit 0db13eae41fcc67f408dbb3dfda59633c4fa03fb ]
[Why] Allows finer control and tuning for debug and profiling.
[How] Add the debug option into DC. The default remains the same as before for now.
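As the dcn20_fpu.c hunk below shows, when the option is left at its default of 0 the code falls back to the previous 1000 us threshold; otherwise Z8 is only allowed when the computed DML stutter period exceeds the configured value.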
Reviewed-by: Jun Lei Jun.Lei@amd.com Acked-by: Qingqing Zhuo qingqing.zhuo@amd.com Signed-off-by: Nicholas Kazlauskas nicholas.kazlauskas@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Stable-dep-of: d893f39320e1 ("drm/amd/display: Lowering min Z8 residency time") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/display/dc/dc.h | 1 + drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c | 1 + drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c | 3 ++- 3 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h index e038a180b941d..3f277009075fd 100644 --- a/drivers/gpu/drm/amd/display/dc/dc.h +++ b/drivers/gpu/drm/amd/display/dc/dc.h @@ -781,6 +781,7 @@ struct dc_debug_options { unsigned int force_odm_combine; //bit vector based on otg inst unsigned int seamless_boot_odm_combine; unsigned int force_odm_combine_4to1; //bit vector based on otg inst + int minimum_z8_residency_time; bool disable_z9_mpc; unsigned int force_fclk_khz; bool enable_tri_buf; diff --git a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c index 58931df853f1e..67c892b9e2cf5 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c +++ b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c @@ -884,6 +884,7 @@ static const struct dc_plane_cap plane_cap = { static const struct dc_debug_options debug_defaults_drv = { .disable_z10 = false, .enable_z9_disable_interface = true, + .minimum_z8_residency_time = 1000, .psr_skip_crtc_disable = true, .disable_dmcu = true, .force_abm_enable = false, diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c index 6a7bcba4a7dad..186538e3e3c0c 100644 --- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c +++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c @@ -973,7 +973,8 @@ static enum dcn_zstate_support_state decide_zstate_support(struct dc *dc, struc else if (context->stream_count == 1 && context->streams[0]->signal == SIGNAL_TYPE_EDP) { struct dc_link *link = context->streams[0]->sink->link; struct dc_stream_status *stream_status = &context->stream_status[0]; - bool allow_z8 = context->bw_ctx.dml.vba.StutterPeriod > 1000.0; + int minmum_z8_residency = dc->debug.minimum_z8_residency_time > 0 ? dc->debug.minimum_z8_residency_time : 1000; + bool allow_z8 = context->bw_ctx.dml.vba.StutterPeriod > (double)minmum_z8_residency; bool is_pwrseq0 = link->link_index == 0;
if (dc_extended_blank_supported(dc)) {
From: Nicholas Kazlauskas nicholas.kazlauskas@amd.com
[ Upstream commit 0215ce9057edf69aff9c1a32f4254e1ec297db31 ]
[Why] Block periods that are too short, as they currently have the potential to cause hangs in other firmware components on the system.
[How] Update the threshold, mostly targeting a block of 4k and downscaling.
Reviewed-by: Jun Lei Jun.Lei@amd.com Acked-by: Qingqing Zhuo qingqing.zhuo@amd.com Signed-off-by: Nicholas Kazlauskas nicholas.kazlauskas@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Stable-dep-of: d893f39320e1 ("drm/amd/display: Lowering min Z8 residency time") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c index 67c892b9e2cf5..ca3aabdf81d2f 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c +++ b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c @@ -884,7 +884,7 @@ static const struct dc_plane_cap plane_cap = { static const struct dc_debug_options debug_defaults_drv = { .disable_z10 = false, .enable_z9_disable_interface = true, - .minimum_z8_residency_time = 1000, + .minimum_z8_residency_time = 3080, .psr_skip_crtc_disable = true, .disable_dmcu = true, .force_abm_enable = false,
From: Leo Chen sancchen@amd.com
[ Upstream commit d893f39320e1248d1c97fde0d6e51e5ea008a76b ]
[Why & How] Per HW team request, we're lowering the minimum Z8 residency time to 2000us. This enables Z8 support for additional modes we were previously blocking like 2k>60hz
Cc: stable@vger.kernel.org Tested-by: Daniel Wheeler daniel.wheeler@amd.com Reviewed-by: Nicholas Kazlauskas Nicholas.Kazlauskas@amd.com Acked-by: Rodrigo Siqueira Rodrigo.Siqueira@amd.com Signed-off-by: Leo Chen sancchen@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c index ca3aabdf81d2f..b7782433ce6ba 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c +++ b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c @@ -884,7 +884,7 @@ static const struct dc_plane_cap plane_cap = { static const struct dc_debug_options debug_defaults_drv = { .disable_z10 = false, .enable_z9_disable_interface = true, - .minimum_z8_residency_time = 3080, + .minimum_z8_residency_time = 2000, .psr_skip_crtc_disable = true, .disable_dmcu = true, .force_abm_enable = false,
From: Shuming Fan shumingf@realtek.com
[ Upstream commit 6ad73a2b42ea6d43fc5bf32033e8f6b21df3109e ]
This is the initial amplifier driver for rt1318 SDCA version.
Signed-off-by: Shuming Fan shumingf@realtek.com Link: https://lore.kernel.org/r/20221108092727.13011-1-shumingf@realtek.com Signed-off-by: Mark Brown broonie@kernel.org Stable-dep-of: 84822215acd1 ("ASoC: codecs: wcd938x: fix accessing regmap on unattached devices") Signed-off-by: Sasha Levin sashal@kernel.org --- sound/soc/codecs/Kconfig | 6 + sound/soc/codecs/Makefile | 2 + sound/soc/codecs/rt1318-sdw.c | 884 ++++++++++++++++++++++++++++++++++ sound/soc/codecs/rt1318-sdw.h | 101 ++++ 4 files changed, 993 insertions(+) create mode 100644 sound/soc/codecs/rt1318-sdw.c create mode 100644 sound/soc/codecs/rt1318-sdw.h
diff --git a/sound/soc/codecs/Kconfig b/sound/soc/codecs/Kconfig index 3f16ad1c37585..965ae55fa1607 100644 --- a/sound/soc/codecs/Kconfig +++ b/sound/soc/codecs/Kconfig @@ -199,6 +199,7 @@ config SND_SOC_ALL_CODECS imply SND_SOC_RT715_SDCA_SDW imply SND_SOC_RT1308_SDW imply SND_SOC_RT1316_SDW + imply SND_SOC_RT1318_SDW imply SND_SOC_RT9120 imply SND_SOC_SDW_MOCKUP imply SND_SOC_SGTL5000 @@ -1311,6 +1312,11 @@ config SND_SOC_RT1316_SDW depends on SOUNDWIRE select REGMAP_SOUNDWIRE
+config SND_SOC_RT1318_SDW + tristate "Realtek RT1318 Codec - SDW" + depends on SOUNDWIRE + select REGMAP_SOUNDWIRE + config SND_SOC_RT5514 tristate depends on I2C diff --git a/sound/soc/codecs/Makefile b/sound/soc/codecs/Makefile index 9170ee1447dda..71d3ce5867e4f 100644 --- a/sound/soc/codecs/Makefile +++ b/sound/soc/codecs/Makefile @@ -196,6 +196,7 @@ snd-soc-rt1305-objs := rt1305.o snd-soc-rt1308-objs := rt1308.o snd-soc-rt1308-sdw-objs := rt1308-sdw.o snd-soc-rt1316-sdw-objs := rt1316-sdw.o +snd-soc-rt1318-sdw-objs := rt1318-sdw.o snd-soc-rt274-objs := rt274.o snd-soc-rt286-objs := rt286.o snd-soc-rt298-objs := rt298.o @@ -551,6 +552,7 @@ obj-$(CONFIG_SND_SOC_RT1305) += snd-soc-rt1305.o obj-$(CONFIG_SND_SOC_RT1308) += snd-soc-rt1308.o obj-$(CONFIG_SND_SOC_RT1308_SDW) += snd-soc-rt1308-sdw.o obj-$(CONFIG_SND_SOC_RT1316_SDW) += snd-soc-rt1316-sdw.o +obj-$(CONFIG_SND_SOC_RT1318_SDW) += snd-soc-rt1318-sdw.o obj-$(CONFIG_SND_SOC_RT274) += snd-soc-rt274.o obj-$(CONFIG_SND_SOC_RT286) += snd-soc-rt286.o obj-$(CONFIG_SND_SOC_RT298) += snd-soc-rt298.o diff --git a/sound/soc/codecs/rt1318-sdw.c b/sound/soc/codecs/rt1318-sdw.c new file mode 100644 index 0000000000000..f85f5ab2c6d04 --- /dev/null +++ b/sound/soc/codecs/rt1318-sdw.c @@ -0,0 +1,884 @@ +// SPDX-License-Identifier: GPL-2.0-only +// +// rt1318-sdw.c -- rt1318 SDCA ALSA SoC amplifier audio driver +// +// Copyright(c) 2022 Realtek Semiconductor Corp. +// +// +#include <linux/delay.h> +#include <linux/device.h> +#include <linux/pm_runtime.h> +#include <linux/mod_devicetable.h> +#include <linux/module.h> +#include <linux/regmap.h> +#include <linux/dmi.h> +#include <linux/firmware.h> +#include <sound/core.h> +#include <sound/pcm.h> +#include <sound/pcm_params.h> +#include <sound/soc-dapm.h> +#include <sound/initval.h> +#include "rt1318-sdw.h" + +static const struct reg_sequence rt1318_blind_write[] = { + { 0xc001, 0x43 }, + { 0xc003, 0xa2 }, + { 0xc004, 0x44 }, + { 0xc005, 0x44 }, + { 0xc006, 0x33 }, + { 0xc007, 0x64 }, + { 0xc320, 0x20 }, + { 0xf203, 0x18 }, + { 0xf211, 0x00 }, + { 0xf212, 0x26 }, + { 0xf20d, 0x17 }, + { 0xf214, 0x06 }, + { 0xf20e, 0x00 }, + { 0xf223, 0x7f }, + { 0xf224, 0xdb }, + { 0xf225, 0xee }, + { 0xf226, 0x3f }, + { 0xf227, 0x0f }, + { 0xf21a, 0x78 }, + { 0xf242, 0x3c }, + { 0xc321, 0x0b }, + { 0xc200, 0xd8 }, + { 0xc201, 0x27 }, + { 0xc202, 0x0f }, + { 0xf800, 0x20 }, + { 0xdf00, 0x10 }, + { 0xdf5f, 0x01 }, + { 0xdf60, 0xa7 }, + { 0xc400, 0x0e }, + { 0xc401, 0x43 }, + { 0xc402, 0xe0 }, + { 0xc403, 0x00 }, + { 0xc404, 0x4c }, + { 0xc407, 0x02 }, + { 0xc408, 0x3f }, + { 0xc300, 0x01 }, + { 0xc206, 0x78 }, + { 0xc203, 0x84 }, + { 0xc120, 0xc0 }, + { 0xc121, 0x03 }, + { 0xe000, 0x88 }, + { 0xc321, 0x09 }, + { 0xc322, 0x01 }, + { 0xe706, 0x0f }, + { 0xe707, 0x30 }, + { 0xe806, 0x0f }, + { 0xe807, 0x30 }, + { 0xed00, 0xb0 }, + { 0xce04, 0x02 }, + { 0xce05, 0x63 }, + { 0xce06, 0x68 }, + { 0xce07, 0x07 }, + { 0xcf04, 0x02 }, + { 0xcf05, 0x63 }, + { 0xcf06, 0x68 }, + { 0xcf07, 0x07 }, + { 0xce60, 0xe3 }, + { 0xc130, 0x51 }, + { 0xf102, 0x00 }, + { 0xf103, 0x00 }, + { 0xf104, 0xf5 }, + { 0xf105, 0x06 }, + { 0xf109, 0x9b }, + { 0xf10a, 0x0b }, + { 0xf10b, 0x4c }, + { 0xf10b, 0x5c }, + { 0xf102, 0x00 }, + { 0xf103, 0x00 }, + { 0xf104, 0xf5 }, + { 0xf105, 0x0b }, + { 0xf109, 0x03 }, + { 0xf10a, 0x0b }, + { 0xf10b, 0x4c }, + { 0xf10b, 0x5c }, + { 0xf102, 0x00 }, + { 0xf103, 0x00 }, + { 0xf104, 0xf5 }, + { 0xf105, 0x0c }, + { 0xf109, 0x7f }, + { 0xf10a, 0x0b }, + { 0xf10b, 0x4c }, + { 0xf10b, 0x5c }, + + { 0xe604, 0x00 }, + { 
0xdb00, 0x0c }, + { 0xdd00, 0x0c }, + { 0xdc19, 0x00 }, + { 0xdc1a, 0xff }, + { 0xdc1b, 0xff }, + { 0xdc1c, 0xff }, + { 0xdc1d, 0x00 }, + { 0xdc1e, 0x00 }, + { 0xdc1f, 0x00 }, + { 0xdc20, 0xff }, + { 0xde19, 0x00 }, + { 0xde1a, 0xff }, + { 0xde1b, 0xff }, + { 0xde1c, 0xff }, + { 0xde1d, 0x00 }, + { 0xde1e, 0x00 }, + { 0xde1f, 0x00 }, + { 0xde20, 0xff }, + { 0xdb32, 0x00 }, + { 0xdd32, 0x00 }, + { 0xdb33, 0x0a }, + { 0xdd33, 0x0a }, + { 0xdb34, 0x1a }, + { 0xdd34, 0x1a }, + { 0xdb17, 0xef }, + { 0xdd17, 0xef }, + { 0xdba7, 0x00 }, + { 0xdba8, 0x64 }, + { 0xdda7, 0x00 }, + { 0xdda8, 0x64 }, + { 0xdb19, 0x40 }, + { 0xdd19, 0x40 }, + { 0xdb00, 0x4c }, + { 0xdb01, 0x79 }, + { 0xdd01, 0x79 }, + { 0xdb04, 0x05 }, + { 0xdb05, 0x03 }, + { 0xdd04, 0x05 }, + { 0xdd05, 0x03 }, + { 0xdbbb, 0x09 }, + { 0xdbbc, 0x30 }, + { 0xdbbd, 0xf0 }, + { 0xdbbe, 0xf1 }, + { 0xddbb, 0x09 }, + { 0xddbc, 0x30 }, + { 0xddbd, 0xf0 }, + { 0xddbe, 0xf1 }, + { 0xdb01, 0x79 }, + { 0xdd01, 0x79 }, + { 0xdc52, 0xef }, + { 0xde52, 0xef }, + { 0x2f55, 0x22 }, +}; + +static const struct reg_default rt1318_reg_defaults[] = { + { 0x3000, 0x00 }, + { 0x3004, 0x01 }, + { 0x3005, 0x23 }, + { 0x3202, 0x00 }, + { 0x3203, 0x01 }, + { 0x3206, 0x00 }, + { 0xc000, 0x00 }, + { 0xc001, 0x43 }, + { 0xc003, 0x22 }, + { 0xc004, 0x44 }, + { 0xc005, 0x44 }, + { 0xc006, 0x33 }, + { 0xc007, 0x64 }, + { 0xc008, 0x05 }, + { 0xc00a, 0xfc }, + { 0xc00b, 0x0f }, + { 0xc00c, 0x0e }, + { 0xc00d, 0xef }, + { 0xc00e, 0xe5 }, + { 0xc00f, 0xff }, + { 0xc120, 0xc0 }, + { 0xc121, 0x00 }, + { 0xc122, 0x00 }, + { 0xc123, 0x14 }, + { 0xc125, 0x00 }, + { 0xc200, 0x00 }, + { 0xc201, 0x00 }, + { 0xc202, 0x00 }, + { 0xc203, 0x04 }, + { 0xc204, 0x00 }, + { 0xc205, 0x00 }, + { 0xc206, 0x68 }, + { 0xc207, 0x70 }, + { 0xc208, 0x00 }, + { 0xc20a, 0x00 }, + { 0xc20b, 0x01 }, + { 0xc20c, 0x7f }, + { 0xc20d, 0x01 }, + { 0xc20e, 0x7f }, + { 0xc300, 0x00 }, + { 0xc301, 0x00 }, + { 0xc303, 0x80 }, + { 0xc320, 0x00 }, + { 0xc321, 0x09 }, + { 0xc322, 0x02 }, + { 0xc410, 0x04 }, + { 0xc430, 0x00 }, + { 0xc431, 0x00 }, + { 0xca00, 0x10 }, + { 0xca01, 0x00 }, + { 0xca02, 0x0b }, + { 0xca10, 0x10 }, + { 0xca11, 0x00 }, + { 0xca12, 0x0b }, + { 0xdd93, 0x00 }, + { 0xdd94, 0x64 }, + { 0xe300, 0xa0 }, + { 0xed00, 0x80 }, + { 0xed01, 0x0f }, + { 0xed02, 0xff }, + { 0xed03, 0x00 }, + { 0xed04, 0x00 }, + { 0xed05, 0x0f }, + { 0xed06, 0xff }, + { 0xf010, 0x10 }, + { 0xf011, 0xec }, + { 0xf012, 0x68 }, + { 0xf013, 0x21 }, + { 0xf800, 0x00 }, + { 0xf801, 0x12 }, + { 0xf802, 0xe0 }, + { 0xf803, 0x2f }, + { 0xf804, 0x00 }, + { 0xf805, 0x00 }, + { 0xf806, 0x07 }, + { 0xf807, 0xff }, + { SDW_SDCA_CTL(FUNC_NUM_SMART_AMP, RT1318_SDCA_ENT_UDMPU21, RT1318_SDCA_CTL_UDMPU_CLUSTER, 0), 0x00 }, + { SDW_SDCA_CTL(FUNC_NUM_SMART_AMP, RT1318_SDCA_ENT_FU21, RT1318_SDCA_CTL_FU_MUTE, CH_L), 0x01 }, + { SDW_SDCA_CTL(FUNC_NUM_SMART_AMP, RT1318_SDCA_ENT_FU21, RT1318_SDCA_CTL_FU_MUTE, CH_R), 0x01 }, + { SDW_SDCA_CTL(FUNC_NUM_SMART_AMP, RT1318_SDCA_ENT_PDE23, RT1318_SDCA_CTL_REQ_POWER_STATE, 0), 0x03 }, + { SDW_SDCA_CTL(FUNC_NUM_SMART_AMP, RT1318_SDCA_ENT_CS21, RT1318_SDCA_CTL_SAMPLE_FREQ_INDEX, 0), 0x09 }, +}; + +static bool rt1318_readable_register(struct device *dev, unsigned int reg) +{ + switch (reg) { + case 0x2f55: + case 0x3000: + case 0x3004 ... 0x3005: + case 0x3202 ... 0x3203: + case 0x3206: + case 0xc000 ... 0xc00f: + case 0xc120 ... 0xc125: + case 0xc200 ... 0xc20e: + case 0xc300 ... 0xc303: + case 0xc320 ... 0xc322: + case 0xc410: + case 0xc430 ... 0xc431: + case 0xca00 ... 0xca02: + case 0xca10 ... 
0xca12: + case 0xcb00 ... 0xcb0b: + case 0xcc00 ... 0xcce5: + case 0xcd00 ... 0xcde5: + case 0xce00 ... 0xce6a: + case 0xcf00 ... 0xcf53: + case 0xd000 ... 0xd0cc: + case 0xd100 ... 0xd1b9: + case 0xdb00 ... 0xdc53: + case 0xdd00 ... 0xde53: + case 0xdf00 ... 0xdf6b: + case 0xe300: + case 0xeb00 ... 0xebcc: + case 0xec00 ... 0xecb9: + case 0xed00 ... 0xed06: + case 0xf010 ... 0xf014: + case 0xf800 ... 0xf807: + case SDW_SDCA_CTL(FUNC_NUM_SMART_AMP, RT1318_SDCA_ENT_UDMPU21, RT1318_SDCA_CTL_UDMPU_CLUSTER, 0): + case SDW_SDCA_CTL(FUNC_NUM_SMART_AMP, RT1318_SDCA_ENT_FU21, RT1318_SDCA_CTL_FU_MUTE, CH_L): + case SDW_SDCA_CTL(FUNC_NUM_SMART_AMP, RT1318_SDCA_ENT_FU21, RT1318_SDCA_CTL_FU_MUTE, CH_R): + case SDW_SDCA_CTL(FUNC_NUM_SMART_AMP, RT1318_SDCA_ENT_PDE23, RT1318_SDCA_CTL_REQ_POWER_STATE, 0): + case SDW_SDCA_CTL(FUNC_NUM_SMART_AMP, RT1318_SDCA_ENT_CS21, RT1318_SDCA_CTL_SAMPLE_FREQ_INDEX, 0): + case SDW_SDCA_CTL(FUNC_NUM_SMART_AMP, RT1318_SDCA_ENT_SAPU, RT1318_SDCA_CTL_SAPU_PROTECTION_MODE, 0): + case SDW_SDCA_CTL(FUNC_NUM_SMART_AMP, RT1318_SDCA_ENT_SAPU, RT1318_SDCA_CTL_SAPU_PROTECTION_STATUS, 0): + return true; + default: + return false; + } +} + +static bool rt1318_volatile_register(struct device *dev, unsigned int reg) +{ + switch (reg) { + case 0x2f55: + case 0x3000 ... 0x3001: + case 0xc000: + case 0xc301: + case 0xc410: + case 0xc430 ... 0xc431: + case 0xdb06: + case 0xdb12: + case 0xdb1d ... 0xdb1f: + case 0xdb35: + case 0xdb37: + case 0xdb8a ... 0xdb92: + case 0xdbc5 ... 0xdbc8: + case 0xdc2b ... 0xdc49: + case 0xdd0b: + case 0xdd12: + case 0xdd1d ... 0xdd1f: + case 0xdd35: + case 0xdd8a ... 0xdd92: + case 0xddc5 ... 0xddc8: + case 0xde2b ... 0xde44: + case 0xdf4a ... 0xdf55: + case 0xe224 ... 0xe23b: + case 0xea01: + case 0xebc5: + case 0xebc8: + case 0xebcb ... 0xebcc: + case 0xed03 ... 0xed06: + case 0xf010 ... 
0xf014: + case SDW_SDCA_CTL(FUNC_NUM_SMART_AMP, RT1318_SDCA_ENT_SAPU, RT1318_SDCA_CTL_SAPU_PROTECTION_MODE, 0): + case SDW_SDCA_CTL(FUNC_NUM_SMART_AMP, RT1318_SDCA_ENT_SAPU, RT1318_SDCA_CTL_SAPU_PROTECTION_STATUS, 0): + return true; + default: + return false; + } +} + +static const struct regmap_config rt1318_sdw_regmap = { + .reg_bits = 32, + .val_bits = 8, + .readable_reg = rt1318_readable_register, + .volatile_reg = rt1318_volatile_register, + .max_register = 0x41081488, + .reg_defaults = rt1318_reg_defaults, + .num_reg_defaults = ARRAY_SIZE(rt1318_reg_defaults), + .cache_type = REGCACHE_RBTREE, + .use_single_read = true, + .use_single_write = true, +}; + +static int rt1318_read_prop(struct sdw_slave *slave) +{ + struct sdw_slave_prop *prop = &slave->prop; + int nval; + int i, j; + u32 bit; + unsigned long addr; + struct sdw_dpn_prop *dpn; + + prop->scp_int1_mask = SDW_SCP_INT1_BUS_CLASH | SDW_SCP_INT1_PARITY; + prop->quirks = SDW_SLAVE_QUIRKS_INVALID_INITIAL_PARITY; + prop->is_sdca = true; + + prop->paging_support = true; + + /* first we need to allocate memory for set bits in port lists */ + prop->source_ports = BIT(2); + prop->sink_ports = BIT(1); + + nval = hweight32(prop->source_ports); + prop->src_dpn_prop = devm_kcalloc(&slave->dev, nval, + sizeof(*prop->src_dpn_prop), GFP_KERNEL); + if (!prop->src_dpn_prop) + return -ENOMEM; + + i = 0; + dpn = prop->src_dpn_prop; + addr = prop->source_ports; + for_each_set_bit(bit, &addr, 32) { + dpn[i].num = bit; + dpn[i].type = SDW_DPN_FULL; + dpn[i].simple_ch_prep_sm = true; + dpn[i].ch_prep_timeout = 10; + i++; + } + + /* do this again for sink now */ + nval = hweight32(prop->sink_ports); + prop->sink_dpn_prop = devm_kcalloc(&slave->dev, nval, + sizeof(*prop->sink_dpn_prop), GFP_KERNEL); + if (!prop->sink_dpn_prop) + return -ENOMEM; + + j = 0; + dpn = prop->sink_dpn_prop; + addr = prop->sink_ports; + for_each_set_bit(bit, &addr, 32) { + dpn[j].num = bit; + dpn[j].type = SDW_DPN_FULL; + dpn[j].simple_ch_prep_sm = true; + dpn[j].ch_prep_timeout = 10; + j++; + } + + /* set the timeout values */ + prop->clk_stop_timeout = 20; + + return 0; +} + +static int rt1318_io_init(struct device *dev, struct sdw_slave *slave) +{ + struct rt1318_sdw_priv *rt1318 = dev_get_drvdata(dev); + + if (rt1318->hw_init) + return 0; + + if (rt1318->first_hw_init) { + regcache_cache_only(rt1318->regmap, false); + regcache_cache_bypass(rt1318->regmap, true); + } else { + /* + * PM runtime is only enabled when a Slave reports as Attached + */ + + /* set autosuspend parameters */ + pm_runtime_set_autosuspend_delay(&slave->dev, 3000); + pm_runtime_use_autosuspend(&slave->dev); + + /* update count of parent 'active' children */ + pm_runtime_set_active(&slave->dev); + + /* make sure the device does not suspend immediately */ + pm_runtime_mark_last_busy(&slave->dev); + + pm_runtime_enable(&slave->dev); + } + + pm_runtime_get_noresume(&slave->dev); + + /* blind write */ + regmap_multi_reg_write(rt1318->regmap, rt1318_blind_write, + ARRAY_SIZE(rt1318_blind_write)); + + if (rt1318->first_hw_init) { + regcache_cache_bypass(rt1318->regmap, false); + regcache_mark_dirty(rt1318->regmap); + } + + /* Mark Slave initialization complete */ + rt1318->first_hw_init = true; + rt1318->hw_init = true; + + pm_runtime_mark_last_busy(&slave->dev); + pm_runtime_put_autosuspend(&slave->dev); + + dev_dbg(&slave->dev, "%s hw_init complete\n", __func__); + return 0; +} + +static int rt1318_update_status(struct sdw_slave *slave, + enum sdw_slave_status status) +{ + struct rt1318_sdw_priv *rt1318 = 
dev_get_drvdata(&slave->dev); + + /* Update the status */ + rt1318->status = status; + + if (status == SDW_SLAVE_UNATTACHED) + rt1318->hw_init = false; + + /* + * Perform initialization only if slave status is present and + * hw_init flag is false + */ + if (rt1318->hw_init || rt1318->status != SDW_SLAVE_ATTACHED) + return 0; + + /* perform I/O transfers required for Slave initialization */ + return rt1318_io_init(&slave->dev, slave); +} + +static int rt1318_classd_event(struct snd_soc_dapm_widget *w, + struct snd_kcontrol *kcontrol, int event) +{ + struct snd_soc_component *component = + snd_soc_dapm_to_component(w->dapm); + struct rt1318_sdw_priv *rt1318 = snd_soc_component_get_drvdata(component); + unsigned char ps0 = 0x0, ps3 = 0x3; + + switch (event) { + case SND_SOC_DAPM_POST_PMU: + regmap_write(rt1318->regmap, + SDW_SDCA_CTL(FUNC_NUM_SMART_AMP, RT1318_SDCA_ENT_PDE23, + RT1318_SDCA_CTL_REQ_POWER_STATE, 0), + ps0); + break; + case SND_SOC_DAPM_PRE_PMD: + regmap_write(rt1318->regmap, + SDW_SDCA_CTL(FUNC_NUM_SMART_AMP, RT1318_SDCA_ENT_PDE23, + RT1318_SDCA_CTL_REQ_POWER_STATE, 0), + ps3); + break; + + default: + break; + } + + return 0; +} + +static const char * const rt1318_rx_data_ch_select[] = { + "L,R", + "L,L", + "L,R", + "L,L+R", + "R,L", + "R,R", + "R,L+R", + "L+R,L", + "L+R,R", + "L+R,L+R", +}; + +static SOC_ENUM_SINGLE_DECL(rt1318_rx_data_ch_enum, + SDW_SDCA_CTL(FUNC_NUM_SMART_AMP, RT1318_SDCA_ENT_UDMPU21, RT1318_SDCA_CTL_UDMPU_CLUSTER, 0), 0, + rt1318_rx_data_ch_select); + +static const struct snd_kcontrol_new rt1318_snd_controls[] = { + + /* UDMPU Cluster Selection */ + SOC_ENUM("RX Channel Select", rt1318_rx_data_ch_enum), +}; + +static const struct snd_kcontrol_new rt1318_sto_dac = + SOC_DAPM_DOUBLE_R("Switch", + SDW_SDCA_CTL(FUNC_NUM_SMART_AMP, RT1318_SDCA_ENT_FU21, RT1318_SDCA_CTL_FU_MUTE, CH_L), + SDW_SDCA_CTL(FUNC_NUM_SMART_AMP, RT1318_SDCA_ENT_FU21, RT1318_SDCA_CTL_FU_MUTE, CH_R), + 0, 1, 1); + +static const struct snd_soc_dapm_widget rt1318_dapm_widgets[] = { + /* Audio Interface */ + SND_SOC_DAPM_AIF_IN("DP1RX", "DP1 Playback", 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("DP2TX", "DP2 Capture", 0, SND_SOC_NOPM, 0, 0), + + /* Digital Interface */ + SND_SOC_DAPM_SWITCH("DAC", SND_SOC_NOPM, 0, 0, &rt1318_sto_dac), + + /* Output */ + SND_SOC_DAPM_PGA_E("CLASS D", SND_SOC_NOPM, 0, 0, NULL, 0, + rt1318_classd_event, SND_SOC_DAPM_PRE_PMD | SND_SOC_DAPM_POST_PMU), + SND_SOC_DAPM_OUTPUT("SPOL"), + SND_SOC_DAPM_OUTPUT("SPOR"), + /* Input */ + SND_SOC_DAPM_PGA("FB Data", SND_SOC_NOPM, 0, 0, NULL, 0), + SND_SOC_DAPM_SIGGEN("FB Gen"), +}; + +static const struct snd_soc_dapm_route rt1318_dapm_routes[] = { + { "DAC", "Switch", "DP1RX" }, + { "CLASS D", NULL, "DAC" }, + { "SPOL", NULL, "CLASS D" }, + { "SPOR", NULL, "CLASS D" }, + + { "FB Data", NULL, "FB Gen" }, + { "DP2TX", NULL, "FB Data" }, +}; + +static int rt1318_set_sdw_stream(struct snd_soc_dai *dai, void *sdw_stream, + int direction) +{ + struct sdw_stream_data *stream; + + if (!sdw_stream) + return 0; + + stream = kzalloc(sizeof(*stream), GFP_KERNEL); + if (!stream) + return -ENOMEM; + + stream->sdw_stream = sdw_stream; + + /* Use tx_mask or rx_mask to configure stream tag and set dma_data */ + if (direction == SNDRV_PCM_STREAM_PLAYBACK) + dai->playback_dma_data = stream; + else + dai->capture_dma_data = stream; + + return 0; +} + +static void rt1318_sdw_shutdown(struct snd_pcm_substream *substream, + struct snd_soc_dai *dai) +{ + struct sdw_stream_data *stream; + + stream = snd_soc_dai_get_dma_data(dai, substream); + 
snd_soc_dai_set_dma_data(dai, substream, NULL); + kfree(stream); +} + +static int rt1318_sdw_hw_params(struct snd_pcm_substream *substream, + struct snd_pcm_hw_params *params, struct snd_soc_dai *dai) +{ + struct snd_soc_component *component = dai->component; + struct rt1318_sdw_priv *rt1318 = + snd_soc_component_get_drvdata(component); + struct sdw_stream_config stream_config; + struct sdw_port_config port_config; + enum sdw_data_direction direction; + struct sdw_stream_data *stream; + int retval, port, num_channels, ch_mask; + unsigned int sampling_rate; + + dev_dbg(dai->dev, "%s %s", __func__, dai->name); + stream = snd_soc_dai_get_dma_data(dai, substream); + + if (!stream) + return -EINVAL; + + if (!rt1318->sdw_slave) + return -EINVAL; + + /* SoundWire specific configuration */ + /* port 1 for playback */ + if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) { + direction = SDW_DATA_DIR_RX; + port = 1; + } else { + direction = SDW_DATA_DIR_TX; + port = 2; + } + + num_channels = params_channels(params); + ch_mask = (1 << num_channels) - 1; + + stream_config.frame_rate = params_rate(params); + stream_config.ch_count = num_channels; + stream_config.bps = snd_pcm_format_width(params_format(params)); + stream_config.direction = direction; + + port_config.ch_mask = ch_mask; + port_config.num = port; + + retval = sdw_stream_add_slave(rt1318->sdw_slave, &stream_config, + &port_config, 1, stream->sdw_stream); + if (retval) { + dev_err(dai->dev, "Unable to configure port\n"); + return retval; + } + + /* sampling rate configuration */ + switch (params_rate(params)) { + case 16000: + sampling_rate = RT1318_SDCA_RATE_16000HZ; + break; + case 32000: + sampling_rate = RT1318_SDCA_RATE_32000HZ; + break; + case 44100: + sampling_rate = RT1318_SDCA_RATE_44100HZ; + break; + case 48000: + sampling_rate = RT1318_SDCA_RATE_48000HZ; + break; + case 96000: + sampling_rate = RT1318_SDCA_RATE_96000HZ; + break; + case 192000: + sampling_rate = RT1318_SDCA_RATE_192000HZ; + break; + default: + dev_err(component->dev, "Rate %d is not supported\n", + params_rate(params)); + return -EINVAL; + } + + /* set sampling frequency */ + regmap_write(rt1318->regmap, + SDW_SDCA_CTL(FUNC_NUM_SMART_AMP, RT1318_SDCA_ENT_CS21, RT1318_SDCA_CTL_SAMPLE_FREQ_INDEX, 0), + sampling_rate); + + return 0; +} + +static int rt1318_sdw_pcm_hw_free(struct snd_pcm_substream *substream, + struct snd_soc_dai *dai) +{ + struct snd_soc_component *component = dai->component; + struct rt1318_sdw_priv *rt1318 = + snd_soc_component_get_drvdata(component); + struct sdw_stream_data *stream = + snd_soc_dai_get_dma_data(dai, substream); + + if (!rt1318->sdw_slave) + return -EINVAL; + + sdw_stream_remove_slave(rt1318->sdw_slave, stream->sdw_stream); + return 0; +} + +/* + * slave_ops: callbacks for get_clock_stop_mode, clock_stop and + * port_prep are not defined for now + */ +static struct sdw_slave_ops rt1318_slave_ops = { + .read_prop = rt1318_read_prop, + .update_status = rt1318_update_status, +}; + +static int rt1318_sdw_component_probe(struct snd_soc_component *component) +{ + int ret; + struct rt1318_sdw_priv *rt1318 = snd_soc_component_get_drvdata(component); + + rt1318->component = component; + + ret = pm_runtime_resume(component->dev); + dev_dbg(&rt1318->sdw_slave->dev, "%s pm_runtime_resume, ret=%d", __func__, ret); + if (ret < 0 && ret != -EACCES) + return ret; + + return 0; +} + +static const struct snd_soc_component_driver soc_component_sdw_rt1318 = { + .probe = rt1318_sdw_component_probe, + .controls = rt1318_snd_controls, + .num_controls = 
ARRAY_SIZE(rt1318_snd_controls), + .dapm_widgets = rt1318_dapm_widgets, + .num_dapm_widgets = ARRAY_SIZE(rt1318_dapm_widgets), + .dapm_routes = rt1318_dapm_routes, + .num_dapm_routes = ARRAY_SIZE(rt1318_dapm_routes), + .endianness = 1, +}; + +static const struct snd_soc_dai_ops rt1318_aif_dai_ops = { + .hw_params = rt1318_sdw_hw_params, + .hw_free = rt1318_sdw_pcm_hw_free, + .set_stream = rt1318_set_sdw_stream, + .shutdown = rt1318_sdw_shutdown, +}; + +#define RT1318_STEREO_RATES (SNDRV_PCM_RATE_16000 | SNDRV_PCM_RATE_32000 | SNDRV_PCM_RATE_44100 | \ + SNDRV_PCM_RATE_48000 | SNDRV_PCM_RATE_96000 | SNDRV_PCM_RATE_192000) +#define RT1318_FORMATS (SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE | \ + SNDRV_PCM_FMTBIT_S32_LE) + +static struct snd_soc_dai_driver rt1318_sdw_dai[] = { + { + .name = "rt1318-aif", + .playback = { + .stream_name = "DP1 Playback", + .channels_min = 1, + .channels_max = 2, + .rates = RT1318_STEREO_RATES, + .formats = RT1318_FORMATS, + }, + .capture = { + .stream_name = "DP2 Capture", + .channels_min = 1, + .channels_max = 2, + .rates = RT1318_STEREO_RATES, + .formats = RT1318_FORMATS, + }, + .ops = &rt1318_aif_dai_ops, + }, +}; + +static int rt1318_sdw_init(struct device *dev, struct regmap *regmap, + struct sdw_slave *slave) +{ + struct rt1318_sdw_priv *rt1318; + int ret; + + rt1318 = devm_kzalloc(dev, sizeof(*rt1318), GFP_KERNEL); + if (!rt1318) + return -ENOMEM; + + dev_set_drvdata(dev, rt1318); + rt1318->sdw_slave = slave; + rt1318->regmap = regmap; + + /* + * Mark hw_init to false + * HW init will be performed when device reports present + */ + rt1318->hw_init = false; + rt1318->first_hw_init = false; + + ret = devm_snd_soc_register_component(dev, + &soc_component_sdw_rt1318, + rt1318_sdw_dai, + ARRAY_SIZE(rt1318_sdw_dai)); + + dev_dbg(&slave->dev, "%s\n", __func__); + + return ret; +} + +static int rt1318_sdw_probe(struct sdw_slave *slave, + const struct sdw_device_id *id) +{ + struct regmap *regmap; + + /* Regmap Initialization */ + regmap = devm_regmap_init_sdw(slave, &rt1318_sdw_regmap); + if (IS_ERR(regmap)) + return PTR_ERR(regmap); + + return rt1318_sdw_init(&slave->dev, regmap, slave); +} + +static int rt1318_sdw_remove(struct sdw_slave *slave) +{ + struct rt1318_sdw_priv *rt1318 = dev_get_drvdata(&slave->dev); + + if (rt1318->first_hw_init) + pm_runtime_disable(&slave->dev); + + return 0; +} + +static const struct sdw_device_id rt1318_id[] = { + SDW_SLAVE_ENTRY_EXT(0x025d, 0x1318, 0x3, 0x1, 0), + {}, +}; +MODULE_DEVICE_TABLE(sdw, rt1318_id); + +static int __maybe_unused rt1318_dev_suspend(struct device *dev) +{ + struct rt1318_sdw_priv *rt1318 = dev_get_drvdata(dev); + + if (!rt1318->hw_init) + return 0; + + regcache_cache_only(rt1318->regmap, true); + return 0; +} + +#define RT1318_PROBE_TIMEOUT 5000 + +static int __maybe_unused rt1318_dev_resume(struct device *dev) +{ + struct sdw_slave *slave = dev_to_sdw_dev(dev); + struct rt1318_sdw_priv *rt1318 = dev_get_drvdata(dev); + unsigned long time; + + if (!rt1318->first_hw_init) + return 0; + + if (!slave->unattach_request) + goto regmap_sync; + + time = wait_for_completion_timeout(&slave->initialization_complete, + msecs_to_jiffies(RT1318_PROBE_TIMEOUT)); + if (!time) { + dev_err(&slave->dev, "Initialization not complete, timed out\n"); + return -ETIMEDOUT; + } + +regmap_sync: + slave->unattach_request = 0; + regcache_cache_only(rt1318->regmap, false); + regcache_sync(rt1318->regmap); + + return 0; +} + +static const struct dev_pm_ops rt1318_pm = { + SET_SYSTEM_SLEEP_PM_OPS(rt1318_dev_suspend, 
rt1318_dev_resume) + SET_RUNTIME_PM_OPS(rt1318_dev_suspend, rt1318_dev_resume, NULL) +}; + +static struct sdw_driver rt1318_sdw_driver = { + .driver = { + .name = "rt1318-sdca", + .owner = THIS_MODULE, + .pm = &rt1318_pm, + }, + .probe = rt1318_sdw_probe, + .remove = rt1318_sdw_remove, + .ops = &rt1318_slave_ops, + .id_table = rt1318_id, +}; +module_sdw_driver(rt1318_sdw_driver); + +MODULE_DESCRIPTION("ASoC RT1318 driver SDCA SDW"); +MODULE_AUTHOR("Shuming Fan shumingf@realtek.com"); +MODULE_LICENSE("GPL"); diff --git a/sound/soc/codecs/rt1318-sdw.h b/sound/soc/codecs/rt1318-sdw.h new file mode 100644 index 0000000000000..4d7ac9c4bd8de --- /dev/null +++ b/sound/soc/codecs/rt1318-sdw.h @@ -0,0 +1,101 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * rt1318-sdw.h -- RT1318 SDCA ALSA SoC audio driver header + * + * Copyright(c) 2022 Realtek Semiconductor Corp. + */ + +#ifndef __RT1318_SDW_H__ +#define __RT1318_SDW_H__ + +#include <linux/regmap.h> +#include <linux/soundwire/sdw.h> +#include <linux/soundwire/sdw_type.h> +#include <linux/soundwire/sdw_registers.h> +#include <sound/soc.h> + +/* imp-defined registers */ +#define RT1318_SAPU_SM 0x3203 + +#define R1318_TCON 0xc203 +#define R1318_TCON_RELATED_1 0xc206 + +#define R1318_SPK_TEMPERATRUE_PROTECTION_0 0xdb00 +#define R1318_SPK_TEMPERATRUE_PROTECTION_L_4 0xdb08 +#define R1318_SPK_TEMPERATRUE_PROTECTION_R_4 0xdd08 + +#define R1318_SPK_TEMPERATRUE_PROTECTION_L_6 0xdb12 +#define R1318_SPK_TEMPERATRUE_PROTECTION_R_6 0xdd12 + +#define RT1318_INIT_RECIPROCAL_REG_L_24 0xdbb5 +#define RT1318_INIT_RECIPROCAL_REG_L_23_16 0xdbb6 +#define RT1318_INIT_RECIPROCAL_REG_L_15_8 0xdbb7 +#define RT1318_INIT_RECIPROCAL_REG_L_7_0 0xdbb8 +#define RT1318_INIT_RECIPROCAL_REG_R_24 0xddb5 +#define RT1318_INIT_RECIPROCAL_REG_R_23_16 0xddb6 +#define RT1318_INIT_RECIPROCAL_REG_R_15_8 0xddb7 +#define RT1318_INIT_RECIPROCAL_REG_R_7_0 0xddb8 + +#define RT1318_INIT_R0_RECIPROCAL_SYN_L_24 0xdbc5 +#define RT1318_INIT_R0_RECIPROCAL_SYN_L_23_16 0xdbc6 +#define RT1318_INIT_R0_RECIPROCAL_SYN_L_15_8 0xdbc7 +#define RT1318_INIT_R0_RECIPROCAL_SYN_L_7_0 0xdbc8 +#define RT1318_INIT_R0_RECIPROCAL_SYN_R_24 0xddc5 +#define RT1318_INIT_R0_RECIPROCAL_SYN_R_23_16 0xddc6 +#define RT1318_INIT_R0_RECIPROCAL_SYN_R_15_8 0xddc7 +#define RT1318_INIT_R0_RECIPROCAL_SYN_R_7_0 0xddc8 + +#define RT1318_R0_COMPARE_FLAG_L 0xdb35 +#define RT1318_R0_COMPARE_FLAG_R 0xdd35 + +#define RT1318_STP_INITIAL_RS_TEMP_H 0xdd93 +#define RT1318_STP_INITIAL_RS_TEMP_L 0xdd94 + +/* RT1318 SDCA Control - function number */ +#define FUNC_NUM_SMART_AMP 0x04 + +/* RT1318 SDCA entity */ +#define RT1318_SDCA_ENT_PDE23 0x31 +#define RT1318_SDCA_ENT_XU24 0x24 +#define RT1318_SDCA_ENT_FU21 0x03 +#define RT1318_SDCA_ENT_UDMPU21 0x02 +#define RT1318_SDCA_ENT_CS21 0x21 +#define RT1318_SDCA_ENT_SAPU 0x29 + +/* RT1318 SDCA control */ +#define RT1318_SDCA_CTL_SAMPLE_FREQ_INDEX 0x10 +#define RT1318_SDCA_CTL_REQ_POWER_STATE 0x01 +#define RT1318_SDCA_CTL_FU_MUTE 0x01 +#define RT1318_SDCA_CTL_FU_VOLUME 0x02 +#define RT1318_SDCA_CTL_UDMPU_CLUSTER 0x10 +#define RT1318_SDCA_CTL_SAPU_PROTECTION_MODE 0x10 +#define RT1318_SDCA_CTL_SAPU_PROTECTION_STATUS 0x11 + +/* RT1318 SDCA channel */ +#define CH_L 0x01 +#define CH_R 0x02 + +/* sample frequency index */ +#define RT1318_SDCA_RATE_16000HZ 0x04 +#define RT1318_SDCA_RATE_32000HZ 0x07 +#define RT1318_SDCA_RATE_44100HZ 0x08 +#define RT1318_SDCA_RATE_48000HZ 0x09 +#define RT1318_SDCA_RATE_96000HZ 0x0b +#define RT1318_SDCA_RATE_192000HZ 0x0d + + +struct rt1318_sdw_priv { + struct 
snd_soc_component *component; + struct regmap *regmap; + struct sdw_slave *sdw_slave; + enum sdw_slave_status status; + struct sdw_bus_params params; + bool hw_init; + bool first_hw_init; +}; + +struct sdw_stream_data { + struct sdw_stream_runtime *sdw_stream; +}; + +#endif /* __RT1318_SDW_H__ */
From: Krzysztof Kozlowski krzysztof.kozlowski@linaro.org
[ Upstream commit 65b7b869da9bd3bd0b9fa60e6fe557bfbc0a75e8 ]
The struct sdw_slave_ops is never modified, and struct sdw_driver takes a pointer to const, so make it const for code safety.
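To illustrate the point, a hedged sketch (not taken from this patch): struct sdw_driver already carries its ops as a pointer to const, so a const-qualified ops table plugs in without casts. All names prefixed with example_ below are invented for illustration only.

/*
 * Illustration only; "example_" names are made up.  struct sdw_driver
 * holds "const struct sdw_slave_ops *ops", so a const table binds
 * without any cast.
 */
#include <linux/soundwire/sdw.h>
#include <linux/soundwire/sdw_type.h>

static int example_read_prop(struct sdw_slave *slave)
{
	return 0;
}

static const struct sdw_slave_ops example_slave_ops = {
	.read_prop = example_read_prop,
};

static struct sdw_driver example_sdw_driver = {
	.driver = {
		.name = "example-sdw",
	},
	.ops = &example_slave_ops,	/* assigned through the pointer-to-const member */
};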
Signed-off-by: Krzysztof Kozlowski krzysztof.kozlowski@linaro.org Link: https://lore.kernel.org/r/20230124163953.345949-1-krzysztof.kozlowski@linaro... Signed-off-by: Mark Brown broonie@kernel.org Stable-dep-of: 84822215acd1 ("ASoC: codecs: wcd938x: fix accessing regmap on unattached devices") Signed-off-by: Sasha Levin sashal@kernel.org --- sound/soc/codecs/rt1316-sdw.c | 2 +- sound/soc/codecs/rt1318-sdw.c | 2 +- sound/soc/codecs/rt711-sdca-sdw.c | 2 +- sound/soc/codecs/rt715-sdca-sdw.c | 2 +- sound/soc/codecs/wcd938x-sdw.c | 2 +- sound/soc/codecs/wsa881x.c | 2 +- sound/soc/codecs/wsa883x.c | 2 +- 7 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/sound/soc/codecs/rt1316-sdw.c b/sound/soc/codecs/rt1316-sdw.c
index ed0a114363621..154b6179b6dcd 100644
--- a/sound/soc/codecs/rt1316-sdw.c
+++ b/sound/soc/codecs/rt1316-sdw.c
@@ -585,7 +585,7 @@ static int rt1316_sdw_pcm_hw_free(struct snd_pcm_substream *substream,
  * slave_ops: callbacks for get_clock_stop_mode, clock_stop and
  * port_prep are not defined for now
  */
-static struct sdw_slave_ops rt1316_slave_ops = {
+static const struct sdw_slave_ops rt1316_slave_ops = {
 	.read_prop = rt1316_read_prop,
 	.update_status = rt1316_update_status,
 };
diff --git a/sound/soc/codecs/rt1318-sdw.c b/sound/soc/codecs/rt1318-sdw.c
index f85f5ab2c6d04..c6ec86e97a6e7 100644
--- a/sound/soc/codecs/rt1318-sdw.c
+++ b/sound/soc/codecs/rt1318-sdw.c
@@ -697,7 +697,7 @@ static int rt1318_sdw_pcm_hw_free(struct snd_pcm_substream *substream,
  * slave_ops: callbacks for get_clock_stop_mode, clock_stop and
  * port_prep are not defined for now
  */
-static struct sdw_slave_ops rt1318_slave_ops = {
+static const struct sdw_slave_ops rt1318_slave_ops = {
 	.read_prop = rt1318_read_prop,
 	.update_status = rt1318_update_status,
 };
diff --git a/sound/soc/codecs/rt711-sdca-sdw.c b/sound/soc/codecs/rt711-sdca-sdw.c
index 88a8392a58edb..e23cec4c457de 100644
--- a/sound/soc/codecs/rt711-sdca-sdw.c
+++ b/sound/soc/codecs/rt711-sdca-sdw.c
@@ -338,7 +338,7 @@ static int rt711_sdca_interrupt_callback(struct sdw_slave *slave,
 	return ret;
 }
 
-static struct sdw_slave_ops rt711_sdca_slave_ops = {
+static const struct sdw_slave_ops rt711_sdca_slave_ops = {
 	.read_prop = rt711_sdca_read_prop,
 	.interrupt_callback = rt711_sdca_interrupt_callback,
 	.update_status = rt711_sdca_update_status,
diff --git a/sound/soc/codecs/rt715-sdca-sdw.c b/sound/soc/codecs/rt715-sdca-sdw.c
index c54ecf3e69879..38a82e4e2f952 100644
--- a/sound/soc/codecs/rt715-sdca-sdw.c
+++ b/sound/soc/codecs/rt715-sdca-sdw.c
@@ -172,7 +172,7 @@ static int rt715_sdca_read_prop(struct sdw_slave *slave)
 	return 0;
 }
 
-static struct sdw_slave_ops rt715_sdca_slave_ops = {
+static const struct sdw_slave_ops rt715_sdca_slave_ops = {
 	.read_prop = rt715_sdca_read_prop,
 	.update_status = rt715_sdca_update_status,
 };
diff --git a/sound/soc/codecs/wcd938x-sdw.c b/sound/soc/codecs/wcd938x-sdw.c
index 1bf3c06a2b622..33d1b5ffeaeba 100644
--- a/sound/soc/codecs/wcd938x-sdw.c
+++ b/sound/soc/codecs/wcd938x-sdw.c
@@ -191,7 +191,7 @@ static int wcd9380_interrupt_callback(struct sdw_slave *slave,
 	return IRQ_HANDLED;
 }
 
-static struct sdw_slave_ops wcd9380_slave_ops = {
+static const struct sdw_slave_ops wcd9380_slave_ops = {
 	.update_status = wcd9380_update_status,
 	.interrupt_callback = wcd9380_interrupt_callback,
 	.bus_config = wcd9380_bus_config,
diff --git a/sound/soc/codecs/wsa881x.c b/sound/soc/codecs/wsa881x.c
index 6c8b1db649b89..046843b57b038 100644
--- a/sound/soc/codecs/wsa881x.c
+++ b/sound/soc/codecs/wsa881x.c
@@ -1101,7 +1101,7 @@ static int wsa881x_bus_config(struct sdw_slave *slave,
 	return 0;
 }
 
-static struct sdw_slave_ops wsa881x_slave_ops = {
+static const struct sdw_slave_ops wsa881x_slave_ops = {
 	.update_status = wsa881x_update_status,
 	.bus_config = wsa881x_bus_config,
 	.port_prep = wsa881x_port_prep,
diff --git a/sound/soc/codecs/wsa883x.c b/sound/soc/codecs/wsa883x.c
index 2533d0973529f..6e9a64c5948e2 100644
--- a/sound/soc/codecs/wsa883x.c
+++ b/sound/soc/codecs/wsa883x.c
@@ -1073,7 +1073,7 @@ static int wsa883x_port_prep(struct sdw_slave *slave,
 	return 0;
 }
 
-static struct sdw_slave_ops wsa883x_slave_ops = {
+static const struct sdw_slave_ops wsa883x_slave_ops = {
 	.update_status = wsa883x_update_status,
 	.port_prep = wsa883x_port_prep,
 };
From: Krzysztof Kozlowski krzysztof.kozlowski@linaro.org
[ Upstream commit 84822215acd15bd86a7759a835271e63bba83a7b ]
The WCD938x comes with three devices on two Linux drivers:
1. RX Soundwire device (wcd938x-sdw.c driver),
2. TX Soundwire device, which is used to access the device registers via regmap (also the wcd938x-sdw.c driver),
3. platform device (wcd938x.c driver) - the glue and component master, which actually holds most of the code and uses the TX Soundwire device regmap.
When the RX and TX Soundwire devices probe, the component master (the platform device) binds and tries to write the micbias configuration via the TX Soundwire regmap. This can happen before the TX Soundwire device enumerates, so the regmap access fails. On a Qualcomm SM8550 board with the WCD9385:
qcom-soundwire 6d30000.soundwire-controller: Qualcomm Soundwire controller v2.0.0 Registered
wcd938x_codec audio-codec: bound sdw:0:0217:010d:00:4 (ops wcd938x_sdw_component_ops)
wcd938x_codec audio-codec: bound sdw:0:0217:010d:00:3 (ops wcd938x_sdw_component_ops)
qcom-soundwire 6ad0000.soundwire-controller: swrm_wait_for_wr_fifo_avail err write overflow
Fix the issue by:
1. Moving the regmap creation from the platform device to the TX Soundwire device. The regmap settings are moved as-is, with one difference: the wcd938x_regmap_config is made const.
2. Using the regmap in cache-only mode until the actual TX Soundwire device enumerates, and then syncing the regmap cache.
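For readers unfamiliar with the regcache API, here is a minimal sketch of the cache-only-until-attach pattern described in point 2, assuming a heavily simplified SoundWire driver. Every name prefixed with example_ is invented for illustration and is not part of the wcd938x code.

/*
 * Minimal sketch of the cache-only-until-attach pattern (illustration
 * only, not the wcd938x code).  "example_" names are made up.
 */
#include <linux/device.h>
#include <linux/err.h>
#include <linux/mod_devicetable.h>
#include <linux/regmap.h>
#include <linux/soundwire/sdw.h>

static const struct regmap_config example_regmap_config = {
	.reg_bits = 16,			/* assumed register layout */
	.val_bits = 8,
	.max_register = 0xffff,
	.cache_type = REGCACHE_RBTREE,
};

static int example_sdw_probe(struct sdw_slave *slave,
			     const struct sdw_device_id *id)
{
	struct regmap *map;

	map = devm_regmap_init_sdw(slave, &example_regmap_config);
	if (IS_ERR(map))
		return PTR_ERR(map);

	/* Device is not enumerated yet: buffer all writes in the cache. */
	regcache_cache_only(map, true);
	dev_set_drvdata(&slave->dev, map);

	return 0;
}

static int example_update_status(struct sdw_slave *slave,
				 enum sdw_slave_status status)
{
	struct regmap *map = dev_get_drvdata(&slave->dev);

	if (status != SDW_SLAVE_ATTACHED)
		return 0;

	/* Hardware is reachable now: replay the cached writes. */
	regcache_cache_only(map, false);
	return regcache_sync(map);
}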
Cc: stable@vger.kernel.org # v3.14+ Signed-off-by: Krzysztof Kozlowski krzysztof.kozlowski@linaro.org Reviewed-by: Pierre-Louis Bossart pierre-louis.bossart@linux.intel.com Message-Id: 20230503144102.242240-1-krzysztof.kozlowski@linaro.org Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- sound/soc/codecs/wcd938x-sdw.c | 1037 +++++++++++++++++++++++++++++++- sound/soc/codecs/wcd938x.c | 1003 +----------------------------- sound/soc/codecs/wcd938x.h | 1 + 3 files changed, 1030 insertions(+), 1011 deletions(-)
diff --git a/sound/soc/codecs/wcd938x-sdw.c b/sound/soc/codecs/wcd938x-sdw.c
index 33d1b5ffeaeba..402286dfaea44 100644
--- a/sound/soc/codecs/wcd938x-sdw.c
+++ b/sound/soc/codecs/wcd938x-sdw.c
@@ -161,6 +161,14 @@ EXPORT_SYMBOL_GPL(wcd938x_sdw_set_sdw_stream);
 static int wcd9380_update_status(struct sdw_slave *slave,
 				 enum sdw_slave_status status)
 {
+	struct wcd938x_sdw_priv *wcd = dev_get_drvdata(&slave->dev);
+
+	if (wcd->regmap && (status == SDW_SLAVE_ATTACHED)) {
+		/* Write out any cached changes that happened between probe and attach */
+		regcache_cache_only(wcd->regmap, false);
+		return regcache_sync(wcd->regmap);
+	}
+
 	return 0;
 }
 
@@ -177,20 +185,1014 @@ static int wcd9380_interrupt_callback(struct sdw_slave *slave, { struct wcd938x_sdw_priv *wcd = dev_get_drvdata(&slave->dev); struct irq_domain *slave_irq = wcd->slave_irq; - struct regmap *regmap = dev_get_regmap(&slave->dev, NULL); u32 sts1, sts2, sts3;
do { handle_nested_irq(irq_find_mapping(slave_irq, 0)); - regmap_read(regmap, WCD938X_DIGITAL_INTR_STATUS_0, &sts1); - regmap_read(regmap, WCD938X_DIGITAL_INTR_STATUS_1, &sts2); - regmap_read(regmap, WCD938X_DIGITAL_INTR_STATUS_2, &sts3); + regmap_read(wcd->regmap, WCD938X_DIGITAL_INTR_STATUS_0, &sts1); + regmap_read(wcd->regmap, WCD938X_DIGITAL_INTR_STATUS_1, &sts2); + regmap_read(wcd->regmap, WCD938X_DIGITAL_INTR_STATUS_2, &sts3);
} while (sts1 || sts2 || sts3);
return IRQ_HANDLED; }
+static const struct reg_default wcd938x_defaults[] = { + {WCD938X_ANA_PAGE_REGISTER, 0x00}, + {WCD938X_ANA_BIAS, 0x00}, + {WCD938X_ANA_RX_SUPPLIES, 0x00}, + {WCD938X_ANA_HPH, 0x0C}, + {WCD938X_ANA_EAR, 0x00}, + {WCD938X_ANA_EAR_COMPANDER_CTL, 0x02}, + {WCD938X_ANA_TX_CH1, 0x20}, + {WCD938X_ANA_TX_CH2, 0x00}, + {WCD938X_ANA_TX_CH3, 0x20}, + {WCD938X_ANA_TX_CH4, 0x00}, + {WCD938X_ANA_MICB1_MICB2_DSP_EN_LOGIC, 0x00}, + {WCD938X_ANA_MICB3_DSP_EN_LOGIC, 0x00}, + {WCD938X_ANA_MBHC_MECH, 0x39}, + {WCD938X_ANA_MBHC_ELECT, 0x08}, + {WCD938X_ANA_MBHC_ZDET, 0x00}, + {WCD938X_ANA_MBHC_RESULT_1, 0x00}, + {WCD938X_ANA_MBHC_RESULT_2, 0x00}, + {WCD938X_ANA_MBHC_RESULT_3, 0x00}, + {WCD938X_ANA_MBHC_BTN0, 0x00}, + {WCD938X_ANA_MBHC_BTN1, 0x10}, + {WCD938X_ANA_MBHC_BTN2, 0x20}, + {WCD938X_ANA_MBHC_BTN3, 0x30}, + {WCD938X_ANA_MBHC_BTN4, 0x40}, + {WCD938X_ANA_MBHC_BTN5, 0x50}, + {WCD938X_ANA_MBHC_BTN6, 0x60}, + {WCD938X_ANA_MBHC_BTN7, 0x70}, + {WCD938X_ANA_MICB1, 0x10}, + {WCD938X_ANA_MICB2, 0x10}, + {WCD938X_ANA_MICB2_RAMP, 0x00}, + {WCD938X_ANA_MICB3, 0x10}, + {WCD938X_ANA_MICB4, 0x10}, + {WCD938X_BIAS_CTL, 0x2A}, + {WCD938X_BIAS_VBG_FINE_ADJ, 0x55}, + {WCD938X_LDOL_VDDCX_ADJUST, 0x01}, + {WCD938X_LDOL_DISABLE_LDOL, 0x00}, + {WCD938X_MBHC_CTL_CLK, 0x00}, + {WCD938X_MBHC_CTL_ANA, 0x00}, + {WCD938X_MBHC_CTL_SPARE_1, 0x00}, + {WCD938X_MBHC_CTL_SPARE_2, 0x00}, + {WCD938X_MBHC_CTL_BCS, 0x00}, + {WCD938X_MBHC_MOISTURE_DET_FSM_STATUS, 0x00}, + {WCD938X_MBHC_TEST_CTL, 0x00}, + {WCD938X_LDOH_MODE, 0x2B}, + {WCD938X_LDOH_BIAS, 0x68}, + {WCD938X_LDOH_STB_LOADS, 0x00}, + {WCD938X_LDOH_SLOWRAMP, 0x50}, + {WCD938X_MICB1_TEST_CTL_1, 0x1A}, + {WCD938X_MICB1_TEST_CTL_2, 0x00}, + {WCD938X_MICB1_TEST_CTL_3, 0xA4}, + {WCD938X_MICB2_TEST_CTL_1, 0x1A}, + {WCD938X_MICB2_TEST_CTL_2, 0x00}, + {WCD938X_MICB2_TEST_CTL_3, 0x24}, + {WCD938X_MICB3_TEST_CTL_1, 0x1A}, + {WCD938X_MICB3_TEST_CTL_2, 0x00}, + {WCD938X_MICB3_TEST_CTL_3, 0xA4}, + {WCD938X_MICB4_TEST_CTL_1, 0x1A}, + {WCD938X_MICB4_TEST_CTL_2, 0x00}, + {WCD938X_MICB4_TEST_CTL_3, 0xA4}, + {WCD938X_TX_COM_ADC_VCM, 0x39}, + {WCD938X_TX_COM_BIAS_ATEST, 0xE0}, + {WCD938X_TX_COM_SPARE1, 0x00}, + {WCD938X_TX_COM_SPARE2, 0x00}, + {WCD938X_TX_COM_TXFE_DIV_CTL, 0x22}, + {WCD938X_TX_COM_TXFE_DIV_START, 0x00}, + {WCD938X_TX_COM_SPARE3, 0x00}, + {WCD938X_TX_COM_SPARE4, 0x00}, + {WCD938X_TX_1_2_TEST_EN, 0xCC}, + {WCD938X_TX_1_2_ADC_IB, 0xE9}, + {WCD938X_TX_1_2_ATEST_REFCTL, 0x0A}, + {WCD938X_TX_1_2_TEST_CTL, 0x38}, + {WCD938X_TX_1_2_TEST_BLK_EN1, 0xFF}, + {WCD938X_TX_1_2_TXFE1_CLKDIV, 0x00}, + {WCD938X_TX_1_2_SAR2_ERR, 0x00}, + {WCD938X_TX_1_2_SAR1_ERR, 0x00}, + {WCD938X_TX_3_4_TEST_EN, 0xCC}, + {WCD938X_TX_3_4_ADC_IB, 0xE9}, + {WCD938X_TX_3_4_ATEST_REFCTL, 0x0A}, + {WCD938X_TX_3_4_TEST_CTL, 0x38}, + {WCD938X_TX_3_4_TEST_BLK_EN3, 0xFF}, + {WCD938X_TX_3_4_TXFE3_CLKDIV, 0x00}, + {WCD938X_TX_3_4_SAR4_ERR, 0x00}, + {WCD938X_TX_3_4_SAR3_ERR, 0x00}, + {WCD938X_TX_3_4_TEST_BLK_EN2, 0xFB}, + {WCD938X_TX_3_4_TXFE2_CLKDIV, 0x00}, + {WCD938X_TX_3_4_SPARE1, 0x00}, + {WCD938X_TX_3_4_TEST_BLK_EN4, 0xFB}, + {WCD938X_TX_3_4_TXFE4_CLKDIV, 0x00}, + {WCD938X_TX_3_4_SPARE2, 0x00}, + {WCD938X_CLASSH_MODE_1, 0x40}, + {WCD938X_CLASSH_MODE_2, 0x3A}, + {WCD938X_CLASSH_MODE_3, 0x00}, + {WCD938X_CLASSH_CTRL_VCL_1, 0x70}, + {WCD938X_CLASSH_CTRL_VCL_2, 0x82}, + {WCD938X_CLASSH_CTRL_CCL_1, 0x31}, + {WCD938X_CLASSH_CTRL_CCL_2, 0x80}, + {WCD938X_CLASSH_CTRL_CCL_3, 0x80}, + {WCD938X_CLASSH_CTRL_CCL_4, 0x51}, + {WCD938X_CLASSH_CTRL_CCL_5, 0x00}, + {WCD938X_CLASSH_BUCK_TMUX_A_D, 0x00}, + {WCD938X_CLASSH_BUCK_SW_DRV_CNTL, 0x77}, 
+ {WCD938X_CLASSH_SPARE, 0x00}, + {WCD938X_FLYBACK_EN, 0x4E}, + {WCD938X_FLYBACK_VNEG_CTRL_1, 0x0B}, + {WCD938X_FLYBACK_VNEG_CTRL_2, 0x45}, + {WCD938X_FLYBACK_VNEG_CTRL_3, 0x74}, + {WCD938X_FLYBACK_VNEG_CTRL_4, 0x7F}, + {WCD938X_FLYBACK_VNEG_CTRL_5, 0x83}, + {WCD938X_FLYBACK_VNEG_CTRL_6, 0x98}, + {WCD938X_FLYBACK_VNEG_CTRL_7, 0xA9}, + {WCD938X_FLYBACK_VNEG_CTRL_8, 0x68}, + {WCD938X_FLYBACK_VNEG_CTRL_9, 0x64}, + {WCD938X_FLYBACK_VNEGDAC_CTRL_1, 0xED}, + {WCD938X_FLYBACK_VNEGDAC_CTRL_2, 0xF0}, + {WCD938X_FLYBACK_VNEGDAC_CTRL_3, 0xA6}, + {WCD938X_FLYBACK_CTRL_1, 0x65}, + {WCD938X_FLYBACK_TEST_CTL, 0x00}, + {WCD938X_RX_AUX_SW_CTL, 0x00}, + {WCD938X_RX_PA_AUX_IN_CONN, 0x01}, + {WCD938X_RX_TIMER_DIV, 0x32}, + {WCD938X_RX_OCP_CTL, 0x1F}, + {WCD938X_RX_OCP_COUNT, 0x77}, + {WCD938X_RX_BIAS_EAR_DAC, 0xA0}, + {WCD938X_RX_BIAS_EAR_AMP, 0xAA}, + {WCD938X_RX_BIAS_HPH_LDO, 0xA9}, + {WCD938X_RX_BIAS_HPH_PA, 0xAA}, + {WCD938X_RX_BIAS_HPH_RDACBUFF_CNP2, 0x8A}, + {WCD938X_RX_BIAS_HPH_RDAC_LDO, 0x88}, + {WCD938X_RX_BIAS_HPH_CNP1, 0x82}, + {WCD938X_RX_BIAS_HPH_LOWPOWER, 0x82}, + {WCD938X_RX_BIAS_AUX_DAC, 0xA0}, + {WCD938X_RX_BIAS_AUX_AMP, 0xAA}, + {WCD938X_RX_BIAS_VNEGDAC_BLEEDER, 0x50}, + {WCD938X_RX_BIAS_MISC, 0x00}, + {WCD938X_RX_BIAS_BUCK_RST, 0x08}, + {WCD938X_RX_BIAS_BUCK_VREF_ERRAMP, 0x44}, + {WCD938X_RX_BIAS_FLYB_ERRAMP, 0x40}, + {WCD938X_RX_BIAS_FLYB_BUFF, 0xAA}, + {WCD938X_RX_BIAS_FLYB_MID_RST, 0x14}, + {WCD938X_HPH_L_STATUS, 0x04}, + {WCD938X_HPH_R_STATUS, 0x04}, + {WCD938X_HPH_CNP_EN, 0x80}, + {WCD938X_HPH_CNP_WG_CTL, 0x9A}, + {WCD938X_HPH_CNP_WG_TIME, 0x14}, + {WCD938X_HPH_OCP_CTL, 0x28}, + {WCD938X_HPH_AUTO_CHOP, 0x16}, + {WCD938X_HPH_CHOP_CTL, 0x83}, + {WCD938X_HPH_PA_CTL1, 0x46}, + {WCD938X_HPH_PA_CTL2, 0x50}, + {WCD938X_HPH_L_EN, 0x80}, + {WCD938X_HPH_L_TEST, 0xE0}, + {WCD938X_HPH_L_ATEST, 0x50}, + {WCD938X_HPH_R_EN, 0x80}, + {WCD938X_HPH_R_TEST, 0xE0}, + {WCD938X_HPH_R_ATEST, 0x54}, + {WCD938X_HPH_RDAC_CLK_CTL1, 0x99}, + {WCD938X_HPH_RDAC_CLK_CTL2, 0x9B}, + {WCD938X_HPH_RDAC_LDO_CTL, 0x33}, + {WCD938X_HPH_RDAC_CHOP_CLK_LP_CTL, 0x00}, + {WCD938X_HPH_REFBUFF_UHQA_CTL, 0x68}, + {WCD938X_HPH_REFBUFF_LP_CTL, 0x0E}, + {WCD938X_HPH_L_DAC_CTL, 0x20}, + {WCD938X_HPH_R_DAC_CTL, 0x20}, + {WCD938X_HPH_SURGE_HPHLR_SURGE_COMP_SEL, 0x55}, + {WCD938X_HPH_SURGE_HPHLR_SURGE_EN, 0x19}, + {WCD938X_HPH_SURGE_HPHLR_SURGE_MISC1, 0xA0}, + {WCD938X_HPH_SURGE_HPHLR_SURGE_STATUS, 0x00}, + {WCD938X_EAR_EAR_EN_REG, 0x22}, + {WCD938X_EAR_EAR_PA_CON, 0x44}, + {WCD938X_EAR_EAR_SP_CON, 0xDB}, + {WCD938X_EAR_EAR_DAC_CON, 0x80}, + {WCD938X_EAR_EAR_CNP_FSM_CON, 0xB2}, + {WCD938X_EAR_TEST_CTL, 0x00}, + {WCD938X_EAR_STATUS_REG_1, 0x00}, + {WCD938X_EAR_STATUS_REG_2, 0x08}, + {WCD938X_ANA_NEW_PAGE_REGISTER, 0x00}, + {WCD938X_HPH_NEW_ANA_HPH2, 0x00}, + {WCD938X_HPH_NEW_ANA_HPH3, 0x00}, + {WCD938X_SLEEP_CTL, 0x16}, + {WCD938X_SLEEP_WATCHDOG_CTL, 0x00}, + {WCD938X_MBHC_NEW_ELECT_REM_CLAMP_CTL, 0x00}, + {WCD938X_MBHC_NEW_CTL_1, 0x02}, + {WCD938X_MBHC_NEW_CTL_2, 0x05}, + {WCD938X_MBHC_NEW_PLUG_DETECT_CTL, 0xE9}, + {WCD938X_MBHC_NEW_ZDET_ANA_CTL, 0x0F}, + {WCD938X_MBHC_NEW_ZDET_RAMP_CTL, 0x00}, + {WCD938X_MBHC_NEW_FSM_STATUS, 0x00}, + {WCD938X_MBHC_NEW_ADC_RESULT, 0x00}, + {WCD938X_TX_NEW_AMIC_MUX_CFG, 0x00}, + {WCD938X_AUX_AUXPA, 0x00}, + {WCD938X_LDORXTX_MODE, 0x0C}, + {WCD938X_LDORXTX_CONFIG, 0x10}, + {WCD938X_DIE_CRACK_DIE_CRK_DET_EN, 0x00}, + {WCD938X_DIE_CRACK_DIE_CRK_DET_OUT, 0x00}, + {WCD938X_HPH_NEW_INT_RDAC_GAIN_CTL, 0x40}, + {WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_L, 0x81}, + {WCD938X_HPH_NEW_INT_RDAC_VREF_CTL, 0x10}, + 
{WCD938X_HPH_NEW_INT_RDAC_OVERRIDE_CTL, 0x00}, + {WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_R, 0x81}, + {WCD938X_HPH_NEW_INT_PA_MISC1, 0x22}, + {WCD938X_HPH_NEW_INT_PA_MISC2, 0x00}, + {WCD938X_HPH_NEW_INT_PA_RDAC_MISC, 0x00}, + {WCD938X_HPH_NEW_INT_HPH_TIMER1, 0xFE}, + {WCD938X_HPH_NEW_INT_HPH_TIMER2, 0x02}, + {WCD938X_HPH_NEW_INT_HPH_TIMER3, 0x4E}, + {WCD938X_HPH_NEW_INT_HPH_TIMER4, 0x54}, + {WCD938X_HPH_NEW_INT_PA_RDAC_MISC2, 0x00}, + {WCD938X_HPH_NEW_INT_PA_RDAC_MISC3, 0x00}, + {WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_L_NEW, 0x90}, + {WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_R_NEW, 0x90}, + {WCD938X_RX_NEW_INT_HPH_RDAC_BIAS_LOHIFI, 0x62}, + {WCD938X_RX_NEW_INT_HPH_RDAC_BIAS_ULP, 0x01}, + {WCD938X_RX_NEW_INT_HPH_RDAC_LDO_LP, 0x11}, + {WCD938X_MBHC_NEW_INT_MOISTURE_DET_DC_CTRL, 0x57}, + {WCD938X_MBHC_NEW_INT_MOISTURE_DET_POLLING_CTRL, 0x01}, + {WCD938X_MBHC_NEW_INT_MECH_DET_CURRENT, 0x00}, + {WCD938X_MBHC_NEW_INT_SPARE_2, 0x00}, + {WCD938X_EAR_INT_NEW_EAR_CHOPPER_CON, 0xA8}, + {WCD938X_EAR_INT_NEW_CNP_VCM_CON1, 0x42}, + {WCD938X_EAR_INT_NEW_CNP_VCM_CON2, 0x22}, + {WCD938X_EAR_INT_NEW_EAR_DYNAMIC_BIAS, 0x00}, + {WCD938X_AUX_INT_EN_REG, 0x00}, + {WCD938X_AUX_INT_PA_CTRL, 0x06}, + {WCD938X_AUX_INT_SP_CTRL, 0xD2}, + {WCD938X_AUX_INT_DAC_CTRL, 0x80}, + {WCD938X_AUX_INT_CLK_CTRL, 0x50}, + {WCD938X_AUX_INT_TEST_CTRL, 0x00}, + {WCD938X_AUX_INT_STATUS_REG, 0x00}, + {WCD938X_AUX_INT_MISC, 0x00}, + {WCD938X_LDORXTX_INT_BIAS, 0x6E}, + {WCD938X_LDORXTX_INT_STB_LOADS_DTEST, 0x50}, + {WCD938X_LDORXTX_INT_TEST0, 0x1C}, + {WCD938X_LDORXTX_INT_STARTUP_TIMER, 0xFF}, + {WCD938X_LDORXTX_INT_TEST1, 0x1F}, + {WCD938X_LDORXTX_INT_STATUS, 0x00}, + {WCD938X_SLEEP_INT_WATCHDOG_CTL_1, 0x0A}, + {WCD938X_SLEEP_INT_WATCHDOG_CTL_2, 0x0A}, + {WCD938X_DIE_CRACK_INT_DIE_CRK_DET_INT1, 0x02}, + {WCD938X_DIE_CRACK_INT_DIE_CRK_DET_INT2, 0x60}, + {WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_L2, 0xFF}, + {WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_L1, 0x7F}, + {WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_L0, 0x3F}, + {WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_ULP1P2M, 0x1F}, + {WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_ULP0P6M, 0x0F}, + {WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG1_L2L1, 0xD7}, + {WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG1_L0, 0xC8}, + {WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG1_ULP, 0xC6}, + {WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2MAIN_L2L1, 0xD5}, + {WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2MAIN_L0, 0xCA}, + {WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2MAIN_ULP, 0x05}, + {WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2CASC_L2L1L0, 0xA5}, + {WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2CASC_ULP, 0x13}, + {WCD938X_TX_COM_NEW_INT_TXADC_SCBIAS_L2L1, 0x88}, + {WCD938X_TX_COM_NEW_INT_TXADC_SCBIAS_L0ULP, 0x42}, + {WCD938X_TX_COM_NEW_INT_TXADC_INT_L2, 0xFF}, + {WCD938X_TX_COM_NEW_INT_TXADC_INT_L1, 0x64}, + {WCD938X_TX_COM_NEW_INT_TXADC_INT_L0, 0x64}, + {WCD938X_TX_COM_NEW_INT_TXADC_INT_ULP, 0x77}, + {WCD938X_DIGITAL_PAGE_REGISTER, 0x00}, + {WCD938X_DIGITAL_CHIP_ID0, 0x00}, + {WCD938X_DIGITAL_CHIP_ID1, 0x00}, + {WCD938X_DIGITAL_CHIP_ID2, 0x0D}, + {WCD938X_DIGITAL_CHIP_ID3, 0x01}, + {WCD938X_DIGITAL_SWR_TX_CLK_RATE, 0x00}, + {WCD938X_DIGITAL_CDC_RST_CTL, 0x03}, + {WCD938X_DIGITAL_TOP_CLK_CFG, 0x00}, + {WCD938X_DIGITAL_CDC_ANA_CLK_CTL, 0x00}, + {WCD938X_DIGITAL_CDC_DIG_CLK_CTL, 0xF0}, + {WCD938X_DIGITAL_SWR_RST_EN, 0x00}, + {WCD938X_DIGITAL_CDC_PATH_MODE, 0x55}, + {WCD938X_DIGITAL_CDC_RX_RST, 0x00}, + {WCD938X_DIGITAL_CDC_RX0_CTL, 0xFC}, + {WCD938X_DIGITAL_CDC_RX1_CTL, 0xFC}, + {WCD938X_DIGITAL_CDC_RX2_CTL, 0xFC}, + {WCD938X_DIGITAL_CDC_TX_ANA_MODE_0_1, 0x00}, + {WCD938X_DIGITAL_CDC_TX_ANA_MODE_2_3, 0x00}, + 
{WCD938X_DIGITAL_CDC_COMP_CTL_0, 0x00}, + {WCD938X_DIGITAL_CDC_ANA_TX_CLK_CTL, 0x1E}, + {WCD938X_DIGITAL_CDC_HPH_DSM_A1_0, 0x00}, + {WCD938X_DIGITAL_CDC_HPH_DSM_A1_1, 0x01}, + {WCD938X_DIGITAL_CDC_HPH_DSM_A2_0, 0x63}, + {WCD938X_DIGITAL_CDC_HPH_DSM_A2_1, 0x04}, + {WCD938X_DIGITAL_CDC_HPH_DSM_A3_0, 0xAC}, + {WCD938X_DIGITAL_CDC_HPH_DSM_A3_1, 0x04}, + {WCD938X_DIGITAL_CDC_HPH_DSM_A4_0, 0x1A}, + {WCD938X_DIGITAL_CDC_HPH_DSM_A4_1, 0x03}, + {WCD938X_DIGITAL_CDC_HPH_DSM_A5_0, 0xBC}, + {WCD938X_DIGITAL_CDC_HPH_DSM_A5_1, 0x02}, + {WCD938X_DIGITAL_CDC_HPH_DSM_A6_0, 0xC7}, + {WCD938X_DIGITAL_CDC_HPH_DSM_A7_0, 0xF8}, + {WCD938X_DIGITAL_CDC_HPH_DSM_C_0, 0x47}, + {WCD938X_DIGITAL_CDC_HPH_DSM_C_1, 0x43}, + {WCD938X_DIGITAL_CDC_HPH_DSM_C_2, 0xB1}, + {WCD938X_DIGITAL_CDC_HPH_DSM_C_3, 0x17}, + {WCD938X_DIGITAL_CDC_HPH_DSM_R1, 0x4D}, + {WCD938X_DIGITAL_CDC_HPH_DSM_R2, 0x29}, + {WCD938X_DIGITAL_CDC_HPH_DSM_R3, 0x34}, + {WCD938X_DIGITAL_CDC_HPH_DSM_R4, 0x59}, + {WCD938X_DIGITAL_CDC_HPH_DSM_R5, 0x66}, + {WCD938X_DIGITAL_CDC_HPH_DSM_R6, 0x87}, + {WCD938X_DIGITAL_CDC_HPH_DSM_R7, 0x64}, + {WCD938X_DIGITAL_CDC_AUX_DSM_A1_0, 0x00}, + {WCD938X_DIGITAL_CDC_AUX_DSM_A1_1, 0x01}, + {WCD938X_DIGITAL_CDC_AUX_DSM_A2_0, 0x96}, + {WCD938X_DIGITAL_CDC_AUX_DSM_A2_1, 0x09}, + {WCD938X_DIGITAL_CDC_AUX_DSM_A3_0, 0xAB}, + {WCD938X_DIGITAL_CDC_AUX_DSM_A3_1, 0x05}, + {WCD938X_DIGITAL_CDC_AUX_DSM_A4_0, 0x1C}, + {WCD938X_DIGITAL_CDC_AUX_DSM_A4_1, 0x02}, + {WCD938X_DIGITAL_CDC_AUX_DSM_A5_0, 0x17}, + {WCD938X_DIGITAL_CDC_AUX_DSM_A5_1, 0x02}, + {WCD938X_DIGITAL_CDC_AUX_DSM_A6_0, 0xAA}, + {WCD938X_DIGITAL_CDC_AUX_DSM_A7_0, 0xE3}, + {WCD938X_DIGITAL_CDC_AUX_DSM_C_0, 0x69}, + {WCD938X_DIGITAL_CDC_AUX_DSM_C_1, 0x54}, + {WCD938X_DIGITAL_CDC_AUX_DSM_C_2, 0x02}, + {WCD938X_DIGITAL_CDC_AUX_DSM_C_3, 0x15}, + {WCD938X_DIGITAL_CDC_AUX_DSM_R1, 0xA4}, + {WCD938X_DIGITAL_CDC_AUX_DSM_R2, 0xB5}, + {WCD938X_DIGITAL_CDC_AUX_DSM_R3, 0x86}, + {WCD938X_DIGITAL_CDC_AUX_DSM_R4, 0x85}, + {WCD938X_DIGITAL_CDC_AUX_DSM_R5, 0xAA}, + {WCD938X_DIGITAL_CDC_AUX_DSM_R6, 0xE2}, + {WCD938X_DIGITAL_CDC_AUX_DSM_R7, 0x62}, + {WCD938X_DIGITAL_CDC_HPH_GAIN_RX_0, 0x55}, + {WCD938X_DIGITAL_CDC_HPH_GAIN_RX_1, 0xA9}, + {WCD938X_DIGITAL_CDC_HPH_GAIN_DSD_0, 0x3D}, + {WCD938X_DIGITAL_CDC_HPH_GAIN_DSD_1, 0x2E}, + {WCD938X_DIGITAL_CDC_HPH_GAIN_DSD_2, 0x01}, + {WCD938X_DIGITAL_CDC_AUX_GAIN_DSD_0, 0x00}, + {WCD938X_DIGITAL_CDC_AUX_GAIN_DSD_1, 0xFC}, + {WCD938X_DIGITAL_CDC_AUX_GAIN_DSD_2, 0x01}, + {WCD938X_DIGITAL_CDC_HPH_GAIN_CTL, 0x00}, + {WCD938X_DIGITAL_CDC_AUX_GAIN_CTL, 0x00}, + {WCD938X_DIGITAL_CDC_EAR_PATH_CTL, 0x00}, + {WCD938X_DIGITAL_CDC_SWR_CLH, 0x00}, + {WCD938X_DIGITAL_SWR_CLH_BYP, 0x00}, + {WCD938X_DIGITAL_CDC_TX0_CTL, 0x68}, + {WCD938X_DIGITAL_CDC_TX1_CTL, 0x68}, + {WCD938X_DIGITAL_CDC_TX2_CTL, 0x68}, + {WCD938X_DIGITAL_CDC_TX_RST, 0x00}, + {WCD938X_DIGITAL_CDC_REQ_CTL, 0x01}, + {WCD938X_DIGITAL_CDC_RST, 0x00}, + {WCD938X_DIGITAL_CDC_AMIC_CTL, 0x0F}, + {WCD938X_DIGITAL_CDC_DMIC_CTL, 0x04}, + {WCD938X_DIGITAL_CDC_DMIC1_CTL, 0x01}, + {WCD938X_DIGITAL_CDC_DMIC2_CTL, 0x01}, + {WCD938X_DIGITAL_CDC_DMIC3_CTL, 0x01}, + {WCD938X_DIGITAL_CDC_DMIC4_CTL, 0x01}, + {WCD938X_DIGITAL_EFUSE_PRG_CTL, 0x00}, + {WCD938X_DIGITAL_EFUSE_CTL, 0x2B}, + {WCD938X_DIGITAL_CDC_DMIC_RATE_1_2, 0x11}, + {WCD938X_DIGITAL_CDC_DMIC_RATE_3_4, 0x11}, + {WCD938X_DIGITAL_PDM_WD_CTL0, 0x00}, + {WCD938X_DIGITAL_PDM_WD_CTL1, 0x00}, + {WCD938X_DIGITAL_PDM_WD_CTL2, 0x00}, + {WCD938X_DIGITAL_INTR_MODE, 0x00}, + {WCD938X_DIGITAL_INTR_MASK_0, 0xFF}, + {WCD938X_DIGITAL_INTR_MASK_1, 0xFF}, + {WCD938X_DIGITAL_INTR_MASK_2, 
0x3F}, + {WCD938X_DIGITAL_INTR_STATUS_0, 0x00}, + {WCD938X_DIGITAL_INTR_STATUS_1, 0x00}, + {WCD938X_DIGITAL_INTR_STATUS_2, 0x00}, + {WCD938X_DIGITAL_INTR_CLEAR_0, 0x00}, + {WCD938X_DIGITAL_INTR_CLEAR_1, 0x00}, + {WCD938X_DIGITAL_INTR_CLEAR_2, 0x00}, + {WCD938X_DIGITAL_INTR_LEVEL_0, 0x00}, + {WCD938X_DIGITAL_INTR_LEVEL_1, 0x00}, + {WCD938X_DIGITAL_INTR_LEVEL_2, 0x00}, + {WCD938X_DIGITAL_INTR_SET_0, 0x00}, + {WCD938X_DIGITAL_INTR_SET_1, 0x00}, + {WCD938X_DIGITAL_INTR_SET_2, 0x00}, + {WCD938X_DIGITAL_INTR_TEST_0, 0x00}, + {WCD938X_DIGITAL_INTR_TEST_1, 0x00}, + {WCD938X_DIGITAL_INTR_TEST_2, 0x00}, + {WCD938X_DIGITAL_TX_MODE_DBG_EN, 0x00}, + {WCD938X_DIGITAL_TX_MODE_DBG_0_1, 0x00}, + {WCD938X_DIGITAL_TX_MODE_DBG_2_3, 0x00}, + {WCD938X_DIGITAL_LB_IN_SEL_CTL, 0x00}, + {WCD938X_DIGITAL_LOOP_BACK_MODE, 0x00}, + {WCD938X_DIGITAL_SWR_DAC_TEST, 0x00}, + {WCD938X_DIGITAL_SWR_HM_TEST_RX_0, 0x40}, + {WCD938X_DIGITAL_SWR_HM_TEST_TX_0, 0x40}, + {WCD938X_DIGITAL_SWR_HM_TEST_RX_1, 0x00}, + {WCD938X_DIGITAL_SWR_HM_TEST_TX_1, 0x00}, + {WCD938X_DIGITAL_SWR_HM_TEST_TX_2, 0x00}, + {WCD938X_DIGITAL_SWR_HM_TEST_0, 0x00}, + {WCD938X_DIGITAL_SWR_HM_TEST_1, 0x00}, + {WCD938X_DIGITAL_PAD_CTL_SWR_0, 0x8F}, + {WCD938X_DIGITAL_PAD_CTL_SWR_1, 0x06}, + {WCD938X_DIGITAL_I2C_CTL, 0x00}, + {WCD938X_DIGITAL_CDC_TX_TANGGU_SW_MODE, 0x00}, + {WCD938X_DIGITAL_EFUSE_TEST_CTL_0, 0x00}, + {WCD938X_DIGITAL_EFUSE_TEST_CTL_1, 0x00}, + {WCD938X_DIGITAL_EFUSE_T_DATA_0, 0x00}, + {WCD938X_DIGITAL_EFUSE_T_DATA_1, 0x00}, + {WCD938X_DIGITAL_PAD_CTL_PDM_RX0, 0xF1}, + {WCD938X_DIGITAL_PAD_CTL_PDM_RX1, 0xF1}, + {WCD938X_DIGITAL_PAD_CTL_PDM_TX0, 0xF1}, + {WCD938X_DIGITAL_PAD_CTL_PDM_TX1, 0xF1}, + {WCD938X_DIGITAL_PAD_CTL_PDM_TX2, 0xF1}, + {WCD938X_DIGITAL_PAD_INP_DIS_0, 0x00}, + {WCD938X_DIGITAL_PAD_INP_DIS_1, 0x00}, + {WCD938X_DIGITAL_DRIVE_STRENGTH_0, 0x00}, + {WCD938X_DIGITAL_DRIVE_STRENGTH_1, 0x00}, + {WCD938X_DIGITAL_DRIVE_STRENGTH_2, 0x00}, + {WCD938X_DIGITAL_RX_DATA_EDGE_CTL, 0x1F}, + {WCD938X_DIGITAL_TX_DATA_EDGE_CTL, 0x80}, + {WCD938X_DIGITAL_GPIO_MODE, 0x00}, + {WCD938X_DIGITAL_PIN_CTL_OE, 0x00}, + {WCD938X_DIGITAL_PIN_CTL_DATA_0, 0x00}, + {WCD938X_DIGITAL_PIN_CTL_DATA_1, 0x00}, + {WCD938X_DIGITAL_PIN_STATUS_0, 0x00}, + {WCD938X_DIGITAL_PIN_STATUS_1, 0x00}, + {WCD938X_DIGITAL_DIG_DEBUG_CTL, 0x00}, + {WCD938X_DIGITAL_DIG_DEBUG_EN, 0x00}, + {WCD938X_DIGITAL_ANA_CSR_DBG_ADD, 0x00}, + {WCD938X_DIGITAL_ANA_CSR_DBG_CTL, 0x48}, + {WCD938X_DIGITAL_SSP_DBG, 0x00}, + {WCD938X_DIGITAL_MODE_STATUS_0, 0x00}, + {WCD938X_DIGITAL_MODE_STATUS_1, 0x00}, + {WCD938X_DIGITAL_SPARE_0, 0x00}, + {WCD938X_DIGITAL_SPARE_1, 0x00}, + {WCD938X_DIGITAL_SPARE_2, 0x00}, + {WCD938X_DIGITAL_EFUSE_REG_0, 0x00}, + {WCD938X_DIGITAL_EFUSE_REG_1, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_2, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_3, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_4, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_5, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_6, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_7, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_8, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_9, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_10, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_11, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_12, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_13, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_14, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_15, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_16, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_17, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_18, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_19, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_20, 0x0E}, + {WCD938X_DIGITAL_EFUSE_REG_21, 0x00}, + {WCD938X_DIGITAL_EFUSE_REG_22, 0x00}, + 
{WCD938X_DIGITAL_EFUSE_REG_23, 0xF8}, + {WCD938X_DIGITAL_EFUSE_REG_24, 0x16}, + {WCD938X_DIGITAL_EFUSE_REG_25, 0x00}, + {WCD938X_DIGITAL_EFUSE_REG_26, 0x00}, + {WCD938X_DIGITAL_EFUSE_REG_27, 0x00}, + {WCD938X_DIGITAL_EFUSE_REG_28, 0x00}, + {WCD938X_DIGITAL_EFUSE_REG_29, 0x00}, + {WCD938X_DIGITAL_EFUSE_REG_30, 0x00}, + {WCD938X_DIGITAL_EFUSE_REG_31, 0x00}, + {WCD938X_DIGITAL_TX_REQ_FB_CTL_0, 0x88}, + {WCD938X_DIGITAL_TX_REQ_FB_CTL_1, 0x88}, + {WCD938X_DIGITAL_TX_REQ_FB_CTL_2, 0x88}, + {WCD938X_DIGITAL_TX_REQ_FB_CTL_3, 0x88}, + {WCD938X_DIGITAL_TX_REQ_FB_CTL_4, 0x88}, + {WCD938X_DIGITAL_DEM_BYPASS_DATA0, 0x55}, + {WCD938X_DIGITAL_DEM_BYPASS_DATA1, 0x55}, + {WCD938X_DIGITAL_DEM_BYPASS_DATA2, 0x55}, + {WCD938X_DIGITAL_DEM_BYPASS_DATA3, 0x01}, +}; + +static bool wcd938x_rdwr_register(struct device *dev, unsigned int reg) +{ + switch (reg) { + case WCD938X_ANA_PAGE_REGISTER: + case WCD938X_ANA_BIAS: + case WCD938X_ANA_RX_SUPPLIES: + case WCD938X_ANA_HPH: + case WCD938X_ANA_EAR: + case WCD938X_ANA_EAR_COMPANDER_CTL: + case WCD938X_ANA_TX_CH1: + case WCD938X_ANA_TX_CH2: + case WCD938X_ANA_TX_CH3: + case WCD938X_ANA_TX_CH4: + case WCD938X_ANA_MICB1_MICB2_DSP_EN_LOGIC: + case WCD938X_ANA_MICB3_DSP_EN_LOGIC: + case WCD938X_ANA_MBHC_MECH: + case WCD938X_ANA_MBHC_ELECT: + case WCD938X_ANA_MBHC_ZDET: + case WCD938X_ANA_MBHC_BTN0: + case WCD938X_ANA_MBHC_BTN1: + case WCD938X_ANA_MBHC_BTN2: + case WCD938X_ANA_MBHC_BTN3: + case WCD938X_ANA_MBHC_BTN4: + case WCD938X_ANA_MBHC_BTN5: + case WCD938X_ANA_MBHC_BTN6: + case WCD938X_ANA_MBHC_BTN7: + case WCD938X_ANA_MICB1: + case WCD938X_ANA_MICB2: + case WCD938X_ANA_MICB2_RAMP: + case WCD938X_ANA_MICB3: + case WCD938X_ANA_MICB4: + case WCD938X_BIAS_CTL: + case WCD938X_BIAS_VBG_FINE_ADJ: + case WCD938X_LDOL_VDDCX_ADJUST: + case WCD938X_LDOL_DISABLE_LDOL: + case WCD938X_MBHC_CTL_CLK: + case WCD938X_MBHC_CTL_ANA: + case WCD938X_MBHC_CTL_SPARE_1: + case WCD938X_MBHC_CTL_SPARE_2: + case WCD938X_MBHC_CTL_BCS: + case WCD938X_MBHC_TEST_CTL: + case WCD938X_LDOH_MODE: + case WCD938X_LDOH_BIAS: + case WCD938X_LDOH_STB_LOADS: + case WCD938X_LDOH_SLOWRAMP: + case WCD938X_MICB1_TEST_CTL_1: + case WCD938X_MICB1_TEST_CTL_2: + case WCD938X_MICB1_TEST_CTL_3: + case WCD938X_MICB2_TEST_CTL_1: + case WCD938X_MICB2_TEST_CTL_2: + case WCD938X_MICB2_TEST_CTL_3: + case WCD938X_MICB3_TEST_CTL_1: + case WCD938X_MICB3_TEST_CTL_2: + case WCD938X_MICB3_TEST_CTL_3: + case WCD938X_MICB4_TEST_CTL_1: + case WCD938X_MICB4_TEST_CTL_2: + case WCD938X_MICB4_TEST_CTL_3: + case WCD938X_TX_COM_ADC_VCM: + case WCD938X_TX_COM_BIAS_ATEST: + case WCD938X_TX_COM_SPARE1: + case WCD938X_TX_COM_SPARE2: + case WCD938X_TX_COM_TXFE_DIV_CTL: + case WCD938X_TX_COM_TXFE_DIV_START: + case WCD938X_TX_COM_SPARE3: + case WCD938X_TX_COM_SPARE4: + case WCD938X_TX_1_2_TEST_EN: + case WCD938X_TX_1_2_ADC_IB: + case WCD938X_TX_1_2_ATEST_REFCTL: + case WCD938X_TX_1_2_TEST_CTL: + case WCD938X_TX_1_2_TEST_BLK_EN1: + case WCD938X_TX_1_2_TXFE1_CLKDIV: + case WCD938X_TX_3_4_TEST_EN: + case WCD938X_TX_3_4_ADC_IB: + case WCD938X_TX_3_4_ATEST_REFCTL: + case WCD938X_TX_3_4_TEST_CTL: + case WCD938X_TX_3_4_TEST_BLK_EN3: + case WCD938X_TX_3_4_TXFE3_CLKDIV: + case WCD938X_TX_3_4_TEST_BLK_EN2: + case WCD938X_TX_3_4_TXFE2_CLKDIV: + case WCD938X_TX_3_4_SPARE1: + case WCD938X_TX_3_4_TEST_BLK_EN4: + case WCD938X_TX_3_4_TXFE4_CLKDIV: + case WCD938X_TX_3_4_SPARE2: + case WCD938X_CLASSH_MODE_1: + case WCD938X_CLASSH_MODE_2: + case WCD938X_CLASSH_MODE_3: + case WCD938X_CLASSH_CTRL_VCL_1: + case WCD938X_CLASSH_CTRL_VCL_2: + case 
WCD938X_CLASSH_CTRL_CCL_1: + case WCD938X_CLASSH_CTRL_CCL_2: + case WCD938X_CLASSH_CTRL_CCL_3: + case WCD938X_CLASSH_CTRL_CCL_4: + case WCD938X_CLASSH_CTRL_CCL_5: + case WCD938X_CLASSH_BUCK_TMUX_A_D: + case WCD938X_CLASSH_BUCK_SW_DRV_CNTL: + case WCD938X_CLASSH_SPARE: + case WCD938X_FLYBACK_EN: + case WCD938X_FLYBACK_VNEG_CTRL_1: + case WCD938X_FLYBACK_VNEG_CTRL_2: + case WCD938X_FLYBACK_VNEG_CTRL_3: + case WCD938X_FLYBACK_VNEG_CTRL_4: + case WCD938X_FLYBACK_VNEG_CTRL_5: + case WCD938X_FLYBACK_VNEG_CTRL_6: + case WCD938X_FLYBACK_VNEG_CTRL_7: + case WCD938X_FLYBACK_VNEG_CTRL_8: + case WCD938X_FLYBACK_VNEG_CTRL_9: + case WCD938X_FLYBACK_VNEGDAC_CTRL_1: + case WCD938X_FLYBACK_VNEGDAC_CTRL_2: + case WCD938X_FLYBACK_VNEGDAC_CTRL_3: + case WCD938X_FLYBACK_CTRL_1: + case WCD938X_FLYBACK_TEST_CTL: + case WCD938X_RX_AUX_SW_CTL: + case WCD938X_RX_PA_AUX_IN_CONN: + case WCD938X_RX_TIMER_DIV: + case WCD938X_RX_OCP_CTL: + case WCD938X_RX_OCP_COUNT: + case WCD938X_RX_BIAS_EAR_DAC: + case WCD938X_RX_BIAS_EAR_AMP: + case WCD938X_RX_BIAS_HPH_LDO: + case WCD938X_RX_BIAS_HPH_PA: + case WCD938X_RX_BIAS_HPH_RDACBUFF_CNP2: + case WCD938X_RX_BIAS_HPH_RDAC_LDO: + case WCD938X_RX_BIAS_HPH_CNP1: + case WCD938X_RX_BIAS_HPH_LOWPOWER: + case WCD938X_RX_BIAS_AUX_DAC: + case WCD938X_RX_BIAS_AUX_AMP: + case WCD938X_RX_BIAS_VNEGDAC_BLEEDER: + case WCD938X_RX_BIAS_MISC: + case WCD938X_RX_BIAS_BUCK_RST: + case WCD938X_RX_BIAS_BUCK_VREF_ERRAMP: + case WCD938X_RX_BIAS_FLYB_ERRAMP: + case WCD938X_RX_BIAS_FLYB_BUFF: + case WCD938X_RX_BIAS_FLYB_MID_RST: + case WCD938X_HPH_CNP_EN: + case WCD938X_HPH_CNP_WG_CTL: + case WCD938X_HPH_CNP_WG_TIME: + case WCD938X_HPH_OCP_CTL: + case WCD938X_HPH_AUTO_CHOP: + case WCD938X_HPH_CHOP_CTL: + case WCD938X_HPH_PA_CTL1: + case WCD938X_HPH_PA_CTL2: + case WCD938X_HPH_L_EN: + case WCD938X_HPH_L_TEST: + case WCD938X_HPH_L_ATEST: + case WCD938X_HPH_R_EN: + case WCD938X_HPH_R_TEST: + case WCD938X_HPH_R_ATEST: + case WCD938X_HPH_RDAC_CLK_CTL1: + case WCD938X_HPH_RDAC_CLK_CTL2: + case WCD938X_HPH_RDAC_LDO_CTL: + case WCD938X_HPH_RDAC_CHOP_CLK_LP_CTL: + case WCD938X_HPH_REFBUFF_UHQA_CTL: + case WCD938X_HPH_REFBUFF_LP_CTL: + case WCD938X_HPH_L_DAC_CTL: + case WCD938X_HPH_R_DAC_CTL: + case WCD938X_HPH_SURGE_HPHLR_SURGE_COMP_SEL: + case WCD938X_HPH_SURGE_HPHLR_SURGE_EN: + case WCD938X_HPH_SURGE_HPHLR_SURGE_MISC1: + case WCD938X_EAR_EAR_EN_REG: + case WCD938X_EAR_EAR_PA_CON: + case WCD938X_EAR_EAR_SP_CON: + case WCD938X_EAR_EAR_DAC_CON: + case WCD938X_EAR_EAR_CNP_FSM_CON: + case WCD938X_EAR_TEST_CTL: + case WCD938X_ANA_NEW_PAGE_REGISTER: + case WCD938X_HPH_NEW_ANA_HPH2: + case WCD938X_HPH_NEW_ANA_HPH3: + case WCD938X_SLEEP_CTL: + case WCD938X_SLEEP_WATCHDOG_CTL: + case WCD938X_MBHC_NEW_ELECT_REM_CLAMP_CTL: + case WCD938X_MBHC_NEW_CTL_1: + case WCD938X_MBHC_NEW_CTL_2: + case WCD938X_MBHC_NEW_PLUG_DETECT_CTL: + case WCD938X_MBHC_NEW_ZDET_ANA_CTL: + case WCD938X_MBHC_NEW_ZDET_RAMP_CTL: + case WCD938X_TX_NEW_AMIC_MUX_CFG: + case WCD938X_AUX_AUXPA: + case WCD938X_LDORXTX_MODE: + case WCD938X_LDORXTX_CONFIG: + case WCD938X_DIE_CRACK_DIE_CRK_DET_EN: + case WCD938X_HPH_NEW_INT_RDAC_GAIN_CTL: + case WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_L: + case WCD938X_HPH_NEW_INT_RDAC_VREF_CTL: + case WCD938X_HPH_NEW_INT_RDAC_OVERRIDE_CTL: + case WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_R: + case WCD938X_HPH_NEW_INT_PA_MISC1: + case WCD938X_HPH_NEW_INT_PA_MISC2: + case WCD938X_HPH_NEW_INT_PA_RDAC_MISC: + case WCD938X_HPH_NEW_INT_HPH_TIMER1: + case WCD938X_HPH_NEW_INT_HPH_TIMER2: + case WCD938X_HPH_NEW_INT_HPH_TIMER3: + case 
WCD938X_HPH_NEW_INT_HPH_TIMER4: + case WCD938X_HPH_NEW_INT_PA_RDAC_MISC2: + case WCD938X_HPH_NEW_INT_PA_RDAC_MISC3: + case WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_L_NEW: + case WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_R_NEW: + case WCD938X_RX_NEW_INT_HPH_RDAC_BIAS_LOHIFI: + case WCD938X_RX_NEW_INT_HPH_RDAC_BIAS_ULP: + case WCD938X_RX_NEW_INT_HPH_RDAC_LDO_LP: + case WCD938X_MBHC_NEW_INT_MOISTURE_DET_DC_CTRL: + case WCD938X_MBHC_NEW_INT_MOISTURE_DET_POLLING_CTRL: + case WCD938X_MBHC_NEW_INT_MECH_DET_CURRENT: + case WCD938X_MBHC_NEW_INT_SPARE_2: + case WCD938X_EAR_INT_NEW_EAR_CHOPPER_CON: + case WCD938X_EAR_INT_NEW_CNP_VCM_CON1: + case WCD938X_EAR_INT_NEW_CNP_VCM_CON2: + case WCD938X_EAR_INT_NEW_EAR_DYNAMIC_BIAS: + case WCD938X_AUX_INT_EN_REG: + case WCD938X_AUX_INT_PA_CTRL: + case WCD938X_AUX_INT_SP_CTRL: + case WCD938X_AUX_INT_DAC_CTRL: + case WCD938X_AUX_INT_CLK_CTRL: + case WCD938X_AUX_INT_TEST_CTRL: + case WCD938X_AUX_INT_MISC: + case WCD938X_LDORXTX_INT_BIAS: + case WCD938X_LDORXTX_INT_STB_LOADS_DTEST: + case WCD938X_LDORXTX_INT_TEST0: + case WCD938X_LDORXTX_INT_STARTUP_TIMER: + case WCD938X_LDORXTX_INT_TEST1: + case WCD938X_SLEEP_INT_WATCHDOG_CTL_1: + case WCD938X_SLEEP_INT_WATCHDOG_CTL_2: + case WCD938X_DIE_CRACK_INT_DIE_CRK_DET_INT1: + case WCD938X_DIE_CRACK_INT_DIE_CRK_DET_INT2: + case WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_L2: + case WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_L1: + case WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_L0: + case WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_ULP1P2M: + case WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_ULP0P6M: + case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG1_L2L1: + case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG1_L0: + case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG1_ULP: + case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2MAIN_L2L1: + case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2MAIN_L0: + case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2MAIN_ULP: + case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2CASC_L2L1L0: + case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2CASC_ULP: + case WCD938X_TX_COM_NEW_INT_TXADC_SCBIAS_L2L1: + case WCD938X_TX_COM_NEW_INT_TXADC_SCBIAS_L0ULP: + case WCD938X_TX_COM_NEW_INT_TXADC_INT_L2: + case WCD938X_TX_COM_NEW_INT_TXADC_INT_L1: + case WCD938X_TX_COM_NEW_INT_TXADC_INT_L0: + case WCD938X_TX_COM_NEW_INT_TXADC_INT_ULP: + case WCD938X_DIGITAL_PAGE_REGISTER: + case WCD938X_DIGITAL_SWR_TX_CLK_RATE: + case WCD938X_DIGITAL_CDC_RST_CTL: + case WCD938X_DIGITAL_TOP_CLK_CFG: + case WCD938X_DIGITAL_CDC_ANA_CLK_CTL: + case WCD938X_DIGITAL_CDC_DIG_CLK_CTL: + case WCD938X_DIGITAL_SWR_RST_EN: + case WCD938X_DIGITAL_CDC_PATH_MODE: + case WCD938X_DIGITAL_CDC_RX_RST: + case WCD938X_DIGITAL_CDC_RX0_CTL: + case WCD938X_DIGITAL_CDC_RX1_CTL: + case WCD938X_DIGITAL_CDC_RX2_CTL: + case WCD938X_DIGITAL_CDC_TX_ANA_MODE_0_1: + case WCD938X_DIGITAL_CDC_TX_ANA_MODE_2_3: + case WCD938X_DIGITAL_CDC_COMP_CTL_0: + case WCD938X_DIGITAL_CDC_ANA_TX_CLK_CTL: + case WCD938X_DIGITAL_CDC_HPH_DSM_A1_0: + case WCD938X_DIGITAL_CDC_HPH_DSM_A1_1: + case WCD938X_DIGITAL_CDC_HPH_DSM_A2_0: + case WCD938X_DIGITAL_CDC_HPH_DSM_A2_1: + case WCD938X_DIGITAL_CDC_HPH_DSM_A3_0: + case WCD938X_DIGITAL_CDC_HPH_DSM_A3_1: + case WCD938X_DIGITAL_CDC_HPH_DSM_A4_0: + case WCD938X_DIGITAL_CDC_HPH_DSM_A4_1: + case WCD938X_DIGITAL_CDC_HPH_DSM_A5_0: + case WCD938X_DIGITAL_CDC_HPH_DSM_A5_1: + case WCD938X_DIGITAL_CDC_HPH_DSM_A6_0: + case WCD938X_DIGITAL_CDC_HPH_DSM_A7_0: + case WCD938X_DIGITAL_CDC_HPH_DSM_C_0: + case WCD938X_DIGITAL_CDC_HPH_DSM_C_1: + case WCD938X_DIGITAL_CDC_HPH_DSM_C_2: + case WCD938X_DIGITAL_CDC_HPH_DSM_C_3: + case WCD938X_DIGITAL_CDC_HPH_DSM_R1: + 
case WCD938X_DIGITAL_CDC_HPH_DSM_R2: + case WCD938X_DIGITAL_CDC_HPH_DSM_R3: + case WCD938X_DIGITAL_CDC_HPH_DSM_R4: + case WCD938X_DIGITAL_CDC_HPH_DSM_R5: + case WCD938X_DIGITAL_CDC_HPH_DSM_R6: + case WCD938X_DIGITAL_CDC_HPH_DSM_R7: + case WCD938X_DIGITAL_CDC_AUX_DSM_A1_0: + case WCD938X_DIGITAL_CDC_AUX_DSM_A1_1: + case WCD938X_DIGITAL_CDC_AUX_DSM_A2_0: + case WCD938X_DIGITAL_CDC_AUX_DSM_A2_1: + case WCD938X_DIGITAL_CDC_AUX_DSM_A3_0: + case WCD938X_DIGITAL_CDC_AUX_DSM_A3_1: + case WCD938X_DIGITAL_CDC_AUX_DSM_A4_0: + case WCD938X_DIGITAL_CDC_AUX_DSM_A4_1: + case WCD938X_DIGITAL_CDC_AUX_DSM_A5_0: + case WCD938X_DIGITAL_CDC_AUX_DSM_A5_1: + case WCD938X_DIGITAL_CDC_AUX_DSM_A6_0: + case WCD938X_DIGITAL_CDC_AUX_DSM_A7_0: + case WCD938X_DIGITAL_CDC_AUX_DSM_C_0: + case WCD938X_DIGITAL_CDC_AUX_DSM_C_1: + case WCD938X_DIGITAL_CDC_AUX_DSM_C_2: + case WCD938X_DIGITAL_CDC_AUX_DSM_C_3: + case WCD938X_DIGITAL_CDC_AUX_DSM_R1: + case WCD938X_DIGITAL_CDC_AUX_DSM_R2: + case WCD938X_DIGITAL_CDC_AUX_DSM_R3: + case WCD938X_DIGITAL_CDC_AUX_DSM_R4: + case WCD938X_DIGITAL_CDC_AUX_DSM_R5: + case WCD938X_DIGITAL_CDC_AUX_DSM_R6: + case WCD938X_DIGITAL_CDC_AUX_DSM_R7: + case WCD938X_DIGITAL_CDC_HPH_GAIN_RX_0: + case WCD938X_DIGITAL_CDC_HPH_GAIN_RX_1: + case WCD938X_DIGITAL_CDC_HPH_GAIN_DSD_0: + case WCD938X_DIGITAL_CDC_HPH_GAIN_DSD_1: + case WCD938X_DIGITAL_CDC_HPH_GAIN_DSD_2: + case WCD938X_DIGITAL_CDC_AUX_GAIN_DSD_0: + case WCD938X_DIGITAL_CDC_AUX_GAIN_DSD_1: + case WCD938X_DIGITAL_CDC_AUX_GAIN_DSD_2: + case WCD938X_DIGITAL_CDC_HPH_GAIN_CTL: + case WCD938X_DIGITAL_CDC_AUX_GAIN_CTL: + case WCD938X_DIGITAL_CDC_EAR_PATH_CTL: + case WCD938X_DIGITAL_CDC_SWR_CLH: + case WCD938X_DIGITAL_SWR_CLH_BYP: + case WCD938X_DIGITAL_CDC_TX0_CTL: + case WCD938X_DIGITAL_CDC_TX1_CTL: + case WCD938X_DIGITAL_CDC_TX2_CTL: + case WCD938X_DIGITAL_CDC_TX_RST: + case WCD938X_DIGITAL_CDC_REQ_CTL: + case WCD938X_DIGITAL_CDC_RST: + case WCD938X_DIGITAL_CDC_AMIC_CTL: + case WCD938X_DIGITAL_CDC_DMIC_CTL: + case WCD938X_DIGITAL_CDC_DMIC1_CTL: + case WCD938X_DIGITAL_CDC_DMIC2_CTL: + case WCD938X_DIGITAL_CDC_DMIC3_CTL: + case WCD938X_DIGITAL_CDC_DMIC4_CTL: + case WCD938X_DIGITAL_EFUSE_PRG_CTL: + case WCD938X_DIGITAL_EFUSE_CTL: + case WCD938X_DIGITAL_CDC_DMIC_RATE_1_2: + case WCD938X_DIGITAL_CDC_DMIC_RATE_3_4: + case WCD938X_DIGITAL_PDM_WD_CTL0: + case WCD938X_DIGITAL_PDM_WD_CTL1: + case WCD938X_DIGITAL_PDM_WD_CTL2: + case WCD938X_DIGITAL_INTR_MODE: + case WCD938X_DIGITAL_INTR_MASK_0: + case WCD938X_DIGITAL_INTR_MASK_1: + case WCD938X_DIGITAL_INTR_MASK_2: + case WCD938X_DIGITAL_INTR_CLEAR_0: + case WCD938X_DIGITAL_INTR_CLEAR_1: + case WCD938X_DIGITAL_INTR_CLEAR_2: + case WCD938X_DIGITAL_INTR_LEVEL_0: + case WCD938X_DIGITAL_INTR_LEVEL_1: + case WCD938X_DIGITAL_INTR_LEVEL_2: + case WCD938X_DIGITAL_INTR_SET_0: + case WCD938X_DIGITAL_INTR_SET_1: + case WCD938X_DIGITAL_INTR_SET_2: + case WCD938X_DIGITAL_INTR_TEST_0: + case WCD938X_DIGITAL_INTR_TEST_1: + case WCD938X_DIGITAL_INTR_TEST_2: + case WCD938X_DIGITAL_TX_MODE_DBG_EN: + case WCD938X_DIGITAL_TX_MODE_DBG_0_1: + case WCD938X_DIGITAL_TX_MODE_DBG_2_3: + case WCD938X_DIGITAL_LB_IN_SEL_CTL: + case WCD938X_DIGITAL_LOOP_BACK_MODE: + case WCD938X_DIGITAL_SWR_DAC_TEST: + case WCD938X_DIGITAL_SWR_HM_TEST_RX_0: + case WCD938X_DIGITAL_SWR_HM_TEST_TX_0: + case WCD938X_DIGITAL_SWR_HM_TEST_RX_1: + case WCD938X_DIGITAL_SWR_HM_TEST_TX_1: + case WCD938X_DIGITAL_SWR_HM_TEST_TX_2: + case WCD938X_DIGITAL_PAD_CTL_SWR_0: + case WCD938X_DIGITAL_PAD_CTL_SWR_1: + case WCD938X_DIGITAL_I2C_CTL: + case 
WCD938X_DIGITAL_CDC_TX_TANGGU_SW_MODE: + case WCD938X_DIGITAL_EFUSE_TEST_CTL_0: + case WCD938X_DIGITAL_EFUSE_TEST_CTL_1: + case WCD938X_DIGITAL_PAD_CTL_PDM_RX0: + case WCD938X_DIGITAL_PAD_CTL_PDM_RX1: + case WCD938X_DIGITAL_PAD_CTL_PDM_TX0: + case WCD938X_DIGITAL_PAD_CTL_PDM_TX1: + case WCD938X_DIGITAL_PAD_CTL_PDM_TX2: + case WCD938X_DIGITAL_PAD_INP_DIS_0: + case WCD938X_DIGITAL_PAD_INP_DIS_1: + case WCD938X_DIGITAL_DRIVE_STRENGTH_0: + case WCD938X_DIGITAL_DRIVE_STRENGTH_1: + case WCD938X_DIGITAL_DRIVE_STRENGTH_2: + case WCD938X_DIGITAL_RX_DATA_EDGE_CTL: + case WCD938X_DIGITAL_TX_DATA_EDGE_CTL: + case WCD938X_DIGITAL_GPIO_MODE: + case WCD938X_DIGITAL_PIN_CTL_OE: + case WCD938X_DIGITAL_PIN_CTL_DATA_0: + case WCD938X_DIGITAL_PIN_CTL_DATA_1: + case WCD938X_DIGITAL_DIG_DEBUG_CTL: + case WCD938X_DIGITAL_DIG_DEBUG_EN: + case WCD938X_DIGITAL_ANA_CSR_DBG_ADD: + case WCD938X_DIGITAL_ANA_CSR_DBG_CTL: + case WCD938X_DIGITAL_SSP_DBG: + case WCD938X_DIGITAL_SPARE_0: + case WCD938X_DIGITAL_SPARE_1: + case WCD938X_DIGITAL_SPARE_2: + case WCD938X_DIGITAL_TX_REQ_FB_CTL_0: + case WCD938X_DIGITAL_TX_REQ_FB_CTL_1: + case WCD938X_DIGITAL_TX_REQ_FB_CTL_2: + case WCD938X_DIGITAL_TX_REQ_FB_CTL_3: + case WCD938X_DIGITAL_TX_REQ_FB_CTL_4: + case WCD938X_DIGITAL_DEM_BYPASS_DATA0: + case WCD938X_DIGITAL_DEM_BYPASS_DATA1: + case WCD938X_DIGITAL_DEM_BYPASS_DATA2: + case WCD938X_DIGITAL_DEM_BYPASS_DATA3: + return true; + } + + return false; +} + +static bool wcd938x_readonly_register(struct device *dev, unsigned int reg) +{ + switch (reg) { + case WCD938X_ANA_MBHC_RESULT_1: + case WCD938X_ANA_MBHC_RESULT_2: + case WCD938X_ANA_MBHC_RESULT_3: + case WCD938X_MBHC_MOISTURE_DET_FSM_STATUS: + case WCD938X_TX_1_2_SAR2_ERR: + case WCD938X_TX_1_2_SAR1_ERR: + case WCD938X_TX_3_4_SAR4_ERR: + case WCD938X_TX_3_4_SAR3_ERR: + case WCD938X_HPH_L_STATUS: + case WCD938X_HPH_R_STATUS: + case WCD938X_HPH_SURGE_HPHLR_SURGE_STATUS: + case WCD938X_EAR_STATUS_REG_1: + case WCD938X_EAR_STATUS_REG_2: + case WCD938X_MBHC_NEW_FSM_STATUS: + case WCD938X_MBHC_NEW_ADC_RESULT: + case WCD938X_DIE_CRACK_DIE_CRK_DET_OUT: + case WCD938X_AUX_INT_STATUS_REG: + case WCD938X_LDORXTX_INT_STATUS: + case WCD938X_DIGITAL_CHIP_ID0: + case WCD938X_DIGITAL_CHIP_ID1: + case WCD938X_DIGITAL_CHIP_ID2: + case WCD938X_DIGITAL_CHIP_ID3: + case WCD938X_DIGITAL_INTR_STATUS_0: + case WCD938X_DIGITAL_INTR_STATUS_1: + case WCD938X_DIGITAL_INTR_STATUS_2: + case WCD938X_DIGITAL_INTR_CLEAR_0: + case WCD938X_DIGITAL_INTR_CLEAR_1: + case WCD938X_DIGITAL_INTR_CLEAR_2: + case WCD938X_DIGITAL_SWR_HM_TEST_0: + case WCD938X_DIGITAL_SWR_HM_TEST_1: + case WCD938X_DIGITAL_EFUSE_T_DATA_0: + case WCD938X_DIGITAL_EFUSE_T_DATA_1: + case WCD938X_DIGITAL_PIN_STATUS_0: + case WCD938X_DIGITAL_PIN_STATUS_1: + case WCD938X_DIGITAL_MODE_STATUS_0: + case WCD938X_DIGITAL_MODE_STATUS_1: + case WCD938X_DIGITAL_EFUSE_REG_0: + case WCD938X_DIGITAL_EFUSE_REG_1: + case WCD938X_DIGITAL_EFUSE_REG_2: + case WCD938X_DIGITAL_EFUSE_REG_3: + case WCD938X_DIGITAL_EFUSE_REG_4: + case WCD938X_DIGITAL_EFUSE_REG_5: + case WCD938X_DIGITAL_EFUSE_REG_6: + case WCD938X_DIGITAL_EFUSE_REG_7: + case WCD938X_DIGITAL_EFUSE_REG_8: + case WCD938X_DIGITAL_EFUSE_REG_9: + case WCD938X_DIGITAL_EFUSE_REG_10: + case WCD938X_DIGITAL_EFUSE_REG_11: + case WCD938X_DIGITAL_EFUSE_REG_12: + case WCD938X_DIGITAL_EFUSE_REG_13: + case WCD938X_DIGITAL_EFUSE_REG_14: + case WCD938X_DIGITAL_EFUSE_REG_15: + case WCD938X_DIGITAL_EFUSE_REG_16: + case WCD938X_DIGITAL_EFUSE_REG_17: + case WCD938X_DIGITAL_EFUSE_REG_18: + case WCD938X_DIGITAL_EFUSE_REG_19: 
+ case WCD938X_DIGITAL_EFUSE_REG_20: + case WCD938X_DIGITAL_EFUSE_REG_21: + case WCD938X_DIGITAL_EFUSE_REG_22: + case WCD938X_DIGITAL_EFUSE_REG_23: + case WCD938X_DIGITAL_EFUSE_REG_24: + case WCD938X_DIGITAL_EFUSE_REG_25: + case WCD938X_DIGITAL_EFUSE_REG_26: + case WCD938X_DIGITAL_EFUSE_REG_27: + case WCD938X_DIGITAL_EFUSE_REG_28: + case WCD938X_DIGITAL_EFUSE_REG_29: + case WCD938X_DIGITAL_EFUSE_REG_30: + case WCD938X_DIGITAL_EFUSE_REG_31: + return true; + } + return false; +} + +static bool wcd938x_readable_register(struct device *dev, unsigned int reg) +{ + bool ret; + + ret = wcd938x_readonly_register(dev, reg); + if (!ret) + return wcd938x_rdwr_register(dev, reg); + + return ret; +} + +static bool wcd938x_writeable_register(struct device *dev, unsigned int reg) +{ + return wcd938x_rdwr_register(dev, reg); +} + +static bool wcd938x_volatile_register(struct device *dev, unsigned int reg) +{ + if (reg <= WCD938X_BASE_ADDRESS) + return false; + + if (reg == WCD938X_DIGITAL_SWR_TX_CLK_RATE) + return true; + + if (wcd938x_readonly_register(dev, reg)) + return true; + + return false; +} + +static const struct regmap_config wcd938x_regmap_config = { + .name = "wcd938x_csr", + .reg_bits = 32, + .val_bits = 8, + .cache_type = REGCACHE_RBTREE, + .reg_defaults = wcd938x_defaults, + .num_reg_defaults = ARRAY_SIZE(wcd938x_defaults), + .max_register = WCD938X_MAX_REGISTER, + .readable_reg = wcd938x_readable_register, + .writeable_reg = wcd938x_writeable_register, + .volatile_reg = wcd938x_volatile_register, + .can_multi_write = true, +}; + static const struct sdw_slave_ops wcd9380_slave_ops = { .update_status = wcd9380_update_status, .interrupt_callback = wcd9380_interrupt_callback, @@ -261,6 +1263,16 @@ static int wcd9380_probe(struct sdw_slave *pdev, wcd->ch_info = &wcd938x_sdw_rx_ch_info[0]; }
+	if (wcd->is_tx) {
+		wcd->regmap = devm_regmap_init_sdw(pdev, &wcd938x_regmap_config);
+		if (IS_ERR(wcd->regmap))
+			return dev_err_probe(dev, PTR_ERR(wcd->regmap),
+					     "Regmap init failed\n");
+
+		/* Start in cache-only until device is enumerated */
+		regcache_cache_only(wcd->regmap, true);
+	};
+
 	pm_runtime_set_autosuspend_delay(dev, 3000);
 	pm_runtime_use_autosuspend(dev);
 	pm_runtime_mark_last_busy(dev);
@@ -278,22 +1290,23 @@ MODULE_DEVICE_TABLE(sdw, wcd9380_slave_id);
 
 static int __maybe_unused wcd938x_sdw_runtime_suspend(struct device *dev)
 {
-	struct regmap *regmap = dev_get_regmap(dev, NULL);
+	struct wcd938x_sdw_priv *wcd = dev_get_drvdata(dev);
 
-	if (regmap) {
-		regcache_cache_only(regmap, true);
-		regcache_mark_dirty(regmap);
+	if (wcd->regmap) {
+		regcache_cache_only(wcd->regmap, true);
+		regcache_mark_dirty(wcd->regmap);
 	}
+
 	return 0;
 }
 
 static int __maybe_unused wcd938x_sdw_runtime_resume(struct device *dev)
 {
-	struct regmap *regmap = dev_get_regmap(dev, NULL);
+	struct wcd938x_sdw_priv *wcd = dev_get_drvdata(dev);
 
-	if (regmap) {
-		regcache_cache_only(regmap, false);
-		regcache_sync(regmap);
+	if (wcd->regmap) {
+		regcache_cache_only(wcd->regmap, false);
+		regcache_sync(wcd->regmap);
 	}
 
 	pm_runtime_mark_last_busy(dev);
diff --git a/sound/soc/codecs/wcd938x.c b/sound/soc/codecs/wcd938x.c
index aca06a4026f3e..1d801a7b1469d 100644
--- a/sound/soc/codecs/wcd938x.c
+++ b/sound/soc/codecs/wcd938x.c
@@ -273,1001 +273,6 @@ static struct wcd_mbhc_field wcd_mbhc_fields[WCD_MBHC_REG_FUNC_MAX] = {
 	WCD_MBHC_FIELD(WCD_MBHC_ELECT_ISRC_EN, WCD938X_ANA_MBHC_ZDET, 0x02),
 };
-static const struct reg_default wcd938x_defaults[] = { - {WCD938X_ANA_PAGE_REGISTER, 0x00}, - {WCD938X_ANA_BIAS, 0x00}, - {WCD938X_ANA_RX_SUPPLIES, 0x00}, - {WCD938X_ANA_HPH, 0x0C}, - {WCD938X_ANA_EAR, 0x00}, - {WCD938X_ANA_EAR_COMPANDER_CTL, 0x02}, - {WCD938X_ANA_TX_CH1, 0x20}, - {WCD938X_ANA_TX_CH2, 0x00}, - {WCD938X_ANA_TX_CH3, 0x20}, - {WCD938X_ANA_TX_CH4, 0x00}, - {WCD938X_ANA_MICB1_MICB2_DSP_EN_LOGIC, 0x00}, - {WCD938X_ANA_MICB3_DSP_EN_LOGIC, 0x00}, - {WCD938X_ANA_MBHC_MECH, 0x39}, - {WCD938X_ANA_MBHC_ELECT, 0x08}, - {WCD938X_ANA_MBHC_ZDET, 0x00}, - {WCD938X_ANA_MBHC_RESULT_1, 0x00}, - {WCD938X_ANA_MBHC_RESULT_2, 0x00}, - {WCD938X_ANA_MBHC_RESULT_3, 0x00}, - {WCD938X_ANA_MBHC_BTN0, 0x00}, - {WCD938X_ANA_MBHC_BTN1, 0x10}, - {WCD938X_ANA_MBHC_BTN2, 0x20}, - {WCD938X_ANA_MBHC_BTN3, 0x30}, - {WCD938X_ANA_MBHC_BTN4, 0x40}, - {WCD938X_ANA_MBHC_BTN5, 0x50}, - {WCD938X_ANA_MBHC_BTN6, 0x60}, - {WCD938X_ANA_MBHC_BTN7, 0x70}, - {WCD938X_ANA_MICB1, 0x10}, - {WCD938X_ANA_MICB2, 0x10}, - {WCD938X_ANA_MICB2_RAMP, 0x00}, - {WCD938X_ANA_MICB3, 0x10}, - {WCD938X_ANA_MICB4, 0x10}, - {WCD938X_BIAS_CTL, 0x2A}, - {WCD938X_BIAS_VBG_FINE_ADJ, 0x55}, - {WCD938X_LDOL_VDDCX_ADJUST, 0x01}, - {WCD938X_LDOL_DISABLE_LDOL, 0x00}, - {WCD938X_MBHC_CTL_CLK, 0x00}, - {WCD938X_MBHC_CTL_ANA, 0x00}, - {WCD938X_MBHC_CTL_SPARE_1, 0x00}, - {WCD938X_MBHC_CTL_SPARE_2, 0x00}, - {WCD938X_MBHC_CTL_BCS, 0x00}, - {WCD938X_MBHC_MOISTURE_DET_FSM_STATUS, 0x00}, - {WCD938X_MBHC_TEST_CTL, 0x00}, - {WCD938X_LDOH_MODE, 0x2B}, - {WCD938X_LDOH_BIAS, 0x68}, - {WCD938X_LDOH_STB_LOADS, 0x00}, - {WCD938X_LDOH_SLOWRAMP, 0x50}, - {WCD938X_MICB1_TEST_CTL_1, 0x1A}, - {WCD938X_MICB1_TEST_CTL_2, 0x00}, - {WCD938X_MICB1_TEST_CTL_3, 0xA4}, - {WCD938X_MICB2_TEST_CTL_1, 0x1A}, - {WCD938X_MICB2_TEST_CTL_2, 0x00}, - {WCD938X_MICB2_TEST_CTL_3, 0x24}, - {WCD938X_MICB3_TEST_CTL_1, 0x1A}, - {WCD938X_MICB3_TEST_CTL_2, 0x00}, - {WCD938X_MICB3_TEST_CTL_3, 0xA4}, - {WCD938X_MICB4_TEST_CTL_1, 0x1A}, - {WCD938X_MICB4_TEST_CTL_2, 0x00}, - {WCD938X_MICB4_TEST_CTL_3, 0xA4}, - {WCD938X_TX_COM_ADC_VCM, 0x39}, - {WCD938X_TX_COM_BIAS_ATEST, 0xE0}, - {WCD938X_TX_COM_SPARE1, 0x00}, - {WCD938X_TX_COM_SPARE2, 0x00}, - {WCD938X_TX_COM_TXFE_DIV_CTL, 0x22}, - {WCD938X_TX_COM_TXFE_DIV_START, 0x00}, - {WCD938X_TX_COM_SPARE3, 0x00}, - {WCD938X_TX_COM_SPARE4, 0x00}, - {WCD938X_TX_1_2_TEST_EN, 0xCC}, - {WCD938X_TX_1_2_ADC_IB, 0xE9}, - {WCD938X_TX_1_2_ATEST_REFCTL, 0x0A}, - {WCD938X_TX_1_2_TEST_CTL, 0x38}, - {WCD938X_TX_1_2_TEST_BLK_EN1, 0xFF}, - {WCD938X_TX_1_2_TXFE1_CLKDIV, 0x00}, - {WCD938X_TX_1_2_SAR2_ERR, 0x00}, - {WCD938X_TX_1_2_SAR1_ERR, 0x00}, - {WCD938X_TX_3_4_TEST_EN, 0xCC}, - {WCD938X_TX_3_4_ADC_IB, 0xE9}, - {WCD938X_TX_3_4_ATEST_REFCTL, 0x0A}, - {WCD938X_TX_3_4_TEST_CTL, 0x38}, - {WCD938X_TX_3_4_TEST_BLK_EN3, 0xFF}, - {WCD938X_TX_3_4_TXFE3_CLKDIV, 0x00}, - {WCD938X_TX_3_4_SAR4_ERR, 0x00}, - {WCD938X_TX_3_4_SAR3_ERR, 0x00}, - {WCD938X_TX_3_4_TEST_BLK_EN2, 0xFB}, - {WCD938X_TX_3_4_TXFE2_CLKDIV, 0x00}, - {WCD938X_TX_3_4_SPARE1, 0x00}, - {WCD938X_TX_3_4_TEST_BLK_EN4, 0xFB}, - {WCD938X_TX_3_4_TXFE4_CLKDIV, 0x00}, - {WCD938X_TX_3_4_SPARE2, 0x00}, - {WCD938X_CLASSH_MODE_1, 0x40}, - {WCD938X_CLASSH_MODE_2, 0x3A}, - {WCD938X_CLASSH_MODE_3, 0x00}, - {WCD938X_CLASSH_CTRL_VCL_1, 0x70}, - {WCD938X_CLASSH_CTRL_VCL_2, 0x82}, - {WCD938X_CLASSH_CTRL_CCL_1, 0x31}, - {WCD938X_CLASSH_CTRL_CCL_2, 0x80}, - {WCD938X_CLASSH_CTRL_CCL_3, 0x80}, - {WCD938X_CLASSH_CTRL_CCL_4, 0x51}, - {WCD938X_CLASSH_CTRL_CCL_5, 0x00}, - {WCD938X_CLASSH_BUCK_TMUX_A_D, 0x00}, - {WCD938X_CLASSH_BUCK_SW_DRV_CNTL, 0x77}, 
- {WCD938X_CLASSH_SPARE, 0x00}, - {WCD938X_FLYBACK_EN, 0x4E}, - {WCD938X_FLYBACK_VNEG_CTRL_1, 0x0B}, - {WCD938X_FLYBACK_VNEG_CTRL_2, 0x45}, - {WCD938X_FLYBACK_VNEG_CTRL_3, 0x74}, - {WCD938X_FLYBACK_VNEG_CTRL_4, 0x7F}, - {WCD938X_FLYBACK_VNEG_CTRL_5, 0x83}, - {WCD938X_FLYBACK_VNEG_CTRL_6, 0x98}, - {WCD938X_FLYBACK_VNEG_CTRL_7, 0xA9}, - {WCD938X_FLYBACK_VNEG_CTRL_8, 0x68}, - {WCD938X_FLYBACK_VNEG_CTRL_9, 0x64}, - {WCD938X_FLYBACK_VNEGDAC_CTRL_1, 0xED}, - {WCD938X_FLYBACK_VNEGDAC_CTRL_2, 0xF0}, - {WCD938X_FLYBACK_VNEGDAC_CTRL_3, 0xA6}, - {WCD938X_FLYBACK_CTRL_1, 0x65}, - {WCD938X_FLYBACK_TEST_CTL, 0x00}, - {WCD938X_RX_AUX_SW_CTL, 0x00}, - {WCD938X_RX_PA_AUX_IN_CONN, 0x01}, - {WCD938X_RX_TIMER_DIV, 0x32}, - {WCD938X_RX_OCP_CTL, 0x1F}, - {WCD938X_RX_OCP_COUNT, 0x77}, - {WCD938X_RX_BIAS_EAR_DAC, 0xA0}, - {WCD938X_RX_BIAS_EAR_AMP, 0xAA}, - {WCD938X_RX_BIAS_HPH_LDO, 0xA9}, - {WCD938X_RX_BIAS_HPH_PA, 0xAA}, - {WCD938X_RX_BIAS_HPH_RDACBUFF_CNP2, 0x8A}, - {WCD938X_RX_BIAS_HPH_RDAC_LDO, 0x88}, - {WCD938X_RX_BIAS_HPH_CNP1, 0x82}, - {WCD938X_RX_BIAS_HPH_LOWPOWER, 0x82}, - {WCD938X_RX_BIAS_AUX_DAC, 0xA0}, - {WCD938X_RX_BIAS_AUX_AMP, 0xAA}, - {WCD938X_RX_BIAS_VNEGDAC_BLEEDER, 0x50}, - {WCD938X_RX_BIAS_MISC, 0x00}, - {WCD938X_RX_BIAS_BUCK_RST, 0x08}, - {WCD938X_RX_BIAS_BUCK_VREF_ERRAMP, 0x44}, - {WCD938X_RX_BIAS_FLYB_ERRAMP, 0x40}, - {WCD938X_RX_BIAS_FLYB_BUFF, 0xAA}, - {WCD938X_RX_BIAS_FLYB_MID_RST, 0x14}, - {WCD938X_HPH_L_STATUS, 0x04}, - {WCD938X_HPH_R_STATUS, 0x04}, - {WCD938X_HPH_CNP_EN, 0x80}, - {WCD938X_HPH_CNP_WG_CTL, 0x9A}, - {WCD938X_HPH_CNP_WG_TIME, 0x14}, - {WCD938X_HPH_OCP_CTL, 0x28}, - {WCD938X_HPH_AUTO_CHOP, 0x16}, - {WCD938X_HPH_CHOP_CTL, 0x83}, - {WCD938X_HPH_PA_CTL1, 0x46}, - {WCD938X_HPH_PA_CTL2, 0x50}, - {WCD938X_HPH_L_EN, 0x80}, - {WCD938X_HPH_L_TEST, 0xE0}, - {WCD938X_HPH_L_ATEST, 0x50}, - {WCD938X_HPH_R_EN, 0x80}, - {WCD938X_HPH_R_TEST, 0xE0}, - {WCD938X_HPH_R_ATEST, 0x54}, - {WCD938X_HPH_RDAC_CLK_CTL1, 0x99}, - {WCD938X_HPH_RDAC_CLK_CTL2, 0x9B}, - {WCD938X_HPH_RDAC_LDO_CTL, 0x33}, - {WCD938X_HPH_RDAC_CHOP_CLK_LP_CTL, 0x00}, - {WCD938X_HPH_REFBUFF_UHQA_CTL, 0x68}, - {WCD938X_HPH_REFBUFF_LP_CTL, 0x0E}, - {WCD938X_HPH_L_DAC_CTL, 0x20}, - {WCD938X_HPH_R_DAC_CTL, 0x20}, - {WCD938X_HPH_SURGE_HPHLR_SURGE_COMP_SEL, 0x55}, - {WCD938X_HPH_SURGE_HPHLR_SURGE_EN, 0x19}, - {WCD938X_HPH_SURGE_HPHLR_SURGE_MISC1, 0xA0}, - {WCD938X_HPH_SURGE_HPHLR_SURGE_STATUS, 0x00}, - {WCD938X_EAR_EAR_EN_REG, 0x22}, - {WCD938X_EAR_EAR_PA_CON, 0x44}, - {WCD938X_EAR_EAR_SP_CON, 0xDB}, - {WCD938X_EAR_EAR_DAC_CON, 0x80}, - {WCD938X_EAR_EAR_CNP_FSM_CON, 0xB2}, - {WCD938X_EAR_TEST_CTL, 0x00}, - {WCD938X_EAR_STATUS_REG_1, 0x00}, - {WCD938X_EAR_STATUS_REG_2, 0x08}, - {WCD938X_ANA_NEW_PAGE_REGISTER, 0x00}, - {WCD938X_HPH_NEW_ANA_HPH2, 0x00}, - {WCD938X_HPH_NEW_ANA_HPH3, 0x00}, - {WCD938X_SLEEP_CTL, 0x16}, - {WCD938X_SLEEP_WATCHDOG_CTL, 0x00}, - {WCD938X_MBHC_NEW_ELECT_REM_CLAMP_CTL, 0x00}, - {WCD938X_MBHC_NEW_CTL_1, 0x02}, - {WCD938X_MBHC_NEW_CTL_2, 0x05}, - {WCD938X_MBHC_NEW_PLUG_DETECT_CTL, 0xE9}, - {WCD938X_MBHC_NEW_ZDET_ANA_CTL, 0x0F}, - {WCD938X_MBHC_NEW_ZDET_RAMP_CTL, 0x00}, - {WCD938X_MBHC_NEW_FSM_STATUS, 0x00}, - {WCD938X_MBHC_NEW_ADC_RESULT, 0x00}, - {WCD938X_TX_NEW_AMIC_MUX_CFG, 0x00}, - {WCD938X_AUX_AUXPA, 0x00}, - {WCD938X_LDORXTX_MODE, 0x0C}, - {WCD938X_LDORXTX_CONFIG, 0x10}, - {WCD938X_DIE_CRACK_DIE_CRK_DET_EN, 0x00}, - {WCD938X_DIE_CRACK_DIE_CRK_DET_OUT, 0x00}, - {WCD938X_HPH_NEW_INT_RDAC_GAIN_CTL, 0x40}, - {WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_L, 0x81}, - {WCD938X_HPH_NEW_INT_RDAC_VREF_CTL, 0x10}, - 
{WCD938X_HPH_NEW_INT_RDAC_OVERRIDE_CTL, 0x00}, - {WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_R, 0x81}, - {WCD938X_HPH_NEW_INT_PA_MISC1, 0x22}, - {WCD938X_HPH_NEW_INT_PA_MISC2, 0x00}, - {WCD938X_HPH_NEW_INT_PA_RDAC_MISC, 0x00}, - {WCD938X_HPH_NEW_INT_HPH_TIMER1, 0xFE}, - {WCD938X_HPH_NEW_INT_HPH_TIMER2, 0x02}, - {WCD938X_HPH_NEW_INT_HPH_TIMER3, 0x4E}, - {WCD938X_HPH_NEW_INT_HPH_TIMER4, 0x54}, - {WCD938X_HPH_NEW_INT_PA_RDAC_MISC2, 0x00}, - {WCD938X_HPH_NEW_INT_PA_RDAC_MISC3, 0x00}, - {WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_L_NEW, 0x90}, - {WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_R_NEW, 0x90}, - {WCD938X_RX_NEW_INT_HPH_RDAC_BIAS_LOHIFI, 0x62}, - {WCD938X_RX_NEW_INT_HPH_RDAC_BIAS_ULP, 0x01}, - {WCD938X_RX_NEW_INT_HPH_RDAC_LDO_LP, 0x11}, - {WCD938X_MBHC_NEW_INT_MOISTURE_DET_DC_CTRL, 0x57}, - {WCD938X_MBHC_NEW_INT_MOISTURE_DET_POLLING_CTRL, 0x01}, - {WCD938X_MBHC_NEW_INT_MECH_DET_CURRENT, 0x00}, - {WCD938X_MBHC_NEW_INT_SPARE_2, 0x00}, - {WCD938X_EAR_INT_NEW_EAR_CHOPPER_CON, 0xA8}, - {WCD938X_EAR_INT_NEW_CNP_VCM_CON1, 0x42}, - {WCD938X_EAR_INT_NEW_CNP_VCM_CON2, 0x22}, - {WCD938X_EAR_INT_NEW_EAR_DYNAMIC_BIAS, 0x00}, - {WCD938X_AUX_INT_EN_REG, 0x00}, - {WCD938X_AUX_INT_PA_CTRL, 0x06}, - {WCD938X_AUX_INT_SP_CTRL, 0xD2}, - {WCD938X_AUX_INT_DAC_CTRL, 0x80}, - {WCD938X_AUX_INT_CLK_CTRL, 0x50}, - {WCD938X_AUX_INT_TEST_CTRL, 0x00}, - {WCD938X_AUX_INT_STATUS_REG, 0x00}, - {WCD938X_AUX_INT_MISC, 0x00}, - {WCD938X_LDORXTX_INT_BIAS, 0x6E}, - {WCD938X_LDORXTX_INT_STB_LOADS_DTEST, 0x50}, - {WCD938X_LDORXTX_INT_TEST0, 0x1C}, - {WCD938X_LDORXTX_INT_STARTUP_TIMER, 0xFF}, - {WCD938X_LDORXTX_INT_TEST1, 0x1F}, - {WCD938X_LDORXTX_INT_STATUS, 0x00}, - {WCD938X_SLEEP_INT_WATCHDOG_CTL_1, 0x0A}, - {WCD938X_SLEEP_INT_WATCHDOG_CTL_2, 0x0A}, - {WCD938X_DIE_CRACK_INT_DIE_CRK_DET_INT1, 0x02}, - {WCD938X_DIE_CRACK_INT_DIE_CRK_DET_INT2, 0x60}, - {WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_L2, 0xFF}, - {WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_L1, 0x7F}, - {WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_L0, 0x3F}, - {WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_ULP1P2M, 0x1F}, - {WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_ULP0P6M, 0x0F}, - {WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG1_L2L1, 0xD7}, - {WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG1_L0, 0xC8}, - {WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG1_ULP, 0xC6}, - {WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2MAIN_L2L1, 0xD5}, - {WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2MAIN_L0, 0xCA}, - {WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2MAIN_ULP, 0x05}, - {WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2CASC_L2L1L0, 0xA5}, - {WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2CASC_ULP, 0x13}, - {WCD938X_TX_COM_NEW_INT_TXADC_SCBIAS_L2L1, 0x88}, - {WCD938X_TX_COM_NEW_INT_TXADC_SCBIAS_L0ULP, 0x42}, - {WCD938X_TX_COM_NEW_INT_TXADC_INT_L2, 0xFF}, - {WCD938X_TX_COM_NEW_INT_TXADC_INT_L1, 0x64}, - {WCD938X_TX_COM_NEW_INT_TXADC_INT_L0, 0x64}, - {WCD938X_TX_COM_NEW_INT_TXADC_INT_ULP, 0x77}, - {WCD938X_DIGITAL_PAGE_REGISTER, 0x00}, - {WCD938X_DIGITAL_CHIP_ID0, 0x00}, - {WCD938X_DIGITAL_CHIP_ID1, 0x00}, - {WCD938X_DIGITAL_CHIP_ID2, 0x0D}, - {WCD938X_DIGITAL_CHIP_ID3, 0x01}, - {WCD938X_DIGITAL_SWR_TX_CLK_RATE, 0x00}, - {WCD938X_DIGITAL_CDC_RST_CTL, 0x03}, - {WCD938X_DIGITAL_TOP_CLK_CFG, 0x00}, - {WCD938X_DIGITAL_CDC_ANA_CLK_CTL, 0x00}, - {WCD938X_DIGITAL_CDC_DIG_CLK_CTL, 0xF0}, - {WCD938X_DIGITAL_SWR_RST_EN, 0x00}, - {WCD938X_DIGITAL_CDC_PATH_MODE, 0x55}, - {WCD938X_DIGITAL_CDC_RX_RST, 0x00}, - {WCD938X_DIGITAL_CDC_RX0_CTL, 0xFC}, - {WCD938X_DIGITAL_CDC_RX1_CTL, 0xFC}, - {WCD938X_DIGITAL_CDC_RX2_CTL, 0xFC}, - {WCD938X_DIGITAL_CDC_TX_ANA_MODE_0_1, 0x00}, - {WCD938X_DIGITAL_CDC_TX_ANA_MODE_2_3, 0x00}, - 
{WCD938X_DIGITAL_CDC_COMP_CTL_0, 0x00}, - {WCD938X_DIGITAL_CDC_ANA_TX_CLK_CTL, 0x1E}, - {WCD938X_DIGITAL_CDC_HPH_DSM_A1_0, 0x00}, - {WCD938X_DIGITAL_CDC_HPH_DSM_A1_1, 0x01}, - {WCD938X_DIGITAL_CDC_HPH_DSM_A2_0, 0x63}, - {WCD938X_DIGITAL_CDC_HPH_DSM_A2_1, 0x04}, - {WCD938X_DIGITAL_CDC_HPH_DSM_A3_0, 0xAC}, - {WCD938X_DIGITAL_CDC_HPH_DSM_A3_1, 0x04}, - {WCD938X_DIGITAL_CDC_HPH_DSM_A4_0, 0x1A}, - {WCD938X_DIGITAL_CDC_HPH_DSM_A4_1, 0x03}, - {WCD938X_DIGITAL_CDC_HPH_DSM_A5_0, 0xBC}, - {WCD938X_DIGITAL_CDC_HPH_DSM_A5_1, 0x02}, - {WCD938X_DIGITAL_CDC_HPH_DSM_A6_0, 0xC7}, - {WCD938X_DIGITAL_CDC_HPH_DSM_A7_0, 0xF8}, - {WCD938X_DIGITAL_CDC_HPH_DSM_C_0, 0x47}, - {WCD938X_DIGITAL_CDC_HPH_DSM_C_1, 0x43}, - {WCD938X_DIGITAL_CDC_HPH_DSM_C_2, 0xB1}, - {WCD938X_DIGITAL_CDC_HPH_DSM_C_3, 0x17}, - {WCD938X_DIGITAL_CDC_HPH_DSM_R1, 0x4D}, - {WCD938X_DIGITAL_CDC_HPH_DSM_R2, 0x29}, - {WCD938X_DIGITAL_CDC_HPH_DSM_R3, 0x34}, - {WCD938X_DIGITAL_CDC_HPH_DSM_R4, 0x59}, - {WCD938X_DIGITAL_CDC_HPH_DSM_R5, 0x66}, - {WCD938X_DIGITAL_CDC_HPH_DSM_R6, 0x87}, - {WCD938X_DIGITAL_CDC_HPH_DSM_R7, 0x64}, - {WCD938X_DIGITAL_CDC_AUX_DSM_A1_0, 0x00}, - {WCD938X_DIGITAL_CDC_AUX_DSM_A1_1, 0x01}, - {WCD938X_DIGITAL_CDC_AUX_DSM_A2_0, 0x96}, - {WCD938X_DIGITAL_CDC_AUX_DSM_A2_1, 0x09}, - {WCD938X_DIGITAL_CDC_AUX_DSM_A3_0, 0xAB}, - {WCD938X_DIGITAL_CDC_AUX_DSM_A3_1, 0x05}, - {WCD938X_DIGITAL_CDC_AUX_DSM_A4_0, 0x1C}, - {WCD938X_DIGITAL_CDC_AUX_DSM_A4_1, 0x02}, - {WCD938X_DIGITAL_CDC_AUX_DSM_A5_0, 0x17}, - {WCD938X_DIGITAL_CDC_AUX_DSM_A5_1, 0x02}, - {WCD938X_DIGITAL_CDC_AUX_DSM_A6_0, 0xAA}, - {WCD938X_DIGITAL_CDC_AUX_DSM_A7_0, 0xE3}, - {WCD938X_DIGITAL_CDC_AUX_DSM_C_0, 0x69}, - {WCD938X_DIGITAL_CDC_AUX_DSM_C_1, 0x54}, - {WCD938X_DIGITAL_CDC_AUX_DSM_C_2, 0x02}, - {WCD938X_DIGITAL_CDC_AUX_DSM_C_3, 0x15}, - {WCD938X_DIGITAL_CDC_AUX_DSM_R1, 0xA4}, - {WCD938X_DIGITAL_CDC_AUX_DSM_R2, 0xB5}, - {WCD938X_DIGITAL_CDC_AUX_DSM_R3, 0x86}, - {WCD938X_DIGITAL_CDC_AUX_DSM_R4, 0x85}, - {WCD938X_DIGITAL_CDC_AUX_DSM_R5, 0xAA}, - {WCD938X_DIGITAL_CDC_AUX_DSM_R6, 0xE2}, - {WCD938X_DIGITAL_CDC_AUX_DSM_R7, 0x62}, - {WCD938X_DIGITAL_CDC_HPH_GAIN_RX_0, 0x55}, - {WCD938X_DIGITAL_CDC_HPH_GAIN_RX_1, 0xA9}, - {WCD938X_DIGITAL_CDC_HPH_GAIN_DSD_0, 0x3D}, - {WCD938X_DIGITAL_CDC_HPH_GAIN_DSD_1, 0x2E}, - {WCD938X_DIGITAL_CDC_HPH_GAIN_DSD_2, 0x01}, - {WCD938X_DIGITAL_CDC_AUX_GAIN_DSD_0, 0x00}, - {WCD938X_DIGITAL_CDC_AUX_GAIN_DSD_1, 0xFC}, - {WCD938X_DIGITAL_CDC_AUX_GAIN_DSD_2, 0x01}, - {WCD938X_DIGITAL_CDC_HPH_GAIN_CTL, 0x00}, - {WCD938X_DIGITAL_CDC_AUX_GAIN_CTL, 0x00}, - {WCD938X_DIGITAL_CDC_EAR_PATH_CTL, 0x00}, - {WCD938X_DIGITAL_CDC_SWR_CLH, 0x00}, - {WCD938X_DIGITAL_SWR_CLH_BYP, 0x00}, - {WCD938X_DIGITAL_CDC_TX0_CTL, 0x68}, - {WCD938X_DIGITAL_CDC_TX1_CTL, 0x68}, - {WCD938X_DIGITAL_CDC_TX2_CTL, 0x68}, - {WCD938X_DIGITAL_CDC_TX_RST, 0x00}, - {WCD938X_DIGITAL_CDC_REQ_CTL, 0x01}, - {WCD938X_DIGITAL_CDC_RST, 0x00}, - {WCD938X_DIGITAL_CDC_AMIC_CTL, 0x0F}, - {WCD938X_DIGITAL_CDC_DMIC_CTL, 0x04}, - {WCD938X_DIGITAL_CDC_DMIC1_CTL, 0x01}, - {WCD938X_DIGITAL_CDC_DMIC2_CTL, 0x01}, - {WCD938X_DIGITAL_CDC_DMIC3_CTL, 0x01}, - {WCD938X_DIGITAL_CDC_DMIC4_CTL, 0x01}, - {WCD938X_DIGITAL_EFUSE_PRG_CTL, 0x00}, - {WCD938X_DIGITAL_EFUSE_CTL, 0x2B}, - {WCD938X_DIGITAL_CDC_DMIC_RATE_1_2, 0x11}, - {WCD938X_DIGITAL_CDC_DMIC_RATE_3_4, 0x11}, - {WCD938X_DIGITAL_PDM_WD_CTL0, 0x00}, - {WCD938X_DIGITAL_PDM_WD_CTL1, 0x00}, - {WCD938X_DIGITAL_PDM_WD_CTL2, 0x00}, - {WCD938X_DIGITAL_INTR_MODE, 0x00}, - {WCD938X_DIGITAL_INTR_MASK_0, 0xFF}, - {WCD938X_DIGITAL_INTR_MASK_1, 0xFF}, - {WCD938X_DIGITAL_INTR_MASK_2, 
0x3F}, - {WCD938X_DIGITAL_INTR_STATUS_0, 0x00}, - {WCD938X_DIGITAL_INTR_STATUS_1, 0x00}, - {WCD938X_DIGITAL_INTR_STATUS_2, 0x00}, - {WCD938X_DIGITAL_INTR_CLEAR_0, 0x00}, - {WCD938X_DIGITAL_INTR_CLEAR_1, 0x00}, - {WCD938X_DIGITAL_INTR_CLEAR_2, 0x00}, - {WCD938X_DIGITAL_INTR_LEVEL_0, 0x00}, - {WCD938X_DIGITAL_INTR_LEVEL_1, 0x00}, - {WCD938X_DIGITAL_INTR_LEVEL_2, 0x00}, - {WCD938X_DIGITAL_INTR_SET_0, 0x00}, - {WCD938X_DIGITAL_INTR_SET_1, 0x00}, - {WCD938X_DIGITAL_INTR_SET_2, 0x00}, - {WCD938X_DIGITAL_INTR_TEST_0, 0x00}, - {WCD938X_DIGITAL_INTR_TEST_1, 0x00}, - {WCD938X_DIGITAL_INTR_TEST_2, 0x00}, - {WCD938X_DIGITAL_TX_MODE_DBG_EN, 0x00}, - {WCD938X_DIGITAL_TX_MODE_DBG_0_1, 0x00}, - {WCD938X_DIGITAL_TX_MODE_DBG_2_3, 0x00}, - {WCD938X_DIGITAL_LB_IN_SEL_CTL, 0x00}, - {WCD938X_DIGITAL_LOOP_BACK_MODE, 0x00}, - {WCD938X_DIGITAL_SWR_DAC_TEST, 0x00}, - {WCD938X_DIGITAL_SWR_HM_TEST_RX_0, 0x40}, - {WCD938X_DIGITAL_SWR_HM_TEST_TX_0, 0x40}, - {WCD938X_DIGITAL_SWR_HM_TEST_RX_1, 0x00}, - {WCD938X_DIGITAL_SWR_HM_TEST_TX_1, 0x00}, - {WCD938X_DIGITAL_SWR_HM_TEST_TX_2, 0x00}, - {WCD938X_DIGITAL_SWR_HM_TEST_0, 0x00}, - {WCD938X_DIGITAL_SWR_HM_TEST_1, 0x00}, - {WCD938X_DIGITAL_PAD_CTL_SWR_0, 0x8F}, - {WCD938X_DIGITAL_PAD_CTL_SWR_1, 0x06}, - {WCD938X_DIGITAL_I2C_CTL, 0x00}, - {WCD938X_DIGITAL_CDC_TX_TANGGU_SW_MODE, 0x00}, - {WCD938X_DIGITAL_EFUSE_TEST_CTL_0, 0x00}, - {WCD938X_DIGITAL_EFUSE_TEST_CTL_1, 0x00}, - {WCD938X_DIGITAL_EFUSE_T_DATA_0, 0x00}, - {WCD938X_DIGITAL_EFUSE_T_DATA_1, 0x00}, - {WCD938X_DIGITAL_PAD_CTL_PDM_RX0, 0xF1}, - {WCD938X_DIGITAL_PAD_CTL_PDM_RX1, 0xF1}, - {WCD938X_DIGITAL_PAD_CTL_PDM_TX0, 0xF1}, - {WCD938X_DIGITAL_PAD_CTL_PDM_TX1, 0xF1}, - {WCD938X_DIGITAL_PAD_CTL_PDM_TX2, 0xF1}, - {WCD938X_DIGITAL_PAD_INP_DIS_0, 0x00}, - {WCD938X_DIGITAL_PAD_INP_DIS_1, 0x00}, - {WCD938X_DIGITAL_DRIVE_STRENGTH_0, 0x00}, - {WCD938X_DIGITAL_DRIVE_STRENGTH_1, 0x00}, - {WCD938X_DIGITAL_DRIVE_STRENGTH_2, 0x00}, - {WCD938X_DIGITAL_RX_DATA_EDGE_CTL, 0x1F}, - {WCD938X_DIGITAL_TX_DATA_EDGE_CTL, 0x80}, - {WCD938X_DIGITAL_GPIO_MODE, 0x00}, - {WCD938X_DIGITAL_PIN_CTL_OE, 0x00}, - {WCD938X_DIGITAL_PIN_CTL_DATA_0, 0x00}, - {WCD938X_DIGITAL_PIN_CTL_DATA_1, 0x00}, - {WCD938X_DIGITAL_PIN_STATUS_0, 0x00}, - {WCD938X_DIGITAL_PIN_STATUS_1, 0x00}, - {WCD938X_DIGITAL_DIG_DEBUG_CTL, 0x00}, - {WCD938X_DIGITAL_DIG_DEBUG_EN, 0x00}, - {WCD938X_DIGITAL_ANA_CSR_DBG_ADD, 0x00}, - {WCD938X_DIGITAL_ANA_CSR_DBG_CTL, 0x48}, - {WCD938X_DIGITAL_SSP_DBG, 0x00}, - {WCD938X_DIGITAL_MODE_STATUS_0, 0x00}, - {WCD938X_DIGITAL_MODE_STATUS_1, 0x00}, - {WCD938X_DIGITAL_SPARE_0, 0x00}, - {WCD938X_DIGITAL_SPARE_1, 0x00}, - {WCD938X_DIGITAL_SPARE_2, 0x00}, - {WCD938X_DIGITAL_EFUSE_REG_0, 0x00}, - {WCD938X_DIGITAL_EFUSE_REG_1, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_2, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_3, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_4, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_5, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_6, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_7, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_8, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_9, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_10, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_11, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_12, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_13, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_14, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_15, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_16, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_17, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_18, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_19, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_20, 0x0E}, - {WCD938X_DIGITAL_EFUSE_REG_21, 0x00}, - {WCD938X_DIGITAL_EFUSE_REG_22, 0x00}, - 
{WCD938X_DIGITAL_EFUSE_REG_23, 0xF8}, - {WCD938X_DIGITAL_EFUSE_REG_24, 0x16}, - {WCD938X_DIGITAL_EFUSE_REG_25, 0x00}, - {WCD938X_DIGITAL_EFUSE_REG_26, 0x00}, - {WCD938X_DIGITAL_EFUSE_REG_27, 0x00}, - {WCD938X_DIGITAL_EFUSE_REG_28, 0x00}, - {WCD938X_DIGITAL_EFUSE_REG_29, 0x00}, - {WCD938X_DIGITAL_EFUSE_REG_30, 0x00}, - {WCD938X_DIGITAL_EFUSE_REG_31, 0x00}, - {WCD938X_DIGITAL_TX_REQ_FB_CTL_0, 0x88}, - {WCD938X_DIGITAL_TX_REQ_FB_CTL_1, 0x88}, - {WCD938X_DIGITAL_TX_REQ_FB_CTL_2, 0x88}, - {WCD938X_DIGITAL_TX_REQ_FB_CTL_3, 0x88}, - {WCD938X_DIGITAL_TX_REQ_FB_CTL_4, 0x88}, - {WCD938X_DIGITAL_DEM_BYPASS_DATA0, 0x55}, - {WCD938X_DIGITAL_DEM_BYPASS_DATA1, 0x55}, - {WCD938X_DIGITAL_DEM_BYPASS_DATA2, 0x55}, - {WCD938X_DIGITAL_DEM_BYPASS_DATA3, 0x01}, -}; - -static bool wcd938x_rdwr_register(struct device *dev, unsigned int reg) -{ - switch (reg) { - case WCD938X_ANA_PAGE_REGISTER: - case WCD938X_ANA_BIAS: - case WCD938X_ANA_RX_SUPPLIES: - case WCD938X_ANA_HPH: - case WCD938X_ANA_EAR: - case WCD938X_ANA_EAR_COMPANDER_CTL: - case WCD938X_ANA_TX_CH1: - case WCD938X_ANA_TX_CH2: - case WCD938X_ANA_TX_CH3: - case WCD938X_ANA_TX_CH4: - case WCD938X_ANA_MICB1_MICB2_DSP_EN_LOGIC: - case WCD938X_ANA_MICB3_DSP_EN_LOGIC: - case WCD938X_ANA_MBHC_MECH: - case WCD938X_ANA_MBHC_ELECT: - case WCD938X_ANA_MBHC_ZDET: - case WCD938X_ANA_MBHC_BTN0: - case WCD938X_ANA_MBHC_BTN1: - case WCD938X_ANA_MBHC_BTN2: - case WCD938X_ANA_MBHC_BTN3: - case WCD938X_ANA_MBHC_BTN4: - case WCD938X_ANA_MBHC_BTN5: - case WCD938X_ANA_MBHC_BTN6: - case WCD938X_ANA_MBHC_BTN7: - case WCD938X_ANA_MICB1: - case WCD938X_ANA_MICB2: - case WCD938X_ANA_MICB2_RAMP: - case WCD938X_ANA_MICB3: - case WCD938X_ANA_MICB4: - case WCD938X_BIAS_CTL: - case WCD938X_BIAS_VBG_FINE_ADJ: - case WCD938X_LDOL_VDDCX_ADJUST: - case WCD938X_LDOL_DISABLE_LDOL: - case WCD938X_MBHC_CTL_CLK: - case WCD938X_MBHC_CTL_ANA: - case WCD938X_MBHC_CTL_SPARE_1: - case WCD938X_MBHC_CTL_SPARE_2: - case WCD938X_MBHC_CTL_BCS: - case WCD938X_MBHC_TEST_CTL: - case WCD938X_LDOH_MODE: - case WCD938X_LDOH_BIAS: - case WCD938X_LDOH_STB_LOADS: - case WCD938X_LDOH_SLOWRAMP: - case WCD938X_MICB1_TEST_CTL_1: - case WCD938X_MICB1_TEST_CTL_2: - case WCD938X_MICB1_TEST_CTL_3: - case WCD938X_MICB2_TEST_CTL_1: - case WCD938X_MICB2_TEST_CTL_2: - case WCD938X_MICB2_TEST_CTL_3: - case WCD938X_MICB3_TEST_CTL_1: - case WCD938X_MICB3_TEST_CTL_2: - case WCD938X_MICB3_TEST_CTL_3: - case WCD938X_MICB4_TEST_CTL_1: - case WCD938X_MICB4_TEST_CTL_2: - case WCD938X_MICB4_TEST_CTL_3: - case WCD938X_TX_COM_ADC_VCM: - case WCD938X_TX_COM_BIAS_ATEST: - case WCD938X_TX_COM_SPARE1: - case WCD938X_TX_COM_SPARE2: - case WCD938X_TX_COM_TXFE_DIV_CTL: - case WCD938X_TX_COM_TXFE_DIV_START: - case WCD938X_TX_COM_SPARE3: - case WCD938X_TX_COM_SPARE4: - case WCD938X_TX_1_2_TEST_EN: - case WCD938X_TX_1_2_ADC_IB: - case WCD938X_TX_1_2_ATEST_REFCTL: - case WCD938X_TX_1_2_TEST_CTL: - case WCD938X_TX_1_2_TEST_BLK_EN1: - case WCD938X_TX_1_2_TXFE1_CLKDIV: - case WCD938X_TX_3_4_TEST_EN: - case WCD938X_TX_3_4_ADC_IB: - case WCD938X_TX_3_4_ATEST_REFCTL: - case WCD938X_TX_3_4_TEST_CTL: - case WCD938X_TX_3_4_TEST_BLK_EN3: - case WCD938X_TX_3_4_TXFE3_CLKDIV: - case WCD938X_TX_3_4_TEST_BLK_EN2: - case WCD938X_TX_3_4_TXFE2_CLKDIV: - case WCD938X_TX_3_4_SPARE1: - case WCD938X_TX_3_4_TEST_BLK_EN4: - case WCD938X_TX_3_4_TXFE4_CLKDIV: - case WCD938X_TX_3_4_SPARE2: - case WCD938X_CLASSH_MODE_1: - case WCD938X_CLASSH_MODE_2: - case WCD938X_CLASSH_MODE_3: - case WCD938X_CLASSH_CTRL_VCL_1: - case WCD938X_CLASSH_CTRL_VCL_2: - case 
WCD938X_CLASSH_CTRL_CCL_1: - case WCD938X_CLASSH_CTRL_CCL_2: - case WCD938X_CLASSH_CTRL_CCL_3: - case WCD938X_CLASSH_CTRL_CCL_4: - case WCD938X_CLASSH_CTRL_CCL_5: - case WCD938X_CLASSH_BUCK_TMUX_A_D: - case WCD938X_CLASSH_BUCK_SW_DRV_CNTL: - case WCD938X_CLASSH_SPARE: - case WCD938X_FLYBACK_EN: - case WCD938X_FLYBACK_VNEG_CTRL_1: - case WCD938X_FLYBACK_VNEG_CTRL_2: - case WCD938X_FLYBACK_VNEG_CTRL_3: - case WCD938X_FLYBACK_VNEG_CTRL_4: - case WCD938X_FLYBACK_VNEG_CTRL_5: - case WCD938X_FLYBACK_VNEG_CTRL_6: - case WCD938X_FLYBACK_VNEG_CTRL_7: - case WCD938X_FLYBACK_VNEG_CTRL_8: - case WCD938X_FLYBACK_VNEG_CTRL_9: - case WCD938X_FLYBACK_VNEGDAC_CTRL_1: - case WCD938X_FLYBACK_VNEGDAC_CTRL_2: - case WCD938X_FLYBACK_VNEGDAC_CTRL_3: - case WCD938X_FLYBACK_CTRL_1: - case WCD938X_FLYBACK_TEST_CTL: - case WCD938X_RX_AUX_SW_CTL: - case WCD938X_RX_PA_AUX_IN_CONN: - case WCD938X_RX_TIMER_DIV: - case WCD938X_RX_OCP_CTL: - case WCD938X_RX_OCP_COUNT: - case WCD938X_RX_BIAS_EAR_DAC: - case WCD938X_RX_BIAS_EAR_AMP: - case WCD938X_RX_BIAS_HPH_LDO: - case WCD938X_RX_BIAS_HPH_PA: - case WCD938X_RX_BIAS_HPH_RDACBUFF_CNP2: - case WCD938X_RX_BIAS_HPH_RDAC_LDO: - case WCD938X_RX_BIAS_HPH_CNP1: - case WCD938X_RX_BIAS_HPH_LOWPOWER: - case WCD938X_RX_BIAS_AUX_DAC: - case WCD938X_RX_BIAS_AUX_AMP: - case WCD938X_RX_BIAS_VNEGDAC_BLEEDER: - case WCD938X_RX_BIAS_MISC: - case WCD938X_RX_BIAS_BUCK_RST: - case WCD938X_RX_BIAS_BUCK_VREF_ERRAMP: - case WCD938X_RX_BIAS_FLYB_ERRAMP: - case WCD938X_RX_BIAS_FLYB_BUFF: - case WCD938X_RX_BIAS_FLYB_MID_RST: - case WCD938X_HPH_CNP_EN: - case WCD938X_HPH_CNP_WG_CTL: - case WCD938X_HPH_CNP_WG_TIME: - case WCD938X_HPH_OCP_CTL: - case WCD938X_HPH_AUTO_CHOP: - case WCD938X_HPH_CHOP_CTL: - case WCD938X_HPH_PA_CTL1: - case WCD938X_HPH_PA_CTL2: - case WCD938X_HPH_L_EN: - case WCD938X_HPH_L_TEST: - case WCD938X_HPH_L_ATEST: - case WCD938X_HPH_R_EN: - case WCD938X_HPH_R_TEST: - case WCD938X_HPH_R_ATEST: - case WCD938X_HPH_RDAC_CLK_CTL1: - case WCD938X_HPH_RDAC_CLK_CTL2: - case WCD938X_HPH_RDAC_LDO_CTL: - case WCD938X_HPH_RDAC_CHOP_CLK_LP_CTL: - case WCD938X_HPH_REFBUFF_UHQA_CTL: - case WCD938X_HPH_REFBUFF_LP_CTL: - case WCD938X_HPH_L_DAC_CTL: - case WCD938X_HPH_R_DAC_CTL: - case WCD938X_HPH_SURGE_HPHLR_SURGE_COMP_SEL: - case WCD938X_HPH_SURGE_HPHLR_SURGE_EN: - case WCD938X_HPH_SURGE_HPHLR_SURGE_MISC1: - case WCD938X_EAR_EAR_EN_REG: - case WCD938X_EAR_EAR_PA_CON: - case WCD938X_EAR_EAR_SP_CON: - case WCD938X_EAR_EAR_DAC_CON: - case WCD938X_EAR_EAR_CNP_FSM_CON: - case WCD938X_EAR_TEST_CTL: - case WCD938X_ANA_NEW_PAGE_REGISTER: - case WCD938X_HPH_NEW_ANA_HPH2: - case WCD938X_HPH_NEW_ANA_HPH3: - case WCD938X_SLEEP_CTL: - case WCD938X_SLEEP_WATCHDOG_CTL: - case WCD938X_MBHC_NEW_ELECT_REM_CLAMP_CTL: - case WCD938X_MBHC_NEW_CTL_1: - case WCD938X_MBHC_NEW_CTL_2: - case WCD938X_MBHC_NEW_PLUG_DETECT_CTL: - case WCD938X_MBHC_NEW_ZDET_ANA_CTL: - case WCD938X_MBHC_NEW_ZDET_RAMP_CTL: - case WCD938X_TX_NEW_AMIC_MUX_CFG: - case WCD938X_AUX_AUXPA: - case WCD938X_LDORXTX_MODE: - case WCD938X_LDORXTX_CONFIG: - case WCD938X_DIE_CRACK_DIE_CRK_DET_EN: - case WCD938X_HPH_NEW_INT_RDAC_GAIN_CTL: - case WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_L: - case WCD938X_HPH_NEW_INT_RDAC_VREF_CTL: - case WCD938X_HPH_NEW_INT_RDAC_OVERRIDE_CTL: - case WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_R: - case WCD938X_HPH_NEW_INT_PA_MISC1: - case WCD938X_HPH_NEW_INT_PA_MISC2: - case WCD938X_HPH_NEW_INT_PA_RDAC_MISC: - case WCD938X_HPH_NEW_INT_HPH_TIMER1: - case WCD938X_HPH_NEW_INT_HPH_TIMER2: - case WCD938X_HPH_NEW_INT_HPH_TIMER3: - case 
WCD938X_HPH_NEW_INT_HPH_TIMER4: - case WCD938X_HPH_NEW_INT_PA_RDAC_MISC2: - case WCD938X_HPH_NEW_INT_PA_RDAC_MISC3: - case WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_L_NEW: - case WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_R_NEW: - case WCD938X_RX_NEW_INT_HPH_RDAC_BIAS_LOHIFI: - case WCD938X_RX_NEW_INT_HPH_RDAC_BIAS_ULP: - case WCD938X_RX_NEW_INT_HPH_RDAC_LDO_LP: - case WCD938X_MBHC_NEW_INT_MOISTURE_DET_DC_CTRL: - case WCD938X_MBHC_NEW_INT_MOISTURE_DET_POLLING_CTRL: - case WCD938X_MBHC_NEW_INT_MECH_DET_CURRENT: - case WCD938X_MBHC_NEW_INT_SPARE_2: - case WCD938X_EAR_INT_NEW_EAR_CHOPPER_CON: - case WCD938X_EAR_INT_NEW_CNP_VCM_CON1: - case WCD938X_EAR_INT_NEW_CNP_VCM_CON2: - case WCD938X_EAR_INT_NEW_EAR_DYNAMIC_BIAS: - case WCD938X_AUX_INT_EN_REG: - case WCD938X_AUX_INT_PA_CTRL: - case WCD938X_AUX_INT_SP_CTRL: - case WCD938X_AUX_INT_DAC_CTRL: - case WCD938X_AUX_INT_CLK_CTRL: - case WCD938X_AUX_INT_TEST_CTRL: - case WCD938X_AUX_INT_MISC: - case WCD938X_LDORXTX_INT_BIAS: - case WCD938X_LDORXTX_INT_STB_LOADS_DTEST: - case WCD938X_LDORXTX_INT_TEST0: - case WCD938X_LDORXTX_INT_STARTUP_TIMER: - case WCD938X_LDORXTX_INT_TEST1: - case WCD938X_SLEEP_INT_WATCHDOG_CTL_1: - case WCD938X_SLEEP_INT_WATCHDOG_CTL_2: - case WCD938X_DIE_CRACK_INT_DIE_CRK_DET_INT1: - case WCD938X_DIE_CRACK_INT_DIE_CRK_DET_INT2: - case WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_L2: - case WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_L1: - case WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_L0: - case WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_ULP1P2M: - case WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_ULP0P6M: - case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG1_L2L1: - case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG1_L0: - case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG1_ULP: - case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2MAIN_L2L1: - case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2MAIN_L0: - case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2MAIN_ULP: - case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2CASC_L2L1L0: - case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2CASC_ULP: - case WCD938X_TX_COM_NEW_INT_TXADC_SCBIAS_L2L1: - case WCD938X_TX_COM_NEW_INT_TXADC_SCBIAS_L0ULP: - case WCD938X_TX_COM_NEW_INT_TXADC_INT_L2: - case WCD938X_TX_COM_NEW_INT_TXADC_INT_L1: - case WCD938X_TX_COM_NEW_INT_TXADC_INT_L0: - case WCD938X_TX_COM_NEW_INT_TXADC_INT_ULP: - case WCD938X_DIGITAL_PAGE_REGISTER: - case WCD938X_DIGITAL_SWR_TX_CLK_RATE: - case WCD938X_DIGITAL_CDC_RST_CTL: - case WCD938X_DIGITAL_TOP_CLK_CFG: - case WCD938X_DIGITAL_CDC_ANA_CLK_CTL: - case WCD938X_DIGITAL_CDC_DIG_CLK_CTL: - case WCD938X_DIGITAL_SWR_RST_EN: - case WCD938X_DIGITAL_CDC_PATH_MODE: - case WCD938X_DIGITAL_CDC_RX_RST: - case WCD938X_DIGITAL_CDC_RX0_CTL: - case WCD938X_DIGITAL_CDC_RX1_CTL: - case WCD938X_DIGITAL_CDC_RX2_CTL: - case WCD938X_DIGITAL_CDC_TX_ANA_MODE_0_1: - case WCD938X_DIGITAL_CDC_TX_ANA_MODE_2_3: - case WCD938X_DIGITAL_CDC_COMP_CTL_0: - case WCD938X_DIGITAL_CDC_ANA_TX_CLK_CTL: - case WCD938X_DIGITAL_CDC_HPH_DSM_A1_0: - case WCD938X_DIGITAL_CDC_HPH_DSM_A1_1: - case WCD938X_DIGITAL_CDC_HPH_DSM_A2_0: - case WCD938X_DIGITAL_CDC_HPH_DSM_A2_1: - case WCD938X_DIGITAL_CDC_HPH_DSM_A3_0: - case WCD938X_DIGITAL_CDC_HPH_DSM_A3_1: - case WCD938X_DIGITAL_CDC_HPH_DSM_A4_0: - case WCD938X_DIGITAL_CDC_HPH_DSM_A4_1: - case WCD938X_DIGITAL_CDC_HPH_DSM_A5_0: - case WCD938X_DIGITAL_CDC_HPH_DSM_A5_1: - case WCD938X_DIGITAL_CDC_HPH_DSM_A6_0: - case WCD938X_DIGITAL_CDC_HPH_DSM_A7_0: - case WCD938X_DIGITAL_CDC_HPH_DSM_C_0: - case WCD938X_DIGITAL_CDC_HPH_DSM_C_1: - case WCD938X_DIGITAL_CDC_HPH_DSM_C_2: - case WCD938X_DIGITAL_CDC_HPH_DSM_C_3: - case WCD938X_DIGITAL_CDC_HPH_DSM_R1: - 
case WCD938X_DIGITAL_CDC_HPH_DSM_R2: - case WCD938X_DIGITAL_CDC_HPH_DSM_R3: - case WCD938X_DIGITAL_CDC_HPH_DSM_R4: - case WCD938X_DIGITAL_CDC_HPH_DSM_R5: - case WCD938X_DIGITAL_CDC_HPH_DSM_R6: - case WCD938X_DIGITAL_CDC_HPH_DSM_R7: - case WCD938X_DIGITAL_CDC_AUX_DSM_A1_0: - case WCD938X_DIGITAL_CDC_AUX_DSM_A1_1: - case WCD938X_DIGITAL_CDC_AUX_DSM_A2_0: - case WCD938X_DIGITAL_CDC_AUX_DSM_A2_1: - case WCD938X_DIGITAL_CDC_AUX_DSM_A3_0: - case WCD938X_DIGITAL_CDC_AUX_DSM_A3_1: - case WCD938X_DIGITAL_CDC_AUX_DSM_A4_0: - case WCD938X_DIGITAL_CDC_AUX_DSM_A4_1: - case WCD938X_DIGITAL_CDC_AUX_DSM_A5_0: - case WCD938X_DIGITAL_CDC_AUX_DSM_A5_1: - case WCD938X_DIGITAL_CDC_AUX_DSM_A6_0: - case WCD938X_DIGITAL_CDC_AUX_DSM_A7_0: - case WCD938X_DIGITAL_CDC_AUX_DSM_C_0: - case WCD938X_DIGITAL_CDC_AUX_DSM_C_1: - case WCD938X_DIGITAL_CDC_AUX_DSM_C_2: - case WCD938X_DIGITAL_CDC_AUX_DSM_C_3: - case WCD938X_DIGITAL_CDC_AUX_DSM_R1: - case WCD938X_DIGITAL_CDC_AUX_DSM_R2: - case WCD938X_DIGITAL_CDC_AUX_DSM_R3: - case WCD938X_DIGITAL_CDC_AUX_DSM_R4: - case WCD938X_DIGITAL_CDC_AUX_DSM_R5: - case WCD938X_DIGITAL_CDC_AUX_DSM_R6: - case WCD938X_DIGITAL_CDC_AUX_DSM_R7: - case WCD938X_DIGITAL_CDC_HPH_GAIN_RX_0: - case WCD938X_DIGITAL_CDC_HPH_GAIN_RX_1: - case WCD938X_DIGITAL_CDC_HPH_GAIN_DSD_0: - case WCD938X_DIGITAL_CDC_HPH_GAIN_DSD_1: - case WCD938X_DIGITAL_CDC_HPH_GAIN_DSD_2: - case WCD938X_DIGITAL_CDC_AUX_GAIN_DSD_0: - case WCD938X_DIGITAL_CDC_AUX_GAIN_DSD_1: - case WCD938X_DIGITAL_CDC_AUX_GAIN_DSD_2: - case WCD938X_DIGITAL_CDC_HPH_GAIN_CTL: - case WCD938X_DIGITAL_CDC_AUX_GAIN_CTL: - case WCD938X_DIGITAL_CDC_EAR_PATH_CTL: - case WCD938X_DIGITAL_CDC_SWR_CLH: - case WCD938X_DIGITAL_SWR_CLH_BYP: - case WCD938X_DIGITAL_CDC_TX0_CTL: - case WCD938X_DIGITAL_CDC_TX1_CTL: - case WCD938X_DIGITAL_CDC_TX2_CTL: - case WCD938X_DIGITAL_CDC_TX_RST: - case WCD938X_DIGITAL_CDC_REQ_CTL: - case WCD938X_DIGITAL_CDC_RST: - case WCD938X_DIGITAL_CDC_AMIC_CTL: - case WCD938X_DIGITAL_CDC_DMIC_CTL: - case WCD938X_DIGITAL_CDC_DMIC1_CTL: - case WCD938X_DIGITAL_CDC_DMIC2_CTL: - case WCD938X_DIGITAL_CDC_DMIC3_CTL: - case WCD938X_DIGITAL_CDC_DMIC4_CTL: - case WCD938X_DIGITAL_EFUSE_PRG_CTL: - case WCD938X_DIGITAL_EFUSE_CTL: - case WCD938X_DIGITAL_CDC_DMIC_RATE_1_2: - case WCD938X_DIGITAL_CDC_DMIC_RATE_3_4: - case WCD938X_DIGITAL_PDM_WD_CTL0: - case WCD938X_DIGITAL_PDM_WD_CTL1: - case WCD938X_DIGITAL_PDM_WD_CTL2: - case WCD938X_DIGITAL_INTR_MODE: - case WCD938X_DIGITAL_INTR_MASK_0: - case WCD938X_DIGITAL_INTR_MASK_1: - case WCD938X_DIGITAL_INTR_MASK_2: - case WCD938X_DIGITAL_INTR_CLEAR_0: - case WCD938X_DIGITAL_INTR_CLEAR_1: - case WCD938X_DIGITAL_INTR_CLEAR_2: - case WCD938X_DIGITAL_INTR_LEVEL_0: - case WCD938X_DIGITAL_INTR_LEVEL_1: - case WCD938X_DIGITAL_INTR_LEVEL_2: - case WCD938X_DIGITAL_INTR_SET_0: - case WCD938X_DIGITAL_INTR_SET_1: - case WCD938X_DIGITAL_INTR_SET_2: - case WCD938X_DIGITAL_INTR_TEST_0: - case WCD938X_DIGITAL_INTR_TEST_1: - case WCD938X_DIGITAL_INTR_TEST_2: - case WCD938X_DIGITAL_TX_MODE_DBG_EN: - case WCD938X_DIGITAL_TX_MODE_DBG_0_1: - case WCD938X_DIGITAL_TX_MODE_DBG_2_3: - case WCD938X_DIGITAL_LB_IN_SEL_CTL: - case WCD938X_DIGITAL_LOOP_BACK_MODE: - case WCD938X_DIGITAL_SWR_DAC_TEST: - case WCD938X_DIGITAL_SWR_HM_TEST_RX_0: - case WCD938X_DIGITAL_SWR_HM_TEST_TX_0: - case WCD938X_DIGITAL_SWR_HM_TEST_RX_1: - case WCD938X_DIGITAL_SWR_HM_TEST_TX_1: - case WCD938X_DIGITAL_SWR_HM_TEST_TX_2: - case WCD938X_DIGITAL_PAD_CTL_SWR_0: - case WCD938X_DIGITAL_PAD_CTL_SWR_1: - case WCD938X_DIGITAL_I2C_CTL: - case 
WCD938X_DIGITAL_CDC_TX_TANGGU_SW_MODE: - case WCD938X_DIGITAL_EFUSE_TEST_CTL_0: - case WCD938X_DIGITAL_EFUSE_TEST_CTL_1: - case WCD938X_DIGITAL_PAD_CTL_PDM_RX0: - case WCD938X_DIGITAL_PAD_CTL_PDM_RX1: - case WCD938X_DIGITAL_PAD_CTL_PDM_TX0: - case WCD938X_DIGITAL_PAD_CTL_PDM_TX1: - case WCD938X_DIGITAL_PAD_CTL_PDM_TX2: - case WCD938X_DIGITAL_PAD_INP_DIS_0: - case WCD938X_DIGITAL_PAD_INP_DIS_1: - case WCD938X_DIGITAL_DRIVE_STRENGTH_0: - case WCD938X_DIGITAL_DRIVE_STRENGTH_1: - case WCD938X_DIGITAL_DRIVE_STRENGTH_2: - case WCD938X_DIGITAL_RX_DATA_EDGE_CTL: - case WCD938X_DIGITAL_TX_DATA_EDGE_CTL: - case WCD938X_DIGITAL_GPIO_MODE: - case WCD938X_DIGITAL_PIN_CTL_OE: - case WCD938X_DIGITAL_PIN_CTL_DATA_0: - case WCD938X_DIGITAL_PIN_CTL_DATA_1: - case WCD938X_DIGITAL_DIG_DEBUG_CTL: - case WCD938X_DIGITAL_DIG_DEBUG_EN: - case WCD938X_DIGITAL_ANA_CSR_DBG_ADD: - case WCD938X_DIGITAL_ANA_CSR_DBG_CTL: - case WCD938X_DIGITAL_SSP_DBG: - case WCD938X_DIGITAL_SPARE_0: - case WCD938X_DIGITAL_SPARE_1: - case WCD938X_DIGITAL_SPARE_2: - case WCD938X_DIGITAL_TX_REQ_FB_CTL_0: - case WCD938X_DIGITAL_TX_REQ_FB_CTL_1: - case WCD938X_DIGITAL_TX_REQ_FB_CTL_2: - case WCD938X_DIGITAL_TX_REQ_FB_CTL_3: - case WCD938X_DIGITAL_TX_REQ_FB_CTL_4: - case WCD938X_DIGITAL_DEM_BYPASS_DATA0: - case WCD938X_DIGITAL_DEM_BYPASS_DATA1: - case WCD938X_DIGITAL_DEM_BYPASS_DATA2: - case WCD938X_DIGITAL_DEM_BYPASS_DATA3: - return true; - } - - return false; -} - -static bool wcd938x_readonly_register(struct device *dev, unsigned int reg) -{ - switch (reg) { - case WCD938X_ANA_MBHC_RESULT_1: - case WCD938X_ANA_MBHC_RESULT_2: - case WCD938X_ANA_MBHC_RESULT_3: - case WCD938X_MBHC_MOISTURE_DET_FSM_STATUS: - case WCD938X_TX_1_2_SAR2_ERR: - case WCD938X_TX_1_2_SAR1_ERR: - case WCD938X_TX_3_4_SAR4_ERR: - case WCD938X_TX_3_4_SAR3_ERR: - case WCD938X_HPH_L_STATUS: - case WCD938X_HPH_R_STATUS: - case WCD938X_HPH_SURGE_HPHLR_SURGE_STATUS: - case WCD938X_EAR_STATUS_REG_1: - case WCD938X_EAR_STATUS_REG_2: - case WCD938X_MBHC_NEW_FSM_STATUS: - case WCD938X_MBHC_NEW_ADC_RESULT: - case WCD938X_DIE_CRACK_DIE_CRK_DET_OUT: - case WCD938X_AUX_INT_STATUS_REG: - case WCD938X_LDORXTX_INT_STATUS: - case WCD938X_DIGITAL_CHIP_ID0: - case WCD938X_DIGITAL_CHIP_ID1: - case WCD938X_DIGITAL_CHIP_ID2: - case WCD938X_DIGITAL_CHIP_ID3: - case WCD938X_DIGITAL_INTR_STATUS_0: - case WCD938X_DIGITAL_INTR_STATUS_1: - case WCD938X_DIGITAL_INTR_STATUS_2: - case WCD938X_DIGITAL_INTR_CLEAR_0: - case WCD938X_DIGITAL_INTR_CLEAR_1: - case WCD938X_DIGITAL_INTR_CLEAR_2: - case WCD938X_DIGITAL_SWR_HM_TEST_0: - case WCD938X_DIGITAL_SWR_HM_TEST_1: - case WCD938X_DIGITAL_EFUSE_T_DATA_0: - case WCD938X_DIGITAL_EFUSE_T_DATA_1: - case WCD938X_DIGITAL_PIN_STATUS_0: - case WCD938X_DIGITAL_PIN_STATUS_1: - case WCD938X_DIGITAL_MODE_STATUS_0: - case WCD938X_DIGITAL_MODE_STATUS_1: - case WCD938X_DIGITAL_EFUSE_REG_0: - case WCD938X_DIGITAL_EFUSE_REG_1: - case WCD938X_DIGITAL_EFUSE_REG_2: - case WCD938X_DIGITAL_EFUSE_REG_3: - case WCD938X_DIGITAL_EFUSE_REG_4: - case WCD938X_DIGITAL_EFUSE_REG_5: - case WCD938X_DIGITAL_EFUSE_REG_6: - case WCD938X_DIGITAL_EFUSE_REG_7: - case WCD938X_DIGITAL_EFUSE_REG_8: - case WCD938X_DIGITAL_EFUSE_REG_9: - case WCD938X_DIGITAL_EFUSE_REG_10: - case WCD938X_DIGITAL_EFUSE_REG_11: - case WCD938X_DIGITAL_EFUSE_REG_12: - case WCD938X_DIGITAL_EFUSE_REG_13: - case WCD938X_DIGITAL_EFUSE_REG_14: - case WCD938X_DIGITAL_EFUSE_REG_15: - case WCD938X_DIGITAL_EFUSE_REG_16: - case WCD938X_DIGITAL_EFUSE_REG_17: - case WCD938X_DIGITAL_EFUSE_REG_18: - case WCD938X_DIGITAL_EFUSE_REG_19: 
- case WCD938X_DIGITAL_EFUSE_REG_20: - case WCD938X_DIGITAL_EFUSE_REG_21: - case WCD938X_DIGITAL_EFUSE_REG_22: - case WCD938X_DIGITAL_EFUSE_REG_23: - case WCD938X_DIGITAL_EFUSE_REG_24: - case WCD938X_DIGITAL_EFUSE_REG_25: - case WCD938X_DIGITAL_EFUSE_REG_26: - case WCD938X_DIGITAL_EFUSE_REG_27: - case WCD938X_DIGITAL_EFUSE_REG_28: - case WCD938X_DIGITAL_EFUSE_REG_29: - case WCD938X_DIGITAL_EFUSE_REG_30: - case WCD938X_DIGITAL_EFUSE_REG_31: - return true; - } - return false; -} - -static bool wcd938x_readable_register(struct device *dev, unsigned int reg) -{ - bool ret; - - ret = wcd938x_readonly_register(dev, reg); - if (!ret) - return wcd938x_rdwr_register(dev, reg); - - return ret; -} - -static bool wcd938x_writeable_register(struct device *dev, unsigned int reg) -{ - return wcd938x_rdwr_register(dev, reg); -} - -static bool wcd938x_volatile_register(struct device *dev, unsigned int reg) -{ - if (reg <= WCD938X_BASE_ADDRESS) - return false; - - if (reg == WCD938X_DIGITAL_SWR_TX_CLK_RATE) - return true; - - if (wcd938x_readonly_register(dev, reg)) - return true; - - return false; -} - -static struct regmap_config wcd938x_regmap_config = { - .name = "wcd938x_csr", - .reg_bits = 32, - .val_bits = 8, - .cache_type = REGCACHE_RBTREE, - .reg_defaults = wcd938x_defaults, - .num_reg_defaults = ARRAY_SIZE(wcd938x_defaults), - .max_register = WCD938X_MAX_REGISTER, - .readable_reg = wcd938x_readable_register, - .writeable_reg = wcd938x_writeable_register, - .volatile_reg = wcd938x_volatile_register, - .can_multi_write = true, -}; - static const struct regmap_irq wcd938x_irqs[WCD938X_NUM_IRQS] = { REGMAP_IRQ_REG(WCD938X_IRQ_MBHC_BUTTON_PRESS_DET, 0, 0x01), REGMAP_IRQ_REG(WCD938X_IRQ_MBHC_BUTTON_RELEASE_DET, 0, 0x02), @@ -4412,10 +3417,10 @@ static int wcd938x_bind(struct device *dev) return -EINVAL; }
-	wcd938x->regmap = devm_regmap_init_sdw(wcd938x->tx_sdw_dev, &wcd938x_regmap_config);
-	if (IS_ERR(wcd938x->regmap)) {
-		dev_err(dev, "%s: tx csr regmap not found\n", __func__);
-		return PTR_ERR(wcd938x->regmap);
+	wcd938x->regmap = dev_get_regmap(&wcd938x->tx_sdw_dev->dev, NULL);
+	if (!wcd938x->regmap) {
+		dev_err(dev, "could not get TX device regmap\n");
+		return -EINVAL;
 	}
 
 	ret = wcd938x_irq_init(wcd938x, dev);
diff --git a/sound/soc/codecs/wcd938x.h b/sound/soc/codecs/wcd938x.h
index ea82039e78435..74b1498fec38b 100644
--- a/sound/soc/codecs/wcd938x.h
+++ b/sound/soc/codecs/wcd938x.h
@@ -663,6 +663,7 @@ struct wcd938x_sdw_priv {
 	bool is_tx;
 	struct wcd938x_priv *wcd938x;
 	struct irq_domain *slave_irq;
+	struct regmap *regmap;
 };
#if IS_ENABLED(CONFIG_SND_SOC_WCD938X_SDW)
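For orientation, here is a minimal sketch of the pattern the wcd938x change above adopts (the function names and the drvdata use are illustrative, not the driver's actual code): the SoundWire TX peripheral creates and owns the cached regmap in its own probe, keeps it cache-only until the device has enumerated, and the codec driver merely borrows that regmap via dev_get_regmap() at bind time.

#include <linux/device.h>
#include <linux/err.h>
#include <linux/regmap.h>
#include <linux/soundwire/sdw.h>

/* Sketch: the SoundWire peripheral driver owns the regmap from probe on. */
static int example_sdw_probe(struct sdw_slave *slave,
			     const struct regmap_config *cfg)
{
	struct regmap *map;

	map = devm_regmap_init_sdw(slave, cfg);
	if (IS_ERR(map))
		return PTR_ERR(map);

	/* No bus traffic until the peripheral has actually attached. */
	regcache_cache_only(map, true);
	dev_set_drvdata(&slave->dev, map);
	return 0;
}

/* Sketch: the codec driver only borrows the already-registered regmap. */
static struct regmap *example_codec_get_regmap(struct device *tx_sdw_dev)
{
	return dev_get_regmap(tx_sdw_dev, NULL);
}

With the regmap owned by the SoundWire device, its runtime-PM callbacks can toggle cache-only and resync the cache even when the codec component is not bound, which is what the suspend/resume hunks above rely on.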
From: Nicholas Kazlauskas nicholas.kazlauskas@amd.com
[ Upstream commit fa24e116f1ce3dcc55474f0b6ab0cac4e3ee34e1 ]
[Why & How] Per updated measurements from HW, the watermarks for Z8 exit and enter+exit latency need to be lowered.
Reviewed-by: Jun Lei Jun.Lei@amd.com Acked-by: Brian Chang Brian.Chang@amd.com Signed-off-by: Nicholas Kazlauskas nicholas.kazlauskas@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Stable-dep-of: 8f586cc16c1f ("drm/amd/display: Change default Z8 watermark values") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c
index 34b6c763a4554..9214ce23c3bd3 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c
@@ -148,8 +148,8 @@ struct _vcs_dpi_soc_bounding_box_st dcn3_14_soc = {
 	.num_states = 5,
 	.sr_exit_time_us = 16.5,
 	.sr_enter_plus_exit_time_us = 18.5,
-	.sr_exit_z8_time_us = 442.0,
-	.sr_enter_plus_exit_z8_time_us = 560.0,
+	.sr_exit_z8_time_us = 280.0,
+	.sr_enter_plus_exit_z8_time_us = 350.0,
 	.writeback_latency_us = 12.0,
 	.dram_channel_width_bytes = 4,
 	.round_trip_ping_latency_dcfclk_cycles = 106,
From: Nicholas Kazlauskas nicholas.kazlauskas@amd.com
[ Upstream commit 9b0f51e8449f6f76170fda6a8dd9c417a43ce270 ]
[Why] Request from HW team to update the latencies to the new measured values.
[How] Update the values in the bounding box.
Reviewed-by: Charlene Liu Charlene.Liu@amd.com Acked-by: Qingqing Zhuo qingqing.zhuo@amd.com Signed-off-by: Nicholas Kazlauskas nicholas.kazlauskas@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Stable-dep-of: 8f586cc16c1f ("drm/amd/display: Change default Z8 watermark values") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c
index 9214ce23c3bd3..2c99193b63fa6 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c
@@ -148,8 +148,8 @@ struct _vcs_dpi_soc_bounding_box_st dcn3_14_soc = {
 	.num_states = 5,
 	.sr_exit_time_us = 16.5,
 	.sr_enter_plus_exit_time_us = 18.5,
-	.sr_exit_z8_time_us = 280.0,
-	.sr_enter_plus_exit_z8_time_us = 350.0,
+	.sr_exit_z8_time_us = 210.0,
+	.sr_enter_plus_exit_z8_time_us = 310.0,
 	.writeback_latency_us = 12.0,
 	.dram_channel_width_bytes = 4,
 	.round_trip_ping_latency_dcfclk_cycles = 106,
From: Leo Chen sancchen@amd.com
[ Upstream commit 8f586cc16c1fc3c2202c9d54563db8c7ed365f82 ]
[Why & How] Previous Z8 watermark values were causing flickering and OTC underflow. Updating Z8 watermark values based on the measurement.
Reviewed-by: Nicholas Kazlauskas Nicholas.Kazlauskas@amd.com Cc: Mario Limonciello mario.limonciello@amd.com Cc: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Acked-by: Alan Liu HaoPing.Liu@amd.com Signed-off-by: Leo Chen sancchen@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c
index 2c99193b63fa6..4f91e64754239 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c
@@ -148,8 +148,8 @@ struct _vcs_dpi_soc_bounding_box_st dcn3_14_soc = {
 	.num_states = 5,
 	.sr_exit_time_us = 16.5,
 	.sr_enter_plus_exit_time_us = 18.5,
-	.sr_exit_z8_time_us = 210.0,
-	.sr_enter_plus_exit_z8_time_us = 310.0,
+	.sr_exit_z8_time_us = 268.0,
+	.sr_enter_plus_exit_z8_time_us = 393.0,
 	.writeback_latency_us = 12.0,
 	.dram_channel_width_bytes = 4,
 	.round_trip_ping_latency_dcfclk_cycles = 106,
From: Dawei Li set_pte_at@outlook.com
[ Upstream commit 1d9c4172110e645b383ff13eee759728d74f1a5d ]
For some operations on a channel: 1. lookup_chann_list(), potentially called at high frequency; 2. ksmbd_chann_del().
The connection is used as the key to look up the channel, and with a linked list that lookup is a linear search, which can cost performance.
Implement sess->ksmbd_chann_list as an xarray instead.
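For orientation, a minimal sketch of the resulting data-structure usage (the struct and helper names are illustrative, not ksmbd's): the connection pointer, cast to an unsigned long, serves as the xarray index, so insert, lookup and removal become keyed operations rather than list walks.

#include <linux/xarray.h>
#include <linux/slab.h>

struct example_chann {
	void *conn;		/* stands in for struct ksmbd_conn * */
};

/* Insert: the connection pointer doubles as the index. */
static int example_add_chann(struct xarray *xa, struct example_chann *chann)
{
	return xa_err(xa_store(xa, (unsigned long)chann->conn, chann,
			       GFP_KERNEL));
}

/* Lookup by connection is a keyed load, not a list walk. */
static struct example_chann *example_find_chann(struct xarray *xa, void *conn)
{
	return xa_load(xa, (unsigned long)conn);
}

/* Removal returns the entry that was stored, if any. */
static int example_del_chann(struct xarray *xa, void *conn)
{
	struct example_chann *chann = xa_erase(xa, (unsigned long)conn);

	if (!chann)
		return -ENOENT;
	kfree(chann);
	return 0;
}

/* Teardown: erase every remaining entry, then destroy the array. */
static void example_free_all(struct xarray *xa)
{
	struct example_chann *chann;
	unsigned long index;

	xa_for_each(xa, index, chann) {
		xa_erase(xa, index);
		kfree(chann);
	}
	xa_destroy(xa);
}

The teardown helper mirrors free_channel_list() in the patch below, which walks the remaining entries with xa_for_each() before calling xa_destroy().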
Signed-off-by: Dawei Li set_pte_at@outlook.com Acked-by: Namjae Jeon linkinjeon@kernel.org Signed-off-by: Steve French stfrench@microsoft.com Stable-dep-of: f5c779b7ddbd ("ksmbd: fix racy issue from session setup and logoff") Signed-off-by: Sasha Levin sashal@kernel.org --- fs/ksmbd/mgmt/user_session.c | 61 ++++++++++++++---------------------- fs/ksmbd/mgmt/user_session.h | 4 +-- fs/ksmbd/smb2pdu.c | 34 +++----------------- 3 files changed, 30 insertions(+), 69 deletions(-)
diff --git a/fs/ksmbd/mgmt/user_session.c b/fs/ksmbd/mgmt/user_session.c index 92b1603b5abeb..a2b128dedcfcf 100644 --- a/fs/ksmbd/mgmt/user_session.c +++ b/fs/ksmbd/mgmt/user_session.c @@ -30,15 +30,15 @@ struct ksmbd_session_rpc {
static void free_channel_list(struct ksmbd_session *sess) { - struct channel *chann, *tmp; + struct channel *chann; + unsigned long index;
- write_lock(&sess->chann_lock); - list_for_each_entry_safe(chann, tmp, &sess->ksmbd_chann_list, - chann_list) { - list_del(&chann->chann_list); + xa_for_each(&sess->ksmbd_chann_list, index, chann) { + xa_erase(&sess->ksmbd_chann_list, index); kfree(chann); } - write_unlock(&sess->chann_lock); + + xa_destroy(&sess->ksmbd_chann_list); }
static void __session_rpc_close(struct ksmbd_session *sess, @@ -190,21 +190,15 @@ int ksmbd_session_register(struct ksmbd_conn *conn,
static int ksmbd_chann_del(struct ksmbd_conn *conn, struct ksmbd_session *sess) { - struct channel *chann, *tmp; - - write_lock(&sess->chann_lock); - list_for_each_entry_safe(chann, tmp, &sess->ksmbd_chann_list, - chann_list) { - if (chann->conn == conn) { - list_del(&chann->chann_list); - kfree(chann); - write_unlock(&sess->chann_lock); - return 0; - } - } - write_unlock(&sess->chann_lock); + struct channel *chann; + + chann = xa_erase(&sess->ksmbd_chann_list, (long)conn); + if (!chann) + return -ENOENT;
- return -ENOENT; + kfree(chann); + + return 0; }
void ksmbd_sessions_deregister(struct ksmbd_conn *conn) @@ -234,7 +228,7 @@ void ksmbd_sessions_deregister(struct ksmbd_conn *conn) return;
sess_destroy: - if (list_empty(&sess->ksmbd_chann_list)) { + if (xa_empty(&sess->ksmbd_chann_list)) { xa_erase(&conn->sessions, sess->id); ksmbd_session_destroy(sess); } @@ -320,6 +314,9 @@ static struct ksmbd_session *__session_create(int protocol) struct ksmbd_session *sess; int ret;
+ if (protocol != CIFDS_SESSION_FLAG_SMB2) + return NULL; + sess = kzalloc(sizeof(struct ksmbd_session), GFP_KERNEL); if (!sess) return NULL; @@ -329,30 +326,20 @@ static struct ksmbd_session *__session_create(int protocol)
set_session_flag(sess, protocol); xa_init(&sess->tree_conns); - INIT_LIST_HEAD(&sess->ksmbd_chann_list); + xa_init(&sess->ksmbd_chann_list); INIT_LIST_HEAD(&sess->rpc_handle_list); sess->sequence_number = 1; - rwlock_init(&sess->chann_lock); - - switch (protocol) { - case CIFDS_SESSION_FLAG_SMB2: - ret = __init_smb2_session(sess); - break; - default: - ret = -EINVAL; - break; - }
+ ret = __init_smb2_session(sess); if (ret) goto error;
ida_init(&sess->tree_conn_ida);
- if (protocol == CIFDS_SESSION_FLAG_SMB2) { - down_write(&sessions_table_lock); - hash_add(sessions_table, &sess->hlist, sess->id); - up_write(&sessions_table_lock); - } + down_write(&sessions_table_lock); + hash_add(sessions_table, &sess->hlist, sess->id); + up_write(&sessions_table_lock); + return sess;
error: diff --git a/fs/ksmbd/mgmt/user_session.h b/fs/ksmbd/mgmt/user_session.h index 8934b8ee275ba..44a3c67b2bd92 100644 --- a/fs/ksmbd/mgmt/user_session.h +++ b/fs/ksmbd/mgmt/user_session.h @@ -21,7 +21,6 @@ struct ksmbd_file_table; struct channel { __u8 smb3signingkey[SMB3_SIGN_KEY_SIZE]; struct ksmbd_conn *conn; - struct list_head chann_list; };
struct preauth_session { @@ -50,8 +49,7 @@ struct ksmbd_session { char sess_key[CIFS_KEY_SIZE];
struct hlist_node hlist; - rwlock_t chann_lock; - struct list_head ksmbd_chann_list; + struct xarray ksmbd_chann_list; struct xarray tree_conns; struct ida tree_conn_ida; struct list_head rpc_handle_list; diff --git a/fs/ksmbd/smb2pdu.c b/fs/ksmbd/smb2pdu.c index acd66fb40c5f0..ac79d4c86067f 100644 --- a/fs/ksmbd/smb2pdu.c +++ b/fs/ksmbd/smb2pdu.c @@ -74,14 +74,7 @@ static inline bool check_session_id(struct ksmbd_conn *conn, u64 id)
struct channel *lookup_chann_list(struct ksmbd_session *sess, struct ksmbd_conn *conn) { - struct channel *chann; - - list_for_each_entry(chann, &sess->ksmbd_chann_list, chann_list) { - if (chann->conn == conn) - return chann; - } - - return NULL; + return xa_load(&sess->ksmbd_chann_list, (long)conn); }
/** @@ -592,6 +585,7 @@ static void destroy_previous_session(struct ksmbd_conn *conn, struct ksmbd_session *prev_sess = ksmbd_session_lookup_slowpath(id); struct ksmbd_user *prev_user; struct channel *chann; + long index;
if (!prev_sess) return; @@ -605,10 +599,8 @@ static void destroy_previous_session(struct ksmbd_conn *conn, return;
prev_sess->state = SMB2_SESSION_EXPIRED; - write_lock(&prev_sess->chann_lock); - list_for_each_entry(chann, &prev_sess->ksmbd_chann_list, chann_list) + xa_for_each(&prev_sess->ksmbd_chann_list, index, chann) chann->conn->status = KSMBD_SESS_EXITING; - write_unlock(&prev_sess->chann_lock); }
/** @@ -1520,19 +1512,14 @@ static int ntlm_authenticate(struct ksmbd_work *work)
binding_session: if (conn->dialect >= SMB30_PROT_ID) { - read_lock(&sess->chann_lock); chann = lookup_chann_list(sess, conn); - read_unlock(&sess->chann_lock); if (!chann) { chann = kmalloc(sizeof(struct channel), GFP_KERNEL); if (!chann) return -ENOMEM;
chann->conn = conn; - INIT_LIST_HEAD(&chann->chann_list); - write_lock(&sess->chann_lock); - list_add(&chann->chann_list, &sess->ksmbd_chann_list); - write_unlock(&sess->chann_lock); + xa_store(&sess->ksmbd_chann_list, (long)conn, chann, GFP_KERNEL); } }
@@ -1606,19 +1593,14 @@ static int krb5_authenticate(struct ksmbd_work *work) }
if (conn->dialect >= SMB30_PROT_ID) { - read_lock(&sess->chann_lock); chann = lookup_chann_list(sess, conn); - read_unlock(&sess->chann_lock); if (!chann) { chann = kmalloc(sizeof(struct channel), GFP_KERNEL); if (!chann) return -ENOMEM;
chann->conn = conn; - INIT_LIST_HEAD(&chann->chann_list); - write_lock(&sess->chann_lock); - list_add(&chann->chann_list, &sess->ksmbd_chann_list); - write_unlock(&sess->chann_lock); + xa_store(&sess->ksmbd_chann_list, (long)conn, chann, GFP_KERNEL); } }
@@ -8428,14 +8410,11 @@ int smb3_check_sign_req(struct ksmbd_work *work) if (le16_to_cpu(hdr->Command) == SMB2_SESSION_SETUP_HE) { signing_key = work->sess->smb3signingkey; } else { - read_lock(&work->sess->chann_lock); chann = lookup_chann_list(work->sess, conn); if (!chann) { - read_unlock(&work->sess->chann_lock); return 0; } signing_key = chann->smb3signingkey; - read_unlock(&work->sess->chann_lock); }
if (!signing_key) { @@ -8495,14 +8474,11 @@ void smb3_set_sign_rsp(struct ksmbd_work *work) le16_to_cpu(hdr->Command) == SMB2_SESSION_SETUP_HE) { signing_key = work->sess->smb3signingkey; } else { - read_lock(&work->sess->chann_lock); chann = lookup_chann_list(work->sess, work->conn); if (!chann) { - read_unlock(&work->sess->chann_lock); return; } signing_key = chann->smb3signingkey; - read_unlock(&work->sess->chann_lock); }
if (!signing_key)
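For reference, a minimal sketch of the lookup pattern this patch converts to, using illustrative names rather than the actual ksmbd structures: the connection pointer doubles as the xarray index, so finding the channel for a connection becomes a single xa_load() instead of a list walk under a rwlock.

#include <linux/xarray.h>
#include <linux/slab.h>
#include <linux/errno.h>

struct example_chann {
	void *conn;	/* stand-in for struct ksmbd_conn * */
};

static DEFINE_XARRAY(example_chann_list);

static int example_chann_add(void *conn)
{
	struct example_chann *chann = kzalloc(sizeof(*chann), GFP_KERNEL);

	if (!chann)
		return -ENOMEM;
	chann->conn = conn;
	/* the pointer value itself is the index, as the patch does with (long)conn */
	return xa_err(xa_store(&example_chann_list, (unsigned long)conn,
			       chann, GFP_KERNEL));
}

static struct example_chann *example_chann_lookup(void *conn)
{
	return xa_load(&example_chann_list, (unsigned long)conn);
}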
From: Namjae Jeon linkinjeon@kernel.org
[ Upstream commit f5c779b7ddbda30866cf2a27c63e34158f858c73 ]
This racy issue is triggered by sending concurrent session setup and logoff requests. This patch does not set the connection status to KSMBD_SESS_GOOD in session setup if the state is KSMBD_SESS_NEED_RECONNECT, and re-looks up the session in logoff to check whether it has been deleted.
Cc: stable@vger.kernel.org Reported-by: zdi-disclosures@trendmicro.com # ZDI-CAN-20481, ZDI-CAN-20590, ZDI-CAN-20596 Signed-off-by: Namjae Jeon linkinjeon@kernel.org Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/ksmbd/connection.c | 14 ++++---- fs/ksmbd/connection.h | 39 ++++++++++++--------- fs/ksmbd/mgmt/user_session.c | 1 + fs/ksmbd/server.c | 3 +- fs/ksmbd/smb2pdu.c | 67 +++++++++++++++++++++++------------- fs/ksmbd/transport_tcp.c | 2 +- 6 files changed, 77 insertions(+), 49 deletions(-)
diff --git a/fs/ksmbd/connection.c b/fs/ksmbd/connection.c index b8f9d627f241d..3cb88853d6932 100644 --- a/fs/ksmbd/connection.c +++ b/fs/ksmbd/connection.c @@ -56,7 +56,7 @@ struct ksmbd_conn *ksmbd_conn_alloc(void) return NULL;
conn->need_neg = true; - conn->status = KSMBD_SESS_NEW; + ksmbd_conn_set_new(conn); conn->local_nls = load_nls("utf8"); if (!conn->local_nls) conn->local_nls = load_nls_default(); @@ -149,12 +149,12 @@ int ksmbd_conn_try_dequeue_request(struct ksmbd_work *work) return ret; }
-static void ksmbd_conn_lock(struct ksmbd_conn *conn) +void ksmbd_conn_lock(struct ksmbd_conn *conn) { mutex_lock(&conn->srv_mutex); }
-static void ksmbd_conn_unlock(struct ksmbd_conn *conn) +void ksmbd_conn_unlock(struct ksmbd_conn *conn) { mutex_unlock(&conn->srv_mutex); } @@ -245,7 +245,7 @@ bool ksmbd_conn_alive(struct ksmbd_conn *conn) if (!ksmbd_server_running()) return false;
- if (conn->status == KSMBD_SESS_EXITING) + if (ksmbd_conn_exiting(conn)) return false;
if (kthread_should_stop()) @@ -305,7 +305,7 @@ int ksmbd_conn_handler_loop(void *p) pdu_size = get_rfc1002_len(hdr_buf); ksmbd_debug(CONN, "RFC1002 header %u bytes\n", pdu_size);
- if (conn->status == KSMBD_SESS_GOOD) + if (ksmbd_conn_good(conn)) max_allowed_pdu_size = SMB3_MAX_MSGSIZE + conn->vals->max_write_size; else @@ -314,7 +314,7 @@ int ksmbd_conn_handler_loop(void *p) if (pdu_size > max_allowed_pdu_size) { pr_err_ratelimited("PDU length(%u) excceed maximum allowed pdu size(%u) on connection(%d)\n", pdu_size, max_allowed_pdu_size, - conn->status); + READ_ONCE(conn->status)); break; }
@@ -418,7 +418,7 @@ static void stop_sessions(void) if (task) ksmbd_debug(CONN, "Stop session handler %s/%d\n", task->comm, task_pid_nr(task)); - conn->status = KSMBD_SESS_EXITING; + ksmbd_conn_set_exiting(conn); if (t->ops->shutdown) { read_unlock(&conn_list_lock); t->ops->shutdown(t); diff --git a/fs/ksmbd/connection.h b/fs/ksmbd/connection.h index 0e3a848defaf3..98bb5f199fa24 100644 --- a/fs/ksmbd/connection.h +++ b/fs/ksmbd/connection.h @@ -162,6 +162,8 @@ void ksmbd_conn_init_server_callbacks(struct ksmbd_conn_ops *ops); int ksmbd_conn_handler_loop(void *p); int ksmbd_conn_transport_init(void); void ksmbd_conn_transport_destroy(void); +void ksmbd_conn_lock(struct ksmbd_conn *conn); +void ksmbd_conn_unlock(struct ksmbd_conn *conn);
/* * WARNING @@ -169,43 +171,48 @@ void ksmbd_conn_transport_destroy(void); * This is a hack. We will move status to a proper place once we land * a multi-sessions support. */ -static inline bool ksmbd_conn_good(struct ksmbd_work *work) +static inline bool ksmbd_conn_good(struct ksmbd_conn *conn) { - return work->conn->status == KSMBD_SESS_GOOD; + return READ_ONCE(conn->status) == KSMBD_SESS_GOOD; }
-static inline bool ksmbd_conn_need_negotiate(struct ksmbd_work *work) +static inline bool ksmbd_conn_need_negotiate(struct ksmbd_conn *conn) { - return work->conn->status == KSMBD_SESS_NEED_NEGOTIATE; + return READ_ONCE(conn->status) == KSMBD_SESS_NEED_NEGOTIATE; }
-static inline bool ksmbd_conn_need_reconnect(struct ksmbd_work *work) +static inline bool ksmbd_conn_need_reconnect(struct ksmbd_conn *conn) { - return work->conn->status == KSMBD_SESS_NEED_RECONNECT; + return READ_ONCE(conn->status) == KSMBD_SESS_NEED_RECONNECT; }
-static inline bool ksmbd_conn_exiting(struct ksmbd_work *work) +static inline bool ksmbd_conn_exiting(struct ksmbd_conn *conn) { - return work->conn->status == KSMBD_SESS_EXITING; + return READ_ONCE(conn->status) == KSMBD_SESS_EXITING; }
-static inline void ksmbd_conn_set_good(struct ksmbd_work *work) +static inline void ksmbd_conn_set_new(struct ksmbd_conn *conn) { - work->conn->status = KSMBD_SESS_GOOD; + WRITE_ONCE(conn->status, KSMBD_SESS_NEW); }
-static inline void ksmbd_conn_set_need_negotiate(struct ksmbd_work *work) +static inline void ksmbd_conn_set_good(struct ksmbd_conn *conn) { - work->conn->status = KSMBD_SESS_NEED_NEGOTIATE; + WRITE_ONCE(conn->status, KSMBD_SESS_GOOD); }
-static inline void ksmbd_conn_set_need_reconnect(struct ksmbd_work *work) +static inline void ksmbd_conn_set_need_negotiate(struct ksmbd_conn *conn) { - work->conn->status = KSMBD_SESS_NEED_RECONNECT; + WRITE_ONCE(conn->status, KSMBD_SESS_NEED_NEGOTIATE); }
-static inline void ksmbd_conn_set_exiting(struct ksmbd_work *work) +static inline void ksmbd_conn_set_need_reconnect(struct ksmbd_conn *conn) { - work->conn->status = KSMBD_SESS_EXITING; + WRITE_ONCE(conn->status, KSMBD_SESS_NEED_RECONNECT); +} + +static inline void ksmbd_conn_set_exiting(struct ksmbd_conn *conn) +{ + WRITE_ONCE(conn->status, KSMBD_SESS_EXITING); } #endif /* __CONNECTION_H__ */ diff --git a/fs/ksmbd/mgmt/user_session.c b/fs/ksmbd/mgmt/user_session.c index a2b128dedcfcf..69b85a98e2c35 100644 --- a/fs/ksmbd/mgmt/user_session.c +++ b/fs/ksmbd/mgmt/user_session.c @@ -324,6 +324,7 @@ static struct ksmbd_session *__session_create(int protocol) if (ksmbd_init_file_table(&sess->file_table)) goto error;
+ sess->state = SMB2_SESSION_IN_PROGRESS; set_session_flag(sess, protocol); xa_init(&sess->tree_conns); xa_init(&sess->ksmbd_chann_list); diff --git a/fs/ksmbd/server.c b/fs/ksmbd/server.c index 8c2bc513445c3..8a0ad399f2456 100644 --- a/fs/ksmbd/server.c +++ b/fs/ksmbd/server.c @@ -93,7 +93,8 @@ static inline int check_conn_state(struct ksmbd_work *work) { struct smb_hdr *rsp_hdr;
- if (ksmbd_conn_exiting(work) || ksmbd_conn_need_reconnect(work)) { + if (ksmbd_conn_exiting(work->conn) || + ksmbd_conn_need_reconnect(work->conn)) { rsp_hdr = work->response_buf; rsp_hdr->Status.CifsError = STATUS_CONNECTION_DISCONNECTED; return 1; diff --git a/fs/ksmbd/smb2pdu.c b/fs/ksmbd/smb2pdu.c index ac79d4c86067f..6c26934272ad5 100644 --- a/fs/ksmbd/smb2pdu.c +++ b/fs/ksmbd/smb2pdu.c @@ -247,7 +247,7 @@ int init_smb2_neg_rsp(struct ksmbd_work *work)
rsp = smb2_get_msg(work->response_buf);
- WARN_ON(ksmbd_conn_good(work)); + WARN_ON(ksmbd_conn_good(conn));
rsp->StructureSize = cpu_to_le16(65); ksmbd_debug(SMB, "conn->dialect 0x%x\n", conn->dialect); @@ -277,7 +277,7 @@ int init_smb2_neg_rsp(struct ksmbd_work *work) rsp->SecurityMode |= SMB2_NEGOTIATE_SIGNING_REQUIRED_LE; conn->use_spnego = true;
- ksmbd_conn_set_need_negotiate(work); + ksmbd_conn_set_need_negotiate(conn); return 0; }
@@ -567,7 +567,7 @@ int smb2_check_user_session(struct ksmbd_work *work) cmd == SMB2_SESSION_SETUP_HE) return 0;
- if (!ksmbd_conn_good(work)) + if (!ksmbd_conn_good(conn)) return -EINVAL;
sess_id = le64_to_cpu(req_hdr->SessionId); @@ -600,7 +600,7 @@ static void destroy_previous_session(struct ksmbd_conn *conn,
prev_sess->state = SMB2_SESSION_EXPIRED; xa_for_each(&prev_sess->ksmbd_chann_list, index, chann) - chann->conn->status = KSMBD_SESS_EXITING; + ksmbd_conn_set_exiting(chann->conn); }
/** @@ -1067,7 +1067,7 @@ int smb2_handle_negotiate(struct ksmbd_work *work)
ksmbd_debug(SMB, "Received negotiate request\n"); conn->need_neg = false; - if (ksmbd_conn_good(work)) { + if (ksmbd_conn_good(conn)) { pr_err("conn->tcp_status is already in CifsGood State\n"); work->send_no_response = 1; return rc; @@ -1222,7 +1222,7 @@ int smb2_handle_negotiate(struct ksmbd_work *work) }
conn->srv_sec_mode = le16_to_cpu(rsp->SecurityMode); - ksmbd_conn_set_need_negotiate(work); + ksmbd_conn_set_need_negotiate(conn);
err_out: if (rc < 0) @@ -1643,6 +1643,7 @@ int smb2_sess_setup(struct ksmbd_work *work) rsp->SecurityBufferLength = 0; inc_rfc1001_len(work->response_buf, 9);
+ ksmbd_conn_lock(conn); if (!req->hdr.SessionId) { sess = ksmbd_smb2_session_create(); if (!sess) { @@ -1690,6 +1691,12 @@ int smb2_sess_setup(struct ksmbd_work *work) goto out_err; }
+ if (ksmbd_conn_need_reconnect(conn)) { + rc = -EFAULT; + sess = NULL; + goto out_err; + } + if (ksmbd_session_lookup(conn, sess_id)) { rc = -EACCES; goto out_err; @@ -1714,12 +1721,20 @@ int smb2_sess_setup(struct ksmbd_work *work) rc = -ENOENT; goto out_err; } + + if (sess->state == SMB2_SESSION_EXPIRED) { + rc = -EFAULT; + goto out_err; + } + + if (ksmbd_conn_need_reconnect(conn)) { + rc = -EFAULT; + sess = NULL; + goto out_err; + } } work->sess = sess;
- if (sess->state == SMB2_SESSION_EXPIRED) - sess->state = SMB2_SESSION_IN_PROGRESS; - negblob_off = le16_to_cpu(req->SecurityBufferOffset); negblob_len = le16_to_cpu(req->SecurityBufferLength); if (negblob_off < offsetof(struct smb2_sess_setup_req, Buffer) || @@ -1749,8 +1764,10 @@ int smb2_sess_setup(struct ksmbd_work *work) goto out_err; }
- ksmbd_conn_set_good(work); - sess->state = SMB2_SESSION_VALID; + if (!ksmbd_conn_need_reconnect(conn)) { + ksmbd_conn_set_good(conn); + sess->state = SMB2_SESSION_VALID; + } kfree(sess->Preauth_HashValue); sess->Preauth_HashValue = NULL; } else if (conn->preferred_auth_mech == KSMBD_AUTH_NTLMSSP) { @@ -1772,8 +1789,10 @@ int smb2_sess_setup(struct ksmbd_work *work) if (rc) goto out_err;
- ksmbd_conn_set_good(work); - sess->state = SMB2_SESSION_VALID; + if (!ksmbd_conn_need_reconnect(conn)) { + ksmbd_conn_set_good(conn); + sess->state = SMB2_SESSION_VALID; + } if (conn->binding) { struct preauth_session *preauth_sess;
@@ -1841,14 +1860,13 @@ int smb2_sess_setup(struct ksmbd_work *work) if (sess->user && sess->user->flags & KSMBD_USER_FLAG_DELAY_SESSION) try_delay = true;
- xa_erase(&conn->sessions, sess->id); - ksmbd_session_destroy(sess); - work->sess = NULL; + sess->state = SMB2_SESSION_EXPIRED; if (try_delay) ssleep(5); } }
+ ksmbd_conn_unlock(conn); return rc; }
@@ -2073,21 +2091,24 @@ int smb2_session_logoff(struct ksmbd_work *work) { struct ksmbd_conn *conn = work->conn; struct smb2_logoff_rsp *rsp = smb2_get_msg(work->response_buf); - struct ksmbd_session *sess = work->sess; + struct ksmbd_session *sess; + struct smb2_logoff_req *req = smb2_get_msg(work->request_buf);
rsp->StructureSize = cpu_to_le16(4); inc_rfc1001_len(work->response_buf, 4);
ksmbd_debug(SMB, "request\n");
- /* setting CifsExiting here may race with start_tcp_sess */ - ksmbd_conn_set_need_reconnect(work); + ksmbd_conn_set_need_reconnect(conn); ksmbd_close_session_fds(work); ksmbd_conn_wait_idle(conn);
+ /* + * Re-lookup session to validate if session is deleted + * while waiting request complete + */ + sess = ksmbd_session_lookup(conn, le64_to_cpu(req->hdr.SessionId)); if (ksmbd_tree_conn_session_logoff(sess)) { - struct smb2_logoff_req *req = smb2_get_msg(work->request_buf); - ksmbd_debug(SMB, "Invalid tid %d\n", req->hdr.Id.SyncId.TreeId); rsp->hdr.Status = STATUS_NETWORK_NAME_DELETED; smb2_set_err_rsp(work); @@ -2099,9 +2120,7 @@ int smb2_session_logoff(struct ksmbd_work *work)
ksmbd_free_user(sess->user); sess->user = NULL; - - /* let start_tcp_sess free connection info now */ - ksmbd_conn_set_need_negotiate(work); + ksmbd_conn_set_need_negotiate(conn); return 0; }
diff --git a/fs/ksmbd/transport_tcp.c b/fs/ksmbd/transport_tcp.c index 20e85e2701f26..eff7a1d793f00 100644 --- a/fs/ksmbd/transport_tcp.c +++ b/fs/ksmbd/transport_tcp.c @@ -333,7 +333,7 @@ static int ksmbd_tcp_readv(struct tcp_transport *t, struct kvec *iov_orig, if (length == -EINTR) { total_read = -ESHUTDOWN; break; - } else if (conn->status == KSMBD_SESS_NEED_RECONNECT) { + } else if (ksmbd_conn_need_reconnect(conn)) { total_read = -EAGAIN; break; } else if (length == -ERESTARTSYS || length == -EAGAIN) {
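As a rough illustration (names are made up, not the ksmbd code itself): the core of the fix is that session setup now runs under the connection mutex and refuses to mark the connection good again if a concurrent logoff has already flagged it for reconnect.

#include <linux/mutex.h>
#include <linux/errno.h>
#include <linux/compiler.h>

enum { EX_CONN_GOOD, EX_CONN_NEED_RECONNECT };

struct example_conn {
	struct mutex	srv_mutex;	/* assumed initialised elsewhere */
	int		status;
};

static int example_sess_setup(struct example_conn *conn)
{
	int rc = 0;

	mutex_lock(&conn->srv_mutex);
	if (READ_ONCE(conn->status) == EX_CONN_NEED_RECONNECT) {
		rc = -EFAULT;	/* a logoff raced with us; do not revive the session */
		goto out;
	}
	/* ... authenticate and bind the channel here ... */
	WRITE_ONCE(conn->status, EX_CONN_GOOD);
out:
	mutex_unlock(&conn->srv_mutex);
	return rc;
}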
From: Namjae Jeon linkinjeon@kernel.org
[ Upstream commit ea174a91893956450510945a0c5d1a10b5323656 ]
A client can send smb2 session setup requests with the SessionId set to 0 indefinitely, spawning new sessions each time and causing unbounded memory usage. This patch limits the number of sessions by using an expiry timeout and the session state.
Cc: stable@vger.kernel.org Reported-by: zdi-disclosures@trendmicro.com # ZDI-CAN-20478 Signed-off-by: Namjae Jeon linkinjeon@kernel.org Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/ksmbd/mgmt/user_session.c | 68 ++++++++++++++++++++---------------- fs/ksmbd/mgmt/user_session.h | 1 + fs/ksmbd/smb2pdu.c | 1 + fs/ksmbd/smb2pdu.h | 2 ++ 4 files changed, 41 insertions(+), 31 deletions(-)
diff --git a/fs/ksmbd/mgmt/user_session.c b/fs/ksmbd/mgmt/user_session.c index 69b85a98e2c35..b809f7987b9f4 100644 --- a/fs/ksmbd/mgmt/user_session.c +++ b/fs/ksmbd/mgmt/user_session.c @@ -174,70 +174,73 @@ static struct ksmbd_session *__session_lookup(unsigned long long id) struct ksmbd_session *sess;
hash_for_each_possible(sessions_table, sess, hlist, id) { - if (id == sess->id) + if (id == sess->id) { + sess->last_active = jiffies; return sess; + } } return NULL; }
+static void ksmbd_expire_session(struct ksmbd_conn *conn) +{ + unsigned long id; + struct ksmbd_session *sess; + + xa_for_each(&conn->sessions, id, sess) { + if (sess->state != SMB2_SESSION_VALID || + time_after(jiffies, + sess->last_active + SMB2_SESSION_TIMEOUT)) { + xa_erase(&conn->sessions, sess->id); + ksmbd_session_destroy(sess); + continue; + } + } +} + int ksmbd_session_register(struct ksmbd_conn *conn, struct ksmbd_session *sess) { sess->dialect = conn->dialect; memcpy(sess->ClientGUID, conn->ClientGUID, SMB2_CLIENT_GUID_SIZE); + ksmbd_expire_session(conn); return xa_err(xa_store(&conn->sessions, sess->id, sess, GFP_KERNEL)); }
-static int ksmbd_chann_del(struct ksmbd_conn *conn, struct ksmbd_session *sess) +static void ksmbd_chann_del(struct ksmbd_conn *conn, struct ksmbd_session *sess) { struct channel *chann;
chann = xa_erase(&sess->ksmbd_chann_list, (long)conn); if (!chann) - return -ENOENT; + return;
kfree(chann); - - return 0; }
void ksmbd_sessions_deregister(struct ksmbd_conn *conn) { struct ksmbd_session *sess; + unsigned long id;
- if (conn->binding) { - int bkt; - - down_write(&sessions_table_lock); - hash_for_each(sessions_table, bkt, sess, hlist) { - if (!ksmbd_chann_del(conn, sess)) { - up_write(&sessions_table_lock); - goto sess_destroy; - } + xa_for_each(&conn->sessions, id, sess) { + ksmbd_chann_del(conn, sess); + if (xa_empty(&sess->ksmbd_chann_list)) { + xa_erase(&conn->sessions, sess->id); + ksmbd_session_destroy(sess); } - up_write(&sessions_table_lock); - } else { - unsigned long id; - - xa_for_each(&conn->sessions, id, sess) { - if (!ksmbd_chann_del(conn, sess)) - goto sess_destroy; - } - } - - return; - -sess_destroy: - if (xa_empty(&sess->ksmbd_chann_list)) { - xa_erase(&conn->sessions, sess->id); - ksmbd_session_destroy(sess); } }
struct ksmbd_session *ksmbd_session_lookup(struct ksmbd_conn *conn, unsigned long long id) { - return xa_load(&conn->sessions, id); + struct ksmbd_session *sess; + + sess = xa_load(&conn->sessions, id); + if (sess) + sess->last_active = jiffies; + return sess; }
struct ksmbd_session *ksmbd_session_lookup_slowpath(unsigned long long id) @@ -246,6 +249,8 @@ struct ksmbd_session *ksmbd_session_lookup_slowpath(unsigned long long id)
down_read(&sessions_table_lock); sess = __session_lookup(id); + if (sess) + sess->last_active = jiffies; up_read(&sessions_table_lock);
return sess; @@ -324,6 +329,7 @@ static struct ksmbd_session *__session_create(int protocol) if (ksmbd_init_file_table(&sess->file_table)) goto error;
+ sess->last_active = jiffies; sess->state = SMB2_SESSION_IN_PROGRESS; set_session_flag(sess, protocol); xa_init(&sess->tree_conns); diff --git a/fs/ksmbd/mgmt/user_session.h b/fs/ksmbd/mgmt/user_session.h index 44a3c67b2bd92..51f38e5b61abb 100644 --- a/fs/ksmbd/mgmt/user_session.h +++ b/fs/ksmbd/mgmt/user_session.h @@ -59,6 +59,7 @@ struct ksmbd_session { __u8 smb3signingkey[SMB3_SIGN_KEY_SIZE];
struct ksmbd_file_table file_table; + unsigned long last_active; };
static inline int test_session_flag(struct ksmbd_session *sess, int bit) diff --git a/fs/ksmbd/smb2pdu.c b/fs/ksmbd/smb2pdu.c index 6c26934272ad5..d3f33194faf1a 100644 --- a/fs/ksmbd/smb2pdu.c +++ b/fs/ksmbd/smb2pdu.c @@ -1860,6 +1860,7 @@ int smb2_sess_setup(struct ksmbd_work *work) if (sess->user && sess->user->flags & KSMBD_USER_FLAG_DELAY_SESSION) try_delay = true;
+ sess->last_active = jiffies; sess->state = SMB2_SESSION_EXPIRED; if (try_delay) ssleep(5); diff --git a/fs/ksmbd/smb2pdu.h b/fs/ksmbd/smb2pdu.h index f4baa9800f6ee..dd10f8031606b 100644 --- a/fs/ksmbd/smb2pdu.h +++ b/fs/ksmbd/smb2pdu.h @@ -61,6 +61,8 @@ struct preauth_integrity_info { #define SMB2_SESSION_IN_PROGRESS BIT(0) #define SMB2_SESSION_VALID BIT(1)
+#define SMB2_SESSION_TIMEOUT (10 * HZ) + struct create_durable_req_v2 { struct create_context ccontext; __u8 Name[8];
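A minimal sketch of the expiry rule the patch introduces, with illustrative names rather than the real ksmbd structures: every lookup refreshes last_active, and registering a new session first tears down anything that is no longer valid or has been idle longer than the timeout.

#include <linux/jiffies.h>
#include <linux/types.h>

#define EXAMPLE_SESSION_TIMEOUT	(10 * HZ)

struct example_sess {
	int		state;		/* e.g. SMB2_SESSION_VALID */
	unsigned long	last_active;	/* jiffies of the last lookup */
};

static bool example_sess_expired(const struct example_sess *sess, int valid_state)
{
	/* unauthenticated sessions or sessions idle past the timeout are torn down */
	return sess->state != valid_state ||
	       time_after(jiffies, sess->last_active + EXAMPLE_SESSION_TIMEOUT);
}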
From: Namjae Jeon linkinjeon@kernel.org
[ Upstream commit b096d97f47326b1e2dbdef1c91fab69ffda54d17 ]
ksmbd adds a 5 second delay on session setup to deter dictionary attacks, but that delay can be bypassed by using asynchronous requests. This patch blocks all requests on the current connection while the delay is applied after a session setup failure.
Cc: stable@vger.kernel.org Reported-by: zdi-disclosures@trendmicro.com # ZDI-CAN-20482 Signed-off-by: Namjae Jeon linkinjeon@kernel.org Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/ksmbd/smb2pdu.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/fs/ksmbd/smb2pdu.c b/fs/ksmbd/smb2pdu.c index d3f33194faf1a..e7594a56cbfe3 100644 --- a/fs/ksmbd/smb2pdu.c +++ b/fs/ksmbd/smb2pdu.c @@ -1862,8 +1862,11 @@ int smb2_sess_setup(struct ksmbd_work *work)
sess->last_active = jiffies; sess->state = SMB2_SESSION_EXPIRED; - if (try_delay) + if (try_delay) { + ksmbd_conn_set_need_reconnect(conn); ssleep(5); + ksmbd_conn_set_need_negotiate(conn); + } } }
From: Namjae Jeon linkinjeon@kernel.org
[ Upstream commit abcc506a9a71976a8b4c9bf3ee6efd13229c1e19 ]
When an smb client sends concurrent smb2 close and logoff requests over a multichannel connection, it can cause a racy issue: the logoff request frees the tcon, which can lead to use-after-free in smb2 close. When receiving a logoff request with multichannel, ksmbd should wait until all remaining requests, on the other bound connections as well as the current one, have completed, and then mark the session expired.
Cc: stable@vger.kernel.org Reported-by: zdi-disclosures@trendmicro.com # ZDI-CAN-20796 ZDI-CAN-20595 Signed-off-by: Namjae Jeon linkinjeon@kernel.org Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/ksmbd/connection.c | 54 +++++++++++++++++++++++++++--------- fs/ksmbd/connection.h | 19 +++++++++++-- fs/ksmbd/mgmt/tree_connect.c | 3 ++ fs/ksmbd/mgmt/user_session.c | 36 ++++++++++++++++++++---- fs/ksmbd/smb2pdu.c | 21 +++++++------- 5 files changed, 101 insertions(+), 32 deletions(-)
diff --git a/fs/ksmbd/connection.c b/fs/ksmbd/connection.c index 3cb88853d6932..e3312fbf4c090 100644 --- a/fs/ksmbd/connection.c +++ b/fs/ksmbd/connection.c @@ -20,7 +20,7 @@ static DEFINE_MUTEX(init_lock); static struct ksmbd_conn_ops default_conn_ops;
LIST_HEAD(conn_list); -DEFINE_RWLOCK(conn_list_lock); +DECLARE_RWSEM(conn_list_lock);
/** * ksmbd_conn_free() - free resources of the connection instance @@ -32,9 +32,9 @@ DEFINE_RWLOCK(conn_list_lock); */ void ksmbd_conn_free(struct ksmbd_conn *conn) { - write_lock(&conn_list_lock); + down_write(&conn_list_lock); list_del(&conn->conns_list); - write_unlock(&conn_list_lock); + up_write(&conn_list_lock);
xa_destroy(&conn->sessions); kvfree(conn->request_buf); @@ -84,9 +84,9 @@ struct ksmbd_conn *ksmbd_conn_alloc(void) spin_lock_init(&conn->llist_lock); INIT_LIST_HEAD(&conn->lock_list);
- write_lock(&conn_list_lock); + down_write(&conn_list_lock); list_add(&conn->conns_list, &conn_list); - write_unlock(&conn_list_lock); + up_write(&conn_list_lock); return conn; }
@@ -95,7 +95,7 @@ bool ksmbd_conn_lookup_dialect(struct ksmbd_conn *c) struct ksmbd_conn *t; bool ret = false;
- read_lock(&conn_list_lock); + down_read(&conn_list_lock); list_for_each_entry(t, &conn_list, conns_list) { if (memcmp(t->ClientGUID, c->ClientGUID, SMB2_CLIENT_GUID_SIZE)) continue; @@ -103,7 +103,7 @@ bool ksmbd_conn_lookup_dialect(struct ksmbd_conn *c) ret = true; break; } - read_unlock(&conn_list_lock); + up_read(&conn_list_lock); return ret; }
@@ -159,9 +159,37 @@ void ksmbd_conn_unlock(struct ksmbd_conn *conn) mutex_unlock(&conn->srv_mutex); }
-void ksmbd_conn_wait_idle(struct ksmbd_conn *conn) +void ksmbd_all_conn_set_status(u64 sess_id, u32 status) { + struct ksmbd_conn *conn; + + down_read(&conn_list_lock); + list_for_each_entry(conn, &conn_list, conns_list) { + if (conn->binding || xa_load(&conn->sessions, sess_id)) + WRITE_ONCE(conn->status, status); + } + up_read(&conn_list_lock); +} + +void ksmbd_conn_wait_idle(struct ksmbd_conn *conn, u64 sess_id) +{ + struct ksmbd_conn *bind_conn; + wait_event(conn->req_running_q, atomic_read(&conn->req_running) < 2); + + down_read(&conn_list_lock); + list_for_each_entry(bind_conn, &conn_list, conns_list) { + if (bind_conn == conn) + continue; + + if ((bind_conn->binding || xa_load(&bind_conn->sessions, sess_id)) && + !ksmbd_conn_releasing(bind_conn) && + atomic_read(&bind_conn->req_running)) { + wait_event(bind_conn->req_running_q, + atomic_read(&bind_conn->req_running) == 0); + } + } + up_read(&conn_list_lock); }
int ksmbd_conn_write(struct ksmbd_work *work) @@ -362,10 +390,10 @@ int ksmbd_conn_handler_loop(void *p) }
out: + ksmbd_conn_set_releasing(conn); /* Wait till all reference dropped to the Server object*/ wait_event(conn->r_count_q, atomic_read(&conn->r_count) == 0);
- if (IS_ENABLED(CONFIG_UNICODE)) utf8_unload(conn->um); unload_nls(conn->local_nls); @@ -409,7 +437,7 @@ static void stop_sessions(void) struct ksmbd_transport *t;
again: - read_lock(&conn_list_lock); + down_read(&conn_list_lock); list_for_each_entry(conn, &conn_list, conns_list) { struct task_struct *task;
@@ -420,12 +448,12 @@ static void stop_sessions(void) task->comm, task_pid_nr(task)); ksmbd_conn_set_exiting(conn); if (t->ops->shutdown) { - read_unlock(&conn_list_lock); + up_read(&conn_list_lock); t->ops->shutdown(t); - read_lock(&conn_list_lock); + down_read(&conn_list_lock); } } - read_unlock(&conn_list_lock); + up_read(&conn_list_lock);
if (!list_empty(&conn_list)) { schedule_timeout_interruptible(HZ / 10); /* 100ms */ diff --git a/fs/ksmbd/connection.h b/fs/ksmbd/connection.h index 98bb5f199fa24..ad8dfaa48ffb3 100644 --- a/fs/ksmbd/connection.h +++ b/fs/ksmbd/connection.h @@ -26,7 +26,8 @@ enum { KSMBD_SESS_GOOD, KSMBD_SESS_EXITING, KSMBD_SESS_NEED_RECONNECT, - KSMBD_SESS_NEED_NEGOTIATE + KSMBD_SESS_NEED_NEGOTIATE, + KSMBD_SESS_RELEASING };
struct ksmbd_stats { @@ -140,10 +141,10 @@ struct ksmbd_transport { #define KSMBD_TCP_PEER_SOCKADDR(c) ((struct sockaddr *)&((c)->peer_addr))
extern struct list_head conn_list; -extern rwlock_t conn_list_lock; +extern struct rw_semaphore conn_list_lock;
bool ksmbd_conn_alive(struct ksmbd_conn *conn); -void ksmbd_conn_wait_idle(struct ksmbd_conn *conn); +void ksmbd_conn_wait_idle(struct ksmbd_conn *conn, u64 sess_id); struct ksmbd_conn *ksmbd_conn_alloc(void); void ksmbd_conn_free(struct ksmbd_conn *conn); bool ksmbd_conn_lookup_dialect(struct ksmbd_conn *c); @@ -191,6 +192,11 @@ static inline bool ksmbd_conn_exiting(struct ksmbd_conn *conn) return READ_ONCE(conn->status) == KSMBD_SESS_EXITING; }
+static inline bool ksmbd_conn_releasing(struct ksmbd_conn *conn) +{ + return READ_ONCE(conn->status) == KSMBD_SESS_RELEASING; +} + static inline void ksmbd_conn_set_new(struct ksmbd_conn *conn) { WRITE_ONCE(conn->status, KSMBD_SESS_NEW); @@ -215,4 +221,11 @@ static inline void ksmbd_conn_set_exiting(struct ksmbd_conn *conn) { WRITE_ONCE(conn->status, KSMBD_SESS_EXITING); } + +static inline void ksmbd_conn_set_releasing(struct ksmbd_conn *conn) +{ + WRITE_ONCE(conn->status, KSMBD_SESS_RELEASING); +} + +void ksmbd_all_conn_set_status(u64 sess_id, u32 status); #endif /* __CONNECTION_H__ */ diff --git a/fs/ksmbd/mgmt/tree_connect.c b/fs/ksmbd/mgmt/tree_connect.c index f19de20c2960c..f07a05f376513 100644 --- a/fs/ksmbd/mgmt/tree_connect.c +++ b/fs/ksmbd/mgmt/tree_connect.c @@ -137,6 +137,9 @@ int ksmbd_tree_conn_session_logoff(struct ksmbd_session *sess) struct ksmbd_tree_connect *tc; unsigned long id;
+ if (!sess) + return -EINVAL; + xa_for_each(&sess->tree_conns, id, tc) ret |= ksmbd_tree_conn_disconnect(sess, tc); xa_destroy(&sess->tree_conns); diff --git a/fs/ksmbd/mgmt/user_session.c b/fs/ksmbd/mgmt/user_session.c index b809f7987b9f4..ea4b56d570fbb 100644 --- a/fs/ksmbd/mgmt/user_session.c +++ b/fs/ksmbd/mgmt/user_session.c @@ -153,10 +153,6 @@ void ksmbd_session_destroy(struct ksmbd_session *sess) if (!sess) return;
- down_write(&sessions_table_lock); - hash_del(&sess->hlist); - up_write(&sessions_table_lock); - if (sess->user) ksmbd_free_user(sess->user);
@@ -187,15 +183,18 @@ static void ksmbd_expire_session(struct ksmbd_conn *conn) unsigned long id; struct ksmbd_session *sess;
+ down_write(&sessions_table_lock); xa_for_each(&conn->sessions, id, sess) { if (sess->state != SMB2_SESSION_VALID || time_after(jiffies, sess->last_active + SMB2_SESSION_TIMEOUT)) { xa_erase(&conn->sessions, sess->id); + hash_del(&sess->hlist); ksmbd_session_destroy(sess); continue; } } + up_write(&sessions_table_lock); }
int ksmbd_session_register(struct ksmbd_conn *conn, @@ -207,15 +206,16 @@ int ksmbd_session_register(struct ksmbd_conn *conn, return xa_err(xa_store(&conn->sessions, sess->id, sess, GFP_KERNEL)); }
-static void ksmbd_chann_del(struct ksmbd_conn *conn, struct ksmbd_session *sess) +static int ksmbd_chann_del(struct ksmbd_conn *conn, struct ksmbd_session *sess) { struct channel *chann;
chann = xa_erase(&sess->ksmbd_chann_list, (long)conn); if (!chann) - return; + return -ENOENT;
kfree(chann); + return 0; }
void ksmbd_sessions_deregister(struct ksmbd_conn *conn) @@ -223,13 +223,37 @@ void ksmbd_sessions_deregister(struct ksmbd_conn *conn) struct ksmbd_session *sess; unsigned long id;
+ down_write(&sessions_table_lock); + if (conn->binding) { + int bkt; + struct hlist_node *tmp; + + hash_for_each_safe(sessions_table, bkt, tmp, sess, hlist) { + if (!ksmbd_chann_del(conn, sess) && + xa_empty(&sess->ksmbd_chann_list)) { + hash_del(&sess->hlist); + ksmbd_session_destroy(sess); + } + } + } + xa_for_each(&conn->sessions, id, sess) { + unsigned long chann_id; + struct channel *chann; + + xa_for_each(&sess->ksmbd_chann_list, chann_id, chann) { + if (chann->conn != conn) + ksmbd_conn_set_exiting(chann->conn); + } + ksmbd_chann_del(conn, sess); if (xa_empty(&sess->ksmbd_chann_list)) { xa_erase(&conn->sessions, sess->id); + hash_del(&sess->hlist); ksmbd_session_destroy(sess); } } + up_write(&sessions_table_lock); }
struct ksmbd_session *ksmbd_session_lookup(struct ksmbd_conn *conn, diff --git a/fs/ksmbd/smb2pdu.c b/fs/ksmbd/smb2pdu.c index e7594a56cbfe3..8f96b96dbac1a 100644 --- a/fs/ksmbd/smb2pdu.c +++ b/fs/ksmbd/smb2pdu.c @@ -2097,21 +2097,22 @@ int smb2_session_logoff(struct ksmbd_work *work) struct smb2_logoff_rsp *rsp = smb2_get_msg(work->response_buf); struct ksmbd_session *sess; struct smb2_logoff_req *req = smb2_get_msg(work->request_buf); + u64 sess_id = le64_to_cpu(req->hdr.SessionId);
rsp->StructureSize = cpu_to_le16(4); inc_rfc1001_len(work->response_buf, 4);
ksmbd_debug(SMB, "request\n");
- ksmbd_conn_set_need_reconnect(conn); + ksmbd_all_conn_set_status(sess_id, KSMBD_SESS_NEED_RECONNECT); ksmbd_close_session_fds(work); - ksmbd_conn_wait_idle(conn); + ksmbd_conn_wait_idle(conn, sess_id);
/* * Re-lookup session to validate if session is deleted * while waiting request complete */ - sess = ksmbd_session_lookup(conn, le64_to_cpu(req->hdr.SessionId)); + sess = ksmbd_session_lookup_all(conn, sess_id); if (ksmbd_tree_conn_session_logoff(sess)) { ksmbd_debug(SMB, "Invalid tid %d\n", req->hdr.Id.SyncId.TreeId); rsp->hdr.Status = STATUS_NETWORK_NAME_DELETED; @@ -2124,7 +2125,7 @@ int smb2_session_logoff(struct ksmbd_work *work)
ksmbd_free_user(sess->user); sess->user = NULL; - ksmbd_conn_set_need_negotiate(conn); + ksmbd_all_conn_set_status(sess_id, KSMBD_SESS_NEED_NEGOTIATE); return 0; }
@@ -6952,7 +6953,7 @@ int smb2_lock(struct ksmbd_work *work)
nolock = 1; /* check locks in connection list */ - read_lock(&conn_list_lock); + down_read(&conn_list_lock); list_for_each_entry(conn, &conn_list, conns_list) { spin_lock(&conn->llist_lock); list_for_each_entry_safe(cmp_lock, tmp2, &conn->lock_list, clist) { @@ -6969,7 +6970,7 @@ int smb2_lock(struct ksmbd_work *work) list_del(&cmp_lock->flist); list_del(&cmp_lock->clist); spin_unlock(&conn->llist_lock); - read_unlock(&conn_list_lock); + up_read(&conn_list_lock);
locks_free_lock(cmp_lock->fl); kfree(cmp_lock); @@ -6991,7 +6992,7 @@ int smb2_lock(struct ksmbd_work *work) cmp_lock->start > smb_lock->start && cmp_lock->start < smb_lock->end) { spin_unlock(&conn->llist_lock); - read_unlock(&conn_list_lock); + up_read(&conn_list_lock); pr_err("previous lock conflict with zero byte lock range\n"); goto out; } @@ -7000,7 +7001,7 @@ int smb2_lock(struct ksmbd_work *work) smb_lock->start > cmp_lock->start && smb_lock->start < cmp_lock->end) { spin_unlock(&conn->llist_lock); - read_unlock(&conn_list_lock); + up_read(&conn_list_lock); pr_err("current lock conflict with zero byte lock range\n"); goto out; } @@ -7011,14 +7012,14 @@ int smb2_lock(struct ksmbd_work *work) cmp_lock->end >= smb_lock->end)) && !cmp_lock->zero_len && !smb_lock->zero_len) { spin_unlock(&conn->llist_lock); - read_unlock(&conn_list_lock); + up_read(&conn_list_lock); pr_err("Not allow lock operation on exclusive lock range\n"); goto out; } } spin_unlock(&conn->llist_lock); } - read_unlock(&conn_list_lock); + up_read(&conn_list_lock); out_check_cl: if (smb_lock->fl->fl_type == F_UNLCK && nolock) { pr_err("Try to unlock nolocked range\n");
From: Stanislav Lisovskiy stanislav.lisovskiy@intel.com
[ Upstream commit 1482ec00be4a3634aeffbcc799791a723df69339 ]
Add DP DSC register definitions that we might need for further DSC work supporting MST and DP branch pass-through mode.
v2: - Fixed checkpatch comment warning v3: - Removed function which is not yet used (Jani Nikula)
Reviewed-by: Vinod Govindapillai vinod.govindapillai@intel.com Acked-by: Maarten Lankhorst maarten.lankhorst@linux.intel.com Signed-off-by: Stanislav Lisovskiy stanislav.lisovskiy@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20221101094222.22091-2-stanisl... Stable-dep-of: 13525645e224 ("drm/dsc: fix drm_edp_dsc_sink_output_bpp() DPCD high byte usage") Signed-off-by: Sasha Levin sashal@kernel.org --- include/drm/display/drm_dp.h | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/include/drm/display/drm_dp.h b/include/drm/display/drm_dp.h index e934aab357bea..9bc22a02874d9 100644 --- a/include/drm/display/drm_dp.h +++ b/include/drm/display/drm_dp.h @@ -240,6 +240,8 @@ #define DP_DSC_SUPPORT 0x060 /* DP 1.4 */ # define DP_DSC_DECOMPRESSION_IS_SUPPORTED (1 << 0) # define DP_DSC_PASSTHROUGH_IS_SUPPORTED (1 << 1) +# define DP_DSC_DYNAMIC_PPS_UPDATE_SUPPORT_COMP_TO_COMP (1 << 2) +# define DP_DSC_DYNAMIC_PPS_UPDATE_SUPPORT_UNCOMP_TO_COMP (1 << 3)
#define DP_DSC_REV 0x061 # define DP_DSC_MAJOR_MASK (0xf << 0) @@ -278,12 +280,15 @@
#define DP_DSC_BLK_PREDICTION_SUPPORT 0x066 # define DP_DSC_BLK_PREDICTION_IS_SUPPORTED (1 << 0) +# define DP_DSC_RGB_COLOR_CONV_BYPASS_SUPPORT (1 << 1)
#define DP_DSC_MAX_BITS_PER_PIXEL_LOW 0x067 /* eDP 1.4 */
#define DP_DSC_MAX_BITS_PER_PIXEL_HI 0x068 /* eDP 1.4 */ # define DP_DSC_MAX_BITS_PER_PIXEL_HI_MASK (0x3 << 0) # define DP_DSC_MAX_BITS_PER_PIXEL_HI_SHIFT 8 +# define DP_DSC_MAX_BPP_DELTA_VERSION_MASK 0x06 +# define DP_DSC_MAX_BPP_DELTA_AVAILABILITY 0x08
#define DP_DSC_DEC_COLOR_FORMAT_CAP 0x069 # define DP_DSC_RGB (1 << 0) @@ -345,11 +350,13 @@ # define DP_DSC_24_PER_DP_DSC_SINK (1 << 2)
#define DP_DSC_BITS_PER_PIXEL_INC 0x06F +# define DP_DSC_RGB_YCbCr444_MAX_BPP_DELTA_MASK 0x1f +# define DP_DSC_RGB_YCbCr420_MAX_BPP_DELTA_MASK 0xe0 # define DP_DSC_BITS_PER_PIXEL_1_16 0x0 # define DP_DSC_BITS_PER_PIXEL_1_8 0x1 # define DP_DSC_BITS_PER_PIXEL_1_4 0x2 # define DP_DSC_BITS_PER_PIXEL_1_2 0x3 -# define DP_DSC_BITS_PER_PIXEL_1 0x4 +# define DP_DSC_BITS_PER_PIXEL_1_1 0x4
#define DP_PSR_SUPPORT 0x070 /* XXX 1.2? */ # define DP_PSR_IS_SUPPORTED 1
From: Jani Nikula jani.nikula@intel.com
[ Upstream commit 13525645e2246ebc8a21bd656248d86022a6ee8f ]
The operator precedence between << and & is wrong, leading to the high byte being completely ignored. For example, with the 6.4 format, 32 becomes 0 and 24 becomes 8. Fix it, and remove the slightly confusing and unnecessary DP_DSC_MAX_BITS_PER_PIXEL_HI_SHIFT macro while at it.
Fixes: 0575650077ea ("drm/dp: DRM DP helper/macros to get DP sink DSC parameters") Cc: Stanislav Lisovskiy stanislav.lisovskiy@intel.com Cc: Manasi Navare navaremanasi@google.com Cc: Anusha Srivatsa anusha.srivatsa@intel.com Cc: stable@vger.kernel.org # v5.0+ Signed-off-by: Jani Nikula jani.nikula@intel.com Reviewed-by: Ankit Nautiyal ankit.k.nautiyal@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20230406134615.1422509-1-jani.... Signed-off-by: Sasha Levin sashal@kernel.org --- include/drm/display/drm_dp.h | 1 - include/drm/display/drm_dp_helper.h | 5 ++--- 2 files changed, 2 insertions(+), 4 deletions(-)
diff --git a/include/drm/display/drm_dp.h b/include/drm/display/drm_dp.h index 9bc22a02874d9..50428ba92ce8b 100644 --- a/include/drm/display/drm_dp.h +++ b/include/drm/display/drm_dp.h @@ -286,7 +286,6 @@
#define DP_DSC_MAX_BITS_PER_PIXEL_HI 0x068 /* eDP 1.4 */ # define DP_DSC_MAX_BITS_PER_PIXEL_HI_MASK (0x3 << 0) -# define DP_DSC_MAX_BITS_PER_PIXEL_HI_SHIFT 8 # define DP_DSC_MAX_BPP_DELTA_VERSION_MASK 0x06 # define DP_DSC_MAX_BPP_DELTA_AVAILABILITY 0x08
diff --git a/include/drm/display/drm_dp_helper.h b/include/drm/display/drm_dp_helper.h index ab55453f2d2cd..ade9df59e156a 100644 --- a/include/drm/display/drm_dp_helper.h +++ b/include/drm/display/drm_dp_helper.h @@ -181,9 +181,8 @@ static inline u16 drm_edp_dsc_sink_output_bpp(const u8 dsc_dpcd[DP_DSC_RECEIVER_CAP_SIZE]) { return dsc_dpcd[DP_DSC_MAX_BITS_PER_PIXEL_LOW - DP_DSC_SUPPORT] | - (dsc_dpcd[DP_DSC_MAX_BITS_PER_PIXEL_HI - DP_DSC_SUPPORT] & - DP_DSC_MAX_BITS_PER_PIXEL_HI_MASK << - DP_DSC_MAX_BITS_PER_PIXEL_HI_SHIFT); + ((dsc_dpcd[DP_DSC_MAX_BITS_PER_PIXEL_HI - DP_DSC_SUPPORT] & + DP_DSC_MAX_BITS_PER_PIXEL_HI_MASK) << 8); }
static inline u32
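To make the precedence issue concrete, here is a small userspace sketch with made-up DPCD byte values (it is not the drm helper itself); the max bpp is 6.4 fixed point split across a low byte and a 2-bit high field:

#include <stdio.h>

int main(void)
{
	unsigned char lo = 0x80, hi = 0x01;	/* encodes 24 bpp: 24 * 16 = 0x180 */

	/* '<<' binds tighter than '&', so hi is masked against 0x300 and the
	 * high bits are thrown away: the result is 0x80, i.e. 8 bpp */
	unsigned int buggy = lo | (hi & 0x3 << 8);

	/* masking first, then shifting, keeps the high byte: 0x180, i.e. 24 bpp */
	unsigned int fixed = lo | ((hi & 0x3) << 8);

	printf("buggy=%u bpp, fixed=%u bpp\n", buggy >> 4, fixed >> 4);
	return 0;
}

Running this prints "buggy=8 bpp, fixed=24 bpp", matching the 24-becomes-8 example in the commit message.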
From: John Stultz jstultz@google.com
commit 92cc5d00a431e96e5a49c0b97e5ad4fa7536bd4b upstream.
Apparently despite it being marked inline, the compiler may not inline __down_read_common() which makes it difficult to identify the cause of lock contention, as the blocked function in traceevents will always be listed as __down_read_common().
So this patch adds __always_inline annotation to the common function (as well as the inlined helper callers) to force it to be inlined so the blocking function will be listed (via Wchan) in traceevents.
Fixes: c995e638ccbb ("locking/rwsem: Fold __down_{read,write}*()") Reported-by: Tim Murray timmurray@google.com Signed-off-by: John Stultz jstultz@google.com Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Reviewed-by: Waiman Long longman@redhat.com Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20230503023351.2832796-1-jstultz@google.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- kernel/locking/rwsem.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-)
--- a/kernel/locking/rwsem.c +++ b/kernel/locking/rwsem.c @@ -1251,7 +1251,7 @@ static struct rw_semaphore *rwsem_downgr /* * lock for reading */ -static inline int __down_read_common(struct rw_semaphore *sem, int state) +static __always_inline int __down_read_common(struct rw_semaphore *sem, int state) { int ret = 0; long count; @@ -1269,17 +1269,17 @@ out: return ret; }
-static inline void __down_read(struct rw_semaphore *sem) +static __always_inline void __down_read(struct rw_semaphore *sem) { __down_read_common(sem, TASK_UNINTERRUPTIBLE); }
-static inline int __down_read_interruptible(struct rw_semaphore *sem) +static __always_inline int __down_read_interruptible(struct rw_semaphore *sem) { return __down_read_common(sem, TASK_INTERRUPTIBLE); }
-static inline int __down_read_killable(struct rw_semaphore *sem) +static __always_inline int __down_read_killable(struct rw_semaphore *sem) { return __down_read_common(sem, TASK_KILLABLE); }
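The pattern, reduced to a sketch with made-up names (the real change is exactly the annotation swap shown in the diff): with plain inline the compiler is free to emit the common slow path out of line, so every blocked task reports that one function; __always_inline forces each wrapper to carry its own inlined copy, so the distinct entry points show up in wchan/traceevents instead.

#include <linux/compiler.h>

static __always_inline int example_down_common(int state)
{
	/* ... slow path that may block; this is what used to show up in wchan ... */
	return state;
}

static __always_inline void example_down(void)
{
	example_down_common(0);
}

static __always_inline int example_down_killable(void)
{
	return example_down_common(1);
}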
From: Ye Bin yebin10@huawei.com
commit fa08a7b61dff8a4df11ff1e84abfc214b487caf7 upstream.
Syzbot found the following issue:
EXT4-fs: Warning: mounting with data=journal disables delayed allocation, dioread_nolock, O_DIRECT and fast_commit support! EXT4-fs (loop0): orphan cleanup on readonly fs ------------[ cut here ]------------ WARNING: CPU: 1 PID: 5067 at fs/ext4/mballoc.c:1869 mb_find_extent+0x8a1/0xe30 Modules linked in: CPU: 1 PID: 5067 Comm: syz-executor307 Not tainted 6.2.0-rc1-syzkaller #0 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/26/2022 RIP: 0010:mb_find_extent+0x8a1/0xe30 fs/ext4/mballoc.c:1869 RSP: 0018:ffffc90003c9e098 EFLAGS: 00010293 RAX: ffffffff82405731 RBX: 0000000000000041 RCX: ffff8880783457c0 RDX: 0000000000000000 RSI: 0000000000000041 RDI: 0000000000000040 RBP: 0000000000000040 R08: ffffffff82405723 R09: ffffed10053c9402 R10: ffffed10053c9402 R11: 1ffff110053c9401 R12: 0000000000000000 R13: ffffc90003c9e538 R14: dffffc0000000000 R15: ffffc90003c9e2cc FS: 0000555556665300(0000) GS:ffff8880b9900000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 000056312f6796f8 CR3: 0000000022437000 CR4: 00000000003506e0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: <TASK> ext4_mb_complex_scan_group+0x353/0x1100 fs/ext4/mballoc.c:2307 ext4_mb_regular_allocator+0x1533/0x3860 fs/ext4/mballoc.c:2735 ext4_mb_new_blocks+0xddf/0x3db0 fs/ext4/mballoc.c:5605 ext4_ext_map_blocks+0x1868/0x6880 fs/ext4/extents.c:4286 ext4_map_blocks+0xa49/0x1cc0 fs/ext4/inode.c:651 ext4_getblk+0x1b9/0x770 fs/ext4/inode.c:864 ext4_bread+0x2a/0x170 fs/ext4/inode.c:920 ext4_quota_write+0x225/0x570 fs/ext4/super.c:7105 write_blk fs/quota/quota_tree.c:64 [inline] get_free_dqblk+0x34a/0x6d0 fs/quota/quota_tree.c:130 do_insert_tree+0x26b/0x1aa0 fs/quota/quota_tree.c:340 do_insert_tree+0x722/0x1aa0 fs/quota/quota_tree.c:375 do_insert_tree+0x722/0x1aa0 fs/quota/quota_tree.c:375 do_insert_tree+0x722/0x1aa0 fs/quota/quota_tree.c:375 dq_insert_tree fs/quota/quota_tree.c:401 [inline] qtree_write_dquot+0x3b6/0x530 fs/quota/quota_tree.c:420 v2_write_dquot+0x11b/0x190 fs/quota/quota_v2.c:358 dquot_acquire+0x348/0x670 fs/quota/dquot.c:444 ext4_acquire_dquot+0x2dc/0x400 fs/ext4/super.c:6740 dqget+0x999/0xdc0 fs/quota/dquot.c:914 __dquot_initialize+0x3d0/0xcf0 fs/quota/dquot.c:1492 ext4_process_orphan+0x57/0x2d0 fs/ext4/orphan.c:329 ext4_orphan_cleanup+0xb60/0x1340 fs/ext4/orphan.c:474 __ext4_fill_super fs/ext4/super.c:5516 [inline] ext4_fill_super+0x81cd/0x8700 fs/ext4/super.c:5644 get_tree_bdev+0x400/0x620 fs/super.c:1282 vfs_get_tree+0x88/0x270 fs/super.c:1489 do_new_mount+0x289/0xad0 fs/namespace.c:3145 do_mount fs/namespace.c:3488 [inline] __do_sys_mount fs/namespace.c:3697 [inline] __se_sys_mount+0x2d3/0x3c0 fs/namespace.c:3674 do_syscall_x64 arch/x86/entry/common.c:50 [inline] do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80 entry_SYSCALL_64_after_hwframe+0x63/0xcd
Add some debug information: mb_find_extent: mb_find_extent block=41, order=0 needed=64 next=0 ex=0/41/1@3735929054 64 64 7 block_bitmap: ff 3f 0c 00 fc 01 00 00 d2 3d 00 00 00 00 00 00 ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
Actually, blocks per group is 64, but the block bitmap indicates the group has at least 128 blocks. Currently, ext4_validate_block_bitmap() does not check whether the padding bits at the end of the block bitmap are set. To resolve the above issue, add a check similar to fsck's "Padding at end of block bitmap is not set".
Cc: stable@kernel.org Reported-by: syzbot+68223fe9f6c95ad43bed@syzkaller.appspotmail.com Signed-off-by: Ye Bin yebin10@huawei.com Reviewed-by: Jan Kara jack@suse.cz Link: https://lore.kernel.org/r/20230116020015.1506120-1-yebin@huaweicloud.com Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/ext4/balloc.c | 25 +++++++++++++++++++++++++ 1 file changed, 25 insertions(+)
--- a/fs/ext4/balloc.c +++ b/fs/ext4/balloc.c @@ -303,6 +303,22 @@ struct ext4_group_desc * ext4_get_group_ return desc; }
+static ext4_fsblk_t ext4_valid_block_bitmap_padding(struct super_block *sb, + ext4_group_t block_group, + struct buffer_head *bh) +{ + ext4_grpblk_t next_zero_bit; + unsigned long bitmap_size = sb->s_blocksize * 8; + unsigned int offset = num_clusters_in_group(sb, block_group); + + if (bitmap_size <= offset) + return 0; + + next_zero_bit = ext4_find_next_zero_bit(bh->b_data, bitmap_size, offset); + + return (next_zero_bit < bitmap_size ? next_zero_bit : 0); +} + /* * Return the block number which was discovered to be invalid, or 0 if * the block bitmap is valid. @@ -401,6 +417,15 @@ static int ext4_validate_block_bitmap(st EXT4_GROUP_INFO_BBITMAP_CORRUPT); return -EFSCORRUPTED; } + blk = ext4_valid_block_bitmap_padding(sb, block_group, bh); + if (unlikely(blk != 0)) { + ext4_unlock_group(sb, block_group); + ext4_error(sb, "bg %u: block %llu: padding at end of block bitmap is not set", + block_group, blk); + ext4_mark_group_bitmap_corrupted(sb, block_group, + EXT4_GROUP_INFO_BBITMAP_CORRUPT); + return -EFSCORRUPTED; + } set_buffer_verified(bh); verified: ext4_unlock_group(sb, block_group);
From: Tudor Ambarus tudor.ambarus@linaro.org
commit 4f04351888a83e595571de672e0a4a8b74f4fb31 upstream.
When modifying the block device while it is mounted by the filesystem, syzbot reported the following:
BUG: KASAN: slab-out-of-bounds in crc16+0x206/0x280 lib/crc16.c:58 Read of size 1 at addr ffff888075f5c0a8 by task syz-executor.2/15586
CPU: 1 PID: 15586 Comm: syz-executor.2 Not tainted 6.2.0-rc5-syzkaller-00205-gc96618275234 #0 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/12/2023 Call Trace: <TASK> __dump_stack lib/dump_stack.c:88 [inline] dump_stack_lvl+0x1b1/0x290 lib/dump_stack.c:106 print_address_description+0x74/0x340 mm/kasan/report.c:306 print_report+0x107/0x1f0 mm/kasan/report.c:417 kasan_report+0xcd/0x100 mm/kasan/report.c:517 crc16+0x206/0x280 lib/crc16.c:58 ext4_group_desc_csum+0x81b/0xb20 fs/ext4/super.c:3187 ext4_group_desc_csum_set+0x195/0x230 fs/ext4/super.c:3210 ext4_mb_clear_bb fs/ext4/mballoc.c:6027 [inline] ext4_free_blocks+0x191a/0x2810 fs/ext4/mballoc.c:6173 ext4_remove_blocks fs/ext4/extents.c:2527 [inline] ext4_ext_rm_leaf fs/ext4/extents.c:2710 [inline] ext4_ext_remove_space+0x24ef/0x46a0 fs/ext4/extents.c:2958 ext4_ext_truncate+0x177/0x220 fs/ext4/extents.c:4416 ext4_truncate+0xa6a/0xea0 fs/ext4/inode.c:4342 ext4_setattr+0x10c8/0x1930 fs/ext4/inode.c:5622 notify_change+0xe50/0x1100 fs/attr.c:482 do_truncate+0x200/0x2f0 fs/open.c:65 handle_truncate fs/namei.c:3216 [inline] do_open fs/namei.c:3561 [inline] path_openat+0x272b/0x2dd0 fs/namei.c:3714 do_filp_open+0x264/0x4f0 fs/namei.c:3741 do_sys_openat2+0x124/0x4e0 fs/open.c:1310 do_sys_open fs/open.c:1326 [inline] __do_sys_creat fs/open.c:1402 [inline] __se_sys_creat fs/open.c:1396 [inline] __x64_sys_creat+0x11f/0x160 fs/open.c:1396 do_syscall_x64 arch/x86/entry/common.c:50 [inline] do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80 entry_SYSCALL_64_after_hwframe+0x63/0xcd RIP: 0033:0x7f72f8a8c0c9 Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 f1 19 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48 RSP: 002b:00007f72f97e3168 EFLAGS: 00000246 ORIG_RAX: 0000000000000055 RAX: ffffffffffffffda RBX: 00007f72f8bac050 RCX: 00007f72f8a8c0c9 RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000020000280 RBP: 00007f72f8ae7ae9 R08: 0000000000000000 R09: 0000000000000000 R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000 R13: 00007ffd165348bf R14: 00007f72f97e3300 R15: 0000000000022000
Replace le16_to_cpu(sbi->s_es->s_desc_size) with sbi->s_desc_size
It reduces ext4's compiled text size, and makes the code more efficient (we remove an extra indirect reference and a potential byte swap on big endian systems), and there is no downside. It also avoids the potential KASAN / syzkaller failure, as a bonus.
Reported-by: syzbot+fc51227e7100c9294894@syzkaller.appspotmail.com Reported-by: syzbot+8785e41224a3afd04321@syzkaller.appspotmail.com Link: https://syzkaller.appspot.com/bug?id=70d28d11ab14bd7938f3e088365252aa923cff4... Link: https://syzkaller.appspot.com/bug?id=b85721b38583ecc6b5e72ff524c67302abbc30f... Link: https://lore.kernel.org/all/000000000000ece18705f3b20934@google.com/ Fixes: 717d50e4971b ("Ext4: Uninitialized Block Groups") Cc: stable@vger.kernel.org Signed-off-by: Tudor Ambarus tudor.ambarus@linaro.org Link: https://lore.kernel.org/r/20230504121525.3275886-1-tudor.ambarus@linaro.org Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/ext4/super.c | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-)
--- a/fs/ext4/super.c +++ b/fs/ext4/super.c @@ -3195,11 +3195,9 @@ static __le16 ext4_group_desc_csum(struc crc = crc16(crc, (__u8 *)gdp, offset); offset += sizeof(gdp->bg_checksum); /* skip checksum */ /* for checksum of struct ext4_group_desc do the rest...*/ - if (ext4_has_feature_64bit(sb) && - offset < le16_to_cpu(sbi->s_es->s_desc_size)) + if (ext4_has_feature_64bit(sb) && offset < sbi->s_desc_size) crc = crc16(crc, (__u8 *)gdp + offset, - le16_to_cpu(sbi->s_es->s_desc_size) - - offset); + sbi->s_desc_size - offset);
out: return cpu_to_le16(crc);
From: Jan Kara jack@suse.cz
commit 492888df0c7b42fc0843631168b0021bc4caee84 upstream.
When using the cached extent stored in tree->cache_es of the extent status tree, another process holding ei->i_es_lock for reading can race with us setting a new value of tree->cache_es. If the compiler decided to refetch tree->cache_es at an unfortunate moment, it could result in a bogus in_range() check. Fix the possible race by using READ_ONCE() when tree->cache_es is used only under ei->i_es_lock held for reading.
Cc: stable@kernel.org Reported-by: syzbot+4a03518df1e31b537066@syzkaller.appspotmail.com Link: https://lore.kernel.org/all/000000000000d3b33905fa0fd4a6@google.com Suggested-by: Dmitry Vyukov dvyukov@google.com Signed-off-by: Jan Kara jack@suse.cz Link: https://lore.kernel.org/r/20230504125524.10802-1-jack@suse.cz Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/ext4/extents_status.c | 30 +++++++++++++----------------- 1 file changed, 13 insertions(+), 17 deletions(-)
--- a/fs/ext4/extents_status.c +++ b/fs/ext4/extents_status.c @@ -269,14 +269,12 @@ static void __es_find_extent_range(struc
/* see if the extent has been cached */ es->es_lblk = es->es_len = es->es_pblk = 0; - if (tree->cache_es) { - es1 = tree->cache_es; - if (in_range(lblk, es1->es_lblk, es1->es_len)) { - es_debug("%u cached by [%u/%u) %llu %x\n", - lblk, es1->es_lblk, es1->es_len, - ext4_es_pblock(es1), ext4_es_status(es1)); - goto out; - } + es1 = READ_ONCE(tree->cache_es); + if (es1 && in_range(lblk, es1->es_lblk, es1->es_len)) { + es_debug("%u cached by [%u/%u) %llu %x\n", + lblk, es1->es_lblk, es1->es_len, + ext4_es_pblock(es1), ext4_es_status(es1)); + goto out; }
es1 = __es_tree_search(&tree->root, lblk); @@ -295,7 +293,7 @@ out: }
if (es1 && matching_fn(es1)) { - tree->cache_es = es1; + WRITE_ONCE(tree->cache_es, es1); es->es_lblk = es1->es_lblk; es->es_len = es1->es_len; es->es_pblk = es1->es_pblk; @@ -933,14 +931,12 @@ int ext4_es_lookup_extent(struct inode *
/* find extent in cache firstly */ es->es_lblk = es->es_len = es->es_pblk = 0; - if (tree->cache_es) { - es1 = tree->cache_es; - if (in_range(lblk, es1->es_lblk, es1->es_len)) { - es_debug("%u cached by [%u/%u)\n", - lblk, es1->es_lblk, es1->es_len); - found = 1; - goto out; - } + es1 = READ_ONCE(tree->cache_es); + if (es1 && in_range(lblk, es1->es_lblk, es1->es_len)) { + es_debug("%u cached by [%u/%u)\n", + lblk, es1->es_lblk, es1->es_len); + found = 1; + goto out; }
node = tree->root.rb_node;
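The idiom, sketched with illustrative names (not the ext4 structures): the reader takes a single READ_ONCE() snapshot of the cached pointer so the compiler cannot refetch it between the range check and the dereference, and the writer publishes a new value with WRITE_ONCE().

#include <linux/compiler.h>
#include <linux/types.h>

struct example_extent {
	unsigned int lblk;
	unsigned int len;
};

struct example_tree {
	struct example_extent *cache_es;
};

static struct example_extent *example_lookup_cached(struct example_tree *tree,
						    unsigned int lblk)
{
	/* one snapshot; a later load cannot see a different pointer */
	struct example_extent *es = READ_ONCE(tree->cache_es);

	if (es && lblk >= es->lblk && lblk < es->lblk + es->len)
		return es;
	return NULL;
}

static void example_publish_cached(struct example_tree *tree,
				   struct example_extent *es)
{
	WRITE_ONCE(tree->cache_es, es);
}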
From: Baokun Li libaokun1@huawei.com
commit fa83c34e3e56b3c672af38059e066242655271b1 upstream.
When ext4_iomap_overwrite_begin() calls ext4_iomap_begin(), mapping blocks may fail for some reason (e.g. memory allocation failure, bare disk write), and the subsequent "iomap->type != IOMAP_MAPPED" check then triggers a WARN_ON(). When ext4_iomap_begin() returns an error, it is normal for iomap->type not to match the expectation. Therefore, only check that iomap->type is as expected when ext4_iomap_begin() completed successfully.
Cc: stable@kernel.org Reported-by: syzbot+08106c4b7d60702dbc14@syzkaller.appspotmail.com Link: https://lore.kernel.org/all/00000000000015760b05f9b4eee9@google.com Signed-off-by: Baokun Li libaokun1@huawei.com Reviewed-by: Jan Kara jack@suse.cz Link: https://lore.kernel.org/r/20230505132429.714648-1-libaokun1@huawei.com Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/ext4/inode.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -3503,7 +3503,7 @@ static int ext4_iomap_overwrite_begin(st */ flags &= ~IOMAP_WRITE; ret = ext4_iomap_begin(inode, offset, length, flags, iomap, srcmap); - WARN_ON_ONCE(iomap->type != IOMAP_MAPPED); + WARN_ON_ONCE(!ret && iomap->type != IOMAP_MAPPED); return ret; }
From: Theodore Ts'o tytso@mit.edu
commit 4c0b4818b1f636bc96359f7817a2d8bab6370162 upstream.
If there are failures while changing the mount options in __ext4_remount(), we need to restore the old mount options.
This commit fixes two problems. The first is that there is a chance we will free the old quota file names before a potential failure, leading to a use-after-free. The second problem addressed in this commit is that if a read/write to read-only transition fails and the quota has already been suspended, we need to re-enable quota handling.
Cc: stable@kernel.org Link: https://lore.kernel.org/r/20230506142419.984260-2-tytso@mit.edu Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/ext4/super.c | 13 ++++++++++--- 1 file changed, 10 insertions(+), 3 deletions(-)
--- a/fs/ext4/super.c +++ b/fs/ext4/super.c @@ -6566,9 +6566,6 @@ static int __ext4_remount(struct fs_cont }
#ifdef CONFIG_QUOTA - /* Release old quota file names */ - for (i = 0; i < EXT4_MAXQUOTAS; i++) - kfree(old_opts.s_qf_names[i]); if (enable_quota) { if (sb_any_quota_suspended(sb)) dquot_resume(sb, -1); @@ -6578,6 +6575,9 @@ static int __ext4_remount(struct fs_cont goto restore_opts; } } + /* Release old quota file names */ + for (i = 0; i < EXT4_MAXQUOTAS; i++) + kfree(old_opts.s_qf_names[i]); #endif if (!test_opt(sb, BLOCK_VALIDITY) && sbi->s_system_blks) ext4_release_system_zone(sb); @@ -6588,6 +6588,13 @@ static int __ext4_remount(struct fs_cont return 0;
restore_opts: + /* + * If there was a failing r/w to ro transition, we may need to + * re-enable quota + */ + if ((sb->s_flags & SB_RDONLY) && !(old_sb_flags & SB_RDONLY) && + sb_any_quota_suspended(sb)) + dquot_resume(sb, -1); sb->s_flags = old_sb_flags; sbi->s_mount_opt = old_opts.s_mount_opt; sbi->s_mount_opt2 = old_opts.s_mount_opt2;
From: Theodore Ts'o tytso@mit.edu
commit 4b3cb1d108bfc2aebb0d7c8a52261a53cf7f5786 upstream.
The ext4_dirhash() function will *almost* never fail, and that was especially true when the hash tree feature was first introduced. However, with the addition of support for encrypted, casefolded file names, that function can most certainly fail today.
So make sure the callers of ext4_dirhash() properly check for failures, and reflect the errors back up to their callers.
Cc: stable@kernel.org Link: https://lore.kernel.org/r/20230506142419.984260-1-tytso@mit.edu Reported-by: syzbot+394aa8a792cb99dbc837@syzkaller.appspotmail.com Reported-by: syzbot+344aaa8697ebd232bfc8@syzkaller.appspotmail.com Link: https://syzkaller.appspot.com/bug?id=db56459ea4ac4a676ae4b4678f633e55da005a9... Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/ext4/hash.c | 6 +++++- fs/ext4/namei.c | 53 +++++++++++++++++++++++++++++++++++++---------------- 2 files changed, 42 insertions(+), 17 deletions(-)
--- a/fs/ext4/hash.c +++ b/fs/ext4/hash.c @@ -277,7 +277,11 @@ static int __ext4fs_dirhash(const struct } default: hinfo->hash = 0; - return -1; + hinfo->minor_hash = 0; + ext4_warning(dir->i_sb, + "invalid/unsupported hash tree version %u", + hinfo->hash_version); + return -EINVAL; } hash = hash & ~1; if (hash == (EXT4_HTREE_EOF_32BIT << 1)) --- a/fs/ext4/namei.c +++ b/fs/ext4/namei.c @@ -674,7 +674,7 @@ static struct stats dx_show_leaf(struct len = de->name_len; if (!IS_ENCRYPTED(dir)) { /* Directory is not encrypted */ - ext4fs_dirhash(dir, de->name, + (void) ext4fs_dirhash(dir, de->name, de->name_len, &h); printk("%*.s:(U)%x.%u ", len, name, h.hash, @@ -709,8 +709,9 @@ static struct stats dx_show_leaf(struct if (IS_CASEFOLDED(dir)) h.hash = EXT4_DIRENT_HASH(de); else - ext4fs_dirhash(dir, de->name, - de->name_len, &h); + (void) ext4fs_dirhash(dir, + de->name, + de->name_len, &h); printk("%*.s:(E)%x.%u ", len, name, h.hash, (unsigned) ((char *) de - base)); @@ -720,7 +721,8 @@ static struct stats dx_show_leaf(struct #else int len = de->name_len; char *name = de->name; - ext4fs_dirhash(dir, de->name, de->name_len, &h); + (void) ext4fs_dirhash(dir, de->name, + de->name_len, &h); printk("%*.s:%x.%u ", len, name, h.hash, (unsigned) ((char *) de - base)); #endif @@ -849,8 +851,14 @@ dx_probe(struct ext4_filename *fname, st hinfo->seed = EXT4_SB(dir->i_sb)->s_hash_seed; /* hash is already computed for encrypted casefolded directory */ if (fname && fname_name(fname) && - !(IS_ENCRYPTED(dir) && IS_CASEFOLDED(dir))) - ext4fs_dirhash(dir, fname_name(fname), fname_len(fname), hinfo); + !(IS_ENCRYPTED(dir) && IS_CASEFOLDED(dir))) { + int ret = ext4fs_dirhash(dir, fname_name(fname), + fname_len(fname), hinfo); + if (ret < 0) { + ret_err = ERR_PTR(ret); + goto fail; + } + } hash = hinfo->hash;
if (root->info.unused_flags & 1) { @@ -1111,7 +1119,12 @@ static int htree_dirblock_to_tree(struct hinfo->minor_hash = 0; } } else { - ext4fs_dirhash(dir, de->name, de->name_len, hinfo); + err = ext4fs_dirhash(dir, de->name, + de->name_len, hinfo); + if (err < 0) { + count = err; + goto errout; + } } if ((hinfo->hash < start_hash) || ((hinfo->hash == start_hash) && @@ -1313,8 +1326,12 @@ static int dx_make_map(struct inode *dir if (de->name_len && de->inode) { if (ext4_hash_in_dirent(dir)) h.hash = EXT4_DIRENT_HASH(de); - else - ext4fs_dirhash(dir, de->name, de->name_len, &h); + else { + int err = ext4fs_dirhash(dir, de->name, + de->name_len, &h); + if (err < 0) + return err; + } map_tail--; map_tail->hash = h.hash; map_tail->offs = ((char *) de - base)>>2; @@ -1452,10 +1469,9 @@ int ext4_fname_setup_ci_filename(struct hinfo->hash_version = DX_HASH_SIPHASH; hinfo->seed = NULL; if (cf_name->name) - ext4fs_dirhash(dir, cf_name->name, cf_name->len, hinfo); + return ext4fs_dirhash(dir, cf_name->name, cf_name->len, hinfo); else - ext4fs_dirhash(dir, iname->name, iname->len, hinfo); - return 0; + return ext4fs_dirhash(dir, iname->name, iname->len, hinfo); } #endif
@@ -2298,10 +2314,15 @@ static int make_indexed_dir(handle_t *ha fname->hinfo.seed = EXT4_SB(dir->i_sb)->s_hash_seed;
/* casefolded encrypted hashes are computed on fname setup */ - if (!ext4_hash_in_dirent(dir)) - ext4fs_dirhash(dir, fname_name(fname), - fname_len(fname), &fname->hinfo); - + if (!ext4_hash_in_dirent(dir)) { + int err = ext4fs_dirhash(dir, fname_name(fname), + fname_len(fname), &fname->hinfo); + if (err < 0) { + brelse(bh2); + brelse(bh); + return err; + } + } memset(frames, 0, sizeof(frames)); frame = frames; frame->entries = entries;
From: Theodore Ts'o tytso@mit.edu
commit f4ce24f54d9cca4f09a395f3eecce20d6bec4663 upstream.
In no journal mode, ext4_finish_convert_inline_dir() can self-deadlock by calling ext4_handle_dirty_dirblock() when it has already taken the directory lock. There is a similar self-deadlock in ext4_convert_inline_data_nolock() for data files, which we'll fix at the same time.
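The shape of the bug, as a hedged sketch (illustrative code, not the exact ext4 call chain): the buffer is still locked when the dirtying helper is called, and in no-journal mode that helper can try to lock the same buffer again, so the fix is simply to drop the buffer lock first.

    lock_buffer(bh);
    memcpy(bh->b_data, buf, len);           /* fill in the converted block */
    set_buffer_uptodate(bh);
    unlock_buffer(bh);                      /* drop the lock before dirtying ... */
    err = ext4_handle_dirty_metadata(handle, inode, bh);
    /*
     * ... because with no journal and a sync mount (dirsync in the
     * reproducer below) this path can end up in sync_dirty_buffer(),
     * which does lock_buffer(bh) itself and would deadlock against
     * the lock we were still holding.
     */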
A simple reproducer demonstrating the problem:
mke2fs -Fq -t ext2 -O inline_data -b 4k /dev/vdc 64
mount -t ext4 -o dirsync /dev/vdc /vdc
cd /vdc
mkdir file0
cd file0
touch file0
touch file1
attr -s BurnSpaceInEA -V abcde .
touch supercalifragilisticexpialidocious
Cc: stable@kernel.org Link: https://lore.kernel.org/r/20230507021608.1290720-1-tytso@mit.edu Reported-by: syzbot+91dccab7c64e2850a4e5@syzkaller.appspotmail.com Link: https://syzkaller.appspot.com/bug?id=ba84cc80a9491d65416bc7877e1650c87530fe8... Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/ext4/inline.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/fs/ext4/inline.c +++ b/fs/ext4/inline.c @@ -1178,6 +1178,7 @@ static int ext4_finish_convert_inline_di ext4_initialize_dirent_tail(dir_block, inode->i_sb->s_blocksize); set_buffer_uptodate(dir_block); + unlock_buffer(dir_block); err = ext4_handle_dirty_dirblock(handle, inode, dir_block); if (err) return err; @@ -1252,6 +1253,7 @@ static int ext4_convert_inline_data_nolo if (!S_ISDIR(inode->i_mode)) { memcpy(data_bh->b_data, buf, inline_size); set_buffer_uptodate(data_bh); + unlock_buffer(data_bh); error = ext4_handle_dirty_metadata(handle, inode, data_bh); } else { @@ -1259,7 +1261,6 @@ static int ext4_convert_inline_data_nolo buf, inline_size); }
- unlock_buffer(data_bh); out_restore: if (error) ext4_restore_inline_data(handle, inode, iloc, buf, inline_size);
From: Theodore Ts'o tytso@mit.edu
commit 2220eaf90992c11d888fe771055d4de330385f01 upstream.
Normally the extended attributes in the inode body would have been checked when the inode is first opened, but if someone is writing to the block device while the file system is mounted, it's possible for the inode table to get corrupted. Add bounds checking to avoid reading beyond the end of allocated memory if this happens.
Reported-by: syzbot+1966db24521e5f6e23f7@syzkaller.appspotmail.com Link: https://syzkaller.appspot.com/bug?extid=1966db24521e5f6e23f7 Cc: stable@kernel.org Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/ext4/inline.c | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-)
--- a/fs/ext4/inline.c +++ b/fs/ext4/inline.c @@ -34,6 +34,7 @@ static int get_max_inline_xattr_value_si struct ext4_xattr_ibody_header *header; struct ext4_xattr_entry *entry; struct ext4_inode *raw_inode; + void *end; int free, min_offs;
if (!EXT4_INODE_HAS_XATTR_SPACE(inode)) @@ -57,14 +58,23 @@ static int get_max_inline_xattr_value_si raw_inode = ext4_raw_inode(iloc); header = IHDR(inode, raw_inode); entry = IFIRST(header); + end = (void *)raw_inode + EXT4_SB(inode->i_sb)->s_inode_size;
/* Compute min_offs. */ - for (; !IS_LAST_ENTRY(entry); entry = EXT4_XATTR_NEXT(entry)) { + while (!IS_LAST_ENTRY(entry)) { + void *next = EXT4_XATTR_NEXT(entry); + + if (next >= end) { + EXT4_ERROR_INODE(inode, + "corrupt xattr in inline inode"); + return 0; + } if (!entry->e_value_inum && entry->e_value_size) { size_t offs = le16_to_cpu(entry->e_value_offs); if (offs < min_offs) min_offs = offs; } + entry = next; } free = min_offs - ((void *)entry - (void *)IFIRST(header)) - sizeof(__u32);
From: Theodore Ts'o tytso@mit.edu
commit 2a534e1d0d1591e951f9ece2fb460b2ff92edabd upstream.
In ext4_update_inline_data(), if ext4_xattr_ibody_get() fails for any reason, it's best if we just fail as opposed to stumbling on, especially if the failure is EFSCORRUPTED.
Cc: stable@kernel.org Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/ext4/inline.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/fs/ext4/inline.c +++ b/fs/ext4/inline.c @@ -361,7 +361,7 @@ static int ext4_update_inline_data(handl
error = ext4_xattr_ibody_get(inode, i.name_index, i.name, value, len); - if (error == -ENODATA) + if (error < 0) goto out;
BUFFER_TRACE(is.iloc.bh, "get_write_access");
From: Jan Kara jack@suse.cz
commit 949f95ff39bf188e594e7ecd8e29b82eb108f5bf upstream.
When we enable MMP in ext4_multi_mount_protect() during mount or remount, we end up calling sb_start_write() from write_mmp_block(). This triggers a lockdep warning because freeze protection ranks above the s_umount semaphore we are holding during mount / remount. The problem is harmless because we are guaranteed the filesystem is not frozen during mount / remount, but let's still fix the warning by not grabbing freeze protection from ext4_multi_mount_protect().
Cc: stable@kernel.org Reported-by: syzbot+6b7df7d5506b32467149@syzkaller.appspotmail.com Link: https://syzkaller.appspot.com/bug?id=ab7e5b6f400b7778d46f01841422e5718fb8184... Signed-off-by: Jan Kara jack@suse.cz Reviewed-by: Christian Brauner brauner@kernel.org Link: https://lore.kernel.org/r/20230411121019.21940-1-jack@suse.cz Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/ext4/mmp.c | 30 +++++++++++++++++++++--------- 1 file changed, 21 insertions(+), 9 deletions(-)
--- a/fs/ext4/mmp.c +++ b/fs/ext4/mmp.c @@ -39,28 +39,36 @@ static void ext4_mmp_csum_set(struct sup * Write the MMP block using REQ_SYNC to try to get the block on-disk * faster. */ -static int write_mmp_block(struct super_block *sb, struct buffer_head *bh) +static int write_mmp_block_thawed(struct super_block *sb, + struct buffer_head *bh) { struct mmp_struct *mmp = (struct mmp_struct *)(bh->b_data);
- /* - * We protect against freezing so that we don't create dirty buffers - * on frozen filesystem. - */ - sb_start_write(sb); ext4_mmp_csum_set(sb, mmp); lock_buffer(bh); bh->b_end_io = end_buffer_write_sync; get_bh(bh); submit_bh(REQ_OP_WRITE | REQ_SYNC | REQ_META | REQ_PRIO, bh); wait_on_buffer(bh); - sb_end_write(sb); if (unlikely(!buffer_uptodate(bh))) return -EIO; - return 0; }
+static int write_mmp_block(struct super_block *sb, struct buffer_head *bh) +{ + int err; + + /* + * We protect against freezing so that we don't create dirty buffers + * on frozen filesystem. + */ + sb_start_write(sb); + err = write_mmp_block_thawed(sb, bh); + sb_end_write(sb); + return err; +} + /* * Read the MMP block. It _must_ be read from disk and hence we clear the * uptodate flag on the buffer. @@ -346,7 +354,11 @@ skip: seq = mmp_new_seq(); mmp->mmp_seq = cpu_to_le32(seq);
- retval = write_mmp_block(sb, bh); + /* + * On mount / remount we are protected against fs freezing (by s_umount + * semaphore) and grabbing freeze protection upsets lockdep + */ + retval = write_mmp_block_thawed(sb, bh); if (retval) goto failed;
From: Theodore Ts'o tytso@mit.edu
commit 463808f237cf73e98a1a45ff7460c2406a150a0b upstream.
If a malicious fuzzer overwrites the ext4 superblock while it is mounted such that the s_first_data_block is set to a very large number, the calculation of the block group can underflow, and trigger a BUG_ON check. Change this to be an ext4_warning so that we don't crash the kernel.
Cc: stable@kernel.org Link: https://lore.kernel.org/r/20230430154311.579720-3-tytso@mit.edu Reported-by: syzbot+e2efa3efc15a1c9e95c3@syzkaller.appspotmail.com Link: https://syzkaller.appspot.com/bug?id=69b28112e098b070f639efb356393af3ffec422... Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/ext4/mballoc.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
--- a/fs/ext4/mballoc.c +++ b/fs/ext4/mballoc.c @@ -4820,7 +4820,11 @@ ext4_mb_release_group_pa(struct ext4_bud trace_ext4_mb_release_group_pa(sb, pa); BUG_ON(pa->pa_deleted == 0); ext4_get_group_no_and_offset(sb, pa->pa_pstart, &group, &bit); - BUG_ON(group != e4b->bd_group && pa->pa_len != 0); + if (unlikely(group != e4b->bd_group && pa->pa_len != 0)) { + ext4_warning(sb, "bad group: expected %u, group %u, pa_start %llu", + e4b->bd_group, group, pa->pa_pstart); + return 0; + } mb_free_blocks(pa->pa_inode, e4b, bit, pa->pa_len); atomic_add(pa->pa_len, &EXT4_SB(sb)->s_mb_discarded); trace_ext4_mballoc_discard(sb, NULL, group, bit, pa->pa_len);
From: Theodore Ts'o tytso@mit.edu
commit b87c7cdf2bed4928b899e1ce91ef0d147017ba45 upstream.
In ext4_xattr_move_to_block(), the value of the extended attribute which we need to move to an external block may be allocated by kvmalloc() if the value is stored in an external inode. So at the end of the function the code tried to check if this was the case by testing entry->e_value_inum.
However, at this point, the pointer to the xattr entry is no longer valid, because it was removed from the original location where it had been stored. So we could end up calling kvfree() on a pointer which was not allocated by kvmalloc(); or we could also potentially leak memory by not freeing the buffer when it should be freed. Fix this by storing whether it should be freed in a separate variable.
Cc: stable@kernel.org Link: https://lore.kernel.org/r/20230430160426.581366-1-tytso@mit.edu Link: https://syzkaller.appspot.com/bug?id=5c2aee8256e30b55ccf57312c16d88417adbd5e... Link: https://syzkaller.appspot.com/bug?id=41a6b5d4917c0412eb3b3c3c604965bed7d7420... Reported-by: syzbot+64b645917ce07d89bde5@syzkaller.appspotmail.com Reported-by: syzbot+0d042627c4f2ad332195@syzkaller.appspotmail.com Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/ext4/xattr.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)
--- a/fs/ext4/xattr.c +++ b/fs/ext4/xattr.c @@ -2564,6 +2564,7 @@ static int ext4_xattr_move_to_block(hand .in_inode = !!entry->e_value_inum, }; struct ext4_xattr_ibody_header *header = IHDR(inode, raw_inode); + int needs_kvfree = 0; int error;
is = kzalloc(sizeof(struct ext4_xattr_ibody_find), GFP_NOFS); @@ -2586,7 +2587,7 @@ static int ext4_xattr_move_to_block(hand error = -ENOMEM; goto out; } - + needs_kvfree = 1; error = ext4_xattr_inode_get(inode, entry, buffer, value_size); if (error) goto out; @@ -2625,7 +2626,7 @@ static int ext4_xattr_move_to_block(hand
out: kfree(b_entry_name); - if (entry->e_value_inum && buffer) + if (needs_kvfree && buffer) kvfree(buffer); if (is) brelse(is->iloc.bh);
From: Jani Nikula jani.nikula@intel.com
commit 0d68683838f2850dd8ff31f1121e05bfb7a2def0 upstream.
The macro values just don't match the specs. Fix them.
Fixes: 1482ec00be4a ("drm: Add missing DP DSC extended capability definitions.") Cc: Vinod Govindapillai vinod.govindapillai@intel.com Cc: Stanislav Lisovskiy stanislav.lisovskiy@intel.com Signed-off-by: Jani Nikula jani.nikula@intel.com Reviewed-by: Ankit Nautiyal ankit.k.nautiyal@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20230406134615.1422509-2-jani.... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/drm/display/drm_dp.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/include/drm/display/drm_dp.h +++ b/include/drm/display/drm_dp.h @@ -286,8 +286,8 @@
#define DP_DSC_MAX_BITS_PER_PIXEL_HI 0x068 /* eDP 1.4 */ # define DP_DSC_MAX_BITS_PER_PIXEL_HI_MASK (0x3 << 0) -# define DP_DSC_MAX_BPP_DELTA_VERSION_MASK 0x06 -# define DP_DSC_MAX_BPP_DELTA_AVAILABILITY 0x08 +# define DP_DSC_MAX_BPP_DELTA_VERSION_MASK (0x3 << 5) /* eDP 1.5 & DP 2.0 */ +# define DP_DSC_MAX_BPP_DELTA_AVAILABILITY (1 << 7) /* eDP 1.5 & DP 2.0 */
#define DP_DSC_DEC_COLOR_FORMAT_CAP 0x069 # define DP_DSC_RGB (1 << 0)
From: Chao Yu chao@kernel.org
commit d48a7b3a72f121655d95b5157c32c7d555e44c05 upstream.
In do_read_inode(), sanity_check_inode() should be called after f2fs_init_read_extent_tree(), fix it.
Fixes: 72840cccc0a1 ("f2fs: allocate the extent_cache by default") Signed-off-by: Chao Yu chao@kernel.org Signed-off-by: Jaegeuk Kim jaegeuk@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/f2fs/inode.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-)
--- a/fs/f2fs/inode.c +++ b/fs/f2fs/inode.c @@ -413,12 +413,6 @@ static int do_read_inode(struct inode *i fi->i_inline_xattr_size = 0; }
- if (!sanity_check_inode(inode, node_page)) { - f2fs_put_page(node_page, 1); - f2fs_handle_error(sbi, ERROR_CORRUPTED_INODE); - return -EFSCORRUPTED; - } - /* check data exist */ if (f2fs_has_inline_data(inode) && !f2fs_exist_data(inode)) __recover_inline_status(inode, node_page); @@ -481,6 +475,12 @@ static int do_read_inode(struct inode *i /* Need all the flag bits */ f2fs_init_read_extent_tree(inode, node_page);
+ if (!sanity_check_inode(inode, node_page)) { + f2fs_put_page(node_page, 1); + f2fs_handle_error(sbi, ERROR_CORRUPTED_INODE); + return -EFSCORRUPTED; + } + f2fs_put_page(node_page, 1);
stat_inc_inline_xattr(inode);
From: Chao Yu chao@kernel.org
commit 269d119481008cd725ce32553332593c0ecfc91c upstream.
In do_read_inode(), sanity check for extent cache should be called after f2fs_init_read_extent_tree(), fix it.
Fixes: 72840cccc0a1 ("f2fs: allocate the extent_cache by default") Signed-off-by: Chao Yu chao@kernel.org Signed-off-by: Jaegeuk Kim jaegeuk@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/f2fs/extent_cache.c | 25 +++++++++++++++++++++++++ fs/f2fs/f2fs.h | 1 + fs/f2fs/inode.c | 22 ++++++---------------- 3 files changed, 32 insertions(+), 16 deletions(-)
--- a/fs/f2fs/extent_cache.c +++ b/fs/f2fs/extent_cache.c @@ -15,6 +15,31 @@ #include "node.h" #include <trace/events/f2fs.h>
+bool sanity_check_extent_cache(struct inode *inode) +{ + struct f2fs_sb_info *sbi = F2FS_I_SB(inode); + struct f2fs_inode_info *fi = F2FS_I(inode); + struct extent_info *ei; + + if (!fi->extent_tree[EX_READ]) + return true; + + ei = &fi->extent_tree[EX_READ]->largest; + + if (ei->len && + (!f2fs_is_valid_blkaddr(sbi, ei->blk, + DATA_GENERIC_ENHANCE) || + !f2fs_is_valid_blkaddr(sbi, ei->blk + ei->len - 1, + DATA_GENERIC_ENHANCE))) { + set_sbi_flag(sbi, SBI_NEED_FSCK); + f2fs_warn(sbi, "%s: inode (ino=%lx) extent info [%u, %u, %u] is incorrect, run fsck to fix", + __func__, inode->i_ino, + ei->blk, ei->fofs, ei->len); + return false; + } + return true; +} + static void __set_extent_info(struct extent_info *ei, unsigned int fofs, unsigned int len, block_t blk, bool keep_clen, --- a/fs/f2fs/f2fs.h +++ b/fs/f2fs/f2fs.h @@ -4125,6 +4125,7 @@ void f2fs_leave_shrinker(struct f2fs_sb_ /* * extent_cache.c */ +bool sanity_check_extent_cache(struct inode *inode); struct rb_entry *f2fs_lookup_rb_tree(struct rb_root_cached *root, struct rb_entry *cached_re, unsigned int ofs); struct rb_node **f2fs_lookup_rb_tree_for_insert(struct f2fs_sb_info *sbi, --- a/fs/f2fs/inode.c +++ b/fs/f2fs/inode.c @@ -262,22 +262,6 @@ static bool sanity_check_inode(struct in return false; }
- if (fi->extent_tree[EX_READ]) { - struct extent_info *ei = &fi->extent_tree[EX_READ]->largest; - - if (ei->len && - (!f2fs_is_valid_blkaddr(sbi, ei->blk, - DATA_GENERIC_ENHANCE) || - !f2fs_is_valid_blkaddr(sbi, ei->blk + ei->len - 1, - DATA_GENERIC_ENHANCE))) { - set_sbi_flag(sbi, SBI_NEED_FSCK); - f2fs_warn(sbi, "%s: inode (ino=%lx) extent info [%u, %u, %u] is incorrect, run fsck to fix", - __func__, inode->i_ino, - ei->blk, ei->fofs, ei->len); - return false; - } - } - if (f2fs_sanity_check_inline_data(inode)) { set_sbi_flag(sbi, SBI_NEED_FSCK); f2fs_warn(sbi, "%s: inode (ino=%lx, mode=%u) should not have inline_data, run fsck to fix", @@ -479,6 +463,12 @@ static int do_read_inode(struct inode *i f2fs_put_page(node_page, 1); f2fs_handle_error(sbi, ERROR_CORRUPTED_INODE); return -EFSCORRUPTED; + } + + if (!sanity_check_extent_cache(inode)) { + f2fs_put_page(node_page, 1); + f2fs_handle_error(sbi, ERROR_CORRUPTED_INODE); + return -EFSCORRUPTED; }
f2fs_put_page(node_page, 1);
From: Mario Limonciello mario.limonciello@amd.com
commit 23a5b8bb022c1e071ca91b1a9c10f0ad6a0966e9 upstream.
Commit
310e782a99c7 ("platform/x86/amd: pmc: Utilize SMN index 0 for driver probe")
switched to using amd_smn_read() which relies upon the misc PCI ID used by DF function 3 being included in a table. The ID for model 78h is missing in that table, so amd_smn_read() doesn't work.
Add the missing ID into amd_nb, restoring s2idle on this system.
[ bp: Simplify commit message. ]
Fixes: 310e782a99c7 ("platform/x86/amd: pmc: Utilize SMN index 0 for driver probe") Signed-off-by: Mario Limonciello mario.limonciello@amd.com Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Acked-by: Bjorn Helgaas bhelgaas@google.com # pci_ids.h Acked-by: Guenter Roeck linux@roeck-us.net Link: https://lore.kernel.org/r/20230427053338.16653-2-mario.limonciello@amd.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/amd_nb.c | 2 ++ include/linux/pci_ids.h | 1 + 2 files changed, 3 insertions(+)
--- a/arch/x86/kernel/amd_nb.c +++ b/arch/x86/kernel/amd_nb.c @@ -36,6 +36,7 @@ #define PCI_DEVICE_ID_AMD_19H_M50H_DF_F4 0x166e #define PCI_DEVICE_ID_AMD_19H_M60H_DF_F4 0x14e4 #define PCI_DEVICE_ID_AMD_19H_M70H_DF_F4 0x14f4 +#define PCI_DEVICE_ID_AMD_19H_M78H_DF_F4 0x12fc
/* Protect the PCI config register pairs used for SMN. */ static DEFINE_MUTEX(smn_mutex); @@ -79,6 +80,7 @@ static const struct pci_device_id amd_nb { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_19H_M50H_DF_F3) }, { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_19H_M60H_DF_F3) }, { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_19H_M70H_DF_F3) }, + { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_19H_M78H_DF_F3) }, {} };
--- a/include/linux/pci_ids.h +++ b/include/linux/pci_ids.h @@ -567,6 +567,7 @@ #define PCI_DEVICE_ID_AMD_19H_M50H_DF_F3 0x166d #define PCI_DEVICE_ID_AMD_19H_M60H_DF_F3 0x14e3 #define PCI_DEVICE_ID_AMD_19H_M70H_DF_F3 0x14f3 +#define PCI_DEVICE_ID_AMD_19H_M78H_DF_F3 0x12fb #define PCI_DEVICE_ID_AMD_CNB17H_F3 0x1703 #define PCI_DEVICE_ID_AMD_LANCE 0x2000 #define PCI_DEVICE_ID_AMD_LANCE_HOME 0x2001
From: Linus Torvalds torvalds@linux-foundation.org
This code no longer exists in mainline, because it was removed in commit d2c95f9d6802 ("x86: don't use REP_GOOD or ERMS for user memory clearing") upstream.
However, rather than backport the full range of x86 memory clearing and copying cleanups, fix the exception table annotation placement for the final 'rep stosb' in clear_user_rep_good(): rather than pointing at the actual instruction that did the user space access, it pointed to the register move just before it.
That made sense from a code flow standpoint, but not from an actual usage standpoint: it means that if user access takes an exception, the exception handler won't actually find the instruction in the exception tables.
As a result, rather than fixing it up and returning -EFAULT, it would then turn it into a kernel oops report instead, something like:
BUG: unable to handle page fault for address: 0000000020081000
#PF: supervisor write access in kernel mode
#PF: error_code(0x0002) - not-present page
...
RIP: 0010:clear_user_rep_good+0x1c/0x30 arch/x86/lib/clear_page_64.S:147
...
Call Trace:
 __clear_user arch/x86/include/asm/uaccess_64.h:103 [inline]
 clear_user arch/x86/include/asm/uaccess_64.h:124 [inline]
 iov_iter_zero+0x709/0x1290 lib/iov_iter.c:800
 iomap_dio_hole_iter fs/iomap/direct-io.c:389 [inline]
 iomap_dio_iter fs/iomap/direct-io.c:440 [inline]
 __iomap_dio_rw+0xe3d/0x1cd0 fs/iomap/direct-io.c:601
 iomap_dio_rw+0x40/0xa0 fs/iomap/direct-io.c:689
 ext4_dio_read_iter fs/ext4/file.c:94 [inline]
 ext4_file_read_iter+0x4be/0x690 fs/ext4/file.c:145
 call_read_iter include/linux/fs.h:2183 [inline]
 do_iter_readv_writev+0x2e0/0x3b0 fs/read_write.c:733
 do_iter_read+0x2f2/0x750 fs/read_write.c:796
 vfs_readv+0xe5/0x150 fs/read_write.c:916
 do_preadv+0x1b6/0x270 fs/read_write.c:1008
 __do_sys_preadv2 fs/read_write.c:1070 [inline]
 __se_sys_preadv2 fs/read_write.c:1061 [inline]
 __x64_sys_preadv2+0xef/0x150 fs/read_write.c:1061
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
which then looks like a filesystem bug rather than the incorrect exception annotation that it is.
[ The alternative to this one-liner fix is to take the upstream series that cleans this all up:
68674f94ffc9 ("x86: don't use REP_GOOD or ERMS for small memory copies")
20f3337d350c ("x86: don't use REP_GOOD or ERMS for small memory clearing")
adfcf4231b8c ("x86: don't use REP_GOOD or ERMS for user memory copies")
* d2c95f9d6802 ("x86: don't use REP_GOOD or ERMS for user memory clearing")
3639a535587d ("x86: move stac/clac from user copy routines into callers")
577e6a7fd50d ("x86: inline the 'rep movs' in user copies for the FSRM case")
8c9b6a88b7e2 ("x86: improve on the non-rep 'clear_user' function")
427fda2c8a49 ("x86: improve on the non-rep 'copy_user' function")
* e046fe5a36a9 ("x86: set FSRS automatically on AMD CPUs that have FSRM")
e1f2750edc4a ("x86: remove 'zerorest' argument from __copy_user_nocache()")
034ff37d3407 ("x86: rewrite '__copy_user_nocache' function")
with either the whole series or at a minimum the two marked commits being needed to fix this issue ]
Reported-by: syzbot syzbot+401145a9a237779feb26@syzkaller.appspotmail.com Link: https://syzkaller.appspot.com/bug?extid=401145a9a237779feb26 Fixes: 0db7058e8e23 ("x86/clear_user: Make it faster") Cc: Borislav Petkov bp@alien8.de Cc: stable@kernel.org Signed-off-by: Linus Torvalds torvalds@linux-foundation Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/lib/clear_page_64.S | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/x86/lib/clear_page_64.S +++ b/arch/x86/lib/clear_page_64.S @@ -142,8 +142,8 @@ SYM_FUNC_START(clear_user_rep_good) and $7, %edx jz .Lrep_good_exit
-.Lrep_good_bytes: mov %edx, %ecx +.Lrep_good_bytes: rep stosb
.Lrep_good_exit:
From: Christophe Leroy christophe.leroy@csgroup.eu
commit 8a5299a1278eadf1e08a598a5345c376206f171e upstream.
The fsl-spi driver performs bits_per_word modifications for different reasons:
- on CPU mode, to minimise the amount of interrupts
- on CPM/QE mode, to work around the controller byte order

For CPU mode that's done in fsl_spi_prepare_message(), while for CPM mode that's done in fsl_spi_setup_transfer().
Reunify all of it in fsl_spi_prepare_message(), and catch impossible cases early through master's bits_per_word_mask instead of returning EINVAL later.
Signed-off-by: Christophe Leroy christophe.leroy@csgroup.eu Link: https://lore.kernel.org/r/0ce96fe96e8b07cba0613e4097cfd94d09b8919a.168037180... Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/spi/spi-fsl-spi.c | 46 +++++++++++++++++++++------------------------- 1 file changed, 21 insertions(+), 25 deletions(-)
--- a/drivers/spi/spi-fsl-spi.c +++ b/drivers/spi/spi-fsl-spi.c @@ -177,26 +177,6 @@ static int mspi_apply_cpu_mode_quirks(st return bits_per_word; }
-static int mspi_apply_qe_mode_quirks(struct spi_mpc8xxx_cs *cs, - struct spi_device *spi, - int bits_per_word) -{ - /* CPM/QE uses Little Endian for words > 8 - * so transform 16 and 32 bits words into 8 bits - * Unfortnatly that doesn't work for LSB so - * reject these for now */ - /* Note: 32 bits word, LSB works iff - * tfcr/rfcr is set to CPMFCR_GBL */ - if (spi->mode & SPI_LSB_FIRST && - bits_per_word > 8) - return -EINVAL; - if (bits_per_word <= 8) - return bits_per_word; - if (bits_per_word == 16 || bits_per_word == 32) - return 8; /* pretend its 8 bits */ - return -EINVAL; -} - static int fsl_spi_setup_transfer(struct spi_device *spi, struct spi_transfer *t) { @@ -224,9 +204,6 @@ static int fsl_spi_setup_transfer(struct bits_per_word = mspi_apply_cpu_mode_quirks(cs, spi, mpc8xxx_spi, bits_per_word); - else - bits_per_word = mspi_apply_qe_mode_quirks(cs, spi, - bits_per_word);
if (bits_per_word < 0) return bits_per_word; @@ -361,6 +338,19 @@ static int fsl_spi_prepare_message(struc t->bits_per_word = 32; else if ((t->len & 1) == 0) t->bits_per_word = 16; + } else { + /* + * CPM/QE uses Little Endian for words > 8 + * so transform 16 and 32 bits words into 8 bits + * Unfortnatly that doesn't work for LSB so + * reject these for now + * Note: 32 bits word, LSB works iff + * tfcr/rfcr is set to CPMFCR_GBL + */ + if (m->spi->mode & SPI_LSB_FIRST && t->bits_per_word > 8) + return -EINVAL; + if (t->bits_per_word == 16 || t->bits_per_word == 32) + t->bits_per_word = 8; /* pretend its 8 bits */ } } return fsl_spi_setup_transfer(m->spi, first); @@ -594,8 +584,14 @@ static struct spi_master *fsl_spi_probe( if (mpc8xxx_spi->type == TYPE_GRLIB) fsl_spi_grlib_probe(dev);
- master->bits_per_word_mask = - (SPI_BPW_RANGE_MASK(4, 16) | SPI_BPW_MASK(32)) & + if (mpc8xxx_spi->flags & SPI_CPM_MODE) + master->bits_per_word_mask = + (SPI_BPW_RANGE_MASK(4, 8) | SPI_BPW_MASK(16) | SPI_BPW_MASK(32)); + else + master->bits_per_word_mask = + (SPI_BPW_RANGE_MASK(4, 16) | SPI_BPW_MASK(32)); + + master->bits_per_word_mask &= SPI_BPW_RANGE_MASK(1, mpc8xxx_spi->max_bits_per_word);
if (mpc8xxx_spi->flags & SPI_QE_CPU_MODE)
From: Christophe Leroy christophe.leroy@csgroup.eu
commit fc96ec826bced75cc6b9c07a4ac44bbf651337ab upstream.
On CPM, the RISC core is a lot more efficient when doing transfers in 16-bit chunks than in 8-bit chunks, but unfortunately the words need to be byte-swapped, as seen in a previous commit.
So, for large transfers with an even size, allocate a temporary tx buffer and byte-swap the data before and after the transfer.
This change allows setting a higher speed for the transfer. For instance, on an MPC 8xx (CPM1 comms RISC processor), the documentation says that a transfer in byte mode at 1 kbit/s uses 0.200% of CPM load at 25 MHz, while a word transfer at the same speed uses 0.032% of CPM load. This means the speed can be 6 times higher in word mode for the same CPM load.
For the time being, only do it on CPM1 as there must be a trade-off between the CPM load reduction and the CPU load required to byte swap the data.
Signed-off-by: Christophe Leroy christophe.leroy@csgroup.eu Link: https://lore.kernel.org/r/f2e981f20f92dd28983c3949702a09248c23845c.168037180... Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/spi/spi-fsl-cpm.c | 23 +++++++++++++++++++++++ drivers/spi/spi-fsl-spi.c | 3 +++ 2 files changed, 26 insertions(+)
--- a/drivers/spi/spi-fsl-cpm.c +++ b/drivers/spi/spi-fsl-cpm.c @@ -21,6 +21,7 @@ #include <linux/spi/spi.h> #include <linux/types.h> #include <linux/platform_device.h> +#include <linux/byteorder/generic.h>
#include "spi-fsl-cpm.h" #include "spi-fsl-lib.h" @@ -120,6 +121,21 @@ int fsl_spi_cpm_bufs(struct mpc8xxx_spi mspi->rx_dma = mspi->dma_dummy_rx; mspi->map_rx_dma = 0; } + if (t->bits_per_word == 16 && t->tx_buf) { + const u16 *src = t->tx_buf; + u16 *dst; + int i; + + dst = kmalloc(t->len, GFP_KERNEL); + if (!dst) + return -ENOMEM; + + for (i = 0; i < t->len >> 1; i++) + dst[i] = cpu_to_le16p(src + i); + + mspi->tx = dst; + mspi->map_tx_dma = 1; + }
if (mspi->map_tx_dma) { void *nonconst_tx = (void *)mspi->tx; /* shut up gcc */ @@ -173,6 +189,13 @@ void fsl_spi_cpm_bufs_complete(struct mp if (mspi->map_rx_dma) dma_unmap_single(dev, mspi->rx_dma, t->len, DMA_FROM_DEVICE); mspi->xfer_in_progress = NULL; + + if (t->bits_per_word == 16 && t->rx_buf) { + int i; + + for (i = 0; i < t->len; i += 2) + le16_to_cpus(t->rx_buf + i); + } } EXPORT_SYMBOL_GPL(fsl_spi_cpm_bufs_complete);
--- a/drivers/spi/spi-fsl-spi.c +++ b/drivers/spi/spi-fsl-spi.c @@ -351,6 +351,9 @@ static int fsl_spi_prepare_message(struc return -EINVAL; if (t->bits_per_word == 16 || t->bits_per_word == 32) t->bits_per_word = 8; /* pretend its 8 bits */ + if (t->bits_per_word == 8 && t->len >= 256 && + (mpc8xxx_spi->flags & SPI_CPM1)) + t->bits_per_word = 16; } } return fsl_spi_setup_transfer(m->spi, first);
From: Aurabindo Pillai aurabindo.pillai@amd.com
commit da5e14909776edea4462672fb4a3007802d262e7 upstream.
[Why&How]
When skipping a full modeset because the only state change was a front porch change, the DC commit sequence requires extra checks to handle non-existent plane states being asked to be removed from the context.
Reviewed-by: Alvin Lee Alvin.Lee2@amd.com Acked-by: Qingqing Zhuo qingqing.zhuo@amd.com Signed-off-by: Aurabindo Pillai aurabindo.pillai@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 5 ++++- drivers/gpu/drm/amd/display/dc/core/dc_resource.c | 3 +++ 2 files changed, 7 insertions(+), 1 deletion(-)
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c @@ -7673,6 +7673,8 @@ static void amdgpu_dm_commit_planes(stru continue;
dc_plane = dm_new_plane_state->dc_state; + if (!dc_plane) + continue;
bundle->surface_updates[planes_count].surface = dc_plane; if (new_pcrtc_state->color_mgmt_changed) { @@ -9217,8 +9219,9 @@ static int dm_update_plane_state(struct return -EINVAL; }
+ if (dm_old_plane_state->dc_state) + dc_plane_state_release(dm_old_plane_state->dc_state);
- dc_plane_state_release(dm_old_plane_state->dc_state); dm_new_plane_state->dc_state = NULL;
*lock_and_validation_needed = true; --- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c +++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c @@ -1707,6 +1707,9 @@ bool dc_remove_plane_from_context( struct dc_stream_status *stream_status = NULL; struct resource_pool *pool = dc->res_pool;
+ if (!plane_state) + return true; + for (i = 0; i < context->stream_count; i++) if (context->streams[i] == stream) { stream_status = &context->stream_status[i];
Hello Greg,
From: Greg Kroah-Hartman gregkh@linuxfoundation.org Sent: Monday, May 15, 2023 5:24 PM
This is the start of the stable review cycle for the 6.1.29 release. There are 239 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 17 May 2023 16:16:37 +0000. Anything received after that time might be too late.
CIP configurations built and booted with Linux 6.1.29-rc1 (b82733c0ff99): https://gitlab.com/cip-project/cip-testing/linux-stable-rc-ci/-/pipelines/86... https://gitlab.com/cip-project/cip-testing/linux-stable-rc-ci/-/commits/linu...
Tested-by: Chris Paterson (CIP) chris.paterson2@renesas.com
Kind regards, Chris
On 5/15/23 10:24, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 6.1.29 release. There are 239 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 17 May 2023 16:16:37 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v6.x/stable-review/patch-6.1.29-rc1.... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-6.1.y and the diffstat can be found below.
thanks,
greg k-h
Compiled and booted on my test system. No dmesg regressions.
Tested-by: Shuah Khan skhan@linuxfoundation.org
thanks, -- Shuah
On 5/15/23 9:24 AM, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 6.1.29 release. There are 239 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 17 May 2023 16:16:37 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v6.x/stable-review/patch-6.1.29-rc1.... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-6.1.y and the diffstat can be found below.
thanks,
greg k-h
Built and booted successfully on RISC-V RV64 (HiFive Unmatched).
Tested-by: Ron Economos re@w6rz.net
On Mon, May 15, 2023 at 06:24:23PM +0200, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 6.1.29 release. There are 239 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Successfully compiled and installed bindeb-pkgs on my computer (Acer Aspire E15, Intel Core i3 Haswell). No noticeable regressions.
Tested-by: Bagas Sanjaya bagasdotme@gmail.com
Hi Greg,
On Mon, May 15, 2023 at 06:24:23PM +0200, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 6.1.29 release. There are 239 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Build test (gcc version 12.2.1 20230511):
mips: 52 configs -> no failure
arm: 100 configs -> no failure
arm64: 3 configs -> no failure
x86_64: 4 configs -> no failure
alpha allmodconfig -> no failure
csky allmodconfig -> no failure
powerpc allmodconfig -> no failure
riscv allmodconfig -> no failure
s390 allmodconfig -> no failure
xtensa allmodconfig -> no failure
Boot test:
x86_64: Booted on my test laptop. No regression.
x86_64: Booted on qemu. No regression. [1]
arm64: Booted on rpi4b (4GB model). No regression. [2]
[1]. https://openqa.qa.codethink.co.uk/tests/3535
[2]. https://openqa.qa.codethink.co.uk/tests/3536
Tested-by: Sudip Mukherjee sudip.mukherjee@codethink.co.uk
Hi Greg
On Tue, May 16, 2023 at 2:05 AM Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This is the start of the stable review cycle for the 6.1.29 release. There are 239 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 17 May 2023 16:16:37 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v6.x/stable-review/patch-6.1.29-rc1.... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-6.1.y and the diffstat can be found below.
thanks,
greg k-h
6.1.29-rc1 tested.
x86_64
Build successfully completed. Boot successfully completed. No dmesg regressions. Video output normal. Sound output normal.
Lenovo ThinkPad X1 Carbon Gen10 (Intel i7-1260P, Arch Linux)
Thanks
Tested-by: Takeshi Ogasawara takeshi.ogasawara@futuring-girl.com
On Mon, May 15, 2023 at 06:24:23PM +0200, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 6.1.29 release. There are 239 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Looks grand here too. Tested-by: Conor Dooley conor.dooley@microchip.com
Thanks, Conor.
On Mon, 15 May 2023 at 22:31, Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This is the start of the stable review cycle for the 6.1.29 release. There are 239 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 17 May 2023 16:16:37 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v6.x/stable-review/patch-6.1.29-rc1.... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-6.1.y and the diffstat can be found below.
thanks,
greg k-h
Results from Linaro’s test farm. Regressions on arm64, arm, x86_64, and i386.
Reported-by: Linux Kernel Functional Testing lkft@linaro.org
We have recently upgraded our selftest sources to stable-rc 6.3 and are running them on the stable-rc 6.1 kernel.
List of test regressions:
========
kselftest-membarrier - membarrier_membarrier_test_multi_thread - membarrier_membarrier_test_single_thread
kselftest-memfd - memfd_memfd_test
kselftest-rseq - rseq_basic_test
kselftest-kvm - kvm_hyperv_features - kvm_xapic_state_test
ltp-commands - mkfs01_ntfs_sh
Details:
=====
# selftests: membarrier: membarrier_test_single_thread
# TAP version 13
# 1..18
# Bail out! sys membarrier MEMBARRIER_CMD_GET_REGISTRATIONS test: flags = 0, errno = 22
# # Planned tests != run tests (18 != 0)
# # Totals: pass:0 fail:0 xfail:0 xpass:0 skip:0 error:0
not ok 1 selftests: membarrier: membarrier_test_single_thread # exit=1
# selftests: membarrier: membarrier_test_multi_thread
# TAP version 13
# 1..16
# ok 1 sys_membarrier available
# ok 2 sys membarrier invalid command test: command = -1, flags = 0, errno = 22. Failed as expected
# ok 3 sys membarrier MEMBARRIER_CMD_QUERY invalid flags test: flags = 1, errno = 22. Failed as expected
# ok 4 sys membarrier MEMBARRIER_CMD_PRIVATE_EXPEDITED not registered failure test: flags = 0, errno = 1
# ok 5 sys membarrier MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE not registered failure test: flags = 0, errno = 1
# ok 6 sys membarrier MEMBARRIER_CMD_GLOBAL test: flags = 0
# ok 7 sys membarrier MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED test: flags = 0
# Bail out! sys membarrier MEMBARRIER_CMD_GET_REGISTRATIONS test: flags = 0, errno = 22
# # Planned tests != run tests (16 != 7)
# # Totals: pass:7 fail:0 xfail:0 xpass:0 skip:0 error:0
not ok 2 selftests: membarrier: membarrier_test_multi_thread # exit=1
========
kselftest: Running tests in memfd
TAP version 13
1..3
# selftests: memfd: memfd_test
# Aborted
not ok 1 selftests: memfd: memfd_test # exit=134
========
kselftest: Running tests in rseq
TAP version 13
1..9
# selftests: rseq: basic_test
# basic_test: rseq.h:204: rseq_current_node_id: Assertion `rseq_node_id_available()' failed.
# ./kselftest/runner.sh: line 35: 1709 Aborted /usr/bin/timeout --foreground "$kselftest_timeout" $1
not ok 1 selftests: rseq: basic_test # exit=134
# selftests: rseq: basic_percpu_ops_test
# spinlock
# percpu_list
ok 2 selftests: rseq: basic_percpu_ops_test
# selftests: rseq: basic_percpu_ops_mm_cid_test
# spinlock
# percpu_list
ok 3 selftests: rseq: basic_percpu_ops_mm_cid_test
# selftests: rseq: param_test
ok 4 selftests: rseq: param_test
# selftests: rseq: param_test_benchmark
ok 5 selftests: rseq: param_test_benchmark
# selftests: rseq: param_test_compare_twice
ok 6 selftests: rseq: param_test_compare_twice
# selftests: rseq: param_test_mm_cid
# Error: cpu id getter unavailable
not ok 7 selftests: rseq: param_test_mm_cid # exit=255
# selftests: rseq: param_test_mm_cid_benchmark
# Error: cpu id getter unavailable
not ok 8 selftests: rseq: param_test_mm_cid_benchmark # exit=255
# selftests: rseq: param_test_mm_cid_compare_twice
# Error: cpu id getter unavailable
not ok 9 selftests: rseq: param_test_mm_cid_compare_twice # exit=255
======
# selftests: kvm: hyperv_features
# Testing access to Hyper-V specific MSRs
# ==== Test Assertion Failure ====
# x86_64/hyperv_features.c:498: false
# pid=1043 tid=1043 errno=4 - Interrupted system call
# 1 0x0000000000403722: ?? ??:0
# 2 0x00007fc1e890157a: ?? ??:0
# 3 0x00007fc1e890162f: ?? ??:0
# 4 0x00000000004037c4: ?? ??:0
# Failed guest assert: !vector at x86_64/hyperv_features.c:58
# MSR = 40000118, arg1 = d, arg2 = 0
not ok 10 selftests: kvm: hyperv_features # exit=254
# selftests: kvm: xapic_state_test
# ==== Test Assertion Failure ====
# x86_64/xapic_state_test.c:147: apic_id == expected
# pid=2175 tid=2175 errno=4 - Interrupted system call
# 1 0x0000000000402bac: ?? ??:0
# 2 0x00000000004025ba: ?? ??:0
# 3 0x00007f0040cd457a: ?? ??:0
# 4 0x00007f0040cd462f: ?? ??:0
# 5 0x0000000000402624: ?? ??:0
# APIC_ID not set back to xAPIC format; wanted = 1000000, got = 1
not ok 46 selftests: kvm: xapic_state_test # exit=254
====
tst_device.c:93:[ 147.226163] loop0: detected capacity change from 0 to 614400
TINFO: Found free device 0 '/dev/loop0'
mkfs01 1 TINFO: timeout per run is 0h 50m 0s
mkfs01 1 TPASS: 'mkfs -t ntfs /dev/loop0 ' passed.
mkfs01 2 TINFO: Mounting device: mount -t ntfs /dev/loop0 /scratch/ltp-zKu0Zn6L6o/LTP_mkfs01.ULVi9nN6XF/mntpoint
modprobe: FATAL: Module fuse not found in directory /lib/modules/6.2.16-rc1
ntfs-3g-mount: fuse device is missing, try 'modprobe fuse' as root
mkfs01 2 TBROK: Failed to mount device ntfs type: mount exit = 21
[ 166.420602] I/O error, dev loop0, sector 614272 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 2
Steps to reproduce:
============
# To install tuxrun on your system globally:
# sudo pip3 install -U tuxrun==0.42.0
#
# See https://tuxrun.org/ for complete documentation.
tuxrun \
 --runtime podman \
 --device qemu-x86_64 \
 --boot-args rw \
 --kernel https://storage.tuxsuite.com/public/linaro/lkft/builds/2Pq4UnS1BCkrcnT51kVe9... \
 --modules https://storage.tuxsuite.com/public/linaro/lkft/builds/2Pq4UnS1BCkrcnT51kVe9... \
 --rootfs https://storage.tuxboot.com/debian/bookworm/amd64/rootfs.ext4.xz \
 --parameters SKIPFILE=skipfile-lkft.yaml \
 --parameters KSELFTEST=https://storage.tuxsuite.com/public/linaro/lkft/builds/2PeR5CFkuV3xkzzYLrOV8... \
 --image docker.io/lavasoftware/lava-dispatcher:2023.01.0020.gc1598238f \
 --tests kselftest-membarrier \
 --timeouts boot=15
Test log links and details:
==========
membarrier: test_multi_thread:
==========
- https://lkft.validation.linaro.org/scheduler/job/6427965#L1776
- https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-6.1.y/build/v6.1.28...
- https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-6.1.y/build/v6.1.28...
- https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-6.1.y/build/v6.1.28...

memfd_test:
=========
- https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-6.1.y/build/v6.1.28...
- https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-6.1.y/build/v6.1.28...

rseq_basic_test:
=========
- https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-6.1.y/build/v6.1.28...
- https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-6.1.y/build/v6.1.28...
- https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-6.1.y/build/v6.1.28...

kvm: kvm_hyperv_features and kvm_xapic_state_test:
=========
- https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-6.1.y/build/v6.1.28...
- https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-6.1.y/build/v6.1.28...
- https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-6.1.y/build/v6.1.28...

ltp-commands: mkfs01_ntfs_sh:
========
- https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-6.1.y/build/v6.1.28...
## Build
* kernel: 6.1.29-rc1
* git: https://gitlab.com/Linaro/lkft/mirrors/stable/linux-stable-rc
* git branch: linux-6.1.y
* git commit: b82733c0ff99435bb7eb5ed4ea2e1c1fd69e7ebb
* git describe: v6.1.28-240-gb82733c0ff99
* test details: https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-6.1.y/build/v6.1.28...
## Test Regressions (compared to v6.1.28)
* bcm2711-rpi-4-b, kselftest-membarrier - membarrier_membarrier_test_multi_thread - membarrier_membarrier_test_single_thread
* bcm2711-rpi-4-b, kselftest-memfd - memfd_memfd_test
* dragonboard-410c, kselftest-membarrier - membarrier_membarrier_test_multi_thread - membarrier_membarrier_test_single_thread
* dragonboard-410c, kselftest-memfd - memfd_memfd_test
* fvp-aemva, kselftest-membarrier - membarrier_membarrier_test_multi_thread - membarrier_membarrier_test_single_thread
* fvp-aemva, kselftest-memfd - memfd_memfd_test
* i386, kselftest-membarrier - membarrier_membarrier_test_multi_thread - membarrier_membarrier_test_single_thread
* i386, kselftest-memfd - memfd_memfd_test
* juno-r2, kselftest-membarrier - membarrier_membarrier_test_multi_thread - membarrier_membarrier_test_single_thread
* juno-r2, kselftest-memfd - memfd_memfd_test
* qemu-arm64, kselftest-membarrier - membarrier_membarrier_test_multi_thread - membarrier_membarrier_test_single_thread
* qemu-arm64, kselftest-memfd - memfd_memfd_test
* qemu-armv7, kselftest-membarrier - membarrier_membarrier_test_multi_thread - membarrier_membarrier_test_single_thread
* qemu-i386, kselftest-membarrier - membarrier_membarrier_test_multi_thread - membarrier_membarrier_test_single_thread
* qemu-i386, kselftest-memfd - memfd_memfd_test
* qemu-x86_64, kselftest-membarrier - membarrier_membarrier_test_multi_thread - membarrier_membarrier_test_single_thread
* qemu-x86_64, kselftest-memfd - memfd_memfd_test
* qemu_i386, kselftest-membarrier - membarrier_membarrier_test_multi_thread - membarrier_membarrier_test_single_thread
* qemu_i386, kselftest-memfd - memfd_memfd_test
* qemu_x86_64, kselftest-membarrier - membarrier_membarrier_test_multi_thread - membarrier_membarrier_test_single_thread
* qemu_x86_64, kselftest-memfd - memfd_memfd_test
* x15, kselftest-membarrier - membarrier_membarrier_test_multi_thread - membarrier_membarrier_test_single_thread
* x15, kselftest-rseq - rseq_basic_test
* x15, ltp-commands - mkfs01_ntfs_sh
* x86, kselftest-kvm - kvm_hyperv_features - kvm_xapic_state_test
* x86, kselftest-membarrier - membarrier_membarrier_test_multi_thread - membarrier_membarrier_test_single_thread
* x86, kselftest-memfd - memfd_memfd_test
* x86, kselftest-rseq - rseq_basic_test - rseq_run_param_test_sh
* x86-kasan, ltp-commands - mkfs01_ntfs_sh
## Metric Regressions (compared to v6.1.28)
## Test Fixes (compared to v6.1.28)
## Metric Fixes (compared to v6.1.28)
## Test result summary
total: 161145, pass: 139501, fail: 3428, skip: 17949, xfail: 267
## Build Summary
* arc: 5 total, 5 passed, 0 failed
* arm: 151 total, 150 passed, 1 failed
* arm64: 54 total, 53 passed, 1 failed
* i386: 41 total, 38 passed, 3 failed
* mips: 30 total, 28 passed, 2 failed
* parisc: 8 total, 8 passed, 0 failed
* powerpc: 38 total, 36 passed, 2 failed
* riscv: 16 total, 15 passed, 1 failed
* s390: 16 total, 16 passed, 0 failed
* sh: 14 total, 12 passed, 2 failed
* sparc: 8 total, 8 passed, 0 failed
* x86_64: 46 total, 46 passed, 0 failed
## Test suites summary * boot * fwts * kselftest-android * kselftest-arm64 * kselftest-breakpoints * kselftest-capabilities * kselftest-cgroup * kselftest-clone3 * kselftest-core * kselftest-cpu-hotplug * kselftest-cpufreq * kselftest-drivers-dma-buf * kselftest-efivarfs * kselftest-exec * kselftest-filesystems * kselftest-filesystems-binderfs * kselftest-firmware * kselftest-fpu * kselftest-ftrace * kselftest-futex * kselftest-gpio * kselftest-intel_pstate * kselftest-ipc * kselftest-ir * kselftest-kcmp * kselftest-kexec * kselftest-kvm * kselftest-lib * kselftest-livepatch * kselftest-membarrier * kselftest-memfd * kselftest-memory-hotplug * kselftest-mincore * kselftest-mount * kselftest-mqueue * kselftest-net * kselftest-net-forwarding * kselftest-net-mptcp * kselftest-netfilter * kselftest-nsfs * kselftest-openat2 * kselftest-pid_namespace * kselftest-pidfd * kselftest-proc * kselftest-pstore * kselftest-ptrace * kselftest-rseq * kselftest-rtc * kselftest-seccomp * kselftest-sigaltstack * kselftest-size * kselftest-splice * kselftest-static_keys * kselftest-sync * kselftest-sysctl * kselftest-tc-testing * kselftest-timens * kselftest-timers * kselftest-tmpfs * kselftest-tpm2 * kselftest-user * kselftest-user_events * kselftest-vDSO * kselftest-watchdog * kselftest-x86 * kselftest-zram * kunit * kvm-unit-tests * libhugetlbfs * log-parser-boot * log-parser-test * ltp-cap_bounds * ltp-commands * ltp-containers * ltp-controllers * ltp-cpuhotplug * ltp-crypto * ltp-cve * ltp-dio * ltp-fcntl-locktests * ltp-filecaps * ltp-fs * ltp-fs_bind * ltp-fs_perms_simple * ltp-fsx * ltp-hugetlb * ltp-io * ltp-ipc * ltp-math * ltp-mm * ltp-nptl * ltp-pty * ltp-sched * ltp-securebits * ltp-smoke * ltp-syscalls * ltp-tracing * network-basic-tests * perf * rcutorture * v4l2-compliance * vdso
-- Linaro LKFT https://lkft.linaro.org
* Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This is the start of the stable review cycle for the 6.1.29 release. There are 239 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 17 May 2023 16:16:37 +0000. Anything received after that time might be too late.
Hi Greg
6.1.29-rc1
compiles, boots and runs here on x86_64 (AMD Ryzen 5 PRO 4650G, Slackware64-15.0)
Tested-by: Markus Reichelt lkt+2023@mareichelt.com
On Mon, May 15, 2023 at 06:24:23PM +0200, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 6.1.29 release. There are 239 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 17 May 2023 16:16:37 +0000. Anything received after that time might be too late.
Build results: total: 155 pass: 155 fail: 0 Qemu test results: total: 519 pass: 519 fail: 0
Tested-by: Guenter Roeck linux@roeck-us.net
Guenter
linux-stable-mirror@lists.linaro.org