This is the start of the stable review cycle for the 6.2.16 release. There are 242 patches in this series, all of which will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 17 May 2023 16:16:37 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at:
	https://www.kernel.org/pub/linux/kernel/v6.x/stable-review/patch-6.2.16-rc1....
or in the git tree and branch at:
	git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-6.2.y
and the diffstat can be found below.
thanks,
greg k-h
-------------
Pseudo-Shortlog of commits:
Greg Kroah-Hartman gregkh@linuxfoundation.org Linux 6.2.16-rc1
Aurabindo Pillai aurabindo.pillai@amd.com drm/amd/display: Fix hang when skipping modeset
Christophe Leroy christophe.leroy@csgroup.eu spi: fsl-cpm: Use 16 bit mode for large transfers with even size
Christophe Leroy christophe.leroy@csgroup.eu spi: fsl-spi: Re-organise transfer bits_per_word adaptation
Linus Torvalds torvalds@linux-foundation.org x86: fix clear_user_rep_good() exception handling annotation
Mario Limonciello mario.limonciello@amd.com x86/amd_nb: Add PCI ID for family 19h model 78h
Jani Nikula jani.nikula@intel.com drm/dsc: fix DP_DSC_MAX_BPP_DELTA_* macro values
Theodore Ts'o tytso@mit.edu ext4: fix invalid free tracking in ext4_xattr_move_to_block()
Theodore Ts'o tytso@mit.edu ext4: remove a BUG_ON in ext4_mb_release_group_pa()
Jan Kara jack@suse.cz ext4: fix lockdep warning when enabling MMP
Theodore Ts'o tytso@mit.edu ext4: bail out of ext4_xattr_ibody_get() fails for any reason
Theodore Ts'o tytso@mit.edu ext4: add bounds checking in get_max_inline_xattr_value_size()
Theodore Ts'o tytso@mit.edu ext4: fix deadlock when converting an inline directory in nojournal mode
Theodore Ts'o tytso@mit.edu ext4: improve error handling from ext4_dirhash()
Theodore Ts'o tytso@mit.edu ext4: improve error recovery code paths in __ext4_remount()
Baokun Li libaokun1@huawei.com ext4: check iomap type only if ext4_iomap_begin() does not fail
Jan Kara jack@suse.cz ext4: avoid deadlock in fs reclaim with page writeback
Jan Kara jack@suse.cz ext4: fix data races when using cached status extents
Tudor Ambarus tudor.ambarus@linaro.org ext4: avoid a potential slab-out-of-bounds in ext4_group_desc_csum
Ye Bin yebin10@huawei.com ext4: fix WARNING in mb_find_extent
John Stultz jstultz@google.com locking/rwsem: Add __always_inline annotation to __down_read_common() and inlined callers
Jani Nikula jani.nikula@intel.com drm/dsc: fix drm_edp_dsc_sink_output_bpp() DPCD high byte usage
Stanislav Lisovskiy stanislav.lisovskiy@intel.com drm: Add missing DP DSC extended capability definitions.
Leo Chen sancchen@amd.com drm/amd/display: Change default Z8 watermark values
Nicholas Kazlauskas nicholas.kazlauskas@amd.com drm/amd/display: Update Z8 SR exit/enter latencies
Leo Chen sancchen@amd.com drm/amd/display: Lowering min Z8 residency time
Nicholas Kazlauskas nicholas.kazlauskas@amd.com drm/amd/display: Update minimum stutter residency for DCN314 Z8
Nicholas Kazlauskas nicholas.kazlauskas@amd.com drm/amd/display: Add minimum Z8 residency debug option
Nicholas Kazlauskas nicholas.kazlauskas@amd.com drm/amd/display: Fix Z8 support configurations
Ville Syrjälä ville.syrjala@linux.intel.com drm/i915: Check pipe source size when using skl+ scalers
Animesh Manna animesh.manna@intel.com drm/i915/mtl: update scaler source and destination limits for MTL
Lionel Landwerlin lionel.g.landwerlin@intel.com drm/i915: disable sampler indirect state in bindless heap
Haridhar Kalvala haridhar.kalvala@intel.com drm/i915/mtl: Add Wa_14017856879
Radhakrishna Sripada radhakrishna.sripada@intel.com drm/i915/mtl: Add workarounds Wa_14017066071 and Wa_14017654203
Konrad Dybcio konrad.dybcio@linaro.org drm/msm/adreno: adreno_gpu: Use suspend() instead of idle() on load error
Konstantin Komarov almaz.alexandrovich@paragon-software.com fs/ntfs3: Refactoring of various minor issues
Ping Cheng pinglinux@gmail.com HID: wacom: insert timestamp to packed Bluetooth (BT) events
Ping Cheng pinglinux@gmail.com HID: wacom: Set a default resolution for older tablets
Mario Limonciello mario.limonciello@amd.com drm/amd: Use `amdgpu_ucode_*` helpers for MES
Mario Limonciello mario.limonciello@amd.com drm/amd: Add a new helper for loading/validating microcode
Mario Limonciello mario.limonciello@amd.com drm/amd: Load MES microcode during early_init
Guchun Chen guchun.chen@amd.com drm/amd/pm: avoid potential UBSAN issue on legacy asics
Guchun Chen guchun.chen@amd.com drm/amdgpu: disable sdma ecc irq only when sdma RAS is enabled in suspend
Guchun Chen guchun.chen@amd.com drm/amd/pm: parse pp_handle under appropriate conditions
Alvin Lee Alvin.Lee2@amd.com drm/amd/display: Enforce 60us prefetch for 200Mhz DCFCLK modes
Lin.Cao lincao12@amd.com drm/amdgpu: Fix vram recover doesn't work after whole GPU reset (v2)
Yifan Zhang yifan1.zhang@amd.com drm/amdgpu: change gfx 11.0.4 external_id range
Saleemkhan Jamadar saleemkhan.jamadar@amd.com drm/amdgpu/jpeg: Remove harvest checking for JPEG3
Guchun Chen guchun.chen@amd.com drm/amdgpu/gfx: disable gfx9 cp_ecc_error_irq only when enabling legacy gfx ras
Horatio Zhang Hongkun.Zhang@amd.com drm/amdgpu: fix amdgpu_irq_put call trace in gmc_v11_0_hw_fini
Hamza Mahfooz hamza.mahfooz@amd.com drm/amdgpu: fix an amdgpu_irq_put() issue in gmc_v9_0_hw_fini()
Horatio Zhang Hongkun.Zhang@amd.com drm/amdgpu: fix amdgpu_irq_put call trace in gmc_v10_0_hw_fini
Guchun Chen guchun.chen@amd.com drm/amdgpu: drop redundant sched job cleanup when cs is aborted
Hamza Mahfooz hamza.mahfooz@amd.com drm/amd/display: fix flickering caused by S/G mode
Samson Tam Samson.Tam@amd.com drm/amd/display: filter out invalid bits in pipe_fuses
Nicholas Kazlauskas nicholas.kazlauskas@amd.com drm/amd/display: Fix 4to1 MPC black screen with DPP RCO
Nicholas Kazlauskas nicholas.kazlauskas@amd.com drm/amd/display: Add NULL plane_state check for cursor disable logic
James Cowgill james.cowgill@blaize.com drm/panel: otm8009a: Set backlight parent to panel device
Jianmin Lv lvjianmin@loongson.cn irqchip/loongson-eiointc: Fix registration of syscore_ops
Jianmin Lv lvjianmin@loongson.cn irqchip/loongson-eiointc: Fix incorrect use of acpi_get_vec_parent
Jianmin Lv lvjianmin@loongson.cn irqchip/loongson-eiointc: Fix returned value on parsing MADT
Jianmin Lv lvjianmin@loongson.cn irqchip/loongson-pch-pic: Fix registration of syscore_ops
Jianmin Lv lvjianmin@loongson.cn irqchip/loongson-pch-pic: Fix pch_pic_acpi_init calling
Jaegeuk Kim jaegeuk@kernel.org f2fs: fix potential corruption when moving a directory
Jaegeuk Kim jaegeuk@kernel.org f2fs: fix null pointer panic in tracepoint in __replace_atomic_write_block
Jaegeuk Kim jaegeuk@kernel.org f2fs: factor out victim_entry usage from general rb_tree use
Hans de Goede hdegoede@redhat.com drm/i915/dsi: Use unconditional msleep() instead of intel_dsi_msleep()
Johan Hovold johan+linaro@kernel.org drm/msm: fix workqueue leak on bind errors
Johan Hovold johan+linaro@kernel.org drm/msm: fix missing wq allocation error handling
Johan Hovold johan+linaro@kernel.org drm/msm: fix vram leak on bind errors
Johan Hovold johan+linaro@kernel.org drm/msm: fix drm device leak on bind errors
Johan Hovold johan+linaro@kernel.org drm/msm: fix NULL-deref on irq uninstall
Johan Hovold johan+linaro@kernel.org drm/msm: fix NULL-deref on snapshot tear down
Chaitanya Kumar Borah chaitanya.kumar.borah@intel.com drm/i915/color: Fix typo for Plane CSC indexes
Francesco Dolcini francesco.dolcini@toradex.com drm/bridge: lt8912b: Fix DSI Video Mode
Johan Hovold johan+linaro@kernel.org drm/msm/adreno: fix runtime PM imbalance at gpu load
Zev Weiss zev@bewilderbeest.net ARM: dts: aspeed: romed8hm3: Fix GPIO polarity of system-fault LED
Krzysztof Kozlowski krzysztof.kozlowski@linaro.org ARM: dts: s5pv210: correct MIPI CSIS clock name
Krzysztof Kozlowski krzysztof.kozlowski@linaro.org ARM: dts: exynos: fix WM8960 clock name in Itop Elite
Zev Weiss zev@bewilderbeest.net ARM: dts: aspeed: asrock: Correct firmware flash SPI clocks
Luis Chamberlain mcgrof@kernel.org sysctl: clarify register_sysctl_init() base directory order
Mathieu Poirier mathieu.poirier@linaro.org remoteproc: rcar_rproc: Call of_node_put() on iteration error
Mathieu Poirier mathieu.poirier@linaro.org remoteproc: imx_rproc: Call of_node_put() on iteration error
Mathieu Poirier mathieu.poirier@linaro.org remoteproc: imx_dsp_rproc: Call of_node_put() on iteration error
Mathieu Poirier mathieu.poirier@linaro.org remoteproc: st: Call of_node_put() on iteration error
Mathieu Poirier mathieu.poirier@linaro.org remoteproc: stm32: Call of_node_put() on iteration error
Luis Chamberlain mcgrof@kernel.org proc_sysctl: enhance documentation
Luis Chamberlain mcgrof@kernel.org proc_sysctl: update docs for __register_sysctl_table()
Randy Dunlap rdunlap@infradead.org sh: nmi_debug: fix return value of __setup handler
Randy Dunlap rdunlap@infradead.org sh: init: use OF_EARLY_FLATTREE for early init
Randy Dunlap rdunlap@infradead.org sh: mcount.S: fix build error when PRINTK is not enabled
Randy Dunlap rdunlap@infradead.org sh: math-emu: fix macro redefined warning
Steve French stfrench@microsoft.com SMB3: force unmount was failing to close deferred close files
Steve French stfrench@microsoft.com smb3: fix problem remounting a share after shutdown
Jan Kara jack@suse.cz inotify: Avoid reporting event with invalid wd
Mark Pearson mpearson-lenovo@squebb.ca platform/x86: thinkpad_acpi: Add profile force ability
Andrey Avdeev jamesstoun@gmail.com platform/x86: touchscreen_dmi: Add info for the Dexp Ursus KX210i
Fae faenkhauser@gmail.com platform/x86: hp-wmi: add micmute to hp_wmi_keymap struct
Mark Pearson mpearson-lenovo@squebb.ca platform/x86: thinkpad_acpi: Fix platform profiles on T490
Hans de Goede hdegoede@redhat.com platform/x86: touchscreen_dmi: Add upside-down quirk for GDIX1002 ts on the Juno Tablet
Srinivas Pandruvada srinivas.pandruvada@linux.intel.com platform/x86/intel-uncore-freq: Return error on write frequency
Steve French stfrench@microsoft.com cifs: release leases for deferred close handles when freezing
Pawel Witek pawel.ireneusz.witek@gmail.com cifs: fix pcchunk length type in smb2_copychunk_range
Filipe Manana fdmanana@suse.com btrfs: fix backref walking not returning all inode refs
Naohiro Aota Naohiro.Aota@wdc.com btrfs: zoned: fix full zone super block reading on ZNS
Naohiro Aota Naohiro.Aota@wdc.com btrfs: zoned: zone finish data relocation BG with last IO
Filipe Manana fdmanana@suse.com btrfs: fix space cache inconsistency after error loading it from disk
Anastasia Belova abelova@astralinux.ru btrfs: print-tree: parent bytenr must be aligned to sector size
Qu Wenruo wqu@suse.com btrfs: make clear_cache mount option to rebuild FST without disabling it
Christoph Hellwig hch@lst.de btrfs: zero the buffer before marking it dirty in btrfs_redirty_list_add
Josef Bacik josef@toxicpanda.com btrfs: don't free qgroup space unless specified
Boris Burkov boris@bur.io btrfs: fix encoded write i_size corruption with no-holes
xiaoshoukui xiaoshoukui@gmail.com btrfs: fix assertion of exclop condition when starting balance
Qu Wenruo wqu@suse.com btrfs: properly reject clear_cache and v1 cache for block-group-tree
Naohiro Aota naohiro.aota@wdc.com btrfs: zoned: fix wrong use of bitops API in btrfs_ensure_empty_zones
Filipe Manana fdmanana@suse.com btrfs: fix btrfs_prev_leaf() to not return the same key twice
Borislav Petkov (AMD) bp@alien8.de x86/retbleed: Fix return thunk alignment
Pablo Neira Ayuso pablo@netfilter.org netfilter: nf_tables: hit ENOENT on unexisting chain/flowtable update with missing attributes
Pablo Neira Ayuso pablo@netfilter.org netfilter: nf_tables: rename function to destroy hook list
Pablo Neira Ayuso pablo@netfilter.org netfilter: nf_tables: extended netlink error reporting for netdevice
Paulo Alcantara pc@manguebit.com cifs: avoid potential races when handling multiple dfs tcons
Shyam Prasad N sprasad@microsoft.com cifs: check only tcon status on tcon related functions
Johannes Berg johannes.berg@intel.com wifi: iwlwifi: mvm: fix potential memory leak
Namjae Jeon linkinjeon@kernel.org ksmbd: fix racy issue from smb2 close and logoff with multichannel
Namjae Jeon linkinjeon@kernel.org ksmbd: destroy expired sessions
Namjae Jeon linkinjeon@kernel.org ksmbd: block asynchronous requests when making a delay on session setup
Namjae Jeon linkinjeon@kernel.org ksmbd: fix racy issue from session setup and logoff
Dawei Li set_pte_at@outlook.com ksmbd: Implements sess->ksmbd_chann_list as xarray
Sean Christopherson seanjc@google.com KVM: x86: Preserve TDP MMU roots until they are explicitly invalidated
Sean Christopherson seanjc@google.com KVM: x86/mmu: Replace open coded usage of tdp_mmu_page with is_tdp_mmu_page()
David Matlack dmatlack@google.com KVM: x86/mmu: Move TDP MMU VM init/uninit behind tdp_mmu_enabled
David Matlack dmatlack@google.com KVM: x86/mmu: Change tdp_mmu to a read-only parameter
Dmitrii Dolgov 9erthalion6@gmail.com perf stat: Separate bperf from bpf_profiler
Yang Jihong yangjihong1@huawei.com perf tracepoint: Fix memory leak in is_valid_tracepoint()
Yang Jihong yangjihong1@huawei.com perf symbols: Fix return incorrect build_id size in elf_read_build_id()
Olivier Bacon olivierb89@gmail.com crypto: engine - fix crypto_queue backlog handling
Herbert Xu herbert@gondor.apana.org.au crypto: engine - Use crypto_request_complete
Herbert Xu herbert@gondor.apana.org.au crypto: api - Add scaffolding to change completion function signature
Christophe JAILLET christophe.jaillet@wanadoo.fr crypto: sun8i-ss - Fix a test in sun8i_ss_setup_ivs()
James Clark james.clark@arm.com perf cs-etm: Fix timeless decode mode detection
Markus Elfring Markus.Elfring@web.de perf map: Delete two variable initialisations before null pointer checks in sort__sym_from_cmp()
Arnaldo Carvalho de Melo acme@redhat.com perf pmu: zfree() expects a pointer to a pointer to zero it after freeing its contents
Kajol Jain kjain@linux.ibm.com perf vendor events power9: Remove UTF-8 characters from JSON files
Yang Jihong yangjihong1@huawei.com perf ftrace: Make system wide the default target for latency subcommand
Patrice Duroux patrice.duroux@gmail.com perf tests record_offcpu.sh: Fix redirection of stderr to stdin
Thomas Richter tmricht@linux.ibm.com perf vendor events s390: Remove UTF-8 characters from JSON file
Namhyung Kim namhyung@kernel.org perf hist: Improve srcfile sort key performance (really)
Adrian Hunter adrian.hunter@intel.com perf script: Fix Python support when no libtraceevent
Roman Lozko lozko.roma@gmail.com perf scripts intel-pt-events.py: Fix IPC output for Python 2
Ian Rogers irogers@google.com perf build: Support python/perf.so testing
Kan Liang kan.liang@linux.intel.com perf record: Fix "read LOST count failed" msg with sample read
Florian Fainelli f.fainelli@gmail.com net: bcmgenet: Remove phy_stop() from bcmgenet_netif_stop()
Shenwei Wang shenwei.wang@nxp.com net: fec: correct the counting of XDP sent frames
Wei Fang wei.fang@nxp.com net: enetc: check the index of the SFI rather than the handle
Wenliang Wang wangwenliang.1995@bytedance.com virtio_net: suppress cpu stall when free_unused_bufs
Michal Swiatkowski michal.swiatkowski@linux.intel.com ice: block LAN in case of VF to VF offload
Arınç ÜNAL arinc.unal@arinc9.com net: dsa: mt7530: fix network connectivity with multiple CPU ports
Daniel Golle daniel@makrotopia.org net: dsa: mt7530: split-off common parts from mt7531_setup
Arınç ÜNAL arinc.unal@arinc9.com net: dsa: mt7530: fix corrupt frames using trgmii on 40 MHz XTAL MT7621
Claudio Imbrenda imbrenda@linux.ibm.com KVM: s390: fix race in gmap_make_secure()
Claudio Imbrenda imbrenda@linux.ibm.com KVM: s390: pv: fix asynchronous teardown for small VMs
Ruliang Lin u202112092@hust.edu.cn ALSA: caiaq: input: Add error handling for unsupported input methods in `snd_usb_caiaq_input_init`
Chia-I Wu olvaffe@gmail.com drm/amdgpu: add a missing lock for AMDGPU_SCHED
Kuniyuki Iwashima kuniyu@amazon.com af_packet: Don't send zero-byte data in packet_sendmsg_spkt().
Shannon Nelson shannon.nelson@amd.com ionic: catch failure from devlink_alloc
Ido Schimmel idosch@nvidia.com ethtool: Fix uninitialized number of lanes
Shannon Nelson shannon.nelson@amd.com ionic: remove noise from ethtool rxnfc error msg
Subbaraya Sundeep sbhatta@marvell.com octeontx2-vf: Detach LF resources on probe cleanup
Subbaraya Sundeep sbhatta@marvell.com octeontx2-pf: Disable packet I/O for graceful exit
Ratheesh Kannoth rkannoth@marvell.com octeontx2-af: Skip PFs if not enabled
Ratheesh Kannoth rkannoth@marvell.com octeontx2-af: Fix issues with NPC field hash extract
Ratheesh Kannoth rkannoth@marvell.com octeontx2-af: Update/Fix NPC field hash extract feature
Suman Ghosh sumang@marvell.com octeontx2-af: Update correct mask to filter IPv4 fragments
Hariprasad Kelam hkelam@marvell.com octeontx2-af: Add validation for lmac type
Ratheesh Kannoth rkannoth@marvell.com octeontx2-pf: Increase the size of dmac filter flows
Ratheesh Kannoth rkannoth@marvell.com octeontx2-af: Fix depth of cam and mem table.
Ratheesh Kannoth rkannoth@marvell.com octeontx2-af: Fix start and end bit for scan config
Geetha sowjanya gakula@marvell.com octeontx2-af: Secure APR table update with the lock
Jeremy Sowden jeremy@azazel.net selftests: netfilter: fix libmnl pkg-config usage
Radhakrishna Sripada radhakrishna.sripada@intel.com drm/i915/mtl: Add the missing CPU transcoder mask in intel_device_info
Felix Fietkau nbd@nbd.name net: ethernet: mtk_eth_soc: drop generic vlan rx offload, only use DSA untagging
Guo Ren guoren@kernel.org riscv: compat_syscall_table: Fixup compile warning
David Howells dhowells@redhat.com rxrpc: Fix timeout of a call that hasn't yet been granted a channel
David Howells dhowells@redhat.com rxrpc: Make it so that a waiting process can be aborted
David Howells dhowells@redhat.com rxrpc: Fix hard call timeout units
Andy Moreton andy.moreton@amd.com sfc: Fix module EEPROM reporting for QSFP modules
Hayes Wang hayeswang@realtek.com r8152: move setting r8153b_rx_agg_chg_indicate()
Hayes Wang hayeswang@realtek.com r8152: fix the poor throughput for 2.5G devices
Hayes Wang hayeswang@realtek.com r8152: fix flow control issue of RTL8156A
Victor Nogueira victor@mojatatu.com net/sched: act_mirred: Add carrier check
Akhil R akhilrajeev@nvidia.com i2c: tegra: Fix PEC support for SMBUS block read
Sia Jee Heng jeeheng.sia@starfivetech.com RISC-V: mm: Enable huge page support to kernel_page_present() function
Christophe JAILLET christophe.jaillet@wanadoo.fr watchdog: dw_wdt: Fix the error handling path of dw_wdt_drv_probe()
Tao Su tao1.su@linux.intel.com block: Skip destroyed blkg when restart in blkg_destroy_all()
Maxim Korotkov korotkov.maxim.s@gmail.com writeback: fix call of incorrect macro
Angelo Dureghello angelo.dureghello@timesys.com net: dsa: mv88e6xxx: add mv88e6321 rsvd2cpu
Antoine Tenart atenart@kernel.org net: ipv6: fix skb hash for some RST packets
Andrea Mayer andrea.mayer@uniroma2.it selftests: srv6: make srv6_end_dt46_l3vpn_test more robust
Cong Wang cong.wang@bytedance.com sit: update dev->needed_headroom in ipip6_tunnel_bind_dev()
Vlad Buslov vladbu@nvidia.com net/sched: cls_api: remove block_cb from driver_list before freeing
Eric Dumazet edumazet@google.com tcp: fix skb_copy_ubufs() vs BIG TCP
Cosmo Chou chou.cosmo@gmail.com net/ncsi: clear Tx enable mode when handling a Config required AEN
Subbaraya Sundeep sbhatta@marvell.com octeontx2-pf: mcs: Do not reset PN while updating secy
Subbaraya Sundeep sbhatta@marvell.com octeontx2-pf: mcs: Fix shared counters logic
Subbaraya Sundeep sbhatta@marvell.com octeontx2-pf: mcs: Clear stats before freeing resource
Subbaraya Sundeep sbhatta@marvell.com octeontx2-pf: mcs: Match macsec ethertype along with DMAC
Subbaraya Sundeep sbhatta@marvell.com octeontx2-pf: mcs: Fix NULL pointer dereferences
Geetha sowjanya gakula@marvell.com octeontx2-af: mcs: Fix MCS block interrupt
Geetha sowjanya gakula@marvell.com octeontx2-af: mcs: Config parser to skip 8B header
Subbaraya Sundeep sbhatta@marvell.com octeontx2-af: mcs: Write TCAM_DATA and TCAM_MASK registers at once
Geetha sowjanya gakula@marvell.com octeonxt2-af: mcs: Fix per port bypass config
John Hickey jjh@daedalian.us ixgbe: Fix panic during XDP_TX with > 64 CPUs
David Howells dhowells@redhat.com rxrpc: Fix potential data race in rxrpc_wait_to_be_connected()
Aurabindo Pillai aurabindo.pillai@amd.com drm/amd/display: Update bounding box values for DCN321
Aurabindo Pillai aurabindo.pillai@amd.com drm/amd/display: Do not clear GPINT register when releasing DMUB from reset
Cruise Hung Cruise.Hung@amd.com drm/amd/display: Reset OUTBOX0 r/w pointer on DMUB reset
Aurabindo Pillai aurabindo.pillai@amd.com drm/amd/display: Fixes for dcn32_clk_mgr implementation
Hersen Wu hersenxs.wu@amd.com drm/amd/display: Return error code on DSC atomic check failure
Rodrigo Siqueira Rodrigo.Siqueira@amd.com drm/amd/display: Add missing WA and MCLK validation
Zheng Wang zyytlz.wz@163.com scsi: qedi: Fix use after free bug in qedi_remove()
Hans de Goede hdegoede@redhat.com ASoC: Intel: soc-acpi-byt: Fix "WM510205" match no longer working
Bob Pearson rpearsonhpe@gmail.com RDMA/rxe: Extend dbg log messages to err and info
Bob Pearson rpearsonhpe@gmail.com RDMA/rxe: Change rxe_dbg to rxe_dbg_dev
Bob Pearson rpearsonhpe@gmail.com RDMA/rxe: Remove rxe_alloc()
Sean Christopherson seanjc@google.com KVM: x86/mmu: Refresh CR0.WP prior to checking for emulated permission faults
Mathias Krause minipli@grsecurity.net KVM: VMX: Make CR0.WP a guest owned bit
Mathias Krause minipli@grsecurity.net KVM: x86: Make use of kvm_read_cr*_bits() when testing bits
Mathias Krause minipli@grsecurity.net KVM: x86: Do not unload MMU roots when only toggling CR0.WP with TDP enabled
Paolo Bonzini pbonzini@redhat.com KVM: x86/mmu: Avoid indirect call for get_cr3
Ryan Lin tsung-hua.lin@amd.com drm/amd/display: Ext displays with dock can't recognized after resume
ZhangPeng zhangpeng362@huawei.com fs/ntfs3: Fix null-ptr-deref on inode->i_op in ntfs_lookup()
Takahiro Kuwano Takahiro.Kuwano@infineon.com mtd: spi-nor: spansion: Enable JFFS2 write buffer for Infineon s25hx SEMPER flash
Tanmay Shah tanmay.shah@amd.com mailbox: zynqmp: Fix counts of child nodes
Christophe JAILLET christophe.jaillet@wanadoo.fr mailbox: zynq: Switch to flexible array to simplify code
Manivannan Sadhasivam mani@kernel.org soc: qcom: llcc: Do not create EDAC platform device on SDM845
Manivannan Sadhasivam mani@kernel.org qcom: llcc/edac: Support polling mode for ECC handling
Takahiro Kuwano Takahiro.Kuwano@infineon.com mtd: spi-nor: spansion: Enable JFFS2 write buffer for Infineon s28hx SEMPER flash
Miquel Raynal miquel.raynal@bootlin.com mtd: spi-nor: Add a RWW flag
Krzysztof Kozlowski krzysztof.kozlowski@linaro.org ASoC: codecs: wcd938x: fix accessing regmap on unattached devices
Krzysztof Kozlowski krzysztof.kozlowski@linaro.org ASoC: codecs: constify static sdw_slave_ops struct
Jeremi Piotrowski jpiotrowski@linux.microsoft.com crypto: ccp - Clear PSP interrupt status register before calling handler
Wesley Cheng quic_wcheng@quicinc.com usb: dwc3: gadget: Execute gadget stop after halting the controller
Johan Hovold johan+linaro@kernel.org USB: dwc3: gadget: drop dead hibernation code
-------------
Diffstat:
Makefile | 4 +- arch/arm/boot/dts/aspeed-bmc-asrock-e3c246d4i.dts | 2 +- arch/arm/boot/dts/aspeed-bmc-asrock-romed8hm3.dts | 4 +- arch/arm/boot/dts/exynos4412-itop-elite.dts | 2 +- arch/arm/boot/dts/s5pv210.dtsi | 2 +- arch/riscv/kernel/Makefile | 1 + arch/riscv/mm/pageattr.c | 8 + arch/s390/kernel/uv.c | 32 +- arch/s390/kvm/pv.c | 5 + arch/s390/mm/gmap.c | 7 + arch/sh/Kconfig.debug | 2 +- arch/sh/kernel/head_32.S | 6 +- arch/sh/kernel/nmi_debug.c | 4 +- arch/sh/kernel/setup.c | 4 +- arch/sh/math-emu/sfp-util.h | 4 - arch/x86/include/asm/kvm_host.h | 13 +- arch/x86/kernel/amd_nb.c | 2 + arch/x86/kvm/kvm_cache_regs.h | 2 +- arch/x86/kvm/mmu.h | 32 +- arch/x86/kvm/mmu/mmu.c | 110 ++- arch/x86/kvm/mmu/paging_tmpl.h | 2 +- arch/x86/kvm/mmu/tdp_mmu.c | 130 ++- arch/x86/kvm/mmu/tdp_mmu.h | 7 +- arch/x86/kvm/pmu.c | 4 +- arch/x86/kvm/vmx/nested.c | 4 +- arch/x86/kvm/vmx/vmx.c | 6 +- arch/x86/kvm/vmx/vmx.h | 18 + arch/x86/kvm/x86.c | 12 + arch/x86/lib/clear_page_64.S | 2 +- arch/x86/lib/retpoline.S | 4 +- block/blk-cgroup.c | 3 + crypto/algapi.c | 3 + crypto/crypto_engine.c | 10 +- .../crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c | 2 +- drivers/crypto/ccp/psp-dev.c | 6 +- drivers/edac/qcom_edac.c | 52 +- drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 13 +- drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 6 +- drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c | 59 ++ drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h | 1 + drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c | 6 +- drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c | 36 + drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h | 3 + drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 3 +- drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c | 1 - drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c | 1 - drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 1 - drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c | 1 + drivers/gpu/drm/amd/amdgpu/mes_v10_1.c | 107 +- drivers/gpu/drm/amd/amdgpu/mes_v11_0.c | 98 +- drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c | 8 +- drivers/gpu/drm/amd/amdgpu/soc21.c | 2 +- drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 28 +- .../amd/display/amdgpu_dm/amdgpu_dm_mst_types.c | 1 + .../amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c | 5 + drivers/gpu/drm/amd/display/dc/core/dc_resource.c | 3 + drivers/gpu/drm/amd/display/dc/dc.h | 1 + .../drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c | 16 +- drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c | 8 +- drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.c | 13 +- .../gpu/drm/amd/display/dc/dcn314/dcn314_dccg.c | 23 + .../gpu/drm/amd/display/dc/dcn314/dcn314_hwseq.c | 10 + .../gpu/drm/amd/display/dc/dcn314/dcn314_hwseq.h | 2 + .../gpu/drm/amd/display/dc/dcn314/dcn314_init.c | 1 + .../drm/amd/display/dc/dcn314/dcn314_resource.c | 1 + drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c | 1 + .../gpu/drm/amd/display/dc/dcn32/dcn32_resource.c | 12 +- .../drm/amd/display/dc/dcn321/dcn321_resource.c | 10 +- .../gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c | 15 +- .../gpu/drm/amd/display/dc/dml/dcn30/dcn30_fpu.c | 18 +- .../gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c | 4 +- .../amd/display/dc/dml/dcn32/display_mode_vba_32.c | 5 +- .../amd/display/dc/dml/dcn32/display_mode_vba_32.h | 1 + .../gpu/drm/amd/display/dc/dml/dcn321/dcn321_fpu.c | 24 +- drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h | 23 +- .../drm/amd/display/dc/inc/hw_sequencer_private.h | 4 + drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c | 3 +- drivers/gpu/drm/amd/pm/amdgpu_dpm.c | 25 +- drivers/gpu/drm/bridge/lontium-lt8912b.c | 1 - drivers/gpu/drm/i915/display/icl_dsi.c | 2 +- drivers/gpu/drm/i915/display/intel_dsi_vbt.c | 11 - 
drivers/gpu/drm/i915/display/intel_dsi_vbt.h | 1 - drivers/gpu/drm/i915/display/skl_scaler.c | 57 +- drivers/gpu/drm/i915/display/vlv_dsi.c | 22 +- drivers/gpu/drm/i915/gt/intel_gt_regs.h | 4 + drivers/gpu/drm/i915/gt/intel_workarounds.c | 33 + drivers/gpu/drm/i915/i915_pci.c | 2 + drivers/gpu/drm/i915/i915_reg.h | 4 +- drivers/gpu/drm/msm/adreno/adreno_device.c | 16 +- drivers/gpu/drm/msm/msm_drv.c | 52 +- drivers/gpu/drm/panel/panel-orisetech-otm8009a.c | 2 +- drivers/hid/wacom_wac.c | 38 +- drivers/hid/wacom_wac.h | 1 + drivers/i2c/busses/i2c-tegra.c | 40 +- drivers/infiniband/sw/rxe/rxe.c | 8 +- drivers/infiniband/sw/rxe/rxe.h | 45 +- drivers/infiniband/sw/rxe/rxe_cq.c | 6 +- drivers/infiniband/sw/rxe/rxe_icrc.c | 4 +- drivers/infiniband/sw/rxe/rxe_mmap.c | 6 +- drivers/infiniband/sw/rxe/rxe_mr.c | 2 +- drivers/infiniband/sw/rxe/rxe_net.c | 4 +- drivers/infiniband/sw/rxe/rxe_pool.c | 46 - drivers/infiniband/sw/rxe/rxe_pool.h | 3 - drivers/infiniband/sw/rxe/rxe_qp.c | 16 +- drivers/infiniband/sw/rxe/rxe_srq.c | 6 +- drivers/infiniband/sw/rxe/rxe_verbs.c | 61 +- drivers/irqchip/irq-loongson-eiointc.c | 32 +- drivers/irqchip/irq-loongson-pch-pic.c | 6 +- drivers/mailbox/zynqmp-ipi-mailbox.c | 13 +- drivers/mtd/spi-nor/core.c | 6 + drivers/mtd/spi-nor/core.h | 4 + drivers/mtd/spi-nor/debugfs.c | 2 + drivers/mtd/spi-nor/spansion.c | 20 +- drivers/net/dsa/mt7530.c | 113 ++- drivers/net/dsa/mv88e6xxx/chip.c | 1 + drivers/net/ethernet/broadcom/genet/bcmgenet.c | 1 - drivers/net/ethernet/freescale/enetc/enetc_qos.c | 2 +- drivers/net/ethernet/freescale/fec_main.c | 13 +- drivers/net/ethernet/intel/ice/ice_tc_lib.c | 3 +- drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c | 3 - drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 6 +- drivers/net/ethernet/marvell/octeontx2/af/cgx.c | 8 + drivers/net/ethernet/marvell/octeontx2/af/mbox.c | 5 +- drivers/net/ethernet/marvell/octeontx2/af/mbox.h | 19 +- drivers/net/ethernet/marvell/octeontx2/af/mcs.c | 110 +-- drivers/net/ethernet/marvell/octeontx2/af/mcs.h | 26 +- .../ethernet/marvell/octeontx2/af/mcs_cnf10kb.c | 63 ++ .../net/ethernet/marvell/octeontx2/af/mcs_reg.h | 6 +- .../net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c | 37 + drivers/net/ethernet/marvell/octeontx2/af/rvu.c | 49 +- drivers/net/ethernet/marvell/octeontx2/af/rvu.h | 1 + .../net/ethernet/marvell/octeontx2/af/rvu_cgx.c | 2 + .../net/ethernet/marvell/octeontx2/af/rvu_cn10k.c | 13 +- .../ethernet/marvell/octeontx2/af/rvu_debugfs.c | 5 +- .../net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c | 26 +- .../net/ethernet/marvell/octeontx2/af/rvu_npc_fs.h | 4 + .../ethernet/marvell/octeontx2/af/rvu_npc_hash.c | 123 ++- .../ethernet/marvell/octeontx2/af/rvu_npc_hash.h | 10 +- .../ethernet/marvell/octeontx2/nic/cn10k_macsec.c | 48 +- .../ethernet/marvell/octeontx2/nic/otx2_common.h | 6 +- .../net/ethernet/marvell/octeontx2/nic/otx2_pf.c | 14 +- .../net/ethernet/marvell/octeontx2/nic/otx2_tc.c | 2 +- .../net/ethernet/marvell/octeontx2/nic/otx2_vf.c | 2 +- drivers/net/ethernet/mediatek/mtk_eth_soc.c | 106 +- drivers/net/ethernet/mediatek/mtk_eth_soc.h | 1 - .../net/ethernet/pensando/ionic/ionic_devlink.c | 2 + .../net/ethernet/pensando/ionic/ionic_ethtool.c | 2 +- drivers/net/ethernet/sfc/mcdi_port_common.c | 11 +- drivers/net/usb/r8152.c | 84 +- drivers/net/virtio_net.c | 2 + drivers/net/wireless/intel/iwlwifi/mvm/d3.c | 1 + drivers/platform/x86/hp/hp-wmi.c | 1 + .../uncore-frequency/uncore-frequency-common.c | 6 +- drivers/platform/x86/thinkpad_acpi.c | 24 +- drivers/platform/x86/touchscreen_dmi.c | 41 + 
drivers/remoteproc/imx_dsp_rproc.c | 12 +- drivers/remoteproc/imx_rproc.c | 7 +- drivers/remoteproc/rcar_rproc.c | 9 +- drivers/remoteproc/st_remoteproc.c | 5 +- drivers/remoteproc/stm32_rproc.c | 6 +- drivers/scsi/qedi/qedi_main.c | 3 + drivers/soc/qcom/llcc-qcom.c | 15 +- drivers/spi/spi-fsl-cpm.c | 23 + drivers/spi/spi-fsl-spi.c | 49 +- drivers/usb/dwc3/gadget.c | 59 +- drivers/watchdog/dw_wdt.c | 7 +- fs/afs/afs.h | 4 +- fs/afs/internal.h | 2 +- fs/afs/rxrpc.c | 8 +- fs/btrfs/backref.c | 19 +- fs/btrfs/backref.h | 6 + fs/btrfs/block-rsv.c | 3 +- fs/btrfs/ctree.c | 32 +- fs/btrfs/disk-io.c | 25 +- fs/btrfs/file-item.c | 5 +- fs/btrfs/free-space-cache.c | 7 +- fs/btrfs/free-space-tree.c | 50 +- fs/btrfs/free-space-tree.h | 3 +- fs/btrfs/inode.c | 3 + fs/btrfs/ioctl.c | 4 +- fs/btrfs/print-tree.c | 6 +- fs/btrfs/relocation.c | 2 +- fs/btrfs/super.c | 6 +- fs/btrfs/zoned.c | 17 +- fs/cifs/cifsfs.c | 16 + fs/cifs/cifsglob.h | 2 +- fs/cifs/connect.c | 24 +- fs/cifs/dfs.c | 14 +- fs/cifs/dfs_cache.c | 137 ++- fs/cifs/dfs_cache.h | 9 + fs/cifs/file.c | 8 +- fs/cifs/smb2ops.c | 2 +- fs/ext4/balloc.c | 25 + fs/ext4/ext4.h | 24 + fs/ext4/extents_status.c | 30 +- fs/ext4/hash.c | 6 +- fs/ext4/inline.c | 17 +- fs/ext4/inode.c | 20 +- fs/ext4/mballoc.c | 6 +- fs/ext4/migrate.c | 11 +- fs/ext4/mmp.c | 30 +- fs/ext4/namei.c | 53 +- fs/ext4/super.c | 19 +- fs/ext4/xattr.c | 5 +- fs/f2fs/extent_cache.c | 36 +- fs/f2fs/f2fs.h | 15 +- fs/f2fs/gc.c | 139 ++- fs/f2fs/gc.h | 14 +- fs/f2fs/namei.c | 16 +- fs/f2fs/segment.c | 6 +- fs/fs-writeback.c | 2 +- fs/ksmbd/connection.c | 68 +- fs/ksmbd/connection.h | 58 +- fs/ksmbd/mgmt/tree_connect.c | 3 + fs/ksmbd/mgmt/user_session.c | 138 +-- fs/ksmbd/mgmt/user_session.h | 5 +- fs/ksmbd/server.c | 3 +- fs/ksmbd/smb2pdu.c | 122 +-- fs/ksmbd/smb2pdu.h | 2 + fs/ksmbd/transport_tcp.c | 2 +- fs/notify/inotify/inotify_fsnotify.c | 11 +- fs/ntfs3/bitmap.c | 3 +- fs/ntfs3/frecord.c | 2 +- fs/ntfs3/fsntfs.c | 6 +- fs/ntfs3/namei.c | 10 + fs/ntfs3/ntfs.h | 3 - fs/proc/proc_sysctl.c | 42 +- include/crypto/algapi.h | 7 + include/drm/display/drm_dp.h | 10 +- include/drm/display/drm_dp_helper.h | 5 +- include/linux/crypto.h | 6 + include/linux/pci_ids.h | 1 + include/net/af_rxrpc.h | 21 +- kernel/locking/rwsem.c | 8 +- net/core/skbuff.c | 20 +- net/ethtool/ioctl.c | 2 +- net/ipv6/sit.c | 8 +- net/ipv6/tcp_ipv6.c | 2 +- net/ncsi/ncsi-aen.c | 1 + net/netfilter/nf_tables_api.c | 69 +- net/packet/af_packet.c | 2 +- net/rxrpc/af_rxrpc.c | 3 + net/rxrpc/ar-internal.h | 1 + net/rxrpc/call_object.c | 9 +- net/rxrpc/sendmsg.c | 22 +- net/sched/act_mirred.c | 2 +- net/sched/cls_api.c | 1 + sound/soc/codecs/rt1316-sdw.c | 2 +- sound/soc/codecs/rt1318-sdw.c | 2 +- sound/soc/codecs/rt711-sdca-sdw.c | 2 +- sound/soc/codecs/rt715-sdca-sdw.c | 2 +- sound/soc/codecs/wcd938x-sdw.c | 1039 +++++++++++++++++++- sound/soc/codecs/wcd938x.c | 1003 +------------------ sound/soc/codecs/wcd938x.h | 1 + sound/soc/codecs/wsa881x.c | 2 +- sound/soc/codecs/wsa883x.c | 2 +- sound/soc/intel/common/soc-acpi-intel-byt-match.c | 2 +- sound/usb/caiaq/input.c | 1 + tools/perf/Build | 2 +- tools/perf/Makefile.perf | 7 +- tools/perf/builtin-ftrace.c | 6 +- tools/perf/builtin-record.c | 2 +- tools/perf/builtin-script.c | 2 +- tools/perf/builtin-stat.c | 4 +- .../perf/pmu-events/arch/powerpc/power9/other.json | 4 +- .../pmu-events/arch/powerpc/power9/pipeline.json | 2 +- .../perf/pmu-events/arch/s390/cf_z16/extended.json | 10 +- tools/perf/scripts/Build | 4 +- tools/perf/scripts/python/Perf-Trace-Util/Build | 2 +- 
.../perf/scripts/python/Perf-Trace-Util/Context.c | 4 + tools/perf/scripts/python/intel-pt-events.py | 2 +- tools/perf/tests/make | 5 +- tools/perf/tests/shell/record_offcpu.sh | 2 +- tools/perf/util/Build | 2 +- tools/perf/util/cs-etm.c | 34 +- tools/perf/util/evsel.h | 5 + tools/perf/util/pmu.c | 2 +- tools/perf/util/scripting-engines/Build | 2 +- .../util/scripting-engines/trace-event-python.c | 75 +- tools/perf/util/sort.c | 10 +- tools/perf/util/symbol-elf.c | 2 +- tools/perf/util/trace-event-scripting.c | 9 +- tools/perf/util/tracepoint.c | 1 + .../selftests/net/srv6_end_dt46_l3vpn_test.sh | 10 +- tools/testing/selftests/netfilter/Makefile | 7 +- 285 files changed, 3973 insertions(+), 2755 deletions(-)
From: Johan Hovold johan+linaro@kernel.org
[ Upstream commit bdb19d01026a5cccfa437be8adcf2df472c5889e ]
The hibernation code is broken, has never been enabled in mainline, and should thus be dropped.
Remove the hibernation bits from the gadget code, which effectively reverts commits e1dadd3b0f27 ("usb: dwc3: workaround: bogus hibernation events") and 7b2a0368bbc9 ("usb: dwc3: gadget: set KEEP_CONNECT in case of hibernation") except for the spurious interrupt warning.
Acked-by: Thinh Nguyen Thinh.Nguyen@synopsys.com
Signed-off-by: Johan Hovold johan+linaro@kernel.org
Link: https://lore.kernel.org/r/20230404072524.19014-5-johan+linaro@kernel.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
Stable-dep-of: 39674be56fba ("usb: dwc3: gadget: Execute gadget stop after halting the controller")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/usb/dwc3/gadget.c | 46 +++++----------------------------------
 1 file changed, 6 insertions(+), 40 deletions(-)
diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c index 3faac3244c7db..a995e3f4df37f 100644 --- a/drivers/usb/dwc3/gadget.c +++ b/drivers/usb/dwc3/gadget.c @@ -2478,7 +2478,7 @@ static void __dwc3_gadget_set_speed(struct dwc3 *dwc) dwc3_writel(dwc->regs, DWC3_DCFG, reg); }
-static int dwc3_gadget_run_stop(struct dwc3 *dwc, int is_on, int suspend) +static int dwc3_gadget_run_stop(struct dwc3 *dwc, int is_on) { u32 reg; u32 timeout = 2000; @@ -2497,17 +2497,11 @@ static int dwc3_gadget_run_stop(struct dwc3 *dwc, int is_on, int suspend) reg &= ~DWC3_DCTL_KEEP_CONNECT; reg |= DWC3_DCTL_RUN_STOP;
- if (dwc->has_hibernation) - reg |= DWC3_DCTL_KEEP_CONNECT; - __dwc3_gadget_set_speed(dwc); dwc->pullups_connected = true; } else { reg &= ~DWC3_DCTL_RUN_STOP;
- if (dwc->has_hibernation && !suspend) - reg &= ~DWC3_DCTL_KEEP_CONNECT; - dwc->pullups_connected = false; }
@@ -2589,7 +2583,7 @@ static int dwc3_gadget_soft_disconnect(struct dwc3 *dwc) * remaining event generated by the controller while polling for * DSTS.DEVCTLHLT. */ - return dwc3_gadget_run_stop(dwc, false, false); + return dwc3_gadget_run_stop(dwc, false); }
static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on) @@ -2643,7 +2637,7 @@ static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on)
dwc3_event_buffers_setup(dwc); __dwc3_gadget_start(dwc); - ret = dwc3_gadget_run_stop(dwc, true, false); + ret = dwc3_gadget_run_stop(dwc, true); }
pm_runtime_put(dwc->dev); @@ -4210,30 +4204,6 @@ static void dwc3_gadget_suspend_interrupt(struct dwc3 *dwc, dwc->link_state = next; }
-static void dwc3_gadget_hibernation_interrupt(struct dwc3 *dwc, - unsigned int evtinfo) -{ - unsigned int is_ss = evtinfo & BIT(4); - - /* - * WORKAROUND: DWC3 revision 2.20a with hibernation support - * have a known issue which can cause USB CV TD.9.23 to fail - * randomly. - * - * Because of this issue, core could generate bogus hibernation - * events which SW needs to ignore. - * - * Refers to: - * - * STAR#9000546576: Device Mode Hibernation: Issue in USB 2.0 - * Device Fallback from SuperSpeed - */ - if (is_ss ^ (dwc->speed == USB_SPEED_SUPER)) - return; - - /* enter hibernation here */ -} - static void dwc3_gadget_interrupt(struct dwc3 *dwc, const struct dwc3_event_devt *event) { @@ -4251,11 +4221,7 @@ static void dwc3_gadget_interrupt(struct dwc3 *dwc, dwc3_gadget_wakeup_interrupt(dwc); break; case DWC3_DEVICE_EVENT_HIBER_REQ: - if (dev_WARN_ONCE(dwc->dev, !dwc->has_hibernation, - "unexpected hibernation event\n")) - break; - - dwc3_gadget_hibernation_interrupt(dwc, event->event_info); + dev_WARN_ONCE(dwc->dev, true, "unexpected hibernation event\n"); break; case DWC3_DEVICE_EVENT_LINK_STATUS_CHANGE: dwc3_gadget_linksts_change_interrupt(dwc, event->event_info); @@ -4592,7 +4558,7 @@ int dwc3_gadget_suspend(struct dwc3 *dwc) if (!dwc->gadget_driver) return 0;
- dwc3_gadget_run_stop(dwc, false, false); + dwc3_gadget_run_stop(dwc, false);
spin_lock_irqsave(&dwc->lock, flags); dwc3_disconnect_gadget(dwc); @@ -4613,7 +4579,7 @@ int dwc3_gadget_resume(struct dwc3 *dwc) if (ret < 0) goto err0;
- ret = dwc3_gadget_run_stop(dwc, true, false); + ret = dwc3_gadget_run_stop(dwc, true); if (ret < 0) goto err1;
From: Wesley Cheng quic_wcheng@quicinc.com
[ Upstream commit 39674be56fba1cd3a03bf4617f523a35f85fd2c1 ]
Do not call gadget stop until the poll for controller halt has completed. DEVTEN is cleared as part of gadget stop, so calling it earlier defeats the intention of letting ep0 events continue while waiting for the controller to halt.
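To make the resulting ordering easier to follow, here is a condensed sketch of dwc3_gadget_soft_disconnect() after this change. It uses only the helpers visible in the diff below; unrelated steps and error handling are trimmed, so treat it as illustrative rather than the full function:

/*
 * Condensed sketch: halt the controller first, then stop the gadget.
 */
static int dwc3_gadget_soft_disconnect(struct dwc3 *dwc)
{
	unsigned long flags;
	int ret;

	spin_lock_irqsave(&dwc->lock, flags);
	dwc3_stop_active_transfers(dwc);	/* DEVTEN is still programmed */
	spin_unlock_irqrestore(&dwc->lock, flags);

	/* Poll DSTS.DEVCTLHLT; ep0 events can still be handled meanwhile. */
	ret = dwc3_gadget_run_stop(dwc, false);

	/* Only now clear DEVTEN and tear the gadget down. */
	spin_lock_irqsave(&dwc->lock, flags);
	__dwc3_gadget_stop(dwc);
	spin_unlock_irqrestore(&dwc->lock, flags);

	return ret;
}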
Fixes: c96683798e27 ("usb: dwc3: ep0: Don't prepare beyond Setup stage")
Cc: stable@vger.kernel.org
Acked-by: Thinh Nguyen Thinh.Nguyen@synopsys.com
Signed-off-by: Wesley Cheng quic_wcheng@quicinc.com
Link: https://lore.kernel.org/r/20230420212759.29429-2-quic_wcheng@quicinc.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/usb/dwc3/gadget.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c index a995e3f4df37f..e63700937ba8c 100644 --- a/drivers/usb/dwc3/gadget.c +++ b/drivers/usb/dwc3/gadget.c @@ -2546,7 +2546,6 @@ static int dwc3_gadget_soft_disconnect(struct dwc3 *dwc) * bit. */ dwc3_stop_active_transfers(dwc); - __dwc3_gadget_stop(dwc); spin_unlock_irqrestore(&dwc->lock, flags);
/* @@ -2583,7 +2582,19 @@ static int dwc3_gadget_soft_disconnect(struct dwc3 *dwc) * remaining event generated by the controller while polling for * DSTS.DEVCTLHLT. */ - return dwc3_gadget_run_stop(dwc, false); + ret = dwc3_gadget_run_stop(dwc, false); + + /* + * Stop the gadget after controller is halted, so that if needed, the + * events to update EP0 state can still occur while the run/stop + * routine polls for the halted state. DEVTEN is cleared as part of + * gadget stop. + */ + spin_lock_irqsave(&dwc->lock, flags); + __dwc3_gadget_stop(dwc); + spin_unlock_irqrestore(&dwc->lock, flags); + + return ret; }
static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on)
From: Jeremi Piotrowski jpiotrowski@linux.microsoft.com
[ Upstream commit 45121ad4a1750ca47ce3f32bd434bdb0cdbf0043 ]
The PSP IRQ is edge-triggered (MSI or MSI-X) in all cases supported by the psp module, so clear the interrupt status register early in the handler to prevent missed interrupts. sev_irq_handler() calls wake_up() on a wait queue, which can result in a new command being submitted from a different CPU. This then races with the clearing of the interrupt status register and can result in missed interrupts. A missed interrupt leaves a command waiting until it times out, at which point the psp is declared dead.
This is unlikely on bare metal, but has been observed when running virtualized. In the cases where it is observed, sev->cmdresp_reg has PSP_CMDRESP_RESP set, which indicates that the command was processed correctly but no interrupt was asserted.
The full sequence of events looks like this:
CPU 1: submits SEV cmd #1
CPU 1: calls wait_event_timeout()
CPU 0: enters psp_irq_handler()
CPU 0: calls sev_handler()->wake_up()
CPU 1: wakes up; finishes processing cmd #1
CPU 1: submits SEV cmd #2
CPU 1: calls wait_event_timeout()
PSP: finishes processing cmd #2; interrupt status is still set; no interrupt
CPU 0: clears intsts
CPU 0: exits psp_irq_handler()
CPU 1: wait_event_timeout() times out; psp_dead=true
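In code, the ordering the fix enforces can be sketched as follows (a minimal version of psp_irq_handler() using only the names visible in the diff below; the TEE dispatch is trimmed and the local declarations are assumed):

static irqreturn_t psp_irq_handler(int irq, void *data)
{
	struct psp_device *psp = data;
	unsigned int status;

	/* Read the interrupt status: */
	status = ioread32(psp->io_regs + psp->vdata->intsts_reg);

	/*
	 * Ack before dispatching: a waiter woken below may submit the
	 * next command from another CPU, and its completion can then
	 * latch a fresh edge instead of being lost to a late clear.
	 */
	iowrite32(status, psp->io_regs + psp->vdata->intsts_reg);

	/* invoke subdevice interrupt handlers */
	if (status && psp->sev_irq_handler)
		psp->sev_irq_handler(irq, psp->sev_irq_data, status);

	return IRQ_HANDLED;
}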
Fixes: 200664d5237f ("crypto: ccp: Add Secure Encrypted Virtualization (SEV) command support")
Cc: stable@vger.kernel.org
Signed-off-by: Jeremi Piotrowski jpiotrowski@linux.microsoft.com
Acked-by: Tom Lendacky thomas.lendacky@amd.com
Signed-off-by: Herbert Xu herbert@gondor.apana.org.au
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/crypto/ccp/psp-dev.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/crypto/ccp/psp-dev.c b/drivers/crypto/ccp/psp-dev.c index c9c741ac84421..949a3fa0b94a9 100644 --- a/drivers/crypto/ccp/psp-dev.c +++ b/drivers/crypto/ccp/psp-dev.c @@ -42,6 +42,9 @@ static irqreturn_t psp_irq_handler(int irq, void *data) /* Read the interrupt status: */ status = ioread32(psp->io_regs + psp->vdata->intsts_reg);
+ /* Clear the interrupt status by writing the same value we read. */ + iowrite32(status, psp->io_regs + psp->vdata->intsts_reg); + /* invoke subdevice interrupt handlers */ if (status) { if (psp->sev_irq_handler) @@ -51,9 +54,6 @@ static irqreturn_t psp_irq_handler(int irq, void *data) psp->tee_irq_handler(irq, psp->tee_irq_data, status); }
- /* Clear the interrupt status by writing the same value we read. */ - iowrite32(status, psp->io_regs + psp->vdata->intsts_reg); - return IRQ_HANDLED; }
From: Krzysztof Kozlowski krzysztof.kozlowski@linaro.org
[ Upstream commit 65b7b869da9bd3bd0b9fa60e6fe557bfbc0a75e8 ]
The struct sdw_slave_ops is not modified and sdw_driver takes a pointer to const, so make it const for code safety.
Signed-off-by: Krzysztof Kozlowski krzysztof.kozlowski@linaro.org
Link: https://lore.kernel.org/r/20230124163953.345949-1-krzysztof.kozlowski@linaro...
Signed-off-by: Mark Brown broonie@kernel.org
Stable-dep-of: 84822215acd1 ("ASoC: codecs: wcd938x: fix accessing regmap on unattached devices")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 sound/soc/codecs/rt1316-sdw.c | 2 +-
 sound/soc/codecs/rt1318-sdw.c | 2 +-
 sound/soc/codecs/rt711-sdca-sdw.c | 2 +-
 sound/soc/codecs/rt715-sdca-sdw.c | 2 +-
 sound/soc/codecs/wcd938x-sdw.c | 2 +-
 sound/soc/codecs/wsa881x.c | 2 +-
 sound/soc/codecs/wsa883x.c | 2 +-
 7 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/sound/soc/codecs/rt1316-sdw.c b/sound/soc/codecs/rt1316-sdw.c index e6294cc7a9954..45a3eff31915b 100644 --- a/sound/soc/codecs/rt1316-sdw.c +++ b/sound/soc/codecs/rt1316-sdw.c @@ -584,7 +584,7 @@ static int rt1316_sdw_pcm_hw_free(struct snd_pcm_substream *substream, * slave_ops: callbacks for get_clock_stop_mode, clock_stop and * port_prep are not defined for now */ -static struct sdw_slave_ops rt1316_slave_ops = { +static const struct sdw_slave_ops rt1316_slave_ops = { .read_prop = rt1316_read_prop, .update_status = rt1316_update_status, }; diff --git a/sound/soc/codecs/rt1318-sdw.c b/sound/soc/codecs/rt1318-sdw.c index f85f5ab2c6d04..c6ec86e97a6e7 100644 --- a/sound/soc/codecs/rt1318-sdw.c +++ b/sound/soc/codecs/rt1318-sdw.c @@ -697,7 +697,7 @@ static int rt1318_sdw_pcm_hw_free(struct snd_pcm_substream *substream, * slave_ops: callbacks for get_clock_stop_mode, clock_stop and * port_prep are not defined for now */ -static struct sdw_slave_ops rt1318_slave_ops = { +static const struct sdw_slave_ops rt1318_slave_ops = { .read_prop = rt1318_read_prop, .update_status = rt1318_update_status, }; diff --git a/sound/soc/codecs/rt711-sdca-sdw.c b/sound/soc/codecs/rt711-sdca-sdw.c index 88a8392a58edb..e23cec4c457de 100644 --- a/sound/soc/codecs/rt711-sdca-sdw.c +++ b/sound/soc/codecs/rt711-sdca-sdw.c @@ -338,7 +338,7 @@ static int rt711_sdca_interrupt_callback(struct sdw_slave *slave, return ret; }
-static struct sdw_slave_ops rt711_sdca_slave_ops = { +static const struct sdw_slave_ops rt711_sdca_slave_ops = { .read_prop = rt711_sdca_read_prop, .interrupt_callback = rt711_sdca_interrupt_callback, .update_status = rt711_sdca_update_status, diff --git a/sound/soc/codecs/rt715-sdca-sdw.c b/sound/soc/codecs/rt715-sdca-sdw.c index c54ecf3e69879..38a82e4e2f952 100644 --- a/sound/soc/codecs/rt715-sdca-sdw.c +++ b/sound/soc/codecs/rt715-sdca-sdw.c @@ -172,7 +172,7 @@ static int rt715_sdca_read_prop(struct sdw_slave *slave) return 0; }
-static struct sdw_slave_ops rt715_sdca_slave_ops = { +static const struct sdw_slave_ops rt715_sdca_slave_ops = { .read_prop = rt715_sdca_read_prop, .update_status = rt715_sdca_update_status, }; diff --git a/sound/soc/codecs/wcd938x-sdw.c b/sound/soc/codecs/wcd938x-sdw.c index 1bf3c06a2b622..33d1b5ffeaeba 100644 --- a/sound/soc/codecs/wcd938x-sdw.c +++ b/sound/soc/codecs/wcd938x-sdw.c @@ -191,7 +191,7 @@ static int wcd9380_interrupt_callback(struct sdw_slave *slave, return IRQ_HANDLED; }
-static struct sdw_slave_ops wcd9380_slave_ops = { +static const struct sdw_slave_ops wcd9380_slave_ops = { .update_status = wcd9380_update_status, .interrupt_callback = wcd9380_interrupt_callback, .bus_config = wcd9380_bus_config, diff --git a/sound/soc/codecs/wsa881x.c b/sound/soc/codecs/wsa881x.c index 6c8b1db649b89..046843b57b038 100644 --- a/sound/soc/codecs/wsa881x.c +++ b/sound/soc/codecs/wsa881x.c @@ -1101,7 +1101,7 @@ static int wsa881x_bus_config(struct sdw_slave *slave, return 0; }
-static struct sdw_slave_ops wsa881x_slave_ops = { +static const struct sdw_slave_ops wsa881x_slave_ops = { .update_status = wsa881x_update_status, .bus_config = wsa881x_bus_config, .port_prep = wsa881x_port_prep, diff --git a/sound/soc/codecs/wsa883x.c b/sound/soc/codecs/wsa883x.c index 58fdb4e9fd978..693e988f30c0f 100644 --- a/sound/soc/codecs/wsa883x.c +++ b/sound/soc/codecs/wsa883x.c @@ -1073,7 +1073,7 @@ static int wsa883x_port_prep(struct sdw_slave *slave, return 0; }
-static struct sdw_slave_ops wsa883x_slave_ops = { +static const struct sdw_slave_ops wsa883x_slave_ops = { .update_status = wsa883x_update_status, .port_prep = wsa883x_port_prep, };
From: Krzysztof Kozlowski krzysztof.kozlowski@linaro.org
[ Upstream commit 84822215acd15bd86a7759a835271e63bba83a7b ]
The WCD938x comes with three devices handled by two Linux drivers:
1. RX Soundwire device (wcd938x-sdw.c driver),
2. TX Soundwire device, which is used to access the device via regmap (also wcd938x-sdw.c driver),
3. platform device (wcd938x.c driver) - the glue and component master, which holds most of the code and uses the TX Soundwire device regmap.

When the RX and TX Soundwire devices probe, the component master (platform device) bind tries to write the micbias configuration via the TX Soundwire regmap. This might happen before the TX Soundwire device enumerates, so the regmap access fails. On a Qualcomm SM8550 board with WCD9385:
qcom-soundwire 6d30000.soundwire-controller: Qualcomm Soundwire controller v2.0.0 Registered
wcd938x_codec audio-codec: bound sdw:0:0217:010d:00:4 (ops wcd938x_sdw_component_ops)
wcd938x_codec audio-codec: bound sdw:0:0217:010d:00:3 (ops wcd938x_sdw_component_ops)
qcom-soundwire 6ad0000.soundwire-controller: swrm_wait_for_wr_fifo_avail err write overflow
Fix the issue by:
1. Moving the regmap creation from the platform device to the TX Soundwire device. The regmap settings are moved as-is, with one difference: making the wcd938x_regmap_config const.
2. Using the regmap in cache-only mode until the actual TX Soundwire device enumerates, and then syncing the regmap cache.
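As a rough sketch of the resulting split: the update_status side matches the diff below, while the probe side (including the use of devm_regmap_init_sdw() and the private-data allocation) is illustrative only and not taken from the patch.

static int wcd9380_probe(struct sdw_slave *slave,
			 const struct sdw_device_id *id)
{
	struct wcd938x_sdw_priv *wcd;

	wcd = devm_kzalloc(&slave->dev, sizeof(*wcd), GFP_KERNEL);
	if (!wcd)
		return -ENOMEM;
	dev_set_drvdata(&slave->dev, wcd);

	/* Regmap now lives with the TX Soundwire device... */
	wcd->regmap = devm_regmap_init_sdw(slave, &wcd938x_regmap_config);
	if (IS_ERR(wcd->regmap))
		return PTR_ERR(wcd->regmap);

	/* ...but stays cache-only until the device actually attaches. */
	regcache_cache_only(wcd->regmap, true);
	return 0;
}

static int wcd9380_update_status(struct sdw_slave *slave,
				 enum sdw_slave_status status)
{
	struct wcd938x_sdw_priv *wcd = dev_get_drvdata(&slave->dev);

	if (wcd->regmap && (status == SDW_SLAVE_ATTACHED)) {
		/* Write out changes cached between probe and attach. */
		regcache_cache_only(wcd->regmap, false);
		return regcache_sync(wcd->regmap);
	}
	return 0;
}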
Cc: stable@vger.kernel.org # v3.14+
Signed-off-by: Krzysztof Kozlowski krzysztof.kozlowski@linaro.org
Reviewed-by: Pierre-Louis Bossart pierre-louis.bossart@linux.intel.com
Message-Id: 20230503144102.242240-1-krzysztof.kozlowski@linaro.org
Signed-off-by: Mark Brown broonie@kernel.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 sound/soc/codecs/wcd938x-sdw.c | 1037 +++++++++++++++++++++++++++++++-
 sound/soc/codecs/wcd938x.c | 1003 +-----------------------------
 sound/soc/codecs/wcd938x.h | 1 +
 3 files changed, 1030 insertions(+), 1011 deletions(-)
diff --git a/sound/soc/codecs/wcd938x-sdw.c b/sound/soc/codecs/wcd938x-sdw.c index 33d1b5ffeaeba..402286dfaea44 100644 --- a/sound/soc/codecs/wcd938x-sdw.c +++ b/sound/soc/codecs/wcd938x-sdw.c @@ -161,6 +161,14 @@ EXPORT_SYMBOL_GPL(wcd938x_sdw_set_sdw_stream); static int wcd9380_update_status(struct sdw_slave *slave, enum sdw_slave_status status) { + struct wcd938x_sdw_priv *wcd = dev_get_drvdata(&slave->dev); + + if (wcd->regmap && (status == SDW_SLAVE_ATTACHED)) { + /* Write out any cached changes that happened between probe and attach */ + regcache_cache_only(wcd->regmap, false); + return regcache_sync(wcd->regmap); + } + return 0; }
@@ -177,20 +185,1014 @@ static int wcd9380_interrupt_callback(struct sdw_slave *slave, { struct wcd938x_sdw_priv *wcd = dev_get_drvdata(&slave->dev); struct irq_domain *slave_irq = wcd->slave_irq; - struct regmap *regmap = dev_get_regmap(&slave->dev, NULL); u32 sts1, sts2, sts3;
do { handle_nested_irq(irq_find_mapping(slave_irq, 0)); - regmap_read(regmap, WCD938X_DIGITAL_INTR_STATUS_0, &sts1); - regmap_read(regmap, WCD938X_DIGITAL_INTR_STATUS_1, &sts2); - regmap_read(regmap, WCD938X_DIGITAL_INTR_STATUS_2, &sts3); + regmap_read(wcd->regmap, WCD938X_DIGITAL_INTR_STATUS_0, &sts1); + regmap_read(wcd->regmap, WCD938X_DIGITAL_INTR_STATUS_1, &sts2); + regmap_read(wcd->regmap, WCD938X_DIGITAL_INTR_STATUS_2, &sts3);
} while (sts1 || sts2 || sts3);
return IRQ_HANDLED; }
+static const struct reg_default wcd938x_defaults[] = { + {WCD938X_ANA_PAGE_REGISTER, 0x00}, + {WCD938X_ANA_BIAS, 0x00}, + {WCD938X_ANA_RX_SUPPLIES, 0x00}, + {WCD938X_ANA_HPH, 0x0C}, + {WCD938X_ANA_EAR, 0x00}, + {WCD938X_ANA_EAR_COMPANDER_CTL, 0x02}, + {WCD938X_ANA_TX_CH1, 0x20}, + {WCD938X_ANA_TX_CH2, 0x00}, + {WCD938X_ANA_TX_CH3, 0x20}, + {WCD938X_ANA_TX_CH4, 0x00}, + {WCD938X_ANA_MICB1_MICB2_DSP_EN_LOGIC, 0x00}, + {WCD938X_ANA_MICB3_DSP_EN_LOGIC, 0x00}, + {WCD938X_ANA_MBHC_MECH, 0x39}, + {WCD938X_ANA_MBHC_ELECT, 0x08}, + {WCD938X_ANA_MBHC_ZDET, 0x00}, + {WCD938X_ANA_MBHC_RESULT_1, 0x00}, + {WCD938X_ANA_MBHC_RESULT_2, 0x00}, + {WCD938X_ANA_MBHC_RESULT_3, 0x00}, + {WCD938X_ANA_MBHC_BTN0, 0x00}, + {WCD938X_ANA_MBHC_BTN1, 0x10}, + {WCD938X_ANA_MBHC_BTN2, 0x20}, + {WCD938X_ANA_MBHC_BTN3, 0x30}, + {WCD938X_ANA_MBHC_BTN4, 0x40}, + {WCD938X_ANA_MBHC_BTN5, 0x50}, + {WCD938X_ANA_MBHC_BTN6, 0x60}, + {WCD938X_ANA_MBHC_BTN7, 0x70}, + {WCD938X_ANA_MICB1, 0x10}, + {WCD938X_ANA_MICB2, 0x10}, + {WCD938X_ANA_MICB2_RAMP, 0x00}, + {WCD938X_ANA_MICB3, 0x10}, + {WCD938X_ANA_MICB4, 0x10}, + {WCD938X_BIAS_CTL, 0x2A}, + {WCD938X_BIAS_VBG_FINE_ADJ, 0x55}, + {WCD938X_LDOL_VDDCX_ADJUST, 0x01}, + {WCD938X_LDOL_DISABLE_LDOL, 0x00}, + {WCD938X_MBHC_CTL_CLK, 0x00}, + {WCD938X_MBHC_CTL_ANA, 0x00}, + {WCD938X_MBHC_CTL_SPARE_1, 0x00}, + {WCD938X_MBHC_CTL_SPARE_2, 0x00}, + {WCD938X_MBHC_CTL_BCS, 0x00}, + {WCD938X_MBHC_MOISTURE_DET_FSM_STATUS, 0x00}, + {WCD938X_MBHC_TEST_CTL, 0x00}, + {WCD938X_LDOH_MODE, 0x2B}, + {WCD938X_LDOH_BIAS, 0x68}, + {WCD938X_LDOH_STB_LOADS, 0x00}, + {WCD938X_LDOH_SLOWRAMP, 0x50}, + {WCD938X_MICB1_TEST_CTL_1, 0x1A}, + {WCD938X_MICB1_TEST_CTL_2, 0x00}, + {WCD938X_MICB1_TEST_CTL_3, 0xA4}, + {WCD938X_MICB2_TEST_CTL_1, 0x1A}, + {WCD938X_MICB2_TEST_CTL_2, 0x00}, + {WCD938X_MICB2_TEST_CTL_3, 0x24}, + {WCD938X_MICB3_TEST_CTL_1, 0x1A}, + {WCD938X_MICB3_TEST_CTL_2, 0x00}, + {WCD938X_MICB3_TEST_CTL_3, 0xA4}, + {WCD938X_MICB4_TEST_CTL_1, 0x1A}, + {WCD938X_MICB4_TEST_CTL_2, 0x00}, + {WCD938X_MICB4_TEST_CTL_3, 0xA4}, + {WCD938X_TX_COM_ADC_VCM, 0x39}, + {WCD938X_TX_COM_BIAS_ATEST, 0xE0}, + {WCD938X_TX_COM_SPARE1, 0x00}, + {WCD938X_TX_COM_SPARE2, 0x00}, + {WCD938X_TX_COM_TXFE_DIV_CTL, 0x22}, + {WCD938X_TX_COM_TXFE_DIV_START, 0x00}, + {WCD938X_TX_COM_SPARE3, 0x00}, + {WCD938X_TX_COM_SPARE4, 0x00}, + {WCD938X_TX_1_2_TEST_EN, 0xCC}, + {WCD938X_TX_1_2_ADC_IB, 0xE9}, + {WCD938X_TX_1_2_ATEST_REFCTL, 0x0A}, + {WCD938X_TX_1_2_TEST_CTL, 0x38}, + {WCD938X_TX_1_2_TEST_BLK_EN1, 0xFF}, + {WCD938X_TX_1_2_TXFE1_CLKDIV, 0x00}, + {WCD938X_TX_1_2_SAR2_ERR, 0x00}, + {WCD938X_TX_1_2_SAR1_ERR, 0x00}, + {WCD938X_TX_3_4_TEST_EN, 0xCC}, + {WCD938X_TX_3_4_ADC_IB, 0xE9}, + {WCD938X_TX_3_4_ATEST_REFCTL, 0x0A}, + {WCD938X_TX_3_4_TEST_CTL, 0x38}, + {WCD938X_TX_3_4_TEST_BLK_EN3, 0xFF}, + {WCD938X_TX_3_4_TXFE3_CLKDIV, 0x00}, + {WCD938X_TX_3_4_SAR4_ERR, 0x00}, + {WCD938X_TX_3_4_SAR3_ERR, 0x00}, + {WCD938X_TX_3_4_TEST_BLK_EN2, 0xFB}, + {WCD938X_TX_3_4_TXFE2_CLKDIV, 0x00}, + {WCD938X_TX_3_4_SPARE1, 0x00}, + {WCD938X_TX_3_4_TEST_BLK_EN4, 0xFB}, + {WCD938X_TX_3_4_TXFE4_CLKDIV, 0x00}, + {WCD938X_TX_3_4_SPARE2, 0x00}, + {WCD938X_CLASSH_MODE_1, 0x40}, + {WCD938X_CLASSH_MODE_2, 0x3A}, + {WCD938X_CLASSH_MODE_3, 0x00}, + {WCD938X_CLASSH_CTRL_VCL_1, 0x70}, + {WCD938X_CLASSH_CTRL_VCL_2, 0x82}, + {WCD938X_CLASSH_CTRL_CCL_1, 0x31}, + {WCD938X_CLASSH_CTRL_CCL_2, 0x80}, + {WCD938X_CLASSH_CTRL_CCL_3, 0x80}, + {WCD938X_CLASSH_CTRL_CCL_4, 0x51}, + {WCD938X_CLASSH_CTRL_CCL_5, 0x00}, + {WCD938X_CLASSH_BUCK_TMUX_A_D, 0x00}, + {WCD938X_CLASSH_BUCK_SW_DRV_CNTL, 0x77}, 
+ {WCD938X_CLASSH_SPARE, 0x00}, + {WCD938X_FLYBACK_EN, 0x4E}, + {WCD938X_FLYBACK_VNEG_CTRL_1, 0x0B}, + {WCD938X_FLYBACK_VNEG_CTRL_2, 0x45}, + {WCD938X_FLYBACK_VNEG_CTRL_3, 0x74}, + {WCD938X_FLYBACK_VNEG_CTRL_4, 0x7F}, + {WCD938X_FLYBACK_VNEG_CTRL_5, 0x83}, + {WCD938X_FLYBACK_VNEG_CTRL_6, 0x98}, + {WCD938X_FLYBACK_VNEG_CTRL_7, 0xA9}, + {WCD938X_FLYBACK_VNEG_CTRL_8, 0x68}, + {WCD938X_FLYBACK_VNEG_CTRL_9, 0x64}, + {WCD938X_FLYBACK_VNEGDAC_CTRL_1, 0xED}, + {WCD938X_FLYBACK_VNEGDAC_CTRL_2, 0xF0}, + {WCD938X_FLYBACK_VNEGDAC_CTRL_3, 0xA6}, + {WCD938X_FLYBACK_CTRL_1, 0x65}, + {WCD938X_FLYBACK_TEST_CTL, 0x00}, + {WCD938X_RX_AUX_SW_CTL, 0x00}, + {WCD938X_RX_PA_AUX_IN_CONN, 0x01}, + {WCD938X_RX_TIMER_DIV, 0x32}, + {WCD938X_RX_OCP_CTL, 0x1F}, + {WCD938X_RX_OCP_COUNT, 0x77}, + {WCD938X_RX_BIAS_EAR_DAC, 0xA0}, + {WCD938X_RX_BIAS_EAR_AMP, 0xAA}, + {WCD938X_RX_BIAS_HPH_LDO, 0xA9}, + {WCD938X_RX_BIAS_HPH_PA, 0xAA}, + {WCD938X_RX_BIAS_HPH_RDACBUFF_CNP2, 0x8A}, + {WCD938X_RX_BIAS_HPH_RDAC_LDO, 0x88}, + {WCD938X_RX_BIAS_HPH_CNP1, 0x82}, + {WCD938X_RX_BIAS_HPH_LOWPOWER, 0x82}, + {WCD938X_RX_BIAS_AUX_DAC, 0xA0}, + {WCD938X_RX_BIAS_AUX_AMP, 0xAA}, + {WCD938X_RX_BIAS_VNEGDAC_BLEEDER, 0x50}, + {WCD938X_RX_BIAS_MISC, 0x00}, + {WCD938X_RX_BIAS_BUCK_RST, 0x08}, + {WCD938X_RX_BIAS_BUCK_VREF_ERRAMP, 0x44}, + {WCD938X_RX_BIAS_FLYB_ERRAMP, 0x40}, + {WCD938X_RX_BIAS_FLYB_BUFF, 0xAA}, + {WCD938X_RX_BIAS_FLYB_MID_RST, 0x14}, + {WCD938X_HPH_L_STATUS, 0x04}, + {WCD938X_HPH_R_STATUS, 0x04}, + {WCD938X_HPH_CNP_EN, 0x80}, + {WCD938X_HPH_CNP_WG_CTL, 0x9A}, + {WCD938X_HPH_CNP_WG_TIME, 0x14}, + {WCD938X_HPH_OCP_CTL, 0x28}, + {WCD938X_HPH_AUTO_CHOP, 0x16}, + {WCD938X_HPH_CHOP_CTL, 0x83}, + {WCD938X_HPH_PA_CTL1, 0x46}, + {WCD938X_HPH_PA_CTL2, 0x50}, + {WCD938X_HPH_L_EN, 0x80}, + {WCD938X_HPH_L_TEST, 0xE0}, + {WCD938X_HPH_L_ATEST, 0x50}, + {WCD938X_HPH_R_EN, 0x80}, + {WCD938X_HPH_R_TEST, 0xE0}, + {WCD938X_HPH_R_ATEST, 0x54}, + {WCD938X_HPH_RDAC_CLK_CTL1, 0x99}, + {WCD938X_HPH_RDAC_CLK_CTL2, 0x9B}, + {WCD938X_HPH_RDAC_LDO_CTL, 0x33}, + {WCD938X_HPH_RDAC_CHOP_CLK_LP_CTL, 0x00}, + {WCD938X_HPH_REFBUFF_UHQA_CTL, 0x68}, + {WCD938X_HPH_REFBUFF_LP_CTL, 0x0E}, + {WCD938X_HPH_L_DAC_CTL, 0x20}, + {WCD938X_HPH_R_DAC_CTL, 0x20}, + {WCD938X_HPH_SURGE_HPHLR_SURGE_COMP_SEL, 0x55}, + {WCD938X_HPH_SURGE_HPHLR_SURGE_EN, 0x19}, + {WCD938X_HPH_SURGE_HPHLR_SURGE_MISC1, 0xA0}, + {WCD938X_HPH_SURGE_HPHLR_SURGE_STATUS, 0x00}, + {WCD938X_EAR_EAR_EN_REG, 0x22}, + {WCD938X_EAR_EAR_PA_CON, 0x44}, + {WCD938X_EAR_EAR_SP_CON, 0xDB}, + {WCD938X_EAR_EAR_DAC_CON, 0x80}, + {WCD938X_EAR_EAR_CNP_FSM_CON, 0xB2}, + {WCD938X_EAR_TEST_CTL, 0x00}, + {WCD938X_EAR_STATUS_REG_1, 0x00}, + {WCD938X_EAR_STATUS_REG_2, 0x08}, + {WCD938X_ANA_NEW_PAGE_REGISTER, 0x00}, + {WCD938X_HPH_NEW_ANA_HPH2, 0x00}, + {WCD938X_HPH_NEW_ANA_HPH3, 0x00}, + {WCD938X_SLEEP_CTL, 0x16}, + {WCD938X_SLEEP_WATCHDOG_CTL, 0x00}, + {WCD938X_MBHC_NEW_ELECT_REM_CLAMP_CTL, 0x00}, + {WCD938X_MBHC_NEW_CTL_1, 0x02}, + {WCD938X_MBHC_NEW_CTL_2, 0x05}, + {WCD938X_MBHC_NEW_PLUG_DETECT_CTL, 0xE9}, + {WCD938X_MBHC_NEW_ZDET_ANA_CTL, 0x0F}, + {WCD938X_MBHC_NEW_ZDET_RAMP_CTL, 0x00}, + {WCD938X_MBHC_NEW_FSM_STATUS, 0x00}, + {WCD938X_MBHC_NEW_ADC_RESULT, 0x00}, + {WCD938X_TX_NEW_AMIC_MUX_CFG, 0x00}, + {WCD938X_AUX_AUXPA, 0x00}, + {WCD938X_LDORXTX_MODE, 0x0C}, + {WCD938X_LDORXTX_CONFIG, 0x10}, + {WCD938X_DIE_CRACK_DIE_CRK_DET_EN, 0x00}, + {WCD938X_DIE_CRACK_DIE_CRK_DET_OUT, 0x00}, + {WCD938X_HPH_NEW_INT_RDAC_GAIN_CTL, 0x40}, + {WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_L, 0x81}, + {WCD938X_HPH_NEW_INT_RDAC_VREF_CTL, 0x10}, + 
{WCD938X_HPH_NEW_INT_RDAC_OVERRIDE_CTL, 0x00}, + {WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_R, 0x81}, + {WCD938X_HPH_NEW_INT_PA_MISC1, 0x22}, + {WCD938X_HPH_NEW_INT_PA_MISC2, 0x00}, + {WCD938X_HPH_NEW_INT_PA_RDAC_MISC, 0x00}, + {WCD938X_HPH_NEW_INT_HPH_TIMER1, 0xFE}, + {WCD938X_HPH_NEW_INT_HPH_TIMER2, 0x02}, + {WCD938X_HPH_NEW_INT_HPH_TIMER3, 0x4E}, + {WCD938X_HPH_NEW_INT_HPH_TIMER4, 0x54}, + {WCD938X_HPH_NEW_INT_PA_RDAC_MISC2, 0x00}, + {WCD938X_HPH_NEW_INT_PA_RDAC_MISC3, 0x00}, + {WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_L_NEW, 0x90}, + {WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_R_NEW, 0x90}, + {WCD938X_RX_NEW_INT_HPH_RDAC_BIAS_LOHIFI, 0x62}, + {WCD938X_RX_NEW_INT_HPH_RDAC_BIAS_ULP, 0x01}, + {WCD938X_RX_NEW_INT_HPH_RDAC_LDO_LP, 0x11}, + {WCD938X_MBHC_NEW_INT_MOISTURE_DET_DC_CTRL, 0x57}, + {WCD938X_MBHC_NEW_INT_MOISTURE_DET_POLLING_CTRL, 0x01}, + {WCD938X_MBHC_NEW_INT_MECH_DET_CURRENT, 0x00}, + {WCD938X_MBHC_NEW_INT_SPARE_2, 0x00}, + {WCD938X_EAR_INT_NEW_EAR_CHOPPER_CON, 0xA8}, + {WCD938X_EAR_INT_NEW_CNP_VCM_CON1, 0x42}, + {WCD938X_EAR_INT_NEW_CNP_VCM_CON2, 0x22}, + {WCD938X_EAR_INT_NEW_EAR_DYNAMIC_BIAS, 0x00}, + {WCD938X_AUX_INT_EN_REG, 0x00}, + {WCD938X_AUX_INT_PA_CTRL, 0x06}, + {WCD938X_AUX_INT_SP_CTRL, 0xD2}, + {WCD938X_AUX_INT_DAC_CTRL, 0x80}, + {WCD938X_AUX_INT_CLK_CTRL, 0x50}, + {WCD938X_AUX_INT_TEST_CTRL, 0x00}, + {WCD938X_AUX_INT_STATUS_REG, 0x00}, + {WCD938X_AUX_INT_MISC, 0x00}, + {WCD938X_LDORXTX_INT_BIAS, 0x6E}, + {WCD938X_LDORXTX_INT_STB_LOADS_DTEST, 0x50}, + {WCD938X_LDORXTX_INT_TEST0, 0x1C}, + {WCD938X_LDORXTX_INT_STARTUP_TIMER, 0xFF}, + {WCD938X_LDORXTX_INT_TEST1, 0x1F}, + {WCD938X_LDORXTX_INT_STATUS, 0x00}, + {WCD938X_SLEEP_INT_WATCHDOG_CTL_1, 0x0A}, + {WCD938X_SLEEP_INT_WATCHDOG_CTL_2, 0x0A}, + {WCD938X_DIE_CRACK_INT_DIE_CRK_DET_INT1, 0x02}, + {WCD938X_DIE_CRACK_INT_DIE_CRK_DET_INT2, 0x60}, + {WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_L2, 0xFF}, + {WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_L1, 0x7F}, + {WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_L0, 0x3F}, + {WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_ULP1P2M, 0x1F}, + {WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_ULP0P6M, 0x0F}, + {WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG1_L2L1, 0xD7}, + {WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG1_L0, 0xC8}, + {WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG1_ULP, 0xC6}, + {WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2MAIN_L2L1, 0xD5}, + {WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2MAIN_L0, 0xCA}, + {WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2MAIN_ULP, 0x05}, + {WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2CASC_L2L1L0, 0xA5}, + {WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2CASC_ULP, 0x13}, + {WCD938X_TX_COM_NEW_INT_TXADC_SCBIAS_L2L1, 0x88}, + {WCD938X_TX_COM_NEW_INT_TXADC_SCBIAS_L0ULP, 0x42}, + {WCD938X_TX_COM_NEW_INT_TXADC_INT_L2, 0xFF}, + {WCD938X_TX_COM_NEW_INT_TXADC_INT_L1, 0x64}, + {WCD938X_TX_COM_NEW_INT_TXADC_INT_L0, 0x64}, + {WCD938X_TX_COM_NEW_INT_TXADC_INT_ULP, 0x77}, + {WCD938X_DIGITAL_PAGE_REGISTER, 0x00}, + {WCD938X_DIGITAL_CHIP_ID0, 0x00}, + {WCD938X_DIGITAL_CHIP_ID1, 0x00}, + {WCD938X_DIGITAL_CHIP_ID2, 0x0D}, + {WCD938X_DIGITAL_CHIP_ID3, 0x01}, + {WCD938X_DIGITAL_SWR_TX_CLK_RATE, 0x00}, + {WCD938X_DIGITAL_CDC_RST_CTL, 0x03}, + {WCD938X_DIGITAL_TOP_CLK_CFG, 0x00}, + {WCD938X_DIGITAL_CDC_ANA_CLK_CTL, 0x00}, + {WCD938X_DIGITAL_CDC_DIG_CLK_CTL, 0xF0}, + {WCD938X_DIGITAL_SWR_RST_EN, 0x00}, + {WCD938X_DIGITAL_CDC_PATH_MODE, 0x55}, + {WCD938X_DIGITAL_CDC_RX_RST, 0x00}, + {WCD938X_DIGITAL_CDC_RX0_CTL, 0xFC}, + {WCD938X_DIGITAL_CDC_RX1_CTL, 0xFC}, + {WCD938X_DIGITAL_CDC_RX2_CTL, 0xFC}, + {WCD938X_DIGITAL_CDC_TX_ANA_MODE_0_1, 0x00}, + {WCD938X_DIGITAL_CDC_TX_ANA_MODE_2_3, 0x00}, + 
{WCD938X_DIGITAL_CDC_COMP_CTL_0, 0x00}, + {WCD938X_DIGITAL_CDC_ANA_TX_CLK_CTL, 0x1E}, + {WCD938X_DIGITAL_CDC_HPH_DSM_A1_0, 0x00}, + {WCD938X_DIGITAL_CDC_HPH_DSM_A1_1, 0x01}, + {WCD938X_DIGITAL_CDC_HPH_DSM_A2_0, 0x63}, + {WCD938X_DIGITAL_CDC_HPH_DSM_A2_1, 0x04}, + {WCD938X_DIGITAL_CDC_HPH_DSM_A3_0, 0xAC}, + {WCD938X_DIGITAL_CDC_HPH_DSM_A3_1, 0x04}, + {WCD938X_DIGITAL_CDC_HPH_DSM_A4_0, 0x1A}, + {WCD938X_DIGITAL_CDC_HPH_DSM_A4_1, 0x03}, + {WCD938X_DIGITAL_CDC_HPH_DSM_A5_0, 0xBC}, + {WCD938X_DIGITAL_CDC_HPH_DSM_A5_1, 0x02}, + {WCD938X_DIGITAL_CDC_HPH_DSM_A6_0, 0xC7}, + {WCD938X_DIGITAL_CDC_HPH_DSM_A7_0, 0xF8}, + {WCD938X_DIGITAL_CDC_HPH_DSM_C_0, 0x47}, + {WCD938X_DIGITAL_CDC_HPH_DSM_C_1, 0x43}, + {WCD938X_DIGITAL_CDC_HPH_DSM_C_2, 0xB1}, + {WCD938X_DIGITAL_CDC_HPH_DSM_C_3, 0x17}, + {WCD938X_DIGITAL_CDC_HPH_DSM_R1, 0x4D}, + {WCD938X_DIGITAL_CDC_HPH_DSM_R2, 0x29}, + {WCD938X_DIGITAL_CDC_HPH_DSM_R3, 0x34}, + {WCD938X_DIGITAL_CDC_HPH_DSM_R4, 0x59}, + {WCD938X_DIGITAL_CDC_HPH_DSM_R5, 0x66}, + {WCD938X_DIGITAL_CDC_HPH_DSM_R6, 0x87}, + {WCD938X_DIGITAL_CDC_HPH_DSM_R7, 0x64}, + {WCD938X_DIGITAL_CDC_AUX_DSM_A1_0, 0x00}, + {WCD938X_DIGITAL_CDC_AUX_DSM_A1_1, 0x01}, + {WCD938X_DIGITAL_CDC_AUX_DSM_A2_0, 0x96}, + {WCD938X_DIGITAL_CDC_AUX_DSM_A2_1, 0x09}, + {WCD938X_DIGITAL_CDC_AUX_DSM_A3_0, 0xAB}, + {WCD938X_DIGITAL_CDC_AUX_DSM_A3_1, 0x05}, + {WCD938X_DIGITAL_CDC_AUX_DSM_A4_0, 0x1C}, + {WCD938X_DIGITAL_CDC_AUX_DSM_A4_1, 0x02}, + {WCD938X_DIGITAL_CDC_AUX_DSM_A5_0, 0x17}, + {WCD938X_DIGITAL_CDC_AUX_DSM_A5_1, 0x02}, + {WCD938X_DIGITAL_CDC_AUX_DSM_A6_0, 0xAA}, + {WCD938X_DIGITAL_CDC_AUX_DSM_A7_0, 0xE3}, + {WCD938X_DIGITAL_CDC_AUX_DSM_C_0, 0x69}, + {WCD938X_DIGITAL_CDC_AUX_DSM_C_1, 0x54}, + {WCD938X_DIGITAL_CDC_AUX_DSM_C_2, 0x02}, + {WCD938X_DIGITAL_CDC_AUX_DSM_C_3, 0x15}, + {WCD938X_DIGITAL_CDC_AUX_DSM_R1, 0xA4}, + {WCD938X_DIGITAL_CDC_AUX_DSM_R2, 0xB5}, + {WCD938X_DIGITAL_CDC_AUX_DSM_R3, 0x86}, + {WCD938X_DIGITAL_CDC_AUX_DSM_R4, 0x85}, + {WCD938X_DIGITAL_CDC_AUX_DSM_R5, 0xAA}, + {WCD938X_DIGITAL_CDC_AUX_DSM_R6, 0xE2}, + {WCD938X_DIGITAL_CDC_AUX_DSM_R7, 0x62}, + {WCD938X_DIGITAL_CDC_HPH_GAIN_RX_0, 0x55}, + {WCD938X_DIGITAL_CDC_HPH_GAIN_RX_1, 0xA9}, + {WCD938X_DIGITAL_CDC_HPH_GAIN_DSD_0, 0x3D}, + {WCD938X_DIGITAL_CDC_HPH_GAIN_DSD_1, 0x2E}, + {WCD938X_DIGITAL_CDC_HPH_GAIN_DSD_2, 0x01}, + {WCD938X_DIGITAL_CDC_AUX_GAIN_DSD_0, 0x00}, + {WCD938X_DIGITAL_CDC_AUX_GAIN_DSD_1, 0xFC}, + {WCD938X_DIGITAL_CDC_AUX_GAIN_DSD_2, 0x01}, + {WCD938X_DIGITAL_CDC_HPH_GAIN_CTL, 0x00}, + {WCD938X_DIGITAL_CDC_AUX_GAIN_CTL, 0x00}, + {WCD938X_DIGITAL_CDC_EAR_PATH_CTL, 0x00}, + {WCD938X_DIGITAL_CDC_SWR_CLH, 0x00}, + {WCD938X_DIGITAL_SWR_CLH_BYP, 0x00}, + {WCD938X_DIGITAL_CDC_TX0_CTL, 0x68}, + {WCD938X_DIGITAL_CDC_TX1_CTL, 0x68}, + {WCD938X_DIGITAL_CDC_TX2_CTL, 0x68}, + {WCD938X_DIGITAL_CDC_TX_RST, 0x00}, + {WCD938X_DIGITAL_CDC_REQ_CTL, 0x01}, + {WCD938X_DIGITAL_CDC_RST, 0x00}, + {WCD938X_DIGITAL_CDC_AMIC_CTL, 0x0F}, + {WCD938X_DIGITAL_CDC_DMIC_CTL, 0x04}, + {WCD938X_DIGITAL_CDC_DMIC1_CTL, 0x01}, + {WCD938X_DIGITAL_CDC_DMIC2_CTL, 0x01}, + {WCD938X_DIGITAL_CDC_DMIC3_CTL, 0x01}, + {WCD938X_DIGITAL_CDC_DMIC4_CTL, 0x01}, + {WCD938X_DIGITAL_EFUSE_PRG_CTL, 0x00}, + {WCD938X_DIGITAL_EFUSE_CTL, 0x2B}, + {WCD938X_DIGITAL_CDC_DMIC_RATE_1_2, 0x11}, + {WCD938X_DIGITAL_CDC_DMIC_RATE_3_4, 0x11}, + {WCD938X_DIGITAL_PDM_WD_CTL0, 0x00}, + {WCD938X_DIGITAL_PDM_WD_CTL1, 0x00}, + {WCD938X_DIGITAL_PDM_WD_CTL2, 0x00}, + {WCD938X_DIGITAL_INTR_MODE, 0x00}, + {WCD938X_DIGITAL_INTR_MASK_0, 0xFF}, + {WCD938X_DIGITAL_INTR_MASK_1, 0xFF}, + {WCD938X_DIGITAL_INTR_MASK_2, 
0x3F}, + {WCD938X_DIGITAL_INTR_STATUS_0, 0x00}, + {WCD938X_DIGITAL_INTR_STATUS_1, 0x00}, + {WCD938X_DIGITAL_INTR_STATUS_2, 0x00}, + {WCD938X_DIGITAL_INTR_CLEAR_0, 0x00}, + {WCD938X_DIGITAL_INTR_CLEAR_1, 0x00}, + {WCD938X_DIGITAL_INTR_CLEAR_2, 0x00}, + {WCD938X_DIGITAL_INTR_LEVEL_0, 0x00}, + {WCD938X_DIGITAL_INTR_LEVEL_1, 0x00}, + {WCD938X_DIGITAL_INTR_LEVEL_2, 0x00}, + {WCD938X_DIGITAL_INTR_SET_0, 0x00}, + {WCD938X_DIGITAL_INTR_SET_1, 0x00}, + {WCD938X_DIGITAL_INTR_SET_2, 0x00}, + {WCD938X_DIGITAL_INTR_TEST_0, 0x00}, + {WCD938X_DIGITAL_INTR_TEST_1, 0x00}, + {WCD938X_DIGITAL_INTR_TEST_2, 0x00}, + {WCD938X_DIGITAL_TX_MODE_DBG_EN, 0x00}, + {WCD938X_DIGITAL_TX_MODE_DBG_0_1, 0x00}, + {WCD938X_DIGITAL_TX_MODE_DBG_2_3, 0x00}, + {WCD938X_DIGITAL_LB_IN_SEL_CTL, 0x00}, + {WCD938X_DIGITAL_LOOP_BACK_MODE, 0x00}, + {WCD938X_DIGITAL_SWR_DAC_TEST, 0x00}, + {WCD938X_DIGITAL_SWR_HM_TEST_RX_0, 0x40}, + {WCD938X_DIGITAL_SWR_HM_TEST_TX_0, 0x40}, + {WCD938X_DIGITAL_SWR_HM_TEST_RX_1, 0x00}, + {WCD938X_DIGITAL_SWR_HM_TEST_TX_1, 0x00}, + {WCD938X_DIGITAL_SWR_HM_TEST_TX_2, 0x00}, + {WCD938X_DIGITAL_SWR_HM_TEST_0, 0x00}, + {WCD938X_DIGITAL_SWR_HM_TEST_1, 0x00}, + {WCD938X_DIGITAL_PAD_CTL_SWR_0, 0x8F}, + {WCD938X_DIGITAL_PAD_CTL_SWR_1, 0x06}, + {WCD938X_DIGITAL_I2C_CTL, 0x00}, + {WCD938X_DIGITAL_CDC_TX_TANGGU_SW_MODE, 0x00}, + {WCD938X_DIGITAL_EFUSE_TEST_CTL_0, 0x00}, + {WCD938X_DIGITAL_EFUSE_TEST_CTL_1, 0x00}, + {WCD938X_DIGITAL_EFUSE_T_DATA_0, 0x00}, + {WCD938X_DIGITAL_EFUSE_T_DATA_1, 0x00}, + {WCD938X_DIGITAL_PAD_CTL_PDM_RX0, 0xF1}, + {WCD938X_DIGITAL_PAD_CTL_PDM_RX1, 0xF1}, + {WCD938X_DIGITAL_PAD_CTL_PDM_TX0, 0xF1}, + {WCD938X_DIGITAL_PAD_CTL_PDM_TX1, 0xF1}, + {WCD938X_DIGITAL_PAD_CTL_PDM_TX2, 0xF1}, + {WCD938X_DIGITAL_PAD_INP_DIS_0, 0x00}, + {WCD938X_DIGITAL_PAD_INP_DIS_1, 0x00}, + {WCD938X_DIGITAL_DRIVE_STRENGTH_0, 0x00}, + {WCD938X_DIGITAL_DRIVE_STRENGTH_1, 0x00}, + {WCD938X_DIGITAL_DRIVE_STRENGTH_2, 0x00}, + {WCD938X_DIGITAL_RX_DATA_EDGE_CTL, 0x1F}, + {WCD938X_DIGITAL_TX_DATA_EDGE_CTL, 0x80}, + {WCD938X_DIGITAL_GPIO_MODE, 0x00}, + {WCD938X_DIGITAL_PIN_CTL_OE, 0x00}, + {WCD938X_DIGITAL_PIN_CTL_DATA_0, 0x00}, + {WCD938X_DIGITAL_PIN_CTL_DATA_1, 0x00}, + {WCD938X_DIGITAL_PIN_STATUS_0, 0x00}, + {WCD938X_DIGITAL_PIN_STATUS_1, 0x00}, + {WCD938X_DIGITAL_DIG_DEBUG_CTL, 0x00}, + {WCD938X_DIGITAL_DIG_DEBUG_EN, 0x00}, + {WCD938X_DIGITAL_ANA_CSR_DBG_ADD, 0x00}, + {WCD938X_DIGITAL_ANA_CSR_DBG_CTL, 0x48}, + {WCD938X_DIGITAL_SSP_DBG, 0x00}, + {WCD938X_DIGITAL_MODE_STATUS_0, 0x00}, + {WCD938X_DIGITAL_MODE_STATUS_1, 0x00}, + {WCD938X_DIGITAL_SPARE_0, 0x00}, + {WCD938X_DIGITAL_SPARE_1, 0x00}, + {WCD938X_DIGITAL_SPARE_2, 0x00}, + {WCD938X_DIGITAL_EFUSE_REG_0, 0x00}, + {WCD938X_DIGITAL_EFUSE_REG_1, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_2, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_3, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_4, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_5, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_6, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_7, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_8, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_9, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_10, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_11, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_12, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_13, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_14, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_15, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_16, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_17, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_18, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_19, 0xFF}, + {WCD938X_DIGITAL_EFUSE_REG_20, 0x0E}, + {WCD938X_DIGITAL_EFUSE_REG_21, 0x00}, + {WCD938X_DIGITAL_EFUSE_REG_22, 0x00}, + 
{WCD938X_DIGITAL_EFUSE_REG_23, 0xF8}, + {WCD938X_DIGITAL_EFUSE_REG_24, 0x16}, + {WCD938X_DIGITAL_EFUSE_REG_25, 0x00}, + {WCD938X_DIGITAL_EFUSE_REG_26, 0x00}, + {WCD938X_DIGITAL_EFUSE_REG_27, 0x00}, + {WCD938X_DIGITAL_EFUSE_REG_28, 0x00}, + {WCD938X_DIGITAL_EFUSE_REG_29, 0x00}, + {WCD938X_DIGITAL_EFUSE_REG_30, 0x00}, + {WCD938X_DIGITAL_EFUSE_REG_31, 0x00}, + {WCD938X_DIGITAL_TX_REQ_FB_CTL_0, 0x88}, + {WCD938X_DIGITAL_TX_REQ_FB_CTL_1, 0x88}, + {WCD938X_DIGITAL_TX_REQ_FB_CTL_2, 0x88}, + {WCD938X_DIGITAL_TX_REQ_FB_CTL_3, 0x88}, + {WCD938X_DIGITAL_TX_REQ_FB_CTL_4, 0x88}, + {WCD938X_DIGITAL_DEM_BYPASS_DATA0, 0x55}, + {WCD938X_DIGITAL_DEM_BYPASS_DATA1, 0x55}, + {WCD938X_DIGITAL_DEM_BYPASS_DATA2, 0x55}, + {WCD938X_DIGITAL_DEM_BYPASS_DATA3, 0x01}, +}; + +static bool wcd938x_rdwr_register(struct device *dev, unsigned int reg) +{ + switch (reg) { + case WCD938X_ANA_PAGE_REGISTER: + case WCD938X_ANA_BIAS: + case WCD938X_ANA_RX_SUPPLIES: + case WCD938X_ANA_HPH: + case WCD938X_ANA_EAR: + case WCD938X_ANA_EAR_COMPANDER_CTL: + case WCD938X_ANA_TX_CH1: + case WCD938X_ANA_TX_CH2: + case WCD938X_ANA_TX_CH3: + case WCD938X_ANA_TX_CH4: + case WCD938X_ANA_MICB1_MICB2_DSP_EN_LOGIC: + case WCD938X_ANA_MICB3_DSP_EN_LOGIC: + case WCD938X_ANA_MBHC_MECH: + case WCD938X_ANA_MBHC_ELECT: + case WCD938X_ANA_MBHC_ZDET: + case WCD938X_ANA_MBHC_BTN0: + case WCD938X_ANA_MBHC_BTN1: + case WCD938X_ANA_MBHC_BTN2: + case WCD938X_ANA_MBHC_BTN3: + case WCD938X_ANA_MBHC_BTN4: + case WCD938X_ANA_MBHC_BTN5: + case WCD938X_ANA_MBHC_BTN6: + case WCD938X_ANA_MBHC_BTN7: + case WCD938X_ANA_MICB1: + case WCD938X_ANA_MICB2: + case WCD938X_ANA_MICB2_RAMP: + case WCD938X_ANA_MICB3: + case WCD938X_ANA_MICB4: + case WCD938X_BIAS_CTL: + case WCD938X_BIAS_VBG_FINE_ADJ: + case WCD938X_LDOL_VDDCX_ADJUST: + case WCD938X_LDOL_DISABLE_LDOL: + case WCD938X_MBHC_CTL_CLK: + case WCD938X_MBHC_CTL_ANA: + case WCD938X_MBHC_CTL_SPARE_1: + case WCD938X_MBHC_CTL_SPARE_2: + case WCD938X_MBHC_CTL_BCS: + case WCD938X_MBHC_TEST_CTL: + case WCD938X_LDOH_MODE: + case WCD938X_LDOH_BIAS: + case WCD938X_LDOH_STB_LOADS: + case WCD938X_LDOH_SLOWRAMP: + case WCD938X_MICB1_TEST_CTL_1: + case WCD938X_MICB1_TEST_CTL_2: + case WCD938X_MICB1_TEST_CTL_3: + case WCD938X_MICB2_TEST_CTL_1: + case WCD938X_MICB2_TEST_CTL_2: + case WCD938X_MICB2_TEST_CTL_3: + case WCD938X_MICB3_TEST_CTL_1: + case WCD938X_MICB3_TEST_CTL_2: + case WCD938X_MICB3_TEST_CTL_3: + case WCD938X_MICB4_TEST_CTL_1: + case WCD938X_MICB4_TEST_CTL_2: + case WCD938X_MICB4_TEST_CTL_3: + case WCD938X_TX_COM_ADC_VCM: + case WCD938X_TX_COM_BIAS_ATEST: + case WCD938X_TX_COM_SPARE1: + case WCD938X_TX_COM_SPARE2: + case WCD938X_TX_COM_TXFE_DIV_CTL: + case WCD938X_TX_COM_TXFE_DIV_START: + case WCD938X_TX_COM_SPARE3: + case WCD938X_TX_COM_SPARE4: + case WCD938X_TX_1_2_TEST_EN: + case WCD938X_TX_1_2_ADC_IB: + case WCD938X_TX_1_2_ATEST_REFCTL: + case WCD938X_TX_1_2_TEST_CTL: + case WCD938X_TX_1_2_TEST_BLK_EN1: + case WCD938X_TX_1_2_TXFE1_CLKDIV: + case WCD938X_TX_3_4_TEST_EN: + case WCD938X_TX_3_4_ADC_IB: + case WCD938X_TX_3_4_ATEST_REFCTL: + case WCD938X_TX_3_4_TEST_CTL: + case WCD938X_TX_3_4_TEST_BLK_EN3: + case WCD938X_TX_3_4_TXFE3_CLKDIV: + case WCD938X_TX_3_4_TEST_BLK_EN2: + case WCD938X_TX_3_4_TXFE2_CLKDIV: + case WCD938X_TX_3_4_SPARE1: + case WCD938X_TX_3_4_TEST_BLK_EN4: + case WCD938X_TX_3_4_TXFE4_CLKDIV: + case WCD938X_TX_3_4_SPARE2: + case WCD938X_CLASSH_MODE_1: + case WCD938X_CLASSH_MODE_2: + case WCD938X_CLASSH_MODE_3: + case WCD938X_CLASSH_CTRL_VCL_1: + case WCD938X_CLASSH_CTRL_VCL_2: + case 
WCD938X_CLASSH_CTRL_CCL_1: + case WCD938X_CLASSH_CTRL_CCL_2: + case WCD938X_CLASSH_CTRL_CCL_3: + case WCD938X_CLASSH_CTRL_CCL_4: + case WCD938X_CLASSH_CTRL_CCL_5: + case WCD938X_CLASSH_BUCK_TMUX_A_D: + case WCD938X_CLASSH_BUCK_SW_DRV_CNTL: + case WCD938X_CLASSH_SPARE: + case WCD938X_FLYBACK_EN: + case WCD938X_FLYBACK_VNEG_CTRL_1: + case WCD938X_FLYBACK_VNEG_CTRL_2: + case WCD938X_FLYBACK_VNEG_CTRL_3: + case WCD938X_FLYBACK_VNEG_CTRL_4: + case WCD938X_FLYBACK_VNEG_CTRL_5: + case WCD938X_FLYBACK_VNEG_CTRL_6: + case WCD938X_FLYBACK_VNEG_CTRL_7: + case WCD938X_FLYBACK_VNEG_CTRL_8: + case WCD938X_FLYBACK_VNEG_CTRL_9: + case WCD938X_FLYBACK_VNEGDAC_CTRL_1: + case WCD938X_FLYBACK_VNEGDAC_CTRL_2: + case WCD938X_FLYBACK_VNEGDAC_CTRL_3: + case WCD938X_FLYBACK_CTRL_1: + case WCD938X_FLYBACK_TEST_CTL: + case WCD938X_RX_AUX_SW_CTL: + case WCD938X_RX_PA_AUX_IN_CONN: + case WCD938X_RX_TIMER_DIV: + case WCD938X_RX_OCP_CTL: + case WCD938X_RX_OCP_COUNT: + case WCD938X_RX_BIAS_EAR_DAC: + case WCD938X_RX_BIAS_EAR_AMP: + case WCD938X_RX_BIAS_HPH_LDO: + case WCD938X_RX_BIAS_HPH_PA: + case WCD938X_RX_BIAS_HPH_RDACBUFF_CNP2: + case WCD938X_RX_BIAS_HPH_RDAC_LDO: + case WCD938X_RX_BIAS_HPH_CNP1: + case WCD938X_RX_BIAS_HPH_LOWPOWER: + case WCD938X_RX_BIAS_AUX_DAC: + case WCD938X_RX_BIAS_AUX_AMP: + case WCD938X_RX_BIAS_VNEGDAC_BLEEDER: + case WCD938X_RX_BIAS_MISC: + case WCD938X_RX_BIAS_BUCK_RST: + case WCD938X_RX_BIAS_BUCK_VREF_ERRAMP: + case WCD938X_RX_BIAS_FLYB_ERRAMP: + case WCD938X_RX_BIAS_FLYB_BUFF: + case WCD938X_RX_BIAS_FLYB_MID_RST: + case WCD938X_HPH_CNP_EN: + case WCD938X_HPH_CNP_WG_CTL: + case WCD938X_HPH_CNP_WG_TIME: + case WCD938X_HPH_OCP_CTL: + case WCD938X_HPH_AUTO_CHOP: + case WCD938X_HPH_CHOP_CTL: + case WCD938X_HPH_PA_CTL1: + case WCD938X_HPH_PA_CTL2: + case WCD938X_HPH_L_EN: + case WCD938X_HPH_L_TEST: + case WCD938X_HPH_L_ATEST: + case WCD938X_HPH_R_EN: + case WCD938X_HPH_R_TEST: + case WCD938X_HPH_R_ATEST: + case WCD938X_HPH_RDAC_CLK_CTL1: + case WCD938X_HPH_RDAC_CLK_CTL2: + case WCD938X_HPH_RDAC_LDO_CTL: + case WCD938X_HPH_RDAC_CHOP_CLK_LP_CTL: + case WCD938X_HPH_REFBUFF_UHQA_CTL: + case WCD938X_HPH_REFBUFF_LP_CTL: + case WCD938X_HPH_L_DAC_CTL: + case WCD938X_HPH_R_DAC_CTL: + case WCD938X_HPH_SURGE_HPHLR_SURGE_COMP_SEL: + case WCD938X_HPH_SURGE_HPHLR_SURGE_EN: + case WCD938X_HPH_SURGE_HPHLR_SURGE_MISC1: + case WCD938X_EAR_EAR_EN_REG: + case WCD938X_EAR_EAR_PA_CON: + case WCD938X_EAR_EAR_SP_CON: + case WCD938X_EAR_EAR_DAC_CON: + case WCD938X_EAR_EAR_CNP_FSM_CON: + case WCD938X_EAR_TEST_CTL: + case WCD938X_ANA_NEW_PAGE_REGISTER: + case WCD938X_HPH_NEW_ANA_HPH2: + case WCD938X_HPH_NEW_ANA_HPH3: + case WCD938X_SLEEP_CTL: + case WCD938X_SLEEP_WATCHDOG_CTL: + case WCD938X_MBHC_NEW_ELECT_REM_CLAMP_CTL: + case WCD938X_MBHC_NEW_CTL_1: + case WCD938X_MBHC_NEW_CTL_2: + case WCD938X_MBHC_NEW_PLUG_DETECT_CTL: + case WCD938X_MBHC_NEW_ZDET_ANA_CTL: + case WCD938X_MBHC_NEW_ZDET_RAMP_CTL: + case WCD938X_TX_NEW_AMIC_MUX_CFG: + case WCD938X_AUX_AUXPA: + case WCD938X_LDORXTX_MODE: + case WCD938X_LDORXTX_CONFIG: + case WCD938X_DIE_CRACK_DIE_CRK_DET_EN: + case WCD938X_HPH_NEW_INT_RDAC_GAIN_CTL: + case WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_L: + case WCD938X_HPH_NEW_INT_RDAC_VREF_CTL: + case WCD938X_HPH_NEW_INT_RDAC_OVERRIDE_CTL: + case WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_R: + case WCD938X_HPH_NEW_INT_PA_MISC1: + case WCD938X_HPH_NEW_INT_PA_MISC2: + case WCD938X_HPH_NEW_INT_PA_RDAC_MISC: + case WCD938X_HPH_NEW_INT_HPH_TIMER1: + case WCD938X_HPH_NEW_INT_HPH_TIMER2: + case WCD938X_HPH_NEW_INT_HPH_TIMER3: + case 
WCD938X_HPH_NEW_INT_HPH_TIMER4: + case WCD938X_HPH_NEW_INT_PA_RDAC_MISC2: + case WCD938X_HPH_NEW_INT_PA_RDAC_MISC3: + case WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_L_NEW: + case WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_R_NEW: + case WCD938X_RX_NEW_INT_HPH_RDAC_BIAS_LOHIFI: + case WCD938X_RX_NEW_INT_HPH_RDAC_BIAS_ULP: + case WCD938X_RX_NEW_INT_HPH_RDAC_LDO_LP: + case WCD938X_MBHC_NEW_INT_MOISTURE_DET_DC_CTRL: + case WCD938X_MBHC_NEW_INT_MOISTURE_DET_POLLING_CTRL: + case WCD938X_MBHC_NEW_INT_MECH_DET_CURRENT: + case WCD938X_MBHC_NEW_INT_SPARE_2: + case WCD938X_EAR_INT_NEW_EAR_CHOPPER_CON: + case WCD938X_EAR_INT_NEW_CNP_VCM_CON1: + case WCD938X_EAR_INT_NEW_CNP_VCM_CON2: + case WCD938X_EAR_INT_NEW_EAR_DYNAMIC_BIAS: + case WCD938X_AUX_INT_EN_REG: + case WCD938X_AUX_INT_PA_CTRL: + case WCD938X_AUX_INT_SP_CTRL: + case WCD938X_AUX_INT_DAC_CTRL: + case WCD938X_AUX_INT_CLK_CTRL: + case WCD938X_AUX_INT_TEST_CTRL: + case WCD938X_AUX_INT_MISC: + case WCD938X_LDORXTX_INT_BIAS: + case WCD938X_LDORXTX_INT_STB_LOADS_DTEST: + case WCD938X_LDORXTX_INT_TEST0: + case WCD938X_LDORXTX_INT_STARTUP_TIMER: + case WCD938X_LDORXTX_INT_TEST1: + case WCD938X_SLEEP_INT_WATCHDOG_CTL_1: + case WCD938X_SLEEP_INT_WATCHDOG_CTL_2: + case WCD938X_DIE_CRACK_INT_DIE_CRK_DET_INT1: + case WCD938X_DIE_CRACK_INT_DIE_CRK_DET_INT2: + case WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_L2: + case WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_L1: + case WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_L0: + case WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_ULP1P2M: + case WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_ULP0P6M: + case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG1_L2L1: + case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG1_L0: + case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG1_ULP: + case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2MAIN_L2L1: + case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2MAIN_L0: + case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2MAIN_ULP: + case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2CASC_L2L1L0: + case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2CASC_ULP: + case WCD938X_TX_COM_NEW_INT_TXADC_SCBIAS_L2L1: + case WCD938X_TX_COM_NEW_INT_TXADC_SCBIAS_L0ULP: + case WCD938X_TX_COM_NEW_INT_TXADC_INT_L2: + case WCD938X_TX_COM_NEW_INT_TXADC_INT_L1: + case WCD938X_TX_COM_NEW_INT_TXADC_INT_L0: + case WCD938X_TX_COM_NEW_INT_TXADC_INT_ULP: + case WCD938X_DIGITAL_PAGE_REGISTER: + case WCD938X_DIGITAL_SWR_TX_CLK_RATE: + case WCD938X_DIGITAL_CDC_RST_CTL: + case WCD938X_DIGITAL_TOP_CLK_CFG: + case WCD938X_DIGITAL_CDC_ANA_CLK_CTL: + case WCD938X_DIGITAL_CDC_DIG_CLK_CTL: + case WCD938X_DIGITAL_SWR_RST_EN: + case WCD938X_DIGITAL_CDC_PATH_MODE: + case WCD938X_DIGITAL_CDC_RX_RST: + case WCD938X_DIGITAL_CDC_RX0_CTL: + case WCD938X_DIGITAL_CDC_RX1_CTL: + case WCD938X_DIGITAL_CDC_RX2_CTL: + case WCD938X_DIGITAL_CDC_TX_ANA_MODE_0_1: + case WCD938X_DIGITAL_CDC_TX_ANA_MODE_2_3: + case WCD938X_DIGITAL_CDC_COMP_CTL_0: + case WCD938X_DIGITAL_CDC_ANA_TX_CLK_CTL: + case WCD938X_DIGITAL_CDC_HPH_DSM_A1_0: + case WCD938X_DIGITAL_CDC_HPH_DSM_A1_1: + case WCD938X_DIGITAL_CDC_HPH_DSM_A2_0: + case WCD938X_DIGITAL_CDC_HPH_DSM_A2_1: + case WCD938X_DIGITAL_CDC_HPH_DSM_A3_0: + case WCD938X_DIGITAL_CDC_HPH_DSM_A3_1: + case WCD938X_DIGITAL_CDC_HPH_DSM_A4_0: + case WCD938X_DIGITAL_CDC_HPH_DSM_A4_1: + case WCD938X_DIGITAL_CDC_HPH_DSM_A5_0: + case WCD938X_DIGITAL_CDC_HPH_DSM_A5_1: + case WCD938X_DIGITAL_CDC_HPH_DSM_A6_0: + case WCD938X_DIGITAL_CDC_HPH_DSM_A7_0: + case WCD938X_DIGITAL_CDC_HPH_DSM_C_0: + case WCD938X_DIGITAL_CDC_HPH_DSM_C_1: + case WCD938X_DIGITAL_CDC_HPH_DSM_C_2: + case WCD938X_DIGITAL_CDC_HPH_DSM_C_3: + case WCD938X_DIGITAL_CDC_HPH_DSM_R1: + 
case WCD938X_DIGITAL_CDC_HPH_DSM_R2: + case WCD938X_DIGITAL_CDC_HPH_DSM_R3: + case WCD938X_DIGITAL_CDC_HPH_DSM_R4: + case WCD938X_DIGITAL_CDC_HPH_DSM_R5: + case WCD938X_DIGITAL_CDC_HPH_DSM_R6: + case WCD938X_DIGITAL_CDC_HPH_DSM_R7: + case WCD938X_DIGITAL_CDC_AUX_DSM_A1_0: + case WCD938X_DIGITAL_CDC_AUX_DSM_A1_1: + case WCD938X_DIGITAL_CDC_AUX_DSM_A2_0: + case WCD938X_DIGITAL_CDC_AUX_DSM_A2_1: + case WCD938X_DIGITAL_CDC_AUX_DSM_A3_0: + case WCD938X_DIGITAL_CDC_AUX_DSM_A3_1: + case WCD938X_DIGITAL_CDC_AUX_DSM_A4_0: + case WCD938X_DIGITAL_CDC_AUX_DSM_A4_1: + case WCD938X_DIGITAL_CDC_AUX_DSM_A5_0: + case WCD938X_DIGITAL_CDC_AUX_DSM_A5_1: + case WCD938X_DIGITAL_CDC_AUX_DSM_A6_0: + case WCD938X_DIGITAL_CDC_AUX_DSM_A7_0: + case WCD938X_DIGITAL_CDC_AUX_DSM_C_0: + case WCD938X_DIGITAL_CDC_AUX_DSM_C_1: + case WCD938X_DIGITAL_CDC_AUX_DSM_C_2: + case WCD938X_DIGITAL_CDC_AUX_DSM_C_3: + case WCD938X_DIGITAL_CDC_AUX_DSM_R1: + case WCD938X_DIGITAL_CDC_AUX_DSM_R2: + case WCD938X_DIGITAL_CDC_AUX_DSM_R3: + case WCD938X_DIGITAL_CDC_AUX_DSM_R4: + case WCD938X_DIGITAL_CDC_AUX_DSM_R5: + case WCD938X_DIGITAL_CDC_AUX_DSM_R6: + case WCD938X_DIGITAL_CDC_AUX_DSM_R7: + case WCD938X_DIGITAL_CDC_HPH_GAIN_RX_0: + case WCD938X_DIGITAL_CDC_HPH_GAIN_RX_1: + case WCD938X_DIGITAL_CDC_HPH_GAIN_DSD_0: + case WCD938X_DIGITAL_CDC_HPH_GAIN_DSD_1: + case WCD938X_DIGITAL_CDC_HPH_GAIN_DSD_2: + case WCD938X_DIGITAL_CDC_AUX_GAIN_DSD_0: + case WCD938X_DIGITAL_CDC_AUX_GAIN_DSD_1: + case WCD938X_DIGITAL_CDC_AUX_GAIN_DSD_2: + case WCD938X_DIGITAL_CDC_HPH_GAIN_CTL: + case WCD938X_DIGITAL_CDC_AUX_GAIN_CTL: + case WCD938X_DIGITAL_CDC_EAR_PATH_CTL: + case WCD938X_DIGITAL_CDC_SWR_CLH: + case WCD938X_DIGITAL_SWR_CLH_BYP: + case WCD938X_DIGITAL_CDC_TX0_CTL: + case WCD938X_DIGITAL_CDC_TX1_CTL: + case WCD938X_DIGITAL_CDC_TX2_CTL: + case WCD938X_DIGITAL_CDC_TX_RST: + case WCD938X_DIGITAL_CDC_REQ_CTL: + case WCD938X_DIGITAL_CDC_RST: + case WCD938X_DIGITAL_CDC_AMIC_CTL: + case WCD938X_DIGITAL_CDC_DMIC_CTL: + case WCD938X_DIGITAL_CDC_DMIC1_CTL: + case WCD938X_DIGITAL_CDC_DMIC2_CTL: + case WCD938X_DIGITAL_CDC_DMIC3_CTL: + case WCD938X_DIGITAL_CDC_DMIC4_CTL: + case WCD938X_DIGITAL_EFUSE_PRG_CTL: + case WCD938X_DIGITAL_EFUSE_CTL: + case WCD938X_DIGITAL_CDC_DMIC_RATE_1_2: + case WCD938X_DIGITAL_CDC_DMIC_RATE_3_4: + case WCD938X_DIGITAL_PDM_WD_CTL0: + case WCD938X_DIGITAL_PDM_WD_CTL1: + case WCD938X_DIGITAL_PDM_WD_CTL2: + case WCD938X_DIGITAL_INTR_MODE: + case WCD938X_DIGITAL_INTR_MASK_0: + case WCD938X_DIGITAL_INTR_MASK_1: + case WCD938X_DIGITAL_INTR_MASK_2: + case WCD938X_DIGITAL_INTR_CLEAR_0: + case WCD938X_DIGITAL_INTR_CLEAR_1: + case WCD938X_DIGITAL_INTR_CLEAR_2: + case WCD938X_DIGITAL_INTR_LEVEL_0: + case WCD938X_DIGITAL_INTR_LEVEL_1: + case WCD938X_DIGITAL_INTR_LEVEL_2: + case WCD938X_DIGITAL_INTR_SET_0: + case WCD938X_DIGITAL_INTR_SET_1: + case WCD938X_DIGITAL_INTR_SET_2: + case WCD938X_DIGITAL_INTR_TEST_0: + case WCD938X_DIGITAL_INTR_TEST_1: + case WCD938X_DIGITAL_INTR_TEST_2: + case WCD938X_DIGITAL_TX_MODE_DBG_EN: + case WCD938X_DIGITAL_TX_MODE_DBG_0_1: + case WCD938X_DIGITAL_TX_MODE_DBG_2_3: + case WCD938X_DIGITAL_LB_IN_SEL_CTL: + case WCD938X_DIGITAL_LOOP_BACK_MODE: + case WCD938X_DIGITAL_SWR_DAC_TEST: + case WCD938X_DIGITAL_SWR_HM_TEST_RX_0: + case WCD938X_DIGITAL_SWR_HM_TEST_TX_0: + case WCD938X_DIGITAL_SWR_HM_TEST_RX_1: + case WCD938X_DIGITAL_SWR_HM_TEST_TX_1: + case WCD938X_DIGITAL_SWR_HM_TEST_TX_2: + case WCD938X_DIGITAL_PAD_CTL_SWR_0: + case WCD938X_DIGITAL_PAD_CTL_SWR_1: + case WCD938X_DIGITAL_I2C_CTL: + case 
WCD938X_DIGITAL_CDC_TX_TANGGU_SW_MODE: + case WCD938X_DIGITAL_EFUSE_TEST_CTL_0: + case WCD938X_DIGITAL_EFUSE_TEST_CTL_1: + case WCD938X_DIGITAL_PAD_CTL_PDM_RX0: + case WCD938X_DIGITAL_PAD_CTL_PDM_RX1: + case WCD938X_DIGITAL_PAD_CTL_PDM_TX0: + case WCD938X_DIGITAL_PAD_CTL_PDM_TX1: + case WCD938X_DIGITAL_PAD_CTL_PDM_TX2: + case WCD938X_DIGITAL_PAD_INP_DIS_0: + case WCD938X_DIGITAL_PAD_INP_DIS_1: + case WCD938X_DIGITAL_DRIVE_STRENGTH_0: + case WCD938X_DIGITAL_DRIVE_STRENGTH_1: + case WCD938X_DIGITAL_DRIVE_STRENGTH_2: + case WCD938X_DIGITAL_RX_DATA_EDGE_CTL: + case WCD938X_DIGITAL_TX_DATA_EDGE_CTL: + case WCD938X_DIGITAL_GPIO_MODE: + case WCD938X_DIGITAL_PIN_CTL_OE: + case WCD938X_DIGITAL_PIN_CTL_DATA_0: + case WCD938X_DIGITAL_PIN_CTL_DATA_1: + case WCD938X_DIGITAL_DIG_DEBUG_CTL: + case WCD938X_DIGITAL_DIG_DEBUG_EN: + case WCD938X_DIGITAL_ANA_CSR_DBG_ADD: + case WCD938X_DIGITAL_ANA_CSR_DBG_CTL: + case WCD938X_DIGITAL_SSP_DBG: + case WCD938X_DIGITAL_SPARE_0: + case WCD938X_DIGITAL_SPARE_1: + case WCD938X_DIGITAL_SPARE_2: + case WCD938X_DIGITAL_TX_REQ_FB_CTL_0: + case WCD938X_DIGITAL_TX_REQ_FB_CTL_1: + case WCD938X_DIGITAL_TX_REQ_FB_CTL_2: + case WCD938X_DIGITAL_TX_REQ_FB_CTL_3: + case WCD938X_DIGITAL_TX_REQ_FB_CTL_4: + case WCD938X_DIGITAL_DEM_BYPASS_DATA0: + case WCD938X_DIGITAL_DEM_BYPASS_DATA1: + case WCD938X_DIGITAL_DEM_BYPASS_DATA2: + case WCD938X_DIGITAL_DEM_BYPASS_DATA3: + return true; + } + + return false; +} + +static bool wcd938x_readonly_register(struct device *dev, unsigned int reg) +{ + switch (reg) { + case WCD938X_ANA_MBHC_RESULT_1: + case WCD938X_ANA_MBHC_RESULT_2: + case WCD938X_ANA_MBHC_RESULT_3: + case WCD938X_MBHC_MOISTURE_DET_FSM_STATUS: + case WCD938X_TX_1_2_SAR2_ERR: + case WCD938X_TX_1_2_SAR1_ERR: + case WCD938X_TX_3_4_SAR4_ERR: + case WCD938X_TX_3_4_SAR3_ERR: + case WCD938X_HPH_L_STATUS: + case WCD938X_HPH_R_STATUS: + case WCD938X_HPH_SURGE_HPHLR_SURGE_STATUS: + case WCD938X_EAR_STATUS_REG_1: + case WCD938X_EAR_STATUS_REG_2: + case WCD938X_MBHC_NEW_FSM_STATUS: + case WCD938X_MBHC_NEW_ADC_RESULT: + case WCD938X_DIE_CRACK_DIE_CRK_DET_OUT: + case WCD938X_AUX_INT_STATUS_REG: + case WCD938X_LDORXTX_INT_STATUS: + case WCD938X_DIGITAL_CHIP_ID0: + case WCD938X_DIGITAL_CHIP_ID1: + case WCD938X_DIGITAL_CHIP_ID2: + case WCD938X_DIGITAL_CHIP_ID3: + case WCD938X_DIGITAL_INTR_STATUS_0: + case WCD938X_DIGITAL_INTR_STATUS_1: + case WCD938X_DIGITAL_INTR_STATUS_2: + case WCD938X_DIGITAL_INTR_CLEAR_0: + case WCD938X_DIGITAL_INTR_CLEAR_1: + case WCD938X_DIGITAL_INTR_CLEAR_2: + case WCD938X_DIGITAL_SWR_HM_TEST_0: + case WCD938X_DIGITAL_SWR_HM_TEST_1: + case WCD938X_DIGITAL_EFUSE_T_DATA_0: + case WCD938X_DIGITAL_EFUSE_T_DATA_1: + case WCD938X_DIGITAL_PIN_STATUS_0: + case WCD938X_DIGITAL_PIN_STATUS_1: + case WCD938X_DIGITAL_MODE_STATUS_0: + case WCD938X_DIGITAL_MODE_STATUS_1: + case WCD938X_DIGITAL_EFUSE_REG_0: + case WCD938X_DIGITAL_EFUSE_REG_1: + case WCD938X_DIGITAL_EFUSE_REG_2: + case WCD938X_DIGITAL_EFUSE_REG_3: + case WCD938X_DIGITAL_EFUSE_REG_4: + case WCD938X_DIGITAL_EFUSE_REG_5: + case WCD938X_DIGITAL_EFUSE_REG_6: + case WCD938X_DIGITAL_EFUSE_REG_7: + case WCD938X_DIGITAL_EFUSE_REG_8: + case WCD938X_DIGITAL_EFUSE_REG_9: + case WCD938X_DIGITAL_EFUSE_REG_10: + case WCD938X_DIGITAL_EFUSE_REG_11: + case WCD938X_DIGITAL_EFUSE_REG_12: + case WCD938X_DIGITAL_EFUSE_REG_13: + case WCD938X_DIGITAL_EFUSE_REG_14: + case WCD938X_DIGITAL_EFUSE_REG_15: + case WCD938X_DIGITAL_EFUSE_REG_16: + case WCD938X_DIGITAL_EFUSE_REG_17: + case WCD938X_DIGITAL_EFUSE_REG_18: + case WCD938X_DIGITAL_EFUSE_REG_19: 
+	case WCD938X_DIGITAL_EFUSE_REG_20:
+	case WCD938X_DIGITAL_EFUSE_REG_21:
+	case WCD938X_DIGITAL_EFUSE_REG_22:
+	case WCD938X_DIGITAL_EFUSE_REG_23:
+	case WCD938X_DIGITAL_EFUSE_REG_24:
+	case WCD938X_DIGITAL_EFUSE_REG_25:
+	case WCD938X_DIGITAL_EFUSE_REG_26:
+	case WCD938X_DIGITAL_EFUSE_REG_27:
+	case WCD938X_DIGITAL_EFUSE_REG_28:
+	case WCD938X_DIGITAL_EFUSE_REG_29:
+	case WCD938X_DIGITAL_EFUSE_REG_30:
+	case WCD938X_DIGITAL_EFUSE_REG_31:
+		return true;
+	}
+	return false;
+}
+
+static bool wcd938x_readable_register(struct device *dev, unsigned int reg)
+{
+	bool ret;
+
+	ret = wcd938x_readonly_register(dev, reg);
+	if (!ret)
+		return wcd938x_rdwr_register(dev, reg);
+
+	return ret;
+}
+
+static bool wcd938x_writeable_register(struct device *dev, unsigned int reg)
+{
+	return wcd938x_rdwr_register(dev, reg);
+}
+
+static bool wcd938x_volatile_register(struct device *dev, unsigned int reg)
+{
+	if (reg <= WCD938X_BASE_ADDRESS)
+		return false;
+
+	if (reg == WCD938X_DIGITAL_SWR_TX_CLK_RATE)
+		return true;
+
+	if (wcd938x_readonly_register(dev, reg))
+		return true;
+
+	return false;
+}
+
+static const struct regmap_config wcd938x_regmap_config = {
+	.name = "wcd938x_csr",
+	.reg_bits = 32,
+	.val_bits = 8,
+	.cache_type = REGCACHE_RBTREE,
+	.reg_defaults = wcd938x_defaults,
+	.num_reg_defaults = ARRAY_SIZE(wcd938x_defaults),
+	.max_register = WCD938X_MAX_REGISTER,
+	.readable_reg = wcd938x_readable_register,
+	.writeable_reg = wcd938x_writeable_register,
+	.volatile_reg = wcd938x_volatile_register,
+	.can_multi_write = true,
+};
+
 static const struct sdw_slave_ops wcd9380_slave_ops = {
 	.update_status = wcd9380_update_status,
 	.interrupt_callback = wcd9380_interrupt_callback,
@@ -261,6 +1263,16 @@ static int wcd9380_probe(struct sdw_slave *pdev,
 		wcd->ch_info = &wcd938x_sdw_rx_ch_info[0];
 	}
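Note (not part of the patch): the helpers above follow regmap's usual access-classification pattern. wcd938x_rdwr_register() lists the read/write registers, wcd938x_readonly_register() lists the status and efuse registers, and the readable/writeable/volatile callbacks in wcd938x_regmap_config are built from those two, so read-only registers stay readable but are treated as volatile and are never served from, or written back by, the REGCACHE_RBTREE cache. A minimal sketch of the same pattern for a hypothetical two-register device is shown here; FOO_CTL, FOO_STATUS and every value in it are made up for illustration and are not part of this driver. The probe hunk of the patch resumes right after the sketch.

#include <linux/device.h>
#include <linux/regmap.h>

#define FOO_CTL		0x01	/* read/write control register, cached */
#define FOO_STATUS	0x02	/* read-only status register, always read from hardware */

static bool foo_readable(struct device *dev, unsigned int reg)
{
	return reg == FOO_CTL || reg == FOO_STATUS;
}

static bool foo_writeable(struct device *dev, unsigned int reg)
{
	return reg == FOO_CTL;
}

static bool foo_volatile(struct device *dev, unsigned int reg)
{
	/* volatile registers bypass the cache on reads and are not synced */
	return reg == FOO_STATUS;
}

static const struct regmap_config foo_regmap_config = {
	.reg_bits = 8,
	.val_bits = 8,
	.cache_type = REGCACHE_RBTREE,
	.max_register = FOO_STATUS,
	.readable_reg = foo_readable,
	.writeable_reg = foo_writeable,
	.volatile_reg = foo_volatile,
};

Keeping the read-only registers volatile matters because regcache_sync() would otherwise try to write their cached defaults back to the hardware after resume.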
+	if (wcd->is_tx) {
+		wcd->regmap = devm_regmap_init_sdw(pdev, &wcd938x_regmap_config);
+		if (IS_ERR(wcd->regmap))
+			return dev_err_probe(dev, PTR_ERR(wcd->regmap),
+					     "Regmap init failed\n");
+
+		/* Start in cache-only until device is enumerated */
+		regcache_cache_only(wcd->regmap, true);
+	};
+
 	pm_runtime_set_autosuspend_delay(dev, 3000);
 	pm_runtime_use_autosuspend(dev);
 	pm_runtime_mark_last_busy(dev);
@@ -278,22 +1290,23 @@ MODULE_DEVICE_TABLE(sdw, wcd9380_slave_id);
 static int __maybe_unused wcd938x_sdw_runtime_suspend(struct device *dev)
 {
-	struct regmap *regmap = dev_get_regmap(dev, NULL);
+	struct wcd938x_sdw_priv *wcd = dev_get_drvdata(dev);
-	if (regmap) {
-		regcache_cache_only(regmap, true);
-		regcache_mark_dirty(regmap);
+	if (wcd->regmap) {
+		regcache_cache_only(wcd->regmap, true);
+		regcache_mark_dirty(wcd->regmap);
 	}
+
 	return 0;
 }
 static int __maybe_unused wcd938x_sdw_runtime_resume(struct device *dev)
 {
-	struct regmap *regmap = dev_get_regmap(dev, NULL);
+	struct wcd938x_sdw_priv *wcd = dev_get_drvdata(dev);
-	if (regmap) {
-		regcache_cache_only(regmap, false);
-		regcache_sync(regmap);
+	if (wcd->regmap) {
+		regcache_cache_only(wcd->regmap, false);
+		regcache_sync(wcd->regmap);
 	}
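Note (not part of the patch): the two callbacks above keep the standard regcache runtime-PM idiom and only change where the regmap comes from, using the driver's private data instead of dev_get_regmap(). On suspend the regmap goes cache-only and the cache is marked dirty; on resume cache-only is lifted and regcache_sync() writes back every register whose cached value differs from its default. A minimal sketch of that idiom under the same assumptions, with a hypothetical foo_priv structure, follows; the patch continues right after it.

#include <linux/device.h>
#include <linux/regmap.h>

struct foo_priv {
	struct regmap *regmap;
};

static int __maybe_unused foo_runtime_suspend(struct device *dev)
{
	struct foo_priv *foo = dev_get_drvdata(dev);

	/* divert further register writes into the cache while the device is off */
	regcache_cache_only(foo->regmap, true);
	/* remember that the hardware will lose its state */
	regcache_mark_dirty(foo->regmap);

	return 0;
}

static int __maybe_unused foo_runtime_resume(struct device *dev)
{
	struct foo_priv *foo = dev_get_drvdata(dev);

	/* re-enable bus I/O and restore the non-default register values */
	regcache_cache_only(foo->regmap, false);
	regcache_sync(foo->regmap);

	return 0;
}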
 	pm_runtime_mark_last_busy(dev);
diff --git a/sound/soc/codecs/wcd938x.c b/sound/soc/codecs/wcd938x.c
index fcac763b04d1b..d34f13758aca0 100644
--- a/sound/soc/codecs/wcd938x.c
+++ b/sound/soc/codecs/wcd938x.c
@@ -273,1001 +273,6 @@ static struct wcd_mbhc_field wcd_mbhc_fields[WCD_MBHC_REG_FUNC_MAX] = {
 	WCD_MBHC_FIELD(WCD_MBHC_ELECT_ISRC_EN, WCD938X_ANA_MBHC_ZDET, 0x02),
 };
-static const struct reg_default wcd938x_defaults[] = { - {WCD938X_ANA_PAGE_REGISTER, 0x00}, - {WCD938X_ANA_BIAS, 0x00}, - {WCD938X_ANA_RX_SUPPLIES, 0x00}, - {WCD938X_ANA_HPH, 0x0C}, - {WCD938X_ANA_EAR, 0x00}, - {WCD938X_ANA_EAR_COMPANDER_CTL, 0x02}, - {WCD938X_ANA_TX_CH1, 0x20}, - {WCD938X_ANA_TX_CH2, 0x00}, - {WCD938X_ANA_TX_CH3, 0x20}, - {WCD938X_ANA_TX_CH4, 0x00}, - {WCD938X_ANA_MICB1_MICB2_DSP_EN_LOGIC, 0x00}, - {WCD938X_ANA_MICB3_DSP_EN_LOGIC, 0x00}, - {WCD938X_ANA_MBHC_MECH, 0x39}, - {WCD938X_ANA_MBHC_ELECT, 0x08}, - {WCD938X_ANA_MBHC_ZDET, 0x00}, - {WCD938X_ANA_MBHC_RESULT_1, 0x00}, - {WCD938X_ANA_MBHC_RESULT_2, 0x00}, - {WCD938X_ANA_MBHC_RESULT_3, 0x00}, - {WCD938X_ANA_MBHC_BTN0, 0x00}, - {WCD938X_ANA_MBHC_BTN1, 0x10}, - {WCD938X_ANA_MBHC_BTN2, 0x20}, - {WCD938X_ANA_MBHC_BTN3, 0x30}, - {WCD938X_ANA_MBHC_BTN4, 0x40}, - {WCD938X_ANA_MBHC_BTN5, 0x50}, - {WCD938X_ANA_MBHC_BTN6, 0x60}, - {WCD938X_ANA_MBHC_BTN7, 0x70}, - {WCD938X_ANA_MICB1, 0x10}, - {WCD938X_ANA_MICB2, 0x10}, - {WCD938X_ANA_MICB2_RAMP, 0x00}, - {WCD938X_ANA_MICB3, 0x10}, - {WCD938X_ANA_MICB4, 0x10}, - {WCD938X_BIAS_CTL, 0x2A}, - {WCD938X_BIAS_VBG_FINE_ADJ, 0x55}, - {WCD938X_LDOL_VDDCX_ADJUST, 0x01}, - {WCD938X_LDOL_DISABLE_LDOL, 0x00}, - {WCD938X_MBHC_CTL_CLK, 0x00}, - {WCD938X_MBHC_CTL_ANA, 0x00}, - {WCD938X_MBHC_CTL_SPARE_1, 0x00}, - {WCD938X_MBHC_CTL_SPARE_2, 0x00}, - {WCD938X_MBHC_CTL_BCS, 0x00}, - {WCD938X_MBHC_MOISTURE_DET_FSM_STATUS, 0x00}, - {WCD938X_MBHC_TEST_CTL, 0x00}, - {WCD938X_LDOH_MODE, 0x2B}, - {WCD938X_LDOH_BIAS, 0x68}, - {WCD938X_LDOH_STB_LOADS, 0x00}, - {WCD938X_LDOH_SLOWRAMP, 0x50}, - {WCD938X_MICB1_TEST_CTL_1, 0x1A}, - {WCD938X_MICB1_TEST_CTL_2, 0x00}, - {WCD938X_MICB1_TEST_CTL_3, 0xA4}, - {WCD938X_MICB2_TEST_CTL_1, 0x1A}, - {WCD938X_MICB2_TEST_CTL_2, 0x00}, - {WCD938X_MICB2_TEST_CTL_3, 0x24}, - {WCD938X_MICB3_TEST_CTL_1, 0x1A}, - {WCD938X_MICB3_TEST_CTL_2, 0x00}, - {WCD938X_MICB3_TEST_CTL_3, 0xA4}, - {WCD938X_MICB4_TEST_CTL_1, 0x1A}, - {WCD938X_MICB4_TEST_CTL_2, 0x00}, - {WCD938X_MICB4_TEST_CTL_3, 0xA4}, - {WCD938X_TX_COM_ADC_VCM, 0x39}, - {WCD938X_TX_COM_BIAS_ATEST, 0xE0}, - {WCD938X_TX_COM_SPARE1, 0x00}, - {WCD938X_TX_COM_SPARE2, 0x00}, - {WCD938X_TX_COM_TXFE_DIV_CTL, 0x22}, - {WCD938X_TX_COM_TXFE_DIV_START, 0x00}, - {WCD938X_TX_COM_SPARE3, 0x00}, - {WCD938X_TX_COM_SPARE4, 0x00}, - {WCD938X_TX_1_2_TEST_EN, 0xCC}, - {WCD938X_TX_1_2_ADC_IB, 0xE9}, - {WCD938X_TX_1_2_ATEST_REFCTL, 0x0A}, - {WCD938X_TX_1_2_TEST_CTL, 0x38}, - {WCD938X_TX_1_2_TEST_BLK_EN1, 0xFF}, - {WCD938X_TX_1_2_TXFE1_CLKDIV, 0x00}, - {WCD938X_TX_1_2_SAR2_ERR, 0x00}, - {WCD938X_TX_1_2_SAR1_ERR, 0x00}, - {WCD938X_TX_3_4_TEST_EN, 0xCC}, - {WCD938X_TX_3_4_ADC_IB, 0xE9}, - {WCD938X_TX_3_4_ATEST_REFCTL, 0x0A}, - {WCD938X_TX_3_4_TEST_CTL, 0x38}, - {WCD938X_TX_3_4_TEST_BLK_EN3, 0xFF}, - {WCD938X_TX_3_4_TXFE3_CLKDIV, 0x00}, - {WCD938X_TX_3_4_SAR4_ERR, 0x00}, - {WCD938X_TX_3_4_SAR3_ERR, 0x00}, - {WCD938X_TX_3_4_TEST_BLK_EN2, 0xFB}, - {WCD938X_TX_3_4_TXFE2_CLKDIV, 0x00}, - {WCD938X_TX_3_4_SPARE1, 0x00}, - {WCD938X_TX_3_4_TEST_BLK_EN4, 0xFB}, - {WCD938X_TX_3_4_TXFE4_CLKDIV, 0x00}, - {WCD938X_TX_3_4_SPARE2, 0x00}, - {WCD938X_CLASSH_MODE_1, 0x40}, - {WCD938X_CLASSH_MODE_2, 0x3A}, - {WCD938X_CLASSH_MODE_3, 0x00}, - {WCD938X_CLASSH_CTRL_VCL_1, 0x70}, - {WCD938X_CLASSH_CTRL_VCL_2, 0x82}, - {WCD938X_CLASSH_CTRL_CCL_1, 0x31}, - {WCD938X_CLASSH_CTRL_CCL_2, 0x80}, - {WCD938X_CLASSH_CTRL_CCL_3, 0x80}, - {WCD938X_CLASSH_CTRL_CCL_4, 0x51}, - {WCD938X_CLASSH_CTRL_CCL_5, 0x00}, - {WCD938X_CLASSH_BUCK_TMUX_A_D, 0x00}, - {WCD938X_CLASSH_BUCK_SW_DRV_CNTL, 0x77}, 
- {WCD938X_CLASSH_SPARE, 0x00}, - {WCD938X_FLYBACK_EN, 0x4E}, - {WCD938X_FLYBACK_VNEG_CTRL_1, 0x0B}, - {WCD938X_FLYBACK_VNEG_CTRL_2, 0x45}, - {WCD938X_FLYBACK_VNEG_CTRL_3, 0x74}, - {WCD938X_FLYBACK_VNEG_CTRL_4, 0x7F}, - {WCD938X_FLYBACK_VNEG_CTRL_5, 0x83}, - {WCD938X_FLYBACK_VNEG_CTRL_6, 0x98}, - {WCD938X_FLYBACK_VNEG_CTRL_7, 0xA9}, - {WCD938X_FLYBACK_VNEG_CTRL_8, 0x68}, - {WCD938X_FLYBACK_VNEG_CTRL_9, 0x64}, - {WCD938X_FLYBACK_VNEGDAC_CTRL_1, 0xED}, - {WCD938X_FLYBACK_VNEGDAC_CTRL_2, 0xF0}, - {WCD938X_FLYBACK_VNEGDAC_CTRL_3, 0xA6}, - {WCD938X_FLYBACK_CTRL_1, 0x65}, - {WCD938X_FLYBACK_TEST_CTL, 0x00}, - {WCD938X_RX_AUX_SW_CTL, 0x00}, - {WCD938X_RX_PA_AUX_IN_CONN, 0x01}, - {WCD938X_RX_TIMER_DIV, 0x32}, - {WCD938X_RX_OCP_CTL, 0x1F}, - {WCD938X_RX_OCP_COUNT, 0x77}, - {WCD938X_RX_BIAS_EAR_DAC, 0xA0}, - {WCD938X_RX_BIAS_EAR_AMP, 0xAA}, - {WCD938X_RX_BIAS_HPH_LDO, 0xA9}, - {WCD938X_RX_BIAS_HPH_PA, 0xAA}, - {WCD938X_RX_BIAS_HPH_RDACBUFF_CNP2, 0x8A}, - {WCD938X_RX_BIAS_HPH_RDAC_LDO, 0x88}, - {WCD938X_RX_BIAS_HPH_CNP1, 0x82}, - {WCD938X_RX_BIAS_HPH_LOWPOWER, 0x82}, - {WCD938X_RX_BIAS_AUX_DAC, 0xA0}, - {WCD938X_RX_BIAS_AUX_AMP, 0xAA}, - {WCD938X_RX_BIAS_VNEGDAC_BLEEDER, 0x50}, - {WCD938X_RX_BIAS_MISC, 0x00}, - {WCD938X_RX_BIAS_BUCK_RST, 0x08}, - {WCD938X_RX_BIAS_BUCK_VREF_ERRAMP, 0x44}, - {WCD938X_RX_BIAS_FLYB_ERRAMP, 0x40}, - {WCD938X_RX_BIAS_FLYB_BUFF, 0xAA}, - {WCD938X_RX_BIAS_FLYB_MID_RST, 0x14}, - {WCD938X_HPH_L_STATUS, 0x04}, - {WCD938X_HPH_R_STATUS, 0x04}, - {WCD938X_HPH_CNP_EN, 0x80}, - {WCD938X_HPH_CNP_WG_CTL, 0x9A}, - {WCD938X_HPH_CNP_WG_TIME, 0x14}, - {WCD938X_HPH_OCP_CTL, 0x28}, - {WCD938X_HPH_AUTO_CHOP, 0x16}, - {WCD938X_HPH_CHOP_CTL, 0x83}, - {WCD938X_HPH_PA_CTL1, 0x46}, - {WCD938X_HPH_PA_CTL2, 0x50}, - {WCD938X_HPH_L_EN, 0x80}, - {WCD938X_HPH_L_TEST, 0xE0}, - {WCD938X_HPH_L_ATEST, 0x50}, - {WCD938X_HPH_R_EN, 0x80}, - {WCD938X_HPH_R_TEST, 0xE0}, - {WCD938X_HPH_R_ATEST, 0x54}, - {WCD938X_HPH_RDAC_CLK_CTL1, 0x99}, - {WCD938X_HPH_RDAC_CLK_CTL2, 0x9B}, - {WCD938X_HPH_RDAC_LDO_CTL, 0x33}, - {WCD938X_HPH_RDAC_CHOP_CLK_LP_CTL, 0x00}, - {WCD938X_HPH_REFBUFF_UHQA_CTL, 0x68}, - {WCD938X_HPH_REFBUFF_LP_CTL, 0x0E}, - {WCD938X_HPH_L_DAC_CTL, 0x20}, - {WCD938X_HPH_R_DAC_CTL, 0x20}, - {WCD938X_HPH_SURGE_HPHLR_SURGE_COMP_SEL, 0x55}, - {WCD938X_HPH_SURGE_HPHLR_SURGE_EN, 0x19}, - {WCD938X_HPH_SURGE_HPHLR_SURGE_MISC1, 0xA0}, - {WCD938X_HPH_SURGE_HPHLR_SURGE_STATUS, 0x00}, - {WCD938X_EAR_EAR_EN_REG, 0x22}, - {WCD938X_EAR_EAR_PA_CON, 0x44}, - {WCD938X_EAR_EAR_SP_CON, 0xDB}, - {WCD938X_EAR_EAR_DAC_CON, 0x80}, - {WCD938X_EAR_EAR_CNP_FSM_CON, 0xB2}, - {WCD938X_EAR_TEST_CTL, 0x00}, - {WCD938X_EAR_STATUS_REG_1, 0x00}, - {WCD938X_EAR_STATUS_REG_2, 0x08}, - {WCD938X_ANA_NEW_PAGE_REGISTER, 0x00}, - {WCD938X_HPH_NEW_ANA_HPH2, 0x00}, - {WCD938X_HPH_NEW_ANA_HPH3, 0x00}, - {WCD938X_SLEEP_CTL, 0x16}, - {WCD938X_SLEEP_WATCHDOG_CTL, 0x00}, - {WCD938X_MBHC_NEW_ELECT_REM_CLAMP_CTL, 0x00}, - {WCD938X_MBHC_NEW_CTL_1, 0x02}, - {WCD938X_MBHC_NEW_CTL_2, 0x05}, - {WCD938X_MBHC_NEW_PLUG_DETECT_CTL, 0xE9}, - {WCD938X_MBHC_NEW_ZDET_ANA_CTL, 0x0F}, - {WCD938X_MBHC_NEW_ZDET_RAMP_CTL, 0x00}, - {WCD938X_MBHC_NEW_FSM_STATUS, 0x00}, - {WCD938X_MBHC_NEW_ADC_RESULT, 0x00}, - {WCD938X_TX_NEW_AMIC_MUX_CFG, 0x00}, - {WCD938X_AUX_AUXPA, 0x00}, - {WCD938X_LDORXTX_MODE, 0x0C}, - {WCD938X_LDORXTX_CONFIG, 0x10}, - {WCD938X_DIE_CRACK_DIE_CRK_DET_EN, 0x00}, - {WCD938X_DIE_CRACK_DIE_CRK_DET_OUT, 0x00}, - {WCD938X_HPH_NEW_INT_RDAC_GAIN_CTL, 0x40}, - {WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_L, 0x81}, - {WCD938X_HPH_NEW_INT_RDAC_VREF_CTL, 0x10}, - 
{WCD938X_HPH_NEW_INT_RDAC_OVERRIDE_CTL, 0x00}, - {WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_R, 0x81}, - {WCD938X_HPH_NEW_INT_PA_MISC1, 0x22}, - {WCD938X_HPH_NEW_INT_PA_MISC2, 0x00}, - {WCD938X_HPH_NEW_INT_PA_RDAC_MISC, 0x00}, - {WCD938X_HPH_NEW_INT_HPH_TIMER1, 0xFE}, - {WCD938X_HPH_NEW_INT_HPH_TIMER2, 0x02}, - {WCD938X_HPH_NEW_INT_HPH_TIMER3, 0x4E}, - {WCD938X_HPH_NEW_INT_HPH_TIMER4, 0x54}, - {WCD938X_HPH_NEW_INT_PA_RDAC_MISC2, 0x00}, - {WCD938X_HPH_NEW_INT_PA_RDAC_MISC3, 0x00}, - {WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_L_NEW, 0x90}, - {WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_R_NEW, 0x90}, - {WCD938X_RX_NEW_INT_HPH_RDAC_BIAS_LOHIFI, 0x62}, - {WCD938X_RX_NEW_INT_HPH_RDAC_BIAS_ULP, 0x01}, - {WCD938X_RX_NEW_INT_HPH_RDAC_LDO_LP, 0x11}, - {WCD938X_MBHC_NEW_INT_MOISTURE_DET_DC_CTRL, 0x57}, - {WCD938X_MBHC_NEW_INT_MOISTURE_DET_POLLING_CTRL, 0x01}, - {WCD938X_MBHC_NEW_INT_MECH_DET_CURRENT, 0x00}, - {WCD938X_MBHC_NEW_INT_SPARE_2, 0x00}, - {WCD938X_EAR_INT_NEW_EAR_CHOPPER_CON, 0xA8}, - {WCD938X_EAR_INT_NEW_CNP_VCM_CON1, 0x42}, - {WCD938X_EAR_INT_NEW_CNP_VCM_CON2, 0x22}, - {WCD938X_EAR_INT_NEW_EAR_DYNAMIC_BIAS, 0x00}, - {WCD938X_AUX_INT_EN_REG, 0x00}, - {WCD938X_AUX_INT_PA_CTRL, 0x06}, - {WCD938X_AUX_INT_SP_CTRL, 0xD2}, - {WCD938X_AUX_INT_DAC_CTRL, 0x80}, - {WCD938X_AUX_INT_CLK_CTRL, 0x50}, - {WCD938X_AUX_INT_TEST_CTRL, 0x00}, - {WCD938X_AUX_INT_STATUS_REG, 0x00}, - {WCD938X_AUX_INT_MISC, 0x00}, - {WCD938X_LDORXTX_INT_BIAS, 0x6E}, - {WCD938X_LDORXTX_INT_STB_LOADS_DTEST, 0x50}, - {WCD938X_LDORXTX_INT_TEST0, 0x1C}, - {WCD938X_LDORXTX_INT_STARTUP_TIMER, 0xFF}, - {WCD938X_LDORXTX_INT_TEST1, 0x1F}, - {WCD938X_LDORXTX_INT_STATUS, 0x00}, - {WCD938X_SLEEP_INT_WATCHDOG_CTL_1, 0x0A}, - {WCD938X_SLEEP_INT_WATCHDOG_CTL_2, 0x0A}, - {WCD938X_DIE_CRACK_INT_DIE_CRK_DET_INT1, 0x02}, - {WCD938X_DIE_CRACK_INT_DIE_CRK_DET_INT2, 0x60}, - {WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_L2, 0xFF}, - {WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_L1, 0x7F}, - {WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_L0, 0x3F}, - {WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_ULP1P2M, 0x1F}, - {WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_ULP0P6M, 0x0F}, - {WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG1_L2L1, 0xD7}, - {WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG1_L0, 0xC8}, - {WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG1_ULP, 0xC6}, - {WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2MAIN_L2L1, 0xD5}, - {WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2MAIN_L0, 0xCA}, - {WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2MAIN_ULP, 0x05}, - {WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2CASC_L2L1L0, 0xA5}, - {WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2CASC_ULP, 0x13}, - {WCD938X_TX_COM_NEW_INT_TXADC_SCBIAS_L2L1, 0x88}, - {WCD938X_TX_COM_NEW_INT_TXADC_SCBIAS_L0ULP, 0x42}, - {WCD938X_TX_COM_NEW_INT_TXADC_INT_L2, 0xFF}, - {WCD938X_TX_COM_NEW_INT_TXADC_INT_L1, 0x64}, - {WCD938X_TX_COM_NEW_INT_TXADC_INT_L0, 0x64}, - {WCD938X_TX_COM_NEW_INT_TXADC_INT_ULP, 0x77}, - {WCD938X_DIGITAL_PAGE_REGISTER, 0x00}, - {WCD938X_DIGITAL_CHIP_ID0, 0x00}, - {WCD938X_DIGITAL_CHIP_ID1, 0x00}, - {WCD938X_DIGITAL_CHIP_ID2, 0x0D}, - {WCD938X_DIGITAL_CHIP_ID3, 0x01}, - {WCD938X_DIGITAL_SWR_TX_CLK_RATE, 0x00}, - {WCD938X_DIGITAL_CDC_RST_CTL, 0x03}, - {WCD938X_DIGITAL_TOP_CLK_CFG, 0x00}, - {WCD938X_DIGITAL_CDC_ANA_CLK_CTL, 0x00}, - {WCD938X_DIGITAL_CDC_DIG_CLK_CTL, 0xF0}, - {WCD938X_DIGITAL_SWR_RST_EN, 0x00}, - {WCD938X_DIGITAL_CDC_PATH_MODE, 0x55}, - {WCD938X_DIGITAL_CDC_RX_RST, 0x00}, - {WCD938X_DIGITAL_CDC_RX0_CTL, 0xFC}, - {WCD938X_DIGITAL_CDC_RX1_CTL, 0xFC}, - {WCD938X_DIGITAL_CDC_RX2_CTL, 0xFC}, - {WCD938X_DIGITAL_CDC_TX_ANA_MODE_0_1, 0x00}, - {WCD938X_DIGITAL_CDC_TX_ANA_MODE_2_3, 0x00}, - 
{WCD938X_DIGITAL_CDC_COMP_CTL_0, 0x00}, - {WCD938X_DIGITAL_CDC_ANA_TX_CLK_CTL, 0x1E}, - {WCD938X_DIGITAL_CDC_HPH_DSM_A1_0, 0x00}, - {WCD938X_DIGITAL_CDC_HPH_DSM_A1_1, 0x01}, - {WCD938X_DIGITAL_CDC_HPH_DSM_A2_0, 0x63}, - {WCD938X_DIGITAL_CDC_HPH_DSM_A2_1, 0x04}, - {WCD938X_DIGITAL_CDC_HPH_DSM_A3_0, 0xAC}, - {WCD938X_DIGITAL_CDC_HPH_DSM_A3_1, 0x04}, - {WCD938X_DIGITAL_CDC_HPH_DSM_A4_0, 0x1A}, - {WCD938X_DIGITAL_CDC_HPH_DSM_A4_1, 0x03}, - {WCD938X_DIGITAL_CDC_HPH_DSM_A5_0, 0xBC}, - {WCD938X_DIGITAL_CDC_HPH_DSM_A5_1, 0x02}, - {WCD938X_DIGITAL_CDC_HPH_DSM_A6_0, 0xC7}, - {WCD938X_DIGITAL_CDC_HPH_DSM_A7_0, 0xF8}, - {WCD938X_DIGITAL_CDC_HPH_DSM_C_0, 0x47}, - {WCD938X_DIGITAL_CDC_HPH_DSM_C_1, 0x43}, - {WCD938X_DIGITAL_CDC_HPH_DSM_C_2, 0xB1}, - {WCD938X_DIGITAL_CDC_HPH_DSM_C_3, 0x17}, - {WCD938X_DIGITAL_CDC_HPH_DSM_R1, 0x4D}, - {WCD938X_DIGITAL_CDC_HPH_DSM_R2, 0x29}, - {WCD938X_DIGITAL_CDC_HPH_DSM_R3, 0x34}, - {WCD938X_DIGITAL_CDC_HPH_DSM_R4, 0x59}, - {WCD938X_DIGITAL_CDC_HPH_DSM_R5, 0x66}, - {WCD938X_DIGITAL_CDC_HPH_DSM_R6, 0x87}, - {WCD938X_DIGITAL_CDC_HPH_DSM_R7, 0x64}, - {WCD938X_DIGITAL_CDC_AUX_DSM_A1_0, 0x00}, - {WCD938X_DIGITAL_CDC_AUX_DSM_A1_1, 0x01}, - {WCD938X_DIGITAL_CDC_AUX_DSM_A2_0, 0x96}, - {WCD938X_DIGITAL_CDC_AUX_DSM_A2_1, 0x09}, - {WCD938X_DIGITAL_CDC_AUX_DSM_A3_0, 0xAB}, - {WCD938X_DIGITAL_CDC_AUX_DSM_A3_1, 0x05}, - {WCD938X_DIGITAL_CDC_AUX_DSM_A4_0, 0x1C}, - {WCD938X_DIGITAL_CDC_AUX_DSM_A4_1, 0x02}, - {WCD938X_DIGITAL_CDC_AUX_DSM_A5_0, 0x17}, - {WCD938X_DIGITAL_CDC_AUX_DSM_A5_1, 0x02}, - {WCD938X_DIGITAL_CDC_AUX_DSM_A6_0, 0xAA}, - {WCD938X_DIGITAL_CDC_AUX_DSM_A7_0, 0xE3}, - {WCD938X_DIGITAL_CDC_AUX_DSM_C_0, 0x69}, - {WCD938X_DIGITAL_CDC_AUX_DSM_C_1, 0x54}, - {WCD938X_DIGITAL_CDC_AUX_DSM_C_2, 0x02}, - {WCD938X_DIGITAL_CDC_AUX_DSM_C_3, 0x15}, - {WCD938X_DIGITAL_CDC_AUX_DSM_R1, 0xA4}, - {WCD938X_DIGITAL_CDC_AUX_DSM_R2, 0xB5}, - {WCD938X_DIGITAL_CDC_AUX_DSM_R3, 0x86}, - {WCD938X_DIGITAL_CDC_AUX_DSM_R4, 0x85}, - {WCD938X_DIGITAL_CDC_AUX_DSM_R5, 0xAA}, - {WCD938X_DIGITAL_CDC_AUX_DSM_R6, 0xE2}, - {WCD938X_DIGITAL_CDC_AUX_DSM_R7, 0x62}, - {WCD938X_DIGITAL_CDC_HPH_GAIN_RX_0, 0x55}, - {WCD938X_DIGITAL_CDC_HPH_GAIN_RX_1, 0xA9}, - {WCD938X_DIGITAL_CDC_HPH_GAIN_DSD_0, 0x3D}, - {WCD938X_DIGITAL_CDC_HPH_GAIN_DSD_1, 0x2E}, - {WCD938X_DIGITAL_CDC_HPH_GAIN_DSD_2, 0x01}, - {WCD938X_DIGITAL_CDC_AUX_GAIN_DSD_0, 0x00}, - {WCD938X_DIGITAL_CDC_AUX_GAIN_DSD_1, 0xFC}, - {WCD938X_DIGITAL_CDC_AUX_GAIN_DSD_2, 0x01}, - {WCD938X_DIGITAL_CDC_HPH_GAIN_CTL, 0x00}, - {WCD938X_DIGITAL_CDC_AUX_GAIN_CTL, 0x00}, - {WCD938X_DIGITAL_CDC_EAR_PATH_CTL, 0x00}, - {WCD938X_DIGITAL_CDC_SWR_CLH, 0x00}, - {WCD938X_DIGITAL_SWR_CLH_BYP, 0x00}, - {WCD938X_DIGITAL_CDC_TX0_CTL, 0x68}, - {WCD938X_DIGITAL_CDC_TX1_CTL, 0x68}, - {WCD938X_DIGITAL_CDC_TX2_CTL, 0x68}, - {WCD938X_DIGITAL_CDC_TX_RST, 0x00}, - {WCD938X_DIGITAL_CDC_REQ_CTL, 0x01}, - {WCD938X_DIGITAL_CDC_RST, 0x00}, - {WCD938X_DIGITAL_CDC_AMIC_CTL, 0x0F}, - {WCD938X_DIGITAL_CDC_DMIC_CTL, 0x04}, - {WCD938X_DIGITAL_CDC_DMIC1_CTL, 0x01}, - {WCD938X_DIGITAL_CDC_DMIC2_CTL, 0x01}, - {WCD938X_DIGITAL_CDC_DMIC3_CTL, 0x01}, - {WCD938X_DIGITAL_CDC_DMIC4_CTL, 0x01}, - {WCD938X_DIGITAL_EFUSE_PRG_CTL, 0x00}, - {WCD938X_DIGITAL_EFUSE_CTL, 0x2B}, - {WCD938X_DIGITAL_CDC_DMIC_RATE_1_2, 0x11}, - {WCD938X_DIGITAL_CDC_DMIC_RATE_3_4, 0x11}, - {WCD938X_DIGITAL_PDM_WD_CTL0, 0x00}, - {WCD938X_DIGITAL_PDM_WD_CTL1, 0x00}, - {WCD938X_DIGITAL_PDM_WD_CTL2, 0x00}, - {WCD938X_DIGITAL_INTR_MODE, 0x00}, - {WCD938X_DIGITAL_INTR_MASK_0, 0xFF}, - {WCD938X_DIGITAL_INTR_MASK_1, 0xFF}, - {WCD938X_DIGITAL_INTR_MASK_2, 
0x3F}, - {WCD938X_DIGITAL_INTR_STATUS_0, 0x00}, - {WCD938X_DIGITAL_INTR_STATUS_1, 0x00}, - {WCD938X_DIGITAL_INTR_STATUS_2, 0x00}, - {WCD938X_DIGITAL_INTR_CLEAR_0, 0x00}, - {WCD938X_DIGITAL_INTR_CLEAR_1, 0x00}, - {WCD938X_DIGITAL_INTR_CLEAR_2, 0x00}, - {WCD938X_DIGITAL_INTR_LEVEL_0, 0x00}, - {WCD938X_DIGITAL_INTR_LEVEL_1, 0x00}, - {WCD938X_DIGITAL_INTR_LEVEL_2, 0x00}, - {WCD938X_DIGITAL_INTR_SET_0, 0x00}, - {WCD938X_DIGITAL_INTR_SET_1, 0x00}, - {WCD938X_DIGITAL_INTR_SET_2, 0x00}, - {WCD938X_DIGITAL_INTR_TEST_0, 0x00}, - {WCD938X_DIGITAL_INTR_TEST_1, 0x00}, - {WCD938X_DIGITAL_INTR_TEST_2, 0x00}, - {WCD938X_DIGITAL_TX_MODE_DBG_EN, 0x00}, - {WCD938X_DIGITAL_TX_MODE_DBG_0_1, 0x00}, - {WCD938X_DIGITAL_TX_MODE_DBG_2_3, 0x00}, - {WCD938X_DIGITAL_LB_IN_SEL_CTL, 0x00}, - {WCD938X_DIGITAL_LOOP_BACK_MODE, 0x00}, - {WCD938X_DIGITAL_SWR_DAC_TEST, 0x00}, - {WCD938X_DIGITAL_SWR_HM_TEST_RX_0, 0x40}, - {WCD938X_DIGITAL_SWR_HM_TEST_TX_0, 0x40}, - {WCD938X_DIGITAL_SWR_HM_TEST_RX_1, 0x00}, - {WCD938X_DIGITAL_SWR_HM_TEST_TX_1, 0x00}, - {WCD938X_DIGITAL_SWR_HM_TEST_TX_2, 0x00}, - {WCD938X_DIGITAL_SWR_HM_TEST_0, 0x00}, - {WCD938X_DIGITAL_SWR_HM_TEST_1, 0x00}, - {WCD938X_DIGITAL_PAD_CTL_SWR_0, 0x8F}, - {WCD938X_DIGITAL_PAD_CTL_SWR_1, 0x06}, - {WCD938X_DIGITAL_I2C_CTL, 0x00}, - {WCD938X_DIGITAL_CDC_TX_TANGGU_SW_MODE, 0x00}, - {WCD938X_DIGITAL_EFUSE_TEST_CTL_0, 0x00}, - {WCD938X_DIGITAL_EFUSE_TEST_CTL_1, 0x00}, - {WCD938X_DIGITAL_EFUSE_T_DATA_0, 0x00}, - {WCD938X_DIGITAL_EFUSE_T_DATA_1, 0x00}, - {WCD938X_DIGITAL_PAD_CTL_PDM_RX0, 0xF1}, - {WCD938X_DIGITAL_PAD_CTL_PDM_RX1, 0xF1}, - {WCD938X_DIGITAL_PAD_CTL_PDM_TX0, 0xF1}, - {WCD938X_DIGITAL_PAD_CTL_PDM_TX1, 0xF1}, - {WCD938X_DIGITAL_PAD_CTL_PDM_TX2, 0xF1}, - {WCD938X_DIGITAL_PAD_INP_DIS_0, 0x00}, - {WCD938X_DIGITAL_PAD_INP_DIS_1, 0x00}, - {WCD938X_DIGITAL_DRIVE_STRENGTH_0, 0x00}, - {WCD938X_DIGITAL_DRIVE_STRENGTH_1, 0x00}, - {WCD938X_DIGITAL_DRIVE_STRENGTH_2, 0x00}, - {WCD938X_DIGITAL_RX_DATA_EDGE_CTL, 0x1F}, - {WCD938X_DIGITAL_TX_DATA_EDGE_CTL, 0x80}, - {WCD938X_DIGITAL_GPIO_MODE, 0x00}, - {WCD938X_DIGITAL_PIN_CTL_OE, 0x00}, - {WCD938X_DIGITAL_PIN_CTL_DATA_0, 0x00}, - {WCD938X_DIGITAL_PIN_CTL_DATA_1, 0x00}, - {WCD938X_DIGITAL_PIN_STATUS_0, 0x00}, - {WCD938X_DIGITAL_PIN_STATUS_1, 0x00}, - {WCD938X_DIGITAL_DIG_DEBUG_CTL, 0x00}, - {WCD938X_DIGITAL_DIG_DEBUG_EN, 0x00}, - {WCD938X_DIGITAL_ANA_CSR_DBG_ADD, 0x00}, - {WCD938X_DIGITAL_ANA_CSR_DBG_CTL, 0x48}, - {WCD938X_DIGITAL_SSP_DBG, 0x00}, - {WCD938X_DIGITAL_MODE_STATUS_0, 0x00}, - {WCD938X_DIGITAL_MODE_STATUS_1, 0x00}, - {WCD938X_DIGITAL_SPARE_0, 0x00}, - {WCD938X_DIGITAL_SPARE_1, 0x00}, - {WCD938X_DIGITAL_SPARE_2, 0x00}, - {WCD938X_DIGITAL_EFUSE_REG_0, 0x00}, - {WCD938X_DIGITAL_EFUSE_REG_1, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_2, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_3, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_4, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_5, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_6, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_7, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_8, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_9, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_10, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_11, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_12, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_13, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_14, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_15, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_16, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_17, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_18, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_19, 0xFF}, - {WCD938X_DIGITAL_EFUSE_REG_20, 0x0E}, - {WCD938X_DIGITAL_EFUSE_REG_21, 0x00}, - {WCD938X_DIGITAL_EFUSE_REG_22, 0x00}, - 
{WCD938X_DIGITAL_EFUSE_REG_23, 0xF8}, - {WCD938X_DIGITAL_EFUSE_REG_24, 0x16}, - {WCD938X_DIGITAL_EFUSE_REG_25, 0x00}, - {WCD938X_DIGITAL_EFUSE_REG_26, 0x00}, - {WCD938X_DIGITAL_EFUSE_REG_27, 0x00}, - {WCD938X_DIGITAL_EFUSE_REG_28, 0x00}, - {WCD938X_DIGITAL_EFUSE_REG_29, 0x00}, - {WCD938X_DIGITAL_EFUSE_REG_30, 0x00}, - {WCD938X_DIGITAL_EFUSE_REG_31, 0x00}, - {WCD938X_DIGITAL_TX_REQ_FB_CTL_0, 0x88}, - {WCD938X_DIGITAL_TX_REQ_FB_CTL_1, 0x88}, - {WCD938X_DIGITAL_TX_REQ_FB_CTL_2, 0x88}, - {WCD938X_DIGITAL_TX_REQ_FB_CTL_3, 0x88}, - {WCD938X_DIGITAL_TX_REQ_FB_CTL_4, 0x88}, - {WCD938X_DIGITAL_DEM_BYPASS_DATA0, 0x55}, - {WCD938X_DIGITAL_DEM_BYPASS_DATA1, 0x55}, - {WCD938X_DIGITAL_DEM_BYPASS_DATA2, 0x55}, - {WCD938X_DIGITAL_DEM_BYPASS_DATA3, 0x01}, -}; - -static bool wcd938x_rdwr_register(struct device *dev, unsigned int reg) -{ - switch (reg) { - case WCD938X_ANA_PAGE_REGISTER: - case WCD938X_ANA_BIAS: - case WCD938X_ANA_RX_SUPPLIES: - case WCD938X_ANA_HPH: - case WCD938X_ANA_EAR: - case WCD938X_ANA_EAR_COMPANDER_CTL: - case WCD938X_ANA_TX_CH1: - case WCD938X_ANA_TX_CH2: - case WCD938X_ANA_TX_CH3: - case WCD938X_ANA_TX_CH4: - case WCD938X_ANA_MICB1_MICB2_DSP_EN_LOGIC: - case WCD938X_ANA_MICB3_DSP_EN_LOGIC: - case WCD938X_ANA_MBHC_MECH: - case WCD938X_ANA_MBHC_ELECT: - case WCD938X_ANA_MBHC_ZDET: - case WCD938X_ANA_MBHC_BTN0: - case WCD938X_ANA_MBHC_BTN1: - case WCD938X_ANA_MBHC_BTN2: - case WCD938X_ANA_MBHC_BTN3: - case WCD938X_ANA_MBHC_BTN4: - case WCD938X_ANA_MBHC_BTN5: - case WCD938X_ANA_MBHC_BTN6: - case WCD938X_ANA_MBHC_BTN7: - case WCD938X_ANA_MICB1: - case WCD938X_ANA_MICB2: - case WCD938X_ANA_MICB2_RAMP: - case WCD938X_ANA_MICB3: - case WCD938X_ANA_MICB4: - case WCD938X_BIAS_CTL: - case WCD938X_BIAS_VBG_FINE_ADJ: - case WCD938X_LDOL_VDDCX_ADJUST: - case WCD938X_LDOL_DISABLE_LDOL: - case WCD938X_MBHC_CTL_CLK: - case WCD938X_MBHC_CTL_ANA: - case WCD938X_MBHC_CTL_SPARE_1: - case WCD938X_MBHC_CTL_SPARE_2: - case WCD938X_MBHC_CTL_BCS: - case WCD938X_MBHC_TEST_CTL: - case WCD938X_LDOH_MODE: - case WCD938X_LDOH_BIAS: - case WCD938X_LDOH_STB_LOADS: - case WCD938X_LDOH_SLOWRAMP: - case WCD938X_MICB1_TEST_CTL_1: - case WCD938X_MICB1_TEST_CTL_2: - case WCD938X_MICB1_TEST_CTL_3: - case WCD938X_MICB2_TEST_CTL_1: - case WCD938X_MICB2_TEST_CTL_2: - case WCD938X_MICB2_TEST_CTL_3: - case WCD938X_MICB3_TEST_CTL_1: - case WCD938X_MICB3_TEST_CTL_2: - case WCD938X_MICB3_TEST_CTL_3: - case WCD938X_MICB4_TEST_CTL_1: - case WCD938X_MICB4_TEST_CTL_2: - case WCD938X_MICB4_TEST_CTL_3: - case WCD938X_TX_COM_ADC_VCM: - case WCD938X_TX_COM_BIAS_ATEST: - case WCD938X_TX_COM_SPARE1: - case WCD938X_TX_COM_SPARE2: - case WCD938X_TX_COM_TXFE_DIV_CTL: - case WCD938X_TX_COM_TXFE_DIV_START: - case WCD938X_TX_COM_SPARE3: - case WCD938X_TX_COM_SPARE4: - case WCD938X_TX_1_2_TEST_EN: - case WCD938X_TX_1_2_ADC_IB: - case WCD938X_TX_1_2_ATEST_REFCTL: - case WCD938X_TX_1_2_TEST_CTL: - case WCD938X_TX_1_2_TEST_BLK_EN1: - case WCD938X_TX_1_2_TXFE1_CLKDIV: - case WCD938X_TX_3_4_TEST_EN: - case WCD938X_TX_3_4_ADC_IB: - case WCD938X_TX_3_4_ATEST_REFCTL: - case WCD938X_TX_3_4_TEST_CTL: - case WCD938X_TX_3_4_TEST_BLK_EN3: - case WCD938X_TX_3_4_TXFE3_CLKDIV: - case WCD938X_TX_3_4_TEST_BLK_EN2: - case WCD938X_TX_3_4_TXFE2_CLKDIV: - case WCD938X_TX_3_4_SPARE1: - case WCD938X_TX_3_4_TEST_BLK_EN4: - case WCD938X_TX_3_4_TXFE4_CLKDIV: - case WCD938X_TX_3_4_SPARE2: - case WCD938X_CLASSH_MODE_1: - case WCD938X_CLASSH_MODE_2: - case WCD938X_CLASSH_MODE_3: - case WCD938X_CLASSH_CTRL_VCL_1: - case WCD938X_CLASSH_CTRL_VCL_2: - case 
WCD938X_CLASSH_CTRL_CCL_1: - case WCD938X_CLASSH_CTRL_CCL_2: - case WCD938X_CLASSH_CTRL_CCL_3: - case WCD938X_CLASSH_CTRL_CCL_4: - case WCD938X_CLASSH_CTRL_CCL_5: - case WCD938X_CLASSH_BUCK_TMUX_A_D: - case WCD938X_CLASSH_BUCK_SW_DRV_CNTL: - case WCD938X_CLASSH_SPARE: - case WCD938X_FLYBACK_EN: - case WCD938X_FLYBACK_VNEG_CTRL_1: - case WCD938X_FLYBACK_VNEG_CTRL_2: - case WCD938X_FLYBACK_VNEG_CTRL_3: - case WCD938X_FLYBACK_VNEG_CTRL_4: - case WCD938X_FLYBACK_VNEG_CTRL_5: - case WCD938X_FLYBACK_VNEG_CTRL_6: - case WCD938X_FLYBACK_VNEG_CTRL_7: - case WCD938X_FLYBACK_VNEG_CTRL_8: - case WCD938X_FLYBACK_VNEG_CTRL_9: - case WCD938X_FLYBACK_VNEGDAC_CTRL_1: - case WCD938X_FLYBACK_VNEGDAC_CTRL_2: - case WCD938X_FLYBACK_VNEGDAC_CTRL_3: - case WCD938X_FLYBACK_CTRL_1: - case WCD938X_FLYBACK_TEST_CTL: - case WCD938X_RX_AUX_SW_CTL: - case WCD938X_RX_PA_AUX_IN_CONN: - case WCD938X_RX_TIMER_DIV: - case WCD938X_RX_OCP_CTL: - case WCD938X_RX_OCP_COUNT: - case WCD938X_RX_BIAS_EAR_DAC: - case WCD938X_RX_BIAS_EAR_AMP: - case WCD938X_RX_BIAS_HPH_LDO: - case WCD938X_RX_BIAS_HPH_PA: - case WCD938X_RX_BIAS_HPH_RDACBUFF_CNP2: - case WCD938X_RX_BIAS_HPH_RDAC_LDO: - case WCD938X_RX_BIAS_HPH_CNP1: - case WCD938X_RX_BIAS_HPH_LOWPOWER: - case WCD938X_RX_BIAS_AUX_DAC: - case WCD938X_RX_BIAS_AUX_AMP: - case WCD938X_RX_BIAS_VNEGDAC_BLEEDER: - case WCD938X_RX_BIAS_MISC: - case WCD938X_RX_BIAS_BUCK_RST: - case WCD938X_RX_BIAS_BUCK_VREF_ERRAMP: - case WCD938X_RX_BIAS_FLYB_ERRAMP: - case WCD938X_RX_BIAS_FLYB_BUFF: - case WCD938X_RX_BIAS_FLYB_MID_RST: - case WCD938X_HPH_CNP_EN: - case WCD938X_HPH_CNP_WG_CTL: - case WCD938X_HPH_CNP_WG_TIME: - case WCD938X_HPH_OCP_CTL: - case WCD938X_HPH_AUTO_CHOP: - case WCD938X_HPH_CHOP_CTL: - case WCD938X_HPH_PA_CTL1: - case WCD938X_HPH_PA_CTL2: - case WCD938X_HPH_L_EN: - case WCD938X_HPH_L_TEST: - case WCD938X_HPH_L_ATEST: - case WCD938X_HPH_R_EN: - case WCD938X_HPH_R_TEST: - case WCD938X_HPH_R_ATEST: - case WCD938X_HPH_RDAC_CLK_CTL1: - case WCD938X_HPH_RDAC_CLK_CTL2: - case WCD938X_HPH_RDAC_LDO_CTL: - case WCD938X_HPH_RDAC_CHOP_CLK_LP_CTL: - case WCD938X_HPH_REFBUFF_UHQA_CTL: - case WCD938X_HPH_REFBUFF_LP_CTL: - case WCD938X_HPH_L_DAC_CTL: - case WCD938X_HPH_R_DAC_CTL: - case WCD938X_HPH_SURGE_HPHLR_SURGE_COMP_SEL: - case WCD938X_HPH_SURGE_HPHLR_SURGE_EN: - case WCD938X_HPH_SURGE_HPHLR_SURGE_MISC1: - case WCD938X_EAR_EAR_EN_REG: - case WCD938X_EAR_EAR_PA_CON: - case WCD938X_EAR_EAR_SP_CON: - case WCD938X_EAR_EAR_DAC_CON: - case WCD938X_EAR_EAR_CNP_FSM_CON: - case WCD938X_EAR_TEST_CTL: - case WCD938X_ANA_NEW_PAGE_REGISTER: - case WCD938X_HPH_NEW_ANA_HPH2: - case WCD938X_HPH_NEW_ANA_HPH3: - case WCD938X_SLEEP_CTL: - case WCD938X_SLEEP_WATCHDOG_CTL: - case WCD938X_MBHC_NEW_ELECT_REM_CLAMP_CTL: - case WCD938X_MBHC_NEW_CTL_1: - case WCD938X_MBHC_NEW_CTL_2: - case WCD938X_MBHC_NEW_PLUG_DETECT_CTL: - case WCD938X_MBHC_NEW_ZDET_ANA_CTL: - case WCD938X_MBHC_NEW_ZDET_RAMP_CTL: - case WCD938X_TX_NEW_AMIC_MUX_CFG: - case WCD938X_AUX_AUXPA: - case WCD938X_LDORXTX_MODE: - case WCD938X_LDORXTX_CONFIG: - case WCD938X_DIE_CRACK_DIE_CRK_DET_EN: - case WCD938X_HPH_NEW_INT_RDAC_GAIN_CTL: - case WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_L: - case WCD938X_HPH_NEW_INT_RDAC_VREF_CTL: - case WCD938X_HPH_NEW_INT_RDAC_OVERRIDE_CTL: - case WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_R: - case WCD938X_HPH_NEW_INT_PA_MISC1: - case WCD938X_HPH_NEW_INT_PA_MISC2: - case WCD938X_HPH_NEW_INT_PA_RDAC_MISC: - case WCD938X_HPH_NEW_INT_HPH_TIMER1: - case WCD938X_HPH_NEW_INT_HPH_TIMER2: - case WCD938X_HPH_NEW_INT_HPH_TIMER3: - case 
WCD938X_HPH_NEW_INT_HPH_TIMER4: - case WCD938X_HPH_NEW_INT_PA_RDAC_MISC2: - case WCD938X_HPH_NEW_INT_PA_RDAC_MISC3: - case WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_L_NEW: - case WCD938X_HPH_NEW_INT_RDAC_HD2_CTL_R_NEW: - case WCD938X_RX_NEW_INT_HPH_RDAC_BIAS_LOHIFI: - case WCD938X_RX_NEW_INT_HPH_RDAC_BIAS_ULP: - case WCD938X_RX_NEW_INT_HPH_RDAC_LDO_LP: - case WCD938X_MBHC_NEW_INT_MOISTURE_DET_DC_CTRL: - case WCD938X_MBHC_NEW_INT_MOISTURE_DET_POLLING_CTRL: - case WCD938X_MBHC_NEW_INT_MECH_DET_CURRENT: - case WCD938X_MBHC_NEW_INT_SPARE_2: - case WCD938X_EAR_INT_NEW_EAR_CHOPPER_CON: - case WCD938X_EAR_INT_NEW_CNP_VCM_CON1: - case WCD938X_EAR_INT_NEW_CNP_VCM_CON2: - case WCD938X_EAR_INT_NEW_EAR_DYNAMIC_BIAS: - case WCD938X_AUX_INT_EN_REG: - case WCD938X_AUX_INT_PA_CTRL: - case WCD938X_AUX_INT_SP_CTRL: - case WCD938X_AUX_INT_DAC_CTRL: - case WCD938X_AUX_INT_CLK_CTRL: - case WCD938X_AUX_INT_TEST_CTRL: - case WCD938X_AUX_INT_MISC: - case WCD938X_LDORXTX_INT_BIAS: - case WCD938X_LDORXTX_INT_STB_LOADS_DTEST: - case WCD938X_LDORXTX_INT_TEST0: - case WCD938X_LDORXTX_INT_STARTUP_TIMER: - case WCD938X_LDORXTX_INT_TEST1: - case WCD938X_SLEEP_INT_WATCHDOG_CTL_1: - case WCD938X_SLEEP_INT_WATCHDOG_CTL_2: - case WCD938X_DIE_CRACK_INT_DIE_CRK_DET_INT1: - case WCD938X_DIE_CRACK_INT_DIE_CRK_DET_INT2: - case WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_L2: - case WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_L1: - case WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_L0: - case WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_ULP1P2M: - case WCD938X_TX_COM_NEW_INT_TXFE_DIVSTOP_ULP0P6M: - case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG1_L2L1: - case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG1_L0: - case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG1_ULP: - case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2MAIN_L2L1: - case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2MAIN_L0: - case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2MAIN_ULP: - case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2CASC_L2L1L0: - case WCD938X_TX_COM_NEW_INT_TXFE_ICTRL_STG2CASC_ULP: - case WCD938X_TX_COM_NEW_INT_TXADC_SCBIAS_L2L1: - case WCD938X_TX_COM_NEW_INT_TXADC_SCBIAS_L0ULP: - case WCD938X_TX_COM_NEW_INT_TXADC_INT_L2: - case WCD938X_TX_COM_NEW_INT_TXADC_INT_L1: - case WCD938X_TX_COM_NEW_INT_TXADC_INT_L0: - case WCD938X_TX_COM_NEW_INT_TXADC_INT_ULP: - case WCD938X_DIGITAL_PAGE_REGISTER: - case WCD938X_DIGITAL_SWR_TX_CLK_RATE: - case WCD938X_DIGITAL_CDC_RST_CTL: - case WCD938X_DIGITAL_TOP_CLK_CFG: - case WCD938X_DIGITAL_CDC_ANA_CLK_CTL: - case WCD938X_DIGITAL_CDC_DIG_CLK_CTL: - case WCD938X_DIGITAL_SWR_RST_EN: - case WCD938X_DIGITAL_CDC_PATH_MODE: - case WCD938X_DIGITAL_CDC_RX_RST: - case WCD938X_DIGITAL_CDC_RX0_CTL: - case WCD938X_DIGITAL_CDC_RX1_CTL: - case WCD938X_DIGITAL_CDC_RX2_CTL: - case WCD938X_DIGITAL_CDC_TX_ANA_MODE_0_1: - case WCD938X_DIGITAL_CDC_TX_ANA_MODE_2_3: - case WCD938X_DIGITAL_CDC_COMP_CTL_0: - case WCD938X_DIGITAL_CDC_ANA_TX_CLK_CTL: - case WCD938X_DIGITAL_CDC_HPH_DSM_A1_0: - case WCD938X_DIGITAL_CDC_HPH_DSM_A1_1: - case WCD938X_DIGITAL_CDC_HPH_DSM_A2_0: - case WCD938X_DIGITAL_CDC_HPH_DSM_A2_1: - case WCD938X_DIGITAL_CDC_HPH_DSM_A3_0: - case WCD938X_DIGITAL_CDC_HPH_DSM_A3_1: - case WCD938X_DIGITAL_CDC_HPH_DSM_A4_0: - case WCD938X_DIGITAL_CDC_HPH_DSM_A4_1: - case WCD938X_DIGITAL_CDC_HPH_DSM_A5_0: - case WCD938X_DIGITAL_CDC_HPH_DSM_A5_1: - case WCD938X_DIGITAL_CDC_HPH_DSM_A6_0: - case WCD938X_DIGITAL_CDC_HPH_DSM_A7_0: - case WCD938X_DIGITAL_CDC_HPH_DSM_C_0: - case WCD938X_DIGITAL_CDC_HPH_DSM_C_1: - case WCD938X_DIGITAL_CDC_HPH_DSM_C_2: - case WCD938X_DIGITAL_CDC_HPH_DSM_C_3: - case WCD938X_DIGITAL_CDC_HPH_DSM_R1: - 
case WCD938X_DIGITAL_CDC_HPH_DSM_R2: - case WCD938X_DIGITAL_CDC_HPH_DSM_R3: - case WCD938X_DIGITAL_CDC_HPH_DSM_R4: - case WCD938X_DIGITAL_CDC_HPH_DSM_R5: - case WCD938X_DIGITAL_CDC_HPH_DSM_R6: - case WCD938X_DIGITAL_CDC_HPH_DSM_R7: - case WCD938X_DIGITAL_CDC_AUX_DSM_A1_0: - case WCD938X_DIGITAL_CDC_AUX_DSM_A1_1: - case WCD938X_DIGITAL_CDC_AUX_DSM_A2_0: - case WCD938X_DIGITAL_CDC_AUX_DSM_A2_1: - case WCD938X_DIGITAL_CDC_AUX_DSM_A3_0: - case WCD938X_DIGITAL_CDC_AUX_DSM_A3_1: - case WCD938X_DIGITAL_CDC_AUX_DSM_A4_0: - case WCD938X_DIGITAL_CDC_AUX_DSM_A4_1: - case WCD938X_DIGITAL_CDC_AUX_DSM_A5_0: - case WCD938X_DIGITAL_CDC_AUX_DSM_A5_1: - case WCD938X_DIGITAL_CDC_AUX_DSM_A6_0: - case WCD938X_DIGITAL_CDC_AUX_DSM_A7_0: - case WCD938X_DIGITAL_CDC_AUX_DSM_C_0: - case WCD938X_DIGITAL_CDC_AUX_DSM_C_1: - case WCD938X_DIGITAL_CDC_AUX_DSM_C_2: - case WCD938X_DIGITAL_CDC_AUX_DSM_C_3: - case WCD938X_DIGITAL_CDC_AUX_DSM_R1: - case WCD938X_DIGITAL_CDC_AUX_DSM_R2: - case WCD938X_DIGITAL_CDC_AUX_DSM_R3: - case WCD938X_DIGITAL_CDC_AUX_DSM_R4: - case WCD938X_DIGITAL_CDC_AUX_DSM_R5: - case WCD938X_DIGITAL_CDC_AUX_DSM_R6: - case WCD938X_DIGITAL_CDC_AUX_DSM_R7: - case WCD938X_DIGITAL_CDC_HPH_GAIN_RX_0: - case WCD938X_DIGITAL_CDC_HPH_GAIN_RX_1: - case WCD938X_DIGITAL_CDC_HPH_GAIN_DSD_0: - case WCD938X_DIGITAL_CDC_HPH_GAIN_DSD_1: - case WCD938X_DIGITAL_CDC_HPH_GAIN_DSD_2: - case WCD938X_DIGITAL_CDC_AUX_GAIN_DSD_0: - case WCD938X_DIGITAL_CDC_AUX_GAIN_DSD_1: - case WCD938X_DIGITAL_CDC_AUX_GAIN_DSD_2: - case WCD938X_DIGITAL_CDC_HPH_GAIN_CTL: - case WCD938X_DIGITAL_CDC_AUX_GAIN_CTL: - case WCD938X_DIGITAL_CDC_EAR_PATH_CTL: - case WCD938X_DIGITAL_CDC_SWR_CLH: - case WCD938X_DIGITAL_SWR_CLH_BYP: - case WCD938X_DIGITAL_CDC_TX0_CTL: - case WCD938X_DIGITAL_CDC_TX1_CTL: - case WCD938X_DIGITAL_CDC_TX2_CTL: - case WCD938X_DIGITAL_CDC_TX_RST: - case WCD938X_DIGITAL_CDC_REQ_CTL: - case WCD938X_DIGITAL_CDC_RST: - case WCD938X_DIGITAL_CDC_AMIC_CTL: - case WCD938X_DIGITAL_CDC_DMIC_CTL: - case WCD938X_DIGITAL_CDC_DMIC1_CTL: - case WCD938X_DIGITAL_CDC_DMIC2_CTL: - case WCD938X_DIGITAL_CDC_DMIC3_CTL: - case WCD938X_DIGITAL_CDC_DMIC4_CTL: - case WCD938X_DIGITAL_EFUSE_PRG_CTL: - case WCD938X_DIGITAL_EFUSE_CTL: - case WCD938X_DIGITAL_CDC_DMIC_RATE_1_2: - case WCD938X_DIGITAL_CDC_DMIC_RATE_3_4: - case WCD938X_DIGITAL_PDM_WD_CTL0: - case WCD938X_DIGITAL_PDM_WD_CTL1: - case WCD938X_DIGITAL_PDM_WD_CTL2: - case WCD938X_DIGITAL_INTR_MODE: - case WCD938X_DIGITAL_INTR_MASK_0: - case WCD938X_DIGITAL_INTR_MASK_1: - case WCD938X_DIGITAL_INTR_MASK_2: - case WCD938X_DIGITAL_INTR_CLEAR_0: - case WCD938X_DIGITAL_INTR_CLEAR_1: - case WCD938X_DIGITAL_INTR_CLEAR_2: - case WCD938X_DIGITAL_INTR_LEVEL_0: - case WCD938X_DIGITAL_INTR_LEVEL_1: - case WCD938X_DIGITAL_INTR_LEVEL_2: - case WCD938X_DIGITAL_INTR_SET_0: - case WCD938X_DIGITAL_INTR_SET_1: - case WCD938X_DIGITAL_INTR_SET_2: - case WCD938X_DIGITAL_INTR_TEST_0: - case WCD938X_DIGITAL_INTR_TEST_1: - case WCD938X_DIGITAL_INTR_TEST_2: - case WCD938X_DIGITAL_TX_MODE_DBG_EN: - case WCD938X_DIGITAL_TX_MODE_DBG_0_1: - case WCD938X_DIGITAL_TX_MODE_DBG_2_3: - case WCD938X_DIGITAL_LB_IN_SEL_CTL: - case WCD938X_DIGITAL_LOOP_BACK_MODE: - case WCD938X_DIGITAL_SWR_DAC_TEST: - case WCD938X_DIGITAL_SWR_HM_TEST_RX_0: - case WCD938X_DIGITAL_SWR_HM_TEST_TX_0: - case WCD938X_DIGITAL_SWR_HM_TEST_RX_1: - case WCD938X_DIGITAL_SWR_HM_TEST_TX_1: - case WCD938X_DIGITAL_SWR_HM_TEST_TX_2: - case WCD938X_DIGITAL_PAD_CTL_SWR_0: - case WCD938X_DIGITAL_PAD_CTL_SWR_1: - case WCD938X_DIGITAL_I2C_CTL: - case 
WCD938X_DIGITAL_CDC_TX_TANGGU_SW_MODE: - case WCD938X_DIGITAL_EFUSE_TEST_CTL_0: - case WCD938X_DIGITAL_EFUSE_TEST_CTL_1: - case WCD938X_DIGITAL_PAD_CTL_PDM_RX0: - case WCD938X_DIGITAL_PAD_CTL_PDM_RX1: - case WCD938X_DIGITAL_PAD_CTL_PDM_TX0: - case WCD938X_DIGITAL_PAD_CTL_PDM_TX1: - case WCD938X_DIGITAL_PAD_CTL_PDM_TX2: - case WCD938X_DIGITAL_PAD_INP_DIS_0: - case WCD938X_DIGITAL_PAD_INP_DIS_1: - case WCD938X_DIGITAL_DRIVE_STRENGTH_0: - case WCD938X_DIGITAL_DRIVE_STRENGTH_1: - case WCD938X_DIGITAL_DRIVE_STRENGTH_2: - case WCD938X_DIGITAL_RX_DATA_EDGE_CTL: - case WCD938X_DIGITAL_TX_DATA_EDGE_CTL: - case WCD938X_DIGITAL_GPIO_MODE: - case WCD938X_DIGITAL_PIN_CTL_OE: - case WCD938X_DIGITAL_PIN_CTL_DATA_0: - case WCD938X_DIGITAL_PIN_CTL_DATA_1: - case WCD938X_DIGITAL_DIG_DEBUG_CTL: - case WCD938X_DIGITAL_DIG_DEBUG_EN: - case WCD938X_DIGITAL_ANA_CSR_DBG_ADD: - case WCD938X_DIGITAL_ANA_CSR_DBG_CTL: - case WCD938X_DIGITAL_SSP_DBG: - case WCD938X_DIGITAL_SPARE_0: - case WCD938X_DIGITAL_SPARE_1: - case WCD938X_DIGITAL_SPARE_2: - case WCD938X_DIGITAL_TX_REQ_FB_CTL_0: - case WCD938X_DIGITAL_TX_REQ_FB_CTL_1: - case WCD938X_DIGITAL_TX_REQ_FB_CTL_2: - case WCD938X_DIGITAL_TX_REQ_FB_CTL_3: - case WCD938X_DIGITAL_TX_REQ_FB_CTL_4: - case WCD938X_DIGITAL_DEM_BYPASS_DATA0: - case WCD938X_DIGITAL_DEM_BYPASS_DATA1: - case WCD938X_DIGITAL_DEM_BYPASS_DATA2: - case WCD938X_DIGITAL_DEM_BYPASS_DATA3: - return true; - } - - return false; -} - -static bool wcd938x_readonly_register(struct device *dev, unsigned int reg) -{ - switch (reg) { - case WCD938X_ANA_MBHC_RESULT_1: - case WCD938X_ANA_MBHC_RESULT_2: - case WCD938X_ANA_MBHC_RESULT_3: - case WCD938X_MBHC_MOISTURE_DET_FSM_STATUS: - case WCD938X_TX_1_2_SAR2_ERR: - case WCD938X_TX_1_2_SAR1_ERR: - case WCD938X_TX_3_4_SAR4_ERR: - case WCD938X_TX_3_4_SAR3_ERR: - case WCD938X_HPH_L_STATUS: - case WCD938X_HPH_R_STATUS: - case WCD938X_HPH_SURGE_HPHLR_SURGE_STATUS: - case WCD938X_EAR_STATUS_REG_1: - case WCD938X_EAR_STATUS_REG_2: - case WCD938X_MBHC_NEW_FSM_STATUS: - case WCD938X_MBHC_NEW_ADC_RESULT: - case WCD938X_DIE_CRACK_DIE_CRK_DET_OUT: - case WCD938X_AUX_INT_STATUS_REG: - case WCD938X_LDORXTX_INT_STATUS: - case WCD938X_DIGITAL_CHIP_ID0: - case WCD938X_DIGITAL_CHIP_ID1: - case WCD938X_DIGITAL_CHIP_ID2: - case WCD938X_DIGITAL_CHIP_ID3: - case WCD938X_DIGITAL_INTR_STATUS_0: - case WCD938X_DIGITAL_INTR_STATUS_1: - case WCD938X_DIGITAL_INTR_STATUS_2: - case WCD938X_DIGITAL_INTR_CLEAR_0: - case WCD938X_DIGITAL_INTR_CLEAR_1: - case WCD938X_DIGITAL_INTR_CLEAR_2: - case WCD938X_DIGITAL_SWR_HM_TEST_0: - case WCD938X_DIGITAL_SWR_HM_TEST_1: - case WCD938X_DIGITAL_EFUSE_T_DATA_0: - case WCD938X_DIGITAL_EFUSE_T_DATA_1: - case WCD938X_DIGITAL_PIN_STATUS_0: - case WCD938X_DIGITAL_PIN_STATUS_1: - case WCD938X_DIGITAL_MODE_STATUS_0: - case WCD938X_DIGITAL_MODE_STATUS_1: - case WCD938X_DIGITAL_EFUSE_REG_0: - case WCD938X_DIGITAL_EFUSE_REG_1: - case WCD938X_DIGITAL_EFUSE_REG_2: - case WCD938X_DIGITAL_EFUSE_REG_3: - case WCD938X_DIGITAL_EFUSE_REG_4: - case WCD938X_DIGITAL_EFUSE_REG_5: - case WCD938X_DIGITAL_EFUSE_REG_6: - case WCD938X_DIGITAL_EFUSE_REG_7: - case WCD938X_DIGITAL_EFUSE_REG_8: - case WCD938X_DIGITAL_EFUSE_REG_9: - case WCD938X_DIGITAL_EFUSE_REG_10: - case WCD938X_DIGITAL_EFUSE_REG_11: - case WCD938X_DIGITAL_EFUSE_REG_12: - case WCD938X_DIGITAL_EFUSE_REG_13: - case WCD938X_DIGITAL_EFUSE_REG_14: - case WCD938X_DIGITAL_EFUSE_REG_15: - case WCD938X_DIGITAL_EFUSE_REG_16: - case WCD938X_DIGITAL_EFUSE_REG_17: - case WCD938X_DIGITAL_EFUSE_REG_18: - case WCD938X_DIGITAL_EFUSE_REG_19: 
- case WCD938X_DIGITAL_EFUSE_REG_20: - case WCD938X_DIGITAL_EFUSE_REG_21: - case WCD938X_DIGITAL_EFUSE_REG_22: - case WCD938X_DIGITAL_EFUSE_REG_23: - case WCD938X_DIGITAL_EFUSE_REG_24: - case WCD938X_DIGITAL_EFUSE_REG_25: - case WCD938X_DIGITAL_EFUSE_REG_26: - case WCD938X_DIGITAL_EFUSE_REG_27: - case WCD938X_DIGITAL_EFUSE_REG_28: - case WCD938X_DIGITAL_EFUSE_REG_29: - case WCD938X_DIGITAL_EFUSE_REG_30: - case WCD938X_DIGITAL_EFUSE_REG_31: - return true; - } - return false; -} - -static bool wcd938x_readable_register(struct device *dev, unsigned int reg) -{ - bool ret; - - ret = wcd938x_readonly_register(dev, reg); - if (!ret) - return wcd938x_rdwr_register(dev, reg); - - return ret; -} - -static bool wcd938x_writeable_register(struct device *dev, unsigned int reg) -{ - return wcd938x_rdwr_register(dev, reg); -} - -static bool wcd938x_volatile_register(struct device *dev, unsigned int reg) -{ - if (reg <= WCD938X_BASE_ADDRESS) - return false; - - if (reg == WCD938X_DIGITAL_SWR_TX_CLK_RATE) - return true; - - if (wcd938x_readonly_register(dev, reg)) - return true; - - return false; -} - -static struct regmap_config wcd938x_regmap_config = { - .name = "wcd938x_csr", - .reg_bits = 32, - .val_bits = 8, - .cache_type = REGCACHE_RBTREE, - .reg_defaults = wcd938x_defaults, - .num_reg_defaults = ARRAY_SIZE(wcd938x_defaults), - .max_register = WCD938X_MAX_REGISTER, - .readable_reg = wcd938x_readable_register, - .writeable_reg = wcd938x_writeable_register, - .volatile_reg = wcd938x_volatile_register, - .can_multi_write = true, -}; - static const struct regmap_irq wcd938x_irqs[WCD938X_NUM_IRQS] = { REGMAP_IRQ_REG(WCD938X_IRQ_MBHC_BUTTON_PRESS_DET, 0, 0x01), REGMAP_IRQ_REG(WCD938X_IRQ_MBHC_BUTTON_RELEASE_DET, 0, 0x02), @@ -4412,10 +3417,10 @@ static int wcd938x_bind(struct device *dev) return -EINVAL; }
- wcd938x->regmap = devm_regmap_init_sdw(wcd938x->tx_sdw_dev, &wcd938x_regmap_config); - if (IS_ERR(wcd938x->regmap)) { - dev_err(dev, "%s: tx csr regmap not found\n", __func__); - return PTR_ERR(wcd938x->regmap); + wcd938x->regmap = dev_get_regmap(&wcd938x->tx_sdw_dev->dev, NULL); + if (!wcd938x->regmap) { + dev_err(dev, "could not get TX device regmap\n"); + return -EINVAL; }
ret = wcd938x_irq_init(wcd938x, dev); diff --git a/sound/soc/codecs/wcd938x.h b/sound/soc/codecs/wcd938x.h index ea82039e78435..74b1498fec38b 100644 --- a/sound/soc/codecs/wcd938x.h +++ b/sound/soc/codecs/wcd938x.h @@ -663,6 +663,7 @@ struct wcd938x_sdw_priv { bool is_tx; struct wcd938x_priv *wcd938x; struct irq_domain *slave_irq; + struct regmap *regmap; };
#if IS_ENABLED(CONFIG_SND_SOC_WCD938X_SDW)
From: Miquel Raynal miquel.raynal@bootlin.com
[ Upstream commit 4eddee70140b3ae183398b246a609756546c51f1 ]
Introduce a new (no SFDP) flag for the feature that we are about to support: Read While Write. This means that, if the chip has several banks and supports RWW, once a page of data to write has been transferred into the chip's internal SRAM, a read operation on a different bank can be performed during the tPROG delay.
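As a rough, illustrative sketch only (not part of this patch): a caller could use the new flag to decide whether a read may overlap an in-flight page program. The helpers spi_nor_bank_of() and spi_nor_prog_bank() below are hypothetical placeholders; the actual bank and locking logic arrives in later patches of the RWW series.

/*
 * Hedged illustration only: spi_nor_bank_of() and spi_nor_prog_bank() are
 * made-up helpers used to show the intent of SNOR_F_RWW, not real core code.
 */
static bool spi_nor_read_may_overlap_prog(struct spi_nor *nor, loff_t read_ofs)
{
	if (!(nor->flags & SNOR_F_RWW))
		return false;	/* no read-while-write: wait for tPROG */

	/* A read may run during tPROG only if it targets a different bank. */
	return spi_nor_bank_of(nor, read_ofs) != spi_nor_prog_bank(nor);
}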
Signed-off-by: Miquel Raynal miquel.raynal@bootlin.com Link: https://lore.kernel.org/r/20230328154105.448540-7-miquel.raynal@bootlin.com Signed-off-by: Tudor Ambarus tudor.ambarus@linaro.org Stable-dep-of: 9fd0945fe6fa ("mtd: spi-nor: spansion: Enable JFFS2 write buffer for Infineon s28hx SEMPER flash") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/mtd/spi-nor/core.c | 3 +++ drivers/mtd/spi-nor/core.h | 3 +++ drivers/mtd/spi-nor/debugfs.c | 1 + 3 files changed, 7 insertions(+)
diff --git a/drivers/mtd/spi-nor/core.c b/drivers/mtd/spi-nor/core.c index bf50a35db711e..767b1faa32b0e 100644 --- a/drivers/mtd/spi-nor/core.c +++ b/drivers/mtd/spi-nor/core.c @@ -2471,6 +2471,9 @@ static void spi_nor_init_flags(struct spi_nor *nor)
if (flags & NO_CHIP_ERASE) nor->flags |= SNOR_F_NO_OP_CHIP_ERASE; + + if (flags & SPI_NOR_RWW) + nor->flags |= SNOR_F_RWW; }
/** diff --git a/drivers/mtd/spi-nor/core.h b/drivers/mtd/spi-nor/core.h index f4246c52a1def..57e8916965ea8 100644 --- a/drivers/mtd/spi-nor/core.h +++ b/drivers/mtd/spi-nor/core.h @@ -130,6 +130,7 @@ enum spi_nor_option_flags { SNOR_F_IO_MODE_EN_VOLATILE = BIT(11), SNOR_F_SOFT_RESET = BIT(12), SNOR_F_SWP_IS_VOLATILE = BIT(13), + SNOR_F_RWW = BIT(14), };
struct spi_nor_read_command { @@ -459,6 +460,7 @@ struct spi_nor_fixups { * NO_CHIP_ERASE: chip does not support chip erase. * SPI_NOR_NO_FR: can't do fastread. * SPI_NOR_QUAD_PP: flash supports Quad Input Page Program. + * SPI_NOR_RWW: flash supports reads while write. * * @no_sfdp_flags: flags that indicate support that can be discovered via SFDP. * Used when SFDP tables are not defined in the flash. These @@ -509,6 +511,7 @@ struct flash_info { #define NO_CHIP_ERASE BIT(7) #define SPI_NOR_NO_FR BIT(8) #define SPI_NOR_QUAD_PP BIT(9) +#define SPI_NOR_RWW BIT(10)
u8 no_sfdp_flags; #define SPI_NOR_SKIP_SFDP BIT(0) diff --git a/drivers/mtd/spi-nor/debugfs.c b/drivers/mtd/spi-nor/debugfs.c index 558ffecf8ae6d..bd8a18da49c04 100644 --- a/drivers/mtd/spi-nor/debugfs.c +++ b/drivers/mtd/spi-nor/debugfs.c @@ -25,6 +25,7 @@ static const char *const snor_f_names[] = { SNOR_F_NAME(IO_MODE_EN_VOLATILE), SNOR_F_NAME(SOFT_RESET), SNOR_F_NAME(SWP_IS_VOLATILE), + SNOR_F_NAME(RWW), }; #undef SNOR_F_NAME
From: Takahiro Kuwano Takahiro.Kuwano@infineon.com
[ Upstream commit 9fd0945fe6fadfb6b54a9cd73be101c02b3e8134 ]
The Infineon (Cypress) SEMPER NOR flash family has on-die ECC, and its program granularity is the 16-byte ECC data unit size. JFFS2 supports write buffer mode for ECC'd NOR flash. Provide a way to clear the MTD_BIT_WRITEABLE flag in order to enable JFFS2 write buffer mode support.
A new SNOR_F_ECC flag is introduced to determine if the part has on-die ECC and if it has, MTD_BIT_WRITEABLE is unset.
In the vendor-specific driver, a common cypress_nor_ecc_init() helper is added. This helper takes care of ECC-related initialization for the SEMPER flash family by setting up params->writesize and SNOR_F_ECC.
Fixes: c3266af101f2 ("mtd: spi-nor: spansion: add support for Cypress Semper flash") Suggested-by: Tudor Ambarus tudor.ambarus@linaro.org Signed-off-by: Takahiro Kuwano Takahiro.Kuwano@infineon.com Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/d586723f6f12aaff44fbcd7b51e674b47ed554ed.168076074... Signed-off-by: Tudor Ambarus tudor.ambarus@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/mtd/spi-nor/core.c | 3 +++ drivers/mtd/spi-nor/core.h | 1 + drivers/mtd/spi-nor/debugfs.c | 1 + drivers/mtd/spi-nor/spansion.c | 13 ++++++++++++- 4 files changed, 17 insertions(+), 1 deletion(-)
diff --git a/drivers/mtd/spi-nor/core.c b/drivers/mtd/spi-nor/core.c index 767b1faa32b0e..d75db50767938 100644 --- a/drivers/mtd/spi-nor/core.c +++ b/drivers/mtd/spi-nor/core.c @@ -2983,6 +2983,9 @@ static void spi_nor_set_mtd_info(struct spi_nor *nor) mtd->name = dev_name(dev); mtd->type = MTD_NORFLASH; mtd->flags = MTD_CAP_NORFLASH; + /* Unset BIT_WRITEABLE to enable JFFS2 write buffer for ECC'd NOR */ + if (nor->flags & SNOR_F_ECC) + mtd->flags &= ~MTD_BIT_WRITEABLE; if (nor->info->flags & SPI_NOR_NO_ERASE) mtd->flags |= MTD_NO_ERASE; else diff --git a/drivers/mtd/spi-nor/core.h b/drivers/mtd/spi-nor/core.h index 57e8916965ea8..75ec2e5604247 100644 --- a/drivers/mtd/spi-nor/core.h +++ b/drivers/mtd/spi-nor/core.h @@ -131,6 +131,7 @@ enum spi_nor_option_flags { SNOR_F_SOFT_RESET = BIT(12), SNOR_F_SWP_IS_VOLATILE = BIT(13), SNOR_F_RWW = BIT(14), + SNOR_F_ECC = BIT(15), };
struct spi_nor_read_command { diff --git a/drivers/mtd/spi-nor/debugfs.c b/drivers/mtd/spi-nor/debugfs.c index bd8a18da49c04..285bdcbaa1134 100644 --- a/drivers/mtd/spi-nor/debugfs.c +++ b/drivers/mtd/spi-nor/debugfs.c @@ -26,6 +26,7 @@ static const char *const snor_f_names[] = { SNOR_F_NAME(SOFT_RESET), SNOR_F_NAME(SWP_IS_VOLATILE), SNOR_F_NAME(RWW), + SNOR_F_NAME(ECC), }; #undef SNOR_F_NAME
diff --git a/drivers/mtd/spi-nor/spansion.c b/drivers/mtd/spi-nor/spansion.c index 07fe0f6fdfe3e..f4e9a1738234a 100644 --- a/drivers/mtd/spi-nor/spansion.c +++ b/drivers/mtd/spi-nor/spansion.c @@ -218,6 +218,17 @@ static int cypress_nor_set_page_size(struct spi_nor *nor) return 0; }
+static void cypress_nor_ecc_init(struct spi_nor *nor) +{ + /* + * Programming is supported only in 16-byte ECC data unit granularity. + * Byte-programming, bit-walking, or multiple program operations to the + * same ECC data unit without an erase are not allowed. + */ + nor->params->writesize = 16; + nor->flags |= SNOR_F_ECC; +} + static int s25hx_t_post_bfpt_fixup(struct spi_nor *nor, const struct sfdp_parameter_header *bfpt_header, @@ -324,7 +335,7 @@ static int s28hx_t_post_bfpt_fixup(struct spi_nor *nor, static void s28hx_t_late_init(struct spi_nor *nor) { nor->params->octal_dtr_enable = cypress_nor_octal_dtr_enable; - nor->params->writesize = 16; + cypress_nor_ecc_init(nor); }
static const struct spi_nor_fixups s28hx_t_fixups = {
From: Manivannan Sadhasivam manivannan.sadhasivam@linaro.org
[ Upstream commit 721d3e91bfc93975c5e1a76c7d588dd8df5d82da ]
Not all Qcom platforms support IRQ mode for ECC handling. For those platforms, the current EDAC driver will not be probed due to the missing ECC IRQ in the devicetree.
So add support for polling mode so that the EDAC driver can be used on all Qcom platforms supporting LLCC.
The polling delay of 5000ms is chosen based on the Qcom downstream/vendor driver.
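Reduced to a sketch (not the verbatim hunk below, and with error handling trimmed), the resulting probe flow is roughly:

	/* Illustrative sketch of the new probe logic. */
	ecc_irq = llcc_driv_data->ecc_irq;
	if (ecc_irq > 0) {
		/* Prefer interrupt mode when the LLCC driver passed an IRQ. */
		rc = devm_request_irq(dev, ecc_irq, llcc_ecc_irq_handler,
				      IRQF_TRIGGER_HIGH, "llcc_ecc", edev_ctl);
		if (!rc)
			edac_op_state = EDAC_OPSTATE_INT;
	}
	if (edac_op_state != EDAC_OPSTATE_INT) {
		/* Fall back to polling every ECC_POLL_MSEC (5000 ms). */
		edev_ctl->poll_msec = ECC_POLL_MSEC;
		edev_ctl->edac_check = llcc_ecc_check;
		edac_op_state = EDAC_OPSTATE_POLL;
	}
	rc = edac_device_add_device(edev_ctl);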
Reported-by: Luca Weiss luca.weiss@fairphone.com Tested-by: Luca Weiss luca.weiss@fairphone.com Tested-by: Steev Klimaszewski steev@kali.org # Thinkpad X13s Tested-by: Andrew Halaney ahalaney@redhat.com # sa8540p-ride Reviewed-by: Borislav Petkov (AMD) bp@alien8.de Signed-off-by: Manivannan Sadhasivam manivannan.sadhasivam@linaro.org Signed-off-by: Bjorn Andersson andersson@kernel.org Link: https://lore.kernel.org/r/20230314080443.64635-14-manivannan.sadhasivam@lina... Stable-dep-of: cca94f1dd6d0 ("soc: qcom: llcc: Do not create EDAC platform device on SDM845") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/edac/qcom_edac.c | 50 +++++++++++++++++++++--------------- drivers/soc/qcom/llcc-qcom.c | 13 +++++----- 2 files changed, 35 insertions(+), 28 deletions(-)
diff --git a/drivers/edac/qcom_edac.c b/drivers/edac/qcom_edac.c index c45519f59dc11..2c91ceff8a9ca 100644 --- a/drivers/edac/qcom_edac.c +++ b/drivers/edac/qcom_edac.c @@ -76,6 +76,8 @@ #define DRP0_INTERRUPT_ENABLE BIT(6) #define SB_DB_DRP_INTERRUPT_ENABLE 0x3
+#define ECC_POLL_MSEC 5000 + enum { LLCC_DRAM_CE = 0, LLCC_DRAM_UE, @@ -285,8 +287,7 @@ dump_syn_reg(struct edac_device_ctl_info *edev_ctl, int err_type, u32 bank) return ret; }
-static irqreturn_t -llcc_ecc_irq_handler(int irq, void *edev_ctl) +static irqreturn_t llcc_ecc_irq_handler(int irq, void *edev_ctl) { struct edac_device_ctl_info *edac_dev_ctl = edev_ctl; struct llcc_drv_data *drv = edac_dev_ctl->dev->platform_data; @@ -332,6 +333,11 @@ llcc_ecc_irq_handler(int irq, void *edev_ctl) return irq_rc; }
+static void llcc_ecc_check(struct edac_device_ctl_info *edev_ctl) +{ + llcc_ecc_irq_handler(0, edev_ctl); +} + static int qcom_llcc_edac_probe(struct platform_device *pdev) { struct llcc_drv_data *llcc_driv_data = pdev->dev.platform_data; @@ -359,29 +365,31 @@ static int qcom_llcc_edac_probe(struct platform_device *pdev) edev_ctl->ctl_name = "llcc"; edev_ctl->panic_on_ue = LLCC_ERP_PANIC_ON_UE;
- rc = edac_device_add_device(edev_ctl); - if (rc) - goto out_mem; - - platform_set_drvdata(pdev, edev_ctl); - - /* Request for ecc irq */ + /* Check if LLCC driver has passed ECC IRQ */ ecc_irq = llcc_driv_data->ecc_irq; - if (ecc_irq < 0) { - rc = -ENODEV; - goto out_dev; - } - rc = devm_request_irq(dev, ecc_irq, llcc_ecc_irq_handler, + if (ecc_irq > 0) { + /* Use interrupt mode if IRQ is available */ + rc = devm_request_irq(dev, ecc_irq, llcc_ecc_irq_handler, IRQF_TRIGGER_HIGH, "llcc_ecc", edev_ctl); - if (rc) - goto out_dev; + if (!rc) { + edac_op_state = EDAC_OPSTATE_INT; + goto irq_done; + } + }
- return rc; + /* Fall back to polling mode otherwise */ + edev_ctl->poll_msec = ECC_POLL_MSEC; + edev_ctl->edac_check = llcc_ecc_check; + edac_op_state = EDAC_OPSTATE_POLL;
-out_dev: - edac_device_del_device(edev_ctl->dev); -out_mem: - edac_device_free_ctl_info(edev_ctl); +irq_done: + rc = edac_device_add_device(edev_ctl); + if (rc) { + edac_device_free_ctl_info(edev_ctl); + return rc; + } + + platform_set_drvdata(pdev, edev_ctl);
return rc; } diff --git a/drivers/soc/qcom/llcc-qcom.c b/drivers/soc/qcom/llcc-qcom.c index 26efe12012a0d..e417bd285d9db 100644 --- a/drivers/soc/qcom/llcc-qcom.c +++ b/drivers/soc/qcom/llcc-qcom.c @@ -1001,13 +1001,12 @@ static int qcom_llcc_probe(struct platform_device *pdev) goto err;
drv_data->ecc_irq = platform_get_irq_optional(pdev, 0); - if (drv_data->ecc_irq >= 0) { - llcc_edac = platform_device_register_data(&pdev->dev, - "qcom_llcc_edac", -1, drv_data, - sizeof(*drv_data)); - if (IS_ERR(llcc_edac)) - dev_err(dev, "Failed to register llcc edac driver\n"); - } + + llcc_edac = platform_device_register_data(&pdev->dev, + "qcom_llcc_edac", -1, drv_data, + sizeof(*drv_data)); + if (IS_ERR(llcc_edac)) + dev_err(dev, "Failed to register llcc edac driver\n");
return 0; err:
From: Manivannan Sadhasivam manivannan.sadhasivam@linaro.org
[ Upstream commit cca94f1dd6d0a4c7e5c8190672f5747e3c00ddde ]
Platforms based on the SDM845 SoC lock access to the EDAC registers in the bootloader, so probing the EDAC driver results in a crash. Hence, disable the creation of the EDAC platform device on all SDM845 devices.
The issue has been observed on Lenovo Yoga C630 and DB845c.
While at it, also sort the members of `struct qcom_llcc_config` to avoid any holes in-between.
Cc: stable@vger.kernel.org # 5.10 Reported-by: Steev Klimaszewski steev@kali.org Signed-off-by: Manivannan Sadhasivam manivannan.sadhasivam@linaro.org Signed-off-by: Bjorn Andersson andersson@kernel.org Link: https://lore.kernel.org/r/20230314080443.64635-15-manivannan.sadhasivam@lina... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/soc/qcom/llcc-qcom.c | 24 +++++++++++++++++------- 1 file changed, 17 insertions(+), 7 deletions(-)
diff --git a/drivers/soc/qcom/llcc-qcom.c b/drivers/soc/qcom/llcc-qcom.c index e417bd285d9db..d4d3eced52f35 100644 --- a/drivers/soc/qcom/llcc-qcom.c +++ b/drivers/soc/qcom/llcc-qcom.c @@ -122,10 +122,11 @@ struct llcc_slice_config {
struct qcom_llcc_config { const struct llcc_slice_config *sct_data; - int size; - bool need_llcc_cfg; const u32 *reg_offset; const struct llcc_edac_reg_offset *edac_reg_offset; + int size; + bool need_llcc_cfg; + bool no_edac; };
enum llcc_reg_offset { @@ -454,6 +455,7 @@ static const struct qcom_llcc_config sdm845_cfg = { .need_llcc_cfg = false, .reg_offset = llcc_v1_reg_offset, .edac_reg_offset = &llcc_v1_edac_reg_offset, + .no_edac = true, };
static const struct qcom_llcc_config sm6350_cfg = { @@ -1002,11 +1004,19 @@ static int qcom_llcc_probe(struct platform_device *pdev)
drv_data->ecc_irq = platform_get_irq_optional(pdev, 0);
- llcc_edac = platform_device_register_data(&pdev->dev, - "qcom_llcc_edac", -1, drv_data, - sizeof(*drv_data)); - if (IS_ERR(llcc_edac)) - dev_err(dev, "Failed to register llcc edac driver\n"); + /* + * On some platforms, the access to EDAC registers will be locked by + * the bootloader. So probing the EDAC driver will result in a crash. + * Hence, disable the creation of EDAC platform device for the + * problematic platforms. + */ + if (!cfg->no_edac) { + llcc_edac = platform_device_register_data(&pdev->dev, + "qcom_llcc_edac", -1, drv_data, + sizeof(*drv_data)); + if (IS_ERR(llcc_edac)) + dev_err(dev, "Failed to register llcc edac driver\n"); + }
return 0; err:
From: Christophe JAILLET christophe.jaillet@wanadoo.fr
[ Upstream commit 043f85ce81cb1714e14d31c322c5646513dde3fb ]
Using a flexible array is more straightforward. It:
- saves 1 pointer in the 'zynqmp_ipi_pdata' structure
- saves an indirection when using this array
- saves some LoC and avoids some always spurious pointer arithmetic
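For readers less familiar with the pattern, here is a minimal, generic example (simplified, not this driver's code) of a C99 flexible array member sized with struct_size(), which replaces the separate pointer and the manual arithmetic:

#include <linux/overflow.h>
#include <linux/slab.h>

struct item {
	int val;
};

struct holder {
	int count;
	struct item items[];	/* flexible array member: no extra pointer */
};

static struct holder *holder_alloc(int count)
{
	/*
	 * struct_size() == sizeof(struct holder) + count * sizeof(struct item),
	 * computed with overflow checking; the trailing array lives in the
	 * same allocation, so no pointer fix-up is needed afterwards.
	 */
	struct holder *h = kzalloc(struct_size(h, items, count), GFP_KERNEL);

	if (h)
		h->count = count;
	return h;
}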
Signed-off-by: Christophe JAILLET christophe.jaillet@wanadoo.fr Signed-off-by: Jassi Brar jaswinder.singh@linaro.org Stable-dep-of: f72f805e7288 ("mailbox: zynqmp: Fix counts of child nodes") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/mailbox/zynqmp-ipi-mailbox.c | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/drivers/mailbox/zynqmp-ipi-mailbox.c b/drivers/mailbox/zynqmp-ipi-mailbox.c index e02a4a18e8c29..29f09ded6e739 100644 --- a/drivers/mailbox/zynqmp-ipi-mailbox.c +++ b/drivers/mailbox/zynqmp-ipi-mailbox.c @@ -110,7 +110,7 @@ struct zynqmp_ipi_pdata { unsigned int method; u32 local_id; int num_mboxes; - struct zynqmp_ipi_mbox *ipi_mboxes; + struct zynqmp_ipi_mbox ipi_mboxes[]; };
static struct device_driver zynqmp_ipi_mbox_driver = { @@ -635,7 +635,7 @@ static int zynqmp_ipi_probe(struct platform_device *pdev) int num_mboxes, ret = -EINVAL;
num_mboxes = of_get_child_count(np); - pdata = devm_kzalloc(dev, sizeof(*pdata) + (num_mboxes * sizeof(*mbox)), + pdata = devm_kzalloc(dev, struct_size(pdata, ipi_mboxes, num_mboxes), GFP_KERNEL); if (!pdata) return -ENOMEM; @@ -649,8 +649,6 @@ static int zynqmp_ipi_probe(struct platform_device *pdev) }
pdata->num_mboxes = num_mboxes; - pdata->ipi_mboxes = (struct zynqmp_ipi_mbox *) - ((char *)pdata + sizeof(*pdata));
mbox = pdata->ipi_mboxes; for_each_available_child_of_node(np, nc) {
From: Tanmay Shah tanmay.shah@amd.com
[ Upstream commit f72f805e72882c361e2a612c64a6e549f3da7152 ]
If a child mailbox node's status is disabled, it causes a crash in the interrupt handler. Fix this by assigning only available child nodes during driver probe.
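For reference, the distinction between the two OF helpers (illustrative snippet; both are existing kernel APIs):

	/*
	 * of_get_child_count() counts every child node, including those with
	 * status = "disabled"; of_get_available_child_count() counts only
	 * enabled children, matching for_each_available_child_of_node().
	 */
	int all_children     = of_get_child_count(np);
	int enabled_children = of_get_available_child_count(np);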
Fixes: 4981b82ba2ff ("mailbox: ZynqMP IPI mailbox controller") Signed-off-by: Tanmay Shah tanmay.shah@amd.com Acked-by: Michal Simek michal.simek@amd.com Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20230311012407.1292118-2-tanmay.shah@amd.com Signed-off-by: Mathieu Poirier mathieu.poirier@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/mailbox/zynqmp-ipi-mailbox.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/drivers/mailbox/zynqmp-ipi-mailbox.c b/drivers/mailbox/zynqmp-ipi-mailbox.c index 29f09ded6e739..d097f45b0e5f5 100644 --- a/drivers/mailbox/zynqmp-ipi-mailbox.c +++ b/drivers/mailbox/zynqmp-ipi-mailbox.c @@ -634,7 +634,12 @@ static int zynqmp_ipi_probe(struct platform_device *pdev) struct zynqmp_ipi_mbox *mbox; int num_mboxes, ret = -EINVAL;
- num_mboxes = of_get_child_count(np); + num_mboxes = of_get_available_child_count(np); + if (num_mboxes == 0) { + dev_err(dev, "mailbox nodes not available\n"); + return -EINVAL; + } + pdata = devm_kzalloc(dev, struct_size(pdata, ipi_mboxes, num_mboxes), GFP_KERNEL); if (!pdata)
From: Takahiro Kuwano Takahiro.Kuwano@infineon.com
[ Upstream commit 4199c1719e24e73be0acc8b0146fc31ad8af9771 ]
The Infineon (Cypress) SEMPER NOR flash family has on-die ECC, and its program granularity is the 16-byte ECC data unit size. JFFS2 supports write buffer mode for ECC'd NOR flash. Provide a way to clear the MTD_BIT_WRITEABLE flag in order to enable JFFS2 write buffer mode support.
Fixes: b6b23833fc42 ("mtd: spi-nor: spansion: Add s25hl-t/s25hs-t IDs and fixups") Suggested-by: Tudor Ambarus tudor.ambarus@linaro.org Signed-off-by: Takahiro Kuwano Takahiro.Kuwano@infineon.com Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/a1cc128e094db4ec141f85bd380127598dfef17e.168076074... Signed-off-by: Tudor Ambarus tudor.ambarus@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/mtd/spi-nor/spansion.c | 7 ++----- 1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/drivers/mtd/spi-nor/spansion.c b/drivers/mtd/spi-nor/spansion.c index f4e9a1738234a..aef085b476deb 100644 --- a/drivers/mtd/spi-nor/spansion.c +++ b/drivers/mtd/spi-nor/spansion.c @@ -266,13 +266,10 @@ static void s25hx_t_post_sfdp_fixup(struct spi_nor *nor)
static void s25hx_t_late_init(struct spi_nor *nor) { - struct spi_nor_flash_parameter *params = nor->params; - /* Fast Read 4B requires mode cycles */ - params->reads[SNOR_CMD_READ_FAST].num_mode_clocks = 8; + nor->params->reads[SNOR_CMD_READ_FAST].num_mode_clocks = 8;
- /* The writesize should be ECC data unit size */ - params->writesize = 16; + cypress_nor_ecc_init(nor); }
static struct spi_nor_fixups s25hx_t_fixups = {
From: ZhangPeng zhangpeng362@huawei.com
[ Upstream commit 254e69f284d7270e0abdc023ee53b71401c3ba0c ]
Syzbot reported a null-ptr-deref bug:
ntfs3: loop0: Different NTFS' sector size (1024) and media sector size (512)
ntfs3: loop0: Mark volume as dirty due to NTFS errors
general protection fault, probably for non-canonical address 0xdffffc0000000001: 0000 [#1] PREEMPT SMP KASAN
KASAN: null-ptr-deref in range [0x0000000000000008-0x000000000000000f]
RIP: 0010:d_flags_for_inode fs/dcache.c:1980 [inline]
RIP: 0010:__d_add+0x5ce/0x800 fs/dcache.c:2796
Call Trace:
 <TASK>
 d_splice_alias+0x122/0x3b0 fs/dcache.c:3191
 lookup_open fs/namei.c:3391 [inline]
 open_last_lookups fs/namei.c:3481 [inline]
 path_openat+0x10e6/0x2df0 fs/namei.c:3688
 do_filp_open+0x264/0x4f0 fs/namei.c:3718
 do_sys_openat2+0x124/0x4e0 fs/open.c:1310
 do_sys_open fs/open.c:1326 [inline]
 __do_sys_open fs/open.c:1334 [inline]
 __se_sys_open fs/open.c:1330 [inline]
 __x64_sys_open+0x221/0x270 fs/open.c:1330
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
If the MFT record of the ntfs inode is not a base record, inode->i_op can be NULL, and a null-ptr-deref may happen:
ntfs_lookup()
  dir_search_u()          # inode->i_op is set to NULL
  d_splice_alias()
    __d_add()
      d_flags_for_inode() # inode->i_op->get_link null-ptr-deref
Fix this by adding a check on inode->i_op before calling the d_splice_alias() function.
Fixes: 4342306f0f0d ("fs/ntfs3: Add file operations and implementation") Reported-by: syzbot+a8f26a403c169b7593fe@syzkaller.appspotmail.com Signed-off-by: ZhangPeng zhangpeng362@huawei.com Signed-off-by: Konstantin Komarov almaz.alexandrovich@paragon-software.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/ntfs3/namei.c | 10 ++++++++++ 1 file changed, 10 insertions(+)
diff --git a/fs/ntfs3/namei.c b/fs/ntfs3/namei.c index c8db35e2ae172..3db34d5c03dc7 100644 --- a/fs/ntfs3/namei.c +++ b/fs/ntfs3/namei.c @@ -88,6 +88,16 @@ static struct dentry *ntfs_lookup(struct inode *dir, struct dentry *dentry, __putname(uni); }
+ /* + * Check for a null pointer + * If the MFT record of ntfs inode is not a base record, inode->i_op can be NULL. + * This causes null pointer dereference in d_splice_alias(). + */ + if (!IS_ERR(inode) && inode->i_op == NULL) { + iput(inode); + inode = ERR_PTR(-EINVAL); + } + return d_splice_alias(inode, dentry); }
From: Ryan Lin tsung-hua.lin@amd.com
[ Upstream commit 1e5d4d8eb8c0f15d90c50e7abd686c980e54e42e ]
[Why] The default value of the LTTPR timeout needs to be set after resume.
[How] Set the default (3.2ms) timeout on resume if the sink supports LTTPR.
Reviewed-by: Jerry Zuo Jerry.Zuo@amd.com Acked-by: Qingqing Zhuo qingqing.zhuo@amd.com Signed-off-by: Ryan Lin tsung-hua.lin@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 9 +++++++++ 1 file changed, 9 insertions(+)
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c index 422909d1f352b..58fdd39f5bde9 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c @@ -39,6 +39,7 @@ #include "dc/dc_edid_parser.h" #include "dc/dc_stat.h" #include "amdgpu_dm_trace.h" +#include "dc/inc/dc_link_ddc.h"
#include "vid.h" #include "amdgpu.h" @@ -2262,6 +2263,14 @@ static void s3_handle_mst(struct drm_device *dev, bool suspend) if (suspend) { drm_dp_mst_topology_mgr_suspend(mgr); } else { + /* if extended timeout is supported in hardware, + * default to LTTPR timeout (3.2ms) first as a W/A for DP link layer + * CTS 4.2.1.1 regression introduced by CTS specs requirement update. + */ + dc_link_aux_try_to_configure_timeout(aconnector->dc_link->ddc, LINK_AUX_DEFAULT_LTTPR_TIMEOUT_PERIOD); + if (!dp_is_lttpr_present(aconnector->dc_link)) + dc_link_aux_try_to_configure_timeout(aconnector->dc_link->ddc, LINK_AUX_DEFAULT_TIMEOUT_PERIOD); + ret = drm_dp_mst_topology_mgr_resume(mgr, true); if (ret < 0) { dm_helpers_dp_mst_stop_top_mgr(aconnector->dc_link->ctx,
From: Paolo Bonzini pbonzini@redhat.com
[ Upstream commit 2fdcc1b324189b5fb20655baebd40cd82e2bdf0c ]
Most of the time, calls to get_guest_pgd result in calling kvm_read_cr3 (the exception is only nested TDP). Hardcode the default instead of using the get_cr3 function, avoiding a retpoline when retpolines are enabled.
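In general form (a hedged, simplified sketch rather than the exact KVM code), the trick is to compare the callback against the known default and devirtualise the common case, so that with CONFIG_RETPOLINE the hot path uses a direct call instead of the retpoline thunk:

static unsigned long call_get_guest_pgd(struct kvm_vcpu *vcpu,
					unsigned long (*get_pgd)(struct kvm_vcpu *))
{
	/* Common case: the callback is the well-known default, so call it
	 * directly and let the compiler emit a plain direct call. */
	if (IS_ENABLED(CONFIG_RETPOLINE) && get_pgd == get_guest_cr3)
		return kvm_read_cr3(vcpu);

	/* Rare case (nested TDP): fall back to the indirect call. */
	return get_pgd(vcpu);
}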
Signed-off-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Mathias Krause minipli@grsecurity.net Link: https://lore.kernel.org/r/20230322013731.102955-2-minipli@grsecurity.net Signed-off-by: Sean Christopherson seanjc@google.com Signed-off-by: Mathias Krause minipli@grsecurity.net # backport to v6.2.x Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/kvm/mmu/mmu.c | 31 ++++++++++++++++++++----------- arch/x86/kvm/mmu/paging_tmpl.h | 2 +- 2 files changed, 21 insertions(+), 12 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 835426254e768..2faea9e873629 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -233,6 +233,20 @@ static struct kvm_mmu_role_regs vcpu_to_role_regs(struct kvm_vcpu *vcpu) return regs; }
+static unsigned long get_guest_cr3(struct kvm_vcpu *vcpu) +{ + return kvm_read_cr3(vcpu); +} + +static inline unsigned long kvm_mmu_get_guest_pgd(struct kvm_vcpu *vcpu, + struct kvm_mmu *mmu) +{ + if (IS_ENABLED(CONFIG_RETPOLINE) && mmu->get_guest_pgd == get_guest_cr3) + return kvm_read_cr3(vcpu); + + return mmu->get_guest_pgd(vcpu); +} + static inline bool kvm_available_flush_tlb_with_range(void) { return kvm_x86_ops.tlb_remote_flush_with_range; @@ -3699,7 +3713,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu) int quadrant, i, r; hpa_t root;
- root_pgd = mmu->get_guest_pgd(vcpu); + root_pgd = kvm_mmu_get_guest_pgd(vcpu, mmu); root_gfn = root_pgd >> PAGE_SHIFT;
if (mmu_check_root(vcpu, root_gfn)) @@ -4149,7 +4163,7 @@ static bool kvm_arch_setup_async_pf(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, arch.token = alloc_apf_token(vcpu); arch.gfn = gfn; arch.direct_map = vcpu->arch.mmu->root_role.direct; - arch.cr3 = vcpu->arch.mmu->get_guest_pgd(vcpu); + arch.cr3 = kvm_mmu_get_guest_pgd(vcpu, vcpu->arch.mmu);
return kvm_setup_async_pf(vcpu, cr2_or_gpa, kvm_vcpu_gfn_to_hva(vcpu, gfn), &arch); @@ -4168,7 +4182,7 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work) return;
if (!vcpu->arch.mmu->root_role.direct && - work->arch.cr3 != vcpu->arch.mmu->get_guest_pgd(vcpu)) + work->arch.cr3 != kvm_mmu_get_guest_pgd(vcpu, vcpu->arch.mmu)) return;
kvm_mmu_do_page_fault(vcpu, work->cr2_or_gpa, 0, true); @@ -4530,11 +4544,6 @@ void kvm_mmu_new_pgd(struct kvm_vcpu *vcpu, gpa_t new_pgd) } EXPORT_SYMBOL_GPL(kvm_mmu_new_pgd);
-static unsigned long get_cr3(struct kvm_vcpu *vcpu) -{ - return kvm_read_cr3(vcpu); -} - static bool sync_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, gfn_t gfn, unsigned int access) { @@ -5085,7 +5094,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu, context->page_fault = kvm_tdp_page_fault; context->sync_page = nonpaging_sync_page; context->invlpg = NULL; - context->get_guest_pgd = get_cr3; + context->get_guest_pgd = get_guest_cr3; context->get_pdptr = kvm_pdptr_read; context->inject_page_fault = kvm_inject_page_fault;
@@ -5235,7 +5244,7 @@ static void init_kvm_softmmu(struct kvm_vcpu *vcpu,
kvm_init_shadow_mmu(vcpu, cpu_role);
- context->get_guest_pgd = get_cr3; + context->get_guest_pgd = get_guest_cr3; context->get_pdptr = kvm_pdptr_read; context->inject_page_fault = kvm_inject_page_fault; } @@ -5249,7 +5258,7 @@ static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu, return;
g_context->cpu_role.as_u64 = new_mode.as_u64; - g_context->get_guest_pgd = get_cr3; + g_context->get_guest_pgd = get_guest_cr3; g_context->get_pdptr = kvm_pdptr_read; g_context->inject_page_fault = kvm_inject_page_fault;
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h index 0f64550720557..89b19b7ef4f9f 100644 --- a/arch/x86/kvm/mmu/paging_tmpl.h +++ b/arch/x86/kvm/mmu/paging_tmpl.h @@ -324,7 +324,7 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker, trace_kvm_mmu_pagetable_walk(addr, access); retry_walk: walker->level = mmu->cpu_role.base.level; - pte = mmu->get_guest_pgd(vcpu); + pte = kvm_mmu_get_guest_pgd(vcpu, mmu); have_ad = PT_HAVE_ACCESSED_DIRTY(mmu);
#if PTTYPE == 64
From: Mathias Krause minipli@grsecurity.net
[ Upstream commit 01b31714bd90be2784f7145bf93b7f78f3d081e1 ]
There is no need to unload the MMU roots with TDP enabled when only CR0.WP has changed -- the paging structures are still valid, only the permission bitmap needs to be updated.
One heavy user of toggling CR0.WP is grsecurity's KERNEXEC feature to implement kernel W^X.
The optimization brings a huge performance gain for this case as the following micro-benchmark running 'ssdd 10 50000' from rt-tests[1] on a grsecurity L1 VM shows (runtime in seconds, lower is better):
                       legacy     TDP    shadow
  kvm-x86/next@d8708b   8.43s   9.45s    70.3s
  +patch                5.39s   5.63s    70.2s
For the legacy MMU this is ~36% faster, for the TDP MMU even ~40% faster. TDP and legacy MMU now also have a similar runtime, which removes the need to disable the TDP MMU for grsecurity.
Shadow MMU sees no measurable difference and is still slow, as expected.
[1] https://git.kernel.org/pub/scm/utils/rt-tests/rt-tests.git
Signed-off-by: Mathias Krause minipli@grsecurity.net Link: https://lore.kernel.org/r/20230322013731.102955-3-minipli@grsecurity.net Co-developed-by: Sean Christopherson seanjc@google.com Signed-off-by: Sean Christopherson seanjc@google.com Signed-off-by: Mathias Krause minipli@grsecurity.net Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/kvm/x86.c | 12 ++++++++++++ 1 file changed, 12 insertions(+)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 2d76c254582b0..35cd87a326ace 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -904,6 +904,18 @@ EXPORT_SYMBOL_GPL(load_pdptrs);
void kvm_post_set_cr0(struct kvm_vcpu *vcpu, unsigned long old_cr0, unsigned long cr0) { + /* + * CR0.WP is incorporated into the MMU role, but only for non-nested, + * indirect shadow MMUs. If TDP is enabled, the MMU's metadata needs + * to be updated, e.g. so that emulating guest translations does the + * right thing, but there's no need to unload the root as CR0.WP + * doesn't affect SPTEs. + */ + if (tdp_enabled && (cr0 ^ old_cr0) == X86_CR0_WP) { + kvm_init_mmu(vcpu); + return; + } + if ((cr0 ^ old_cr0) & X86_CR0_PG) { kvm_clear_async_pf_completion_queue(vcpu); kvm_async_pf_hash_reset(vcpu);
From: Mathias Krause minipli@grsecurity.net
[ Upstream commit 74cdc836919bf34684ef66f995273f35e2189daf ]
Make use of the kvm_read_cr{0,4}_bits() helper functions when we only want to know the state of certain bits instead of the whole register.
This not only makes the intent cleaner, it also avoids a potential VMREAD in case the tested bits aren't guest owned.
Signed-off-by: Mathias Krause minipli@grsecurity.net Link: https://lore.kernel.org/r/20230322013731.102955-5-minipli@grsecurity.net Signed-off-by: Sean Christopherson seanjc@google.com Signed-off-by: Mathias Krause minipli@grsecurity.net Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/kvm/pmu.c | 4 ++-- arch/x86/kvm/vmx/vmx.c | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c index eb594620dd75a..8be583a05de70 100644 --- a/arch/x86/kvm/pmu.c +++ b/arch/x86/kvm/pmu.c @@ -438,9 +438,9 @@ int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data) if (!pmc) return 1;
- if (!(kvm_read_cr4(vcpu) & X86_CR4_PCE) && + if (!(kvm_read_cr4_bits(vcpu, X86_CR4_PCE)) && (static_call(kvm_x86_get_cpl)(vcpu) != 0) && - (kvm_read_cr0(vcpu) & X86_CR0_PE)) + (kvm_read_cr0_bits(vcpu, X86_CR0_PE))) return 1;
*data = pmc_read_counter(pmc) & mask; diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 53034045cb6e6..ca8eca4ec0e38 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -5450,7 +5450,7 @@ static int handle_cr(struct kvm_vcpu *vcpu) break; case 3: /* lmsw */ val = (exit_qualification >> LMSW_SOURCE_DATA_SHIFT) & 0x0f; - trace_kvm_cr_write(0, (kvm_read_cr0(vcpu) & ~0xful) | val); + trace_kvm_cr_write(0, (kvm_read_cr0_bits(vcpu, ~0xful) | val)); kvm_lmsw(vcpu, val);
return kvm_skip_emulated_instruction(vcpu); @@ -7531,7 +7531,7 @@ static u8 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio) if (!kvm_arch_has_noncoherent_dma(vcpu->kvm)) return (MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT) | VMX_EPT_IPAT_BIT;
- if (kvm_read_cr0(vcpu) & X86_CR0_CD) { + if (kvm_read_cr0_bits(vcpu, X86_CR0_CD)) { if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED)) cache = MTRR_TYPE_WRBACK; else
From: Mathias Krause minipli@grsecurity.net
[ Upstream commit fb509f76acc8d42bed11bca308404f81c2be856a ]
Guests like grsecurity that make heavy use of CR0.WP to implement kernel level W^X will suffer from the implied VMEXITs.
With EPT there is no need to intercept a guest change of CR0.WP, so simply make it a guest owned bit if we can do so.
This implies that a read of a guest's CR0.WP bit might need a VMREAD. However, the only potentially affected user seems to be kvm_init_mmu() which is a heavy operation to begin with. But also most callers already cache the full value of CR0 anyway, so no additional VMREAD is needed. The only exception is nested_vmx_load_cr3().
This change is VMX-specific, as SVM has no such fine grained control register intercept control.
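Conceptually (a simplified sketch, not the exact kvm_read_cr0_bits() implementation): a guest-owned bit lives in the hardware's CR0 read shadow rather than in KVM's cached value, so reading it may require a VMREAD; bits that are still intercepted can be answered straight from the cache.

static unsigned long cr0_bits_sketch(struct kvm_vcpu *vcpu, unsigned long mask)
{
	/* Guest-owned bits may be stale in the software cache: refresh them,
	 * which on VMX means a VMREAD of the CR0 read shadow. */
	if ((mask & vcpu->arch.cr0_guest_owned_bits) &&
	    !kvm_register_is_available(vcpu, VCPU_EXREG_CR0))
		static_call(kvm_x86_cache_reg)(vcpu, VCPU_EXREG_CR0);

	return vcpu->arch.cr0 & mask;
}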
Suggested-by: Sean Christopherson seanjc@google.com Signed-off-by: Mathias Krause minipli@grsecurity.net Link: https://lore.kernel.org/r/20230322013731.102955-7-minipli@grsecurity.net Co-developed-by: Sean Christopherson seanjc@google.com Signed-off-by: Sean Christopherson seanjc@google.com Signed-off-by: Mathias Krause minipli@grsecurity.net # backport to v6.2.x Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/kvm/kvm_cache_regs.h | 2 +- arch/x86/kvm/vmx/nested.c | 4 ++-- arch/x86/kvm/vmx/vmx.c | 2 +- arch/x86/kvm/vmx/vmx.h | 18 ++++++++++++++++++ 4 files changed, 22 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h index c09174f73a344..451697a96cf33 100644 --- a/arch/x86/kvm/kvm_cache_regs.h +++ b/arch/x86/kvm/kvm_cache_regs.h @@ -4,7 +4,7 @@
#include <linux/kvm_host.h>
-#define KVM_POSSIBLE_CR0_GUEST_BITS X86_CR0_TS +#define KVM_POSSIBLE_CR0_GUEST_BITS (X86_CR0_TS | X86_CR0_WP) #define KVM_POSSIBLE_CR4_GUEST_BITS \ (X86_CR4_PVI | X86_CR4_DE | X86_CR4_PCE | X86_CR4_OSFXSR \ | X86_CR4_OSXMMEXCPT | X86_CR4_PGE | X86_CR4_TSD | X86_CR4_FSGSBASE) diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index b7f2e59d50ee4..579ceaf75dde7 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -4488,7 +4488,7 @@ static void load_vmcs12_host_state(struct kvm_vcpu *vcpu, * CR0_GUEST_HOST_MASK is already set in the original vmcs01 * (KVM doesn't change it); */ - vcpu->arch.cr0_guest_owned_bits = KVM_POSSIBLE_CR0_GUEST_BITS; + vcpu->arch.cr0_guest_owned_bits = vmx_l1_guest_owned_cr0_bits(); vmx_set_cr0(vcpu, vmcs12->host_cr0);
/* Same as above - no reason to call set_cr4_guest_host_mask(). */ @@ -4639,7 +4639,7 @@ static void nested_vmx_restore_host_state(struct kvm_vcpu *vcpu) */ vmx_set_efer(vcpu, nested_vmx_get_vmcs01_guest_efer(vmx));
- vcpu->arch.cr0_guest_owned_bits = KVM_POSSIBLE_CR0_GUEST_BITS; + vcpu->arch.cr0_guest_owned_bits = vmx_l1_guest_owned_cr0_bits(); vmx_set_cr0(vcpu, vmcs_readl(CR0_READ_SHADOW));
vcpu->arch.cr4_guest_owned_bits = ~vmcs_readl(CR4_GUEST_HOST_MASK); diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index ca8eca4ec0e38..57a73954980ac 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -4728,7 +4728,7 @@ static void init_vmcs(struct vcpu_vmx *vmx) /* 22.2.1, 20.8.1 */ vm_entry_controls_set(vmx, vmx_vmentry_ctrl());
- vmx->vcpu.arch.cr0_guest_owned_bits = KVM_POSSIBLE_CR0_GUEST_BITS; + vmx->vcpu.arch.cr0_guest_owned_bits = vmx_l1_guest_owned_cr0_bits(); vmcs_writel(CR0_GUEST_HOST_MASK, ~vmx->vcpu.arch.cr0_guest_owned_bits);
set_cr4_guest_host_mask(vmx); diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h index a3da84f4ea456..e2b04f4c0fef3 100644 --- a/arch/x86/kvm/vmx/vmx.h +++ b/arch/x86/kvm/vmx/vmx.h @@ -640,6 +640,24 @@ BUILD_CONTROLS_SHADOW(tertiary_exec, TERTIARY_VM_EXEC_CONTROL, 64) (1 << VCPU_EXREG_EXIT_INFO_1) | \ (1 << VCPU_EXREG_EXIT_INFO_2))
+static inline unsigned long vmx_l1_guest_owned_cr0_bits(void) +{ + unsigned long bits = KVM_POSSIBLE_CR0_GUEST_BITS; + + /* + * CR0.WP needs to be intercepted when KVM is shadowing legacy paging + * in order to construct shadow PTEs with the correct protections. + * Note! CR0.WP technically can be passed through to the guest if + * paging is disabled, but checking CR0.PG would generate a cyclical + * dependency of sorts due to forcing the caller to ensure CR0 holds + * the correct value prior to determining which CR0 bits can be owned + * by L1. Keep it simple and limit the optimization to EPT. + */ + if (!enable_ept) + bits &= ~X86_CR0_WP; + return bits; +} + static inline struct kvm_vmx *to_kvm_vmx(struct kvm *kvm) { return container_of(kvm, struct kvm_vmx, kvm);
From: Sean Christopherson seanjc@google.com
[ Upstream commit cf9f4c0eb1699d306e348b1fd0225af7b2c282d3 ]
Refresh the MMU's snapshot of the vCPU's CR0.WP prior to checking for permission faults when emulating a guest memory access and CR0.WP may be guest owned. If the guest toggles only CR0.WP and triggers emulation of a supervisor write, e.g. when KVM is emulating UMIP, KVM may consume a stale CR0.WP, i.e. use stale protection bits metadata.
Note, KVM passes through CR0.WP if and only if EPT is enabled as CR0.WP is part of the MMU role for legacy shadow paging, and SVM (NPT) doesn't support per-bit interception controls for CR0. Don't bother checking for EPT vs. NPT as the "old == new" check will always be true under NPT, i.e. the only cost is the read of vcpu->arch.cr4 (SVM unconditionally grabs CR0 from the VMCB on VM-Exit).
Reported-by: Mathias Krause minipli@grsecurity.net Link: https://lkml.kernel.org/r/677169b4-051f-fcae-756b-9a3e1bb9f8fe%40grsecurity.... Fixes: fb509f76acc8 ("KVM: VMX: Make CR0.WP a guest owned bit") Tested-by: Mathias Krause minipli@grsecurity.net Link: https://lore.kernel.org/r/20230405002608.418442-1-seanjc@google.com Signed-off-by: Sean Christopherson seanjc@google.com Signed-off-by: Mathias Krause minipli@grsecurity.net # backport to v6.2.x Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/kvm/mmu.h | 26 +++++++++++++++++++++++++- arch/x86/kvm/mmu/mmu.c | 15 +++++++++++++++ 2 files changed, 40 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h index 6bdaacb6faa07..59804be91b5b0 100644 --- a/arch/x86/kvm/mmu.h +++ b/arch/x86/kvm/mmu.h @@ -113,6 +113,8 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly, bool kvm_can_do_async_pf(struct kvm_vcpu *vcpu); int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code, u64 fault_address, char *insn, int insn_len); +void __kvm_mmu_refresh_passthrough_bits(struct kvm_vcpu *vcpu, + struct kvm_mmu *mmu);
int kvm_mmu_load(struct kvm_vcpu *vcpu); void kvm_mmu_unload(struct kvm_vcpu *vcpu); @@ -153,6 +155,24 @@ static inline void kvm_mmu_load_pgd(struct kvm_vcpu *vcpu) vcpu->arch.mmu->root_role.level); }
+static inline void kvm_mmu_refresh_passthrough_bits(struct kvm_vcpu *vcpu, + struct kvm_mmu *mmu) +{ + /* + * When EPT is enabled, KVM may passthrough CR0.WP to the guest, i.e. + * @mmu's snapshot of CR0.WP and thus all related paging metadata may + * be stale. Refresh CR0.WP and the metadata on-demand when checking + * for permission faults. Exempt nested MMUs, i.e. MMUs for shadowing + * nEPT and nNPT, as CR0.WP is ignored in both cases. Note, KVM does + * need to refresh nested_mmu, a.k.a. the walker used to translate L2 + * GVAs to GPAs, as that "MMU" needs to honor L2's CR0.WP. + */ + if (!tdp_enabled || mmu == &vcpu->arch.guest_mmu) + return; + + __kvm_mmu_refresh_passthrough_bits(vcpu, mmu); +} + /* * Check if a given access (described through the I/D, W/R and U/S bits of a * page fault error code pfec) causes a permission fault with the given PTE @@ -184,8 +204,12 @@ static inline u8 permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, u64 implicit_access = access & PFERR_IMPLICIT_ACCESS; bool not_smap = ((rflags & X86_EFLAGS_AC) | implicit_access) == X86_EFLAGS_AC; int index = (pfec + (not_smap << PFERR_RSVD_BIT)) >> 1; - bool fault = (mmu->permissions[index] >> pte_access) & 1; u32 errcode = PFERR_PRESENT_MASK; + bool fault; + + kvm_mmu_refresh_passthrough_bits(vcpu, mmu); + + fault = (mmu->permissions[index] >> pte_access) & 1;
WARN_ON(pfec & (PFERR_PK_MASK | PFERR_RSVD_MASK)); if (unlikely(mmu->pkru_mask)) { diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 2faea9e873629..ce135539145fd 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -5047,6 +5047,21 @@ kvm_calc_cpu_role(struct kvm_vcpu *vcpu, const struct kvm_mmu_role_regs *regs) return role; }
+void __kvm_mmu_refresh_passthrough_bits(struct kvm_vcpu *vcpu, + struct kvm_mmu *mmu) +{ + const bool cr0_wp = !!kvm_read_cr0_bits(vcpu, X86_CR0_WP); + + BUILD_BUG_ON((KVM_MMU_CR0_ROLE_BITS & KVM_POSSIBLE_CR0_GUEST_BITS) != X86_CR0_WP); + BUILD_BUG_ON((KVM_MMU_CR4_ROLE_BITS & KVM_POSSIBLE_CR4_GUEST_BITS)); + + if (is_cr0_wp(mmu) == cr0_wp) + return; + + mmu->cpu_role.base.cr0_wp = cr0_wp; + reset_guest_paging_metadata(vcpu, mmu); +} + static inline int kvm_mmu_get_tdp_level(struct kvm_vcpu *vcpu) { /* tdp_root_level is architecture forced level, use it if nonzero */
From: Bob Pearson rpearsonhpe@gmail.com
[ Upstream commit 72a03627443d5bc7032ab98bd784740cd8a76f8a ]
Currently all the object types in the rxe driver are allocated in rdma-core except for MRs. By moving the kzalloc() call outside of the pool code, the rxe_alloc() subroutine can be eliminated and the code checking for MR as a special case can be removed.
This patch moves the kzalloc() and kfree_rcu() calls into the mr registration and destruction verbs. It removes that code from rxe_pool.c including the rxe_alloc() subroutine which is no longer used.
Link: https://lore.kernel.org/r/20230213225551.12437-1-rpearsonhpe@gmail.com Signed-off-by: Bob Pearson rpearsonhpe@gmail.com Reviewed-by: Devesh Sharma devesh.s.sharma@oracle.com Reviewed-by: Devesh Sharma devesh.s.sharma@oracle.com Reviewed-by: Zhu Yanjun yanjun.zhu@linux.dev Signed-off-by: Jason Gunthorpe jgg@nvidia.com Stable-dep-of: 78b26a335310 ("RDMA/rxe: Remove tasklet call from rxe_cq.c") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/infiniband/sw/rxe/rxe_mr.c | 2 +- drivers/infiniband/sw/rxe/rxe_pool.c | 46 --------------------- drivers/infiniband/sw/rxe/rxe_pool.h | 3 -- drivers/infiniband/sw/rxe/rxe_verbs.c | 59 +++++++++++++++++++-------- 4 files changed, 44 insertions(+), 66 deletions(-)
diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c index 5e9a03831bf9f..b10aa1580a644 100644 --- a/drivers/infiniband/sw/rxe/rxe_mr.c +++ b/drivers/infiniband/sw/rxe/rxe_mr.c @@ -731,7 +731,7 @@ int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata) return -EINVAL;
rxe_cleanup(mr); - + kfree_rcu(mr); return 0; }
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c index 1151c0b5cceab..6215c6de3a840 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.c +++ b/drivers/infiniband/sw/rxe/rxe_pool.c @@ -116,55 +116,12 @@ void rxe_pool_cleanup(struct rxe_pool *pool) WARN_ON(!xa_empty(&pool->xa)); }
-void *rxe_alloc(struct rxe_pool *pool) -{ - struct rxe_pool_elem *elem; - void *obj; - int err; - - if (WARN_ON(!(pool->type == RXE_TYPE_MR))) - return NULL; - - if (atomic_inc_return(&pool->num_elem) > pool->max_elem) - goto err_cnt; - - obj = kzalloc(pool->elem_size, GFP_KERNEL); - if (!obj) - goto err_cnt; - - elem = (struct rxe_pool_elem *)((u8 *)obj + pool->elem_offset); - - elem->pool = pool; - elem->obj = obj; - kref_init(&elem->ref_cnt); - init_completion(&elem->complete); - - /* allocate index in array but leave pointer as NULL so it - * can't be looked up until rxe_finalize() is called - */ - err = xa_alloc_cyclic(&pool->xa, &elem->index, NULL, pool->limit, - &pool->next, GFP_KERNEL); - if (err < 0) - goto err_free; - - return obj; - -err_free: - kfree(obj); -err_cnt: - atomic_dec(&pool->num_elem); - return NULL; -} - int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem, bool sleepable) { int err; gfp_t gfp_flags;
- if (WARN_ON(pool->type == RXE_TYPE_MR)) - return -EINVAL; - if (atomic_inc_return(&pool->num_elem) > pool->max_elem) goto err_cnt;
@@ -275,9 +232,6 @@ int __rxe_cleanup(struct rxe_pool_elem *elem, bool sleepable) if (pool->cleanup) pool->cleanup(elem);
- if (pool->type == RXE_TYPE_MR) - kfree_rcu(elem->obj); - atomic_dec(&pool->num_elem);
return err; diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h index 9d83cb32092ff..b42e26427a702 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.h +++ b/drivers/infiniband/sw/rxe/rxe_pool.h @@ -54,9 +54,6 @@ void rxe_pool_init(struct rxe_dev *rxe, struct rxe_pool *pool, /* free resources from object pool */ void rxe_pool_cleanup(struct rxe_pool *pool);
-/* allocate an object from pool */ -void *rxe_alloc(struct rxe_pool *pool); - /* connect already allocated object to pool */ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem, bool sleepable); diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c index 9ae7cf93365c7..6803ac76ae572 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.c +++ b/drivers/infiniband/sw/rxe/rxe_verbs.c @@ -867,10 +867,17 @@ static struct ib_mr *rxe_get_dma_mr(struct ib_pd *ibpd, int access) struct rxe_dev *rxe = to_rdev(ibpd->device); struct rxe_pd *pd = to_rpd(ibpd); struct rxe_mr *mr; + int err;
- mr = rxe_alloc(&rxe->mr_pool); - if (!mr) - return ERR_PTR(-ENOMEM); + mr = kzalloc(sizeof(*mr), GFP_KERNEL); + if (!mr) { + err = -ENOMEM; + goto err_out; + } + + err = rxe_add_to_pool(&rxe->mr_pool, mr); + if (err) + goto err_free;
rxe_get(pd); mr->ibmr.pd = ibpd; @@ -878,8 +885,12 @@ static struct ib_mr *rxe_get_dma_mr(struct ib_pd *ibpd, int access)
rxe_mr_init_dma(access, mr); rxe_finalize(mr); - return &mr->ibmr; + +err_free: + kfree(mr); +err_out: + return ERR_PTR(err); }
static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd, @@ -893,9 +904,15 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd, struct rxe_pd *pd = to_rpd(ibpd); struct rxe_mr *mr;
- mr = rxe_alloc(&rxe->mr_pool); - if (!mr) - return ERR_PTR(-ENOMEM); + mr = kzalloc(sizeof(*mr), GFP_KERNEL); + if (!mr) { + err = -ENOMEM; + goto err_out; + } + + err = rxe_add_to_pool(&rxe->mr_pool, mr); + if (err) + goto err_free;
rxe_get(pd); mr->ibmr.pd = ibpd; @@ -903,14 +920,16 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd,
err = rxe_mr_init_user(rxe, start, length, iova, access, mr); if (err) - goto err1; + goto err_cleanup;
rxe_finalize(mr); - return &mr->ibmr;
-err1: +err_cleanup: rxe_cleanup(mr); +err_free: + kfree(mr); +err_out: return ERR_PTR(err); }
@@ -925,9 +944,15 @@ static struct ib_mr *rxe_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type, if (mr_type != IB_MR_TYPE_MEM_REG) return ERR_PTR(-EINVAL);
- mr = rxe_alloc(&rxe->mr_pool); - if (!mr) - return ERR_PTR(-ENOMEM); + mr = kzalloc(sizeof(*mr), GFP_KERNEL); + if (!mr) { + err = -ENOMEM; + goto err_out; + } + + err = rxe_add_to_pool(&rxe->mr_pool, mr); + if (err) + goto err_free;
rxe_get(pd); mr->ibmr.pd = ibpd; @@ -935,14 +960,16 @@ static struct ib_mr *rxe_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type,
err = rxe_mr_init_fast(max_num_sg, mr); if (err) - goto err1; + goto err_cleanup;
rxe_finalize(mr); - return &mr->ibmr;
-err1: +err_cleanup: rxe_cleanup(mr); +err_free: + kfree(mr); +err_out: return ERR_PTR(err); }
From: Bob Pearson rpearsonhpe@gmail.com
[ Upstream commit a9fb3287211e64b94ceb2b6b4791cc2b829d0d56 ]
Replace the name rxe_dbg with rxe_dbg_dev which better matches the remaining rxe_dbg_xxx macros for debug messages with a rxe device parameter. Reuse the name rxe_dbg for debug messages which do not have a rxe device parameter.
Link: https://lore.kernel.org/r/20230303221623.8053-3-rpearsonhpe@gmail.com Signed-off-by: Bob Pearson rpearsonhpe@gmail.com Signed-off-by: Jason Gunthorpe jgg@nvidia.com Stable-dep-of: 78b26a335310 ("RDMA/rxe: Remove tasklet call from rxe_cq.c") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/infiniband/sw/rxe/rxe.c | 2 +- drivers/infiniband/sw/rxe/rxe.h | 3 ++- drivers/infiniband/sw/rxe/rxe_cq.c | 6 +++--- drivers/infiniband/sw/rxe/rxe_icrc.c | 4 ++-- drivers/infiniband/sw/rxe/rxe_mmap.c | 6 +++--- drivers/infiniband/sw/rxe/rxe_net.c | 4 ++-- drivers/infiniband/sw/rxe/rxe_qp.c | 16 ++++++++-------- drivers/infiniband/sw/rxe/rxe_srq.c | 6 +++--- drivers/infiniband/sw/rxe/rxe_verbs.c | 2 +- 9 files changed, 25 insertions(+), 24 deletions(-)
diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c index a3f05fdd9fac2..d57ba7a5964b9 100644 --- a/drivers/infiniband/sw/rxe/rxe.c +++ b/drivers/infiniband/sw/rxe/rxe.c @@ -187,7 +187,7 @@ static int rxe_newlink(const char *ibdev_name, struct net_device *ndev) rxe = rxe_get_dev_from_net(ndev); if (rxe) { ib_device_put(&rxe->ib_dev); - rxe_dbg(rxe, "already configured on %s\n", ndev->name); + rxe_dbg_dev(rxe, "already configured on %s\n", ndev->name); err = -EEXIST; goto err; } diff --git a/drivers/infiniband/sw/rxe/rxe.h b/drivers/infiniband/sw/rxe/rxe.h index 2415f3704f576..0757acc381038 100644 --- a/drivers/infiniband/sw/rxe/rxe.h +++ b/drivers/infiniband/sw/rxe/rxe.h @@ -38,7 +38,8 @@
#define RXE_ROCE_V2_SPORT (0xc000)
-#define rxe_dbg(rxe, fmt, ...) ibdev_dbg(&(rxe)->ib_dev, \ +#define rxe_dbg(fmt, ...) pr_debug("%s: " fmt "\n", __func__, ##__VA_ARGS__) +#define rxe_dbg_dev(rxe, fmt, ...) ibdev_dbg(&(rxe)->ib_dev, \ "%s: " fmt, __func__, ##__VA_ARGS__) #define rxe_dbg_uc(uc, fmt, ...) ibdev_dbg((uc)->ibuc.device, \ "uc#%d %s: " fmt, (uc)->elem.index, __func__, ##__VA_ARGS__) diff --git a/drivers/infiniband/sw/rxe/rxe_cq.c b/drivers/infiniband/sw/rxe/rxe_cq.c index faf49c50bbaba..519ddec29b4ba 100644 --- a/drivers/infiniband/sw/rxe/rxe_cq.c +++ b/drivers/infiniband/sw/rxe/rxe_cq.c @@ -14,12 +14,12 @@ int rxe_cq_chk_attr(struct rxe_dev *rxe, struct rxe_cq *cq, int count;
if (cqe <= 0) { - rxe_dbg(rxe, "cqe(%d) <= 0\n", cqe); + rxe_dbg_dev(rxe, "cqe(%d) <= 0\n", cqe); goto err1; }
if (cqe > rxe->attr.max_cqe) { - rxe_dbg(rxe, "cqe(%d) > max_cqe(%d)\n", + rxe_dbg_dev(rxe, "cqe(%d) > max_cqe(%d)\n", cqe, rxe->attr.max_cqe); goto err1; } @@ -50,7 +50,7 @@ int rxe_cq_from_init(struct rxe_dev *rxe, struct rxe_cq *cq, int cqe, cq->queue = rxe_queue_init(rxe, &cqe, sizeof(struct rxe_cqe), type); if (!cq->queue) { - rxe_dbg(rxe, "unable to create cq\n"); + rxe_dbg_dev(rxe, "unable to create cq\n"); return -ENOMEM; }
diff --git a/drivers/infiniband/sw/rxe/rxe_icrc.c b/drivers/infiniband/sw/rxe/rxe_icrc.c index 71bc2c1895888..fdf5f08cd8f17 100644 --- a/drivers/infiniband/sw/rxe/rxe_icrc.c +++ b/drivers/infiniband/sw/rxe/rxe_icrc.c @@ -21,7 +21,7 @@ int rxe_icrc_init(struct rxe_dev *rxe)
tfm = crypto_alloc_shash("crc32", 0, 0); if (IS_ERR(tfm)) { - rxe_dbg(rxe, "failed to init crc32 algorithm err: %ld\n", + rxe_dbg_dev(rxe, "failed to init crc32 algorithm err: %ld\n", PTR_ERR(tfm)); return PTR_ERR(tfm); } @@ -51,7 +51,7 @@ static __be32 rxe_crc32(struct rxe_dev *rxe, __be32 crc, void *next, size_t len) *(__be32 *)shash_desc_ctx(shash) = crc; err = crypto_shash_update(shash, next, len); if (unlikely(err)) { - rxe_dbg(rxe, "failed crc calculation, err: %d\n", err); + rxe_dbg_dev(rxe, "failed crc calculation, err: %d\n", err); return (__force __be32)crc32_le((__force u32)crc, next, len); }
diff --git a/drivers/infiniband/sw/rxe/rxe_mmap.c b/drivers/infiniband/sw/rxe/rxe_mmap.c index a47d72dbc5376..6b7f2bd698799 100644 --- a/drivers/infiniband/sw/rxe/rxe_mmap.c +++ b/drivers/infiniband/sw/rxe/rxe_mmap.c @@ -79,7 +79,7 @@ int rxe_mmap(struct ib_ucontext *context, struct vm_area_struct *vma)
/* Don't allow a mmap larger than the object. */ if (size > ip->info.size) { - rxe_dbg(rxe, "mmap region is larger than the object!\n"); + rxe_dbg_dev(rxe, "mmap region is larger than the object!\n"); spin_unlock_bh(&rxe->pending_lock); ret = -EINVAL; goto done; @@ -87,7 +87,7 @@ int rxe_mmap(struct ib_ucontext *context, struct vm_area_struct *vma)
goto found_it; } - rxe_dbg(rxe, "unable to find pending mmap info\n"); + rxe_dbg_dev(rxe, "unable to find pending mmap info\n"); spin_unlock_bh(&rxe->pending_lock); ret = -EINVAL; goto done; @@ -98,7 +98,7 @@ int rxe_mmap(struct ib_ucontext *context, struct vm_area_struct *vma)
ret = remap_vmalloc_range(vma, ip->obj, 0); if (ret) { - rxe_dbg(rxe, "err %d from remap_vmalloc_range\n", ret); + rxe_dbg_dev(rxe, "err %d from remap_vmalloc_range\n", ret); goto done; }
diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c index e02e1624bcf4d..a2ace42e95366 100644 --- a/drivers/infiniband/sw/rxe/rxe_net.c +++ b/drivers/infiniband/sw/rxe/rxe_net.c @@ -596,7 +596,7 @@ static int rxe_notify(struct notifier_block *not_blk, rxe_port_down(rxe); break; case NETDEV_CHANGEMTU: - rxe_dbg(rxe, "%s changed mtu to %d\n", ndev->name, ndev->mtu); + rxe_dbg_dev(rxe, "%s changed mtu to %d\n", ndev->name, ndev->mtu); rxe_set_mtu(rxe, ndev->mtu); break; case NETDEV_CHANGE: @@ -608,7 +608,7 @@ static int rxe_notify(struct notifier_block *not_blk, case NETDEV_CHANGENAME: case NETDEV_FEAT_CHANGE: default: - rxe_dbg(rxe, "ignoring netdev event = %ld for %s\n", + rxe_dbg_dev(rxe, "ignoring netdev event = %ld for %s\n", event, ndev->name); break; } diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c index 13283ec06f95e..d5de5ba6940f1 100644 --- a/drivers/infiniband/sw/rxe/rxe_qp.c +++ b/drivers/infiniband/sw/rxe/rxe_qp.c @@ -19,33 +19,33 @@ static int rxe_qp_chk_cap(struct rxe_dev *rxe, struct ib_qp_cap *cap, int has_srq) { if (cap->max_send_wr > rxe->attr.max_qp_wr) { - rxe_dbg(rxe, "invalid send wr = %u > %d\n", + rxe_dbg_dev(rxe, "invalid send wr = %u > %d\n", cap->max_send_wr, rxe->attr.max_qp_wr); goto err1; }
if (cap->max_send_sge > rxe->attr.max_send_sge) { - rxe_dbg(rxe, "invalid send sge = %u > %d\n", + rxe_dbg_dev(rxe, "invalid send sge = %u > %d\n", cap->max_send_sge, rxe->attr.max_send_sge); goto err1; }
if (!has_srq) { if (cap->max_recv_wr > rxe->attr.max_qp_wr) { - rxe_dbg(rxe, "invalid recv wr = %u > %d\n", + rxe_dbg_dev(rxe, "invalid recv wr = %u > %d\n", cap->max_recv_wr, rxe->attr.max_qp_wr); goto err1; }
if (cap->max_recv_sge > rxe->attr.max_recv_sge) { - rxe_dbg(rxe, "invalid recv sge = %u > %d\n", + rxe_dbg_dev(rxe, "invalid recv sge = %u > %d\n", cap->max_recv_sge, rxe->attr.max_recv_sge); goto err1; } }
if (cap->max_inline_data > rxe->max_inline_data) { - rxe_dbg(rxe, "invalid max inline data = %u > %d\n", + rxe_dbg_dev(rxe, "invalid max inline data = %u > %d\n", cap->max_inline_data, rxe->max_inline_data); goto err1; } @@ -73,7 +73,7 @@ int rxe_qp_chk_init(struct rxe_dev *rxe, struct ib_qp_init_attr *init) }
if (!init->recv_cq || !init->send_cq) { - rxe_dbg(rxe, "missing cq\n"); + rxe_dbg_dev(rxe, "missing cq\n"); goto err1; }
@@ -82,14 +82,14 @@ int rxe_qp_chk_init(struct rxe_dev *rxe, struct ib_qp_init_attr *init)
if (init->qp_type == IB_QPT_GSI) { if (!rdma_is_port_valid(&rxe->ib_dev, port_num)) { - rxe_dbg(rxe, "invalid port = %d\n", port_num); + rxe_dbg_dev(rxe, "invalid port = %d\n", port_num); goto err1; }
port = &rxe->port;
if (init->qp_type == IB_QPT_GSI && port->qp_gsi_index) { - rxe_dbg(rxe, "GSI QP exists for port %d\n", port_num); + rxe_dbg_dev(rxe, "GSI QP exists for port %d\n", port_num); goto err1; } } diff --git a/drivers/infiniband/sw/rxe/rxe_srq.c b/drivers/infiniband/sw/rxe/rxe_srq.c index 82e37a41ced40..27ca82ec0826b 100644 --- a/drivers/infiniband/sw/rxe/rxe_srq.c +++ b/drivers/infiniband/sw/rxe/rxe_srq.c @@ -13,13 +13,13 @@ int rxe_srq_chk_init(struct rxe_dev *rxe, struct ib_srq_init_attr *init) struct ib_srq_attr *attr = &init->attr;
if (attr->max_wr > rxe->attr.max_srq_wr) { - rxe_dbg(rxe, "max_wr(%d) > max_srq_wr(%d)\n", + rxe_dbg_dev(rxe, "max_wr(%d) > max_srq_wr(%d)\n", attr->max_wr, rxe->attr.max_srq_wr); goto err1; }
if (attr->max_wr <= 0) { - rxe_dbg(rxe, "max_wr(%d) <= 0\n", attr->max_wr); + rxe_dbg_dev(rxe, "max_wr(%d) <= 0\n", attr->max_wr); goto err1; }
@@ -27,7 +27,7 @@ int rxe_srq_chk_init(struct rxe_dev *rxe, struct ib_srq_init_attr *init) attr->max_wr = RXE_MIN_SRQ_WR;
if (attr->max_sge > rxe->attr.max_srq_sge) { - rxe_dbg(rxe, "max_sge(%d) > max_srq_sge(%d)\n", + rxe_dbg_dev(rxe, "max_sge(%d) > max_srq_sge(%d)\n", attr->max_sge, rxe->attr.max_srq_sge); goto err1; } diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c index 6803ac76ae572..a40a6d0581500 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.c +++ b/drivers/infiniband/sw/rxe/rxe_verbs.c @@ -1093,7 +1093,7 @@ int rxe_register_device(struct rxe_dev *rxe, const char *ibdev_name)
err = ib_register_device(dev, ibdev_name, NULL); if (err) - rxe_dbg(rxe, "failed with error %d\n", err); + rxe_dbg_dev(rxe, "failed with error %d\n", err);
/* * Note that rxe may be invalid at this point if another thread
From: Bob Pearson rpearsonhpe@gmail.com
[ Upstream commit 9ac01f434a1eb56ea94611bd75cf62fa276b41f4 ]
Extend the dbg log messages (e.g. rxe_dbg_xxx) to include err and info types. rxe.c is modified to use these new log messages as examples.
Link: https://lore.kernel.org/r/20230303221623.8053-4-rpearsonhpe@gmail.com Signed-off-by: Bob Pearson rpearsonhpe@gmail.com Signed-off-by: Jason Gunthorpe jgg@nvidia.com Stable-dep-of: 78b26a335310 ("RDMA/rxe: Remove tasklet call from rxe_cq.c") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/infiniband/sw/rxe/rxe.c | 8 ++++--- drivers/infiniband/sw/rxe/rxe.h | 42 +++++++++++++++++++++++++++++++++ 2 files changed, 47 insertions(+), 3 deletions(-)
diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c index d57ba7a5964b9..7a7e713de52db 100644 --- a/drivers/infiniband/sw/rxe/rxe.c +++ b/drivers/infiniband/sw/rxe/rxe.c @@ -160,6 +160,8 @@ void rxe_set_mtu(struct rxe_dev *rxe, unsigned int ndev_mtu)
port->attr.active_mtu = mtu; port->mtu_cap = ib_mtu_enum_to_int(mtu); + + rxe_info_dev(rxe, "Set mtu to %d", port->mtu_cap); }
/* called by ifc layer to create new rxe device. @@ -179,7 +181,7 @@ static int rxe_newlink(const char *ibdev_name, struct net_device *ndev) int err = 0;
if (is_vlan_dev(ndev)) { - pr_err("rxe creation allowed on top of a real device only\n"); + rxe_err("rxe creation allowed on top of a real device only"); err = -EPERM; goto err; } @@ -187,14 +189,14 @@ static int rxe_newlink(const char *ibdev_name, struct net_device *ndev) rxe = rxe_get_dev_from_net(ndev); if (rxe) { ib_device_put(&rxe->ib_dev); - rxe_dbg_dev(rxe, "already configured on %s\n", ndev->name); + rxe_err_dev(rxe, "already configured on %s", ndev->name); err = -EEXIST; goto err; }
err = rxe_net_add(ibdev_name, ndev); if (err) { - pr_debug("failed to add %s\n", ndev->name); + rxe_err("failed to add %s\n", ndev->name); goto err; } err: diff --git a/drivers/infiniband/sw/rxe/rxe.h b/drivers/infiniband/sw/rxe/rxe.h index 0757acc381038..bd8a8ea4ea8fd 100644 --- a/drivers/infiniband/sw/rxe/rxe.h +++ b/drivers/infiniband/sw/rxe/rxe.h @@ -58,6 +58,48 @@ #define rxe_dbg_mw(mw, fmt, ...) ibdev_dbg((mw)->ibmw.device, \ "mw#%d %s: " fmt, (mw)->elem.index, __func__, ##__VA_ARGS__)
+#define rxe_err(fmt, ...) pr_err_ratelimited("%s: " fmt "\n", __func__, \ + ##__VA_ARGS__) +#define rxe_err_dev(rxe, fmt, ...) ibdev_err_ratelimited(&(rxe)->ib_dev, \ + "%s: " fmt, __func__, ##__VA_ARGS__) +#define rxe_err_uc(uc, fmt, ...) ibdev_err_ratelimited((uc)->ibuc.device, \ + "uc#%d %s: " fmt, (uc)->elem.index, __func__, ##__VA_ARGS__) +#define rxe_err_pd(pd, fmt, ...) ibdev_err_ratelimited((pd)->ibpd.device, \ + "pd#%d %s: " fmt, (pd)->elem.index, __func__, ##__VA_ARGS__) +#define rxe_err_ah(ah, fmt, ...) ibdev_err_ratelimited((ah)->ibah.device, \ + "ah#%d %s: " fmt, (ah)->elem.index, __func__, ##__VA_ARGS__) +#define rxe_err_srq(srq, fmt, ...) ibdev_err_ratelimited((srq)->ibsrq.device, \ + "srq#%d %s: " fmt, (srq)->elem.index, __func__, ##__VA_ARGS__) +#define rxe_err_qp(qp, fmt, ...) ibdev_err_ratelimited((qp)->ibqp.device, \ + "qp#%d %s: " fmt, (qp)->elem.index, __func__, ##__VA_ARGS__) +#define rxe_err_cq(cq, fmt, ...) ibdev_err_ratelimited((cq)->ibcq.device, \ + "cq#%d %s: " fmt, (cq)->elem.index, __func__, ##__VA_ARGS__) +#define rxe_err_mr(mr, fmt, ...) ibdev_err_ratelimited((mr)->ibmr.device, \ + "mr#%d %s: " fmt, (mr)->elem.index, __func__, ##__VA_ARGS__) +#define rxe_err_mw(mw, fmt, ...) ibdev_err_ratelimited((mw)->ibmw.device, \ + "mw#%d %s: " fmt, (mw)->elem.index, __func__, ##__VA_ARGS__) + +#define rxe_info(fmt, ...) pr_info_ratelimited("%s: " fmt "\n", __func__, \ + ##__VA_ARGS__) +#define rxe_info_dev(rxe, fmt, ...) ibdev_info_ratelimited(&(rxe)->ib_dev, \ + "%s: " fmt, __func__, ##__VA_ARGS__) +#define rxe_info_uc(uc, fmt, ...) ibdev_info_ratelimited((uc)->ibuc.device, \ + "uc#%d %s: " fmt, (uc)->elem.index, __func__, ##__VA_ARGS__) +#define rxe_info_pd(pd, fmt, ...) ibdev_info_ratelimited((pd)->ibpd.device, \ + "pd#%d %s: " fmt, (pd)->elem.index, __func__, ##__VA_ARGS__) +#define rxe_info_ah(ah, fmt, ...) ibdev_info_ratelimited((ah)->ibah.device, \ + "ah#%d %s: " fmt, (ah)->elem.index, __func__, ##__VA_ARGS__) +#define rxe_info_srq(srq, fmt, ...) ibdev_info_ratelimited((srq)->ibsrq.device, \ + "srq#%d %s: " fmt, (srq)->elem.index, __func__, ##__VA_ARGS__) +#define rxe_info_qp(qp, fmt, ...) ibdev_info_ratelimited((qp)->ibqp.device, \ + "qp#%d %s: " fmt, (qp)->elem.index, __func__, ##__VA_ARGS__) +#define rxe_info_cq(cq, fmt, ...) ibdev_info_ratelimited((cq)->ibcq.device, \ + "cq#%d %s: " fmt, (cq)->elem.index, __func__, ##__VA_ARGS__) +#define rxe_info_mr(mr, fmt, ...) ibdev_info_ratelimited((mr)->ibmr.device, \ + "mr#%d %s: " fmt, (mr)->elem.index, __func__, ##__VA_ARGS__) +#define rxe_info_mw(mw, fmt, ...) ibdev_info_ratelimited((mw)->ibmw.device, \ + "mw#%d %s: " fmt, (mw)->elem.index, __func__, ##__VA_ARGS__) + /* responder states */ enum resp_states { RESPST_NONE,
From: Hans de Goede hdegoede@redhat.com
[ Upstream commit c963e2ec095cb3f855890be53f56f5a6c6fbe371 ]
Commit 7e1d728a94ca ("ASoC: Intel: soc-acpi-byt: Add new WM5102 ACPI HID") added an extra HID to wm5102_comp_ids.codecs, but it forgot to bump wm5102_comp_ids.num_codecs, causing the last codec HID in the codecs list to no longer work.
Bump wm5102_comp_ids.num_codecs to fix this.
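For illustration only (a userspace model, not the ASoC matching code): a matcher that scans just .num_codecs entries silently ignores any HID appended to .codecs without bumping the count, which is why the third ID stopped working.

/*
 * Userspace model of the num_codecs mismatch; struct and function names
 * here are made up for the example.
 */
#include <stdio.h>
#include <string.h>

struct codec_ids {
	int num_codecs;
	const char *codecs[3];
};

static int codec_matches(const struct codec_ids *ids, const char *hid)
{
	int i;

	for (i = 0; i < ids->num_codecs; i++)
		if (!strcmp(ids->codecs[i], hid))
			return 1;
	return 0;
}

int main(void)
{
	struct codec_ids wm5102 = {
		.num_codecs = 2,	/* stale count: third HID is never scanned */
		.codecs = { "10WM5102", "WM510204", "WM510205" },
	};

	printf("before fix: %d\n", codec_matches(&wm5102, "WM510205"));	/* 0 */
	wm5102.num_codecs = 3;						/* the fix */
	printf("after fix:  %d\n", codec_matches(&wm5102, "WM510205"));	/* 1 */
	return 0;
}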
Fixes: 7e1d728a94ca ("ASoC: Intel: soc-acpi-byt: Add new WM5102 ACPI HID") Signed-off-by: Hans de Goede hdegoede@redhat.com Acked-by: Pierre-Louis Bossart pierre-louis.bossart@linux.intel.com Link: https://lore.kernel.org/r/20230421183714.35186-1-hdegoede@redhat.com Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- sound/soc/intel/common/soc-acpi-intel-byt-match.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sound/soc/intel/common/soc-acpi-intel-byt-match.c b/sound/soc/intel/common/soc-acpi-intel-byt-match.c index db5a92b9875a8..87c44f284971a 100644 --- a/sound/soc/intel/common/soc-acpi-intel-byt-match.c +++ b/sound/soc/intel/common/soc-acpi-intel-byt-match.c @@ -124,7 +124,7 @@ static const struct snd_soc_acpi_codecs rt5640_comp_ids = { };
static const struct snd_soc_acpi_codecs wm5102_comp_ids = { - .num_codecs = 2, + .num_codecs = 3, .codecs = { "10WM5102", "WM510204", "WM510205"}, };
From: Zheng Wang zyytlz.wz@163.com
[ Upstream commit c5749639f2d0a1f6cbe187d05f70c2e7c544d748 ]
In qedi_probe() we call __qedi_probe() which initializes &qedi->recovery_work with qedi_recovery_handler() and &qedi->board_disable_work with qedi_board_disable_work().
When qedi_schedule_recovery_handler() is called, schedule_delayed_work() will finally start the work.
In qedi_remove(), which is called to remove the driver, the following sequence may be observed:

CPU0                        CPU1
                            |qedi_recovery_handler
qedi_remove                 |
  __qedi_remove             |
    iscsi_host_free         |
      scsi_host_put         |
        //free shost        |
                            |iscsi_host_for_each_session
                            |//use qedi->shost

Fix this by finishing the work before the cleanup in qedi_remove(): cancel recovery_work and board_disable_work in __qedi_remove().
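A schematic sketch of the teardown rule applied here, using a hypothetical struct and names rather than the qedi code: any delayed work whose handler dereferences an object must be cancelled synchronously before that object is freed.

/* Hypothetical driver object; only the teardown ordering matters here. */
#include <linux/workqueue.h>
#include <linux/slab.h>

struct foo_dev {
	struct delayed_work recovery_work;	/* handler dereferences foo_dev state */
};

static void foo_remove(struct foo_dev *foo)
{
	/*
	 * Wait for a possibly in-flight handler and prevent future runs
	 * *before* the memory the handler touches is released.
	 */
	cancel_delayed_work_sync(&foo->recovery_work);

	kfree(foo);
}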
Fixes: 4b1068f5d74b ("scsi: qedi: Add MFW error recovery process") Signed-off-by: Zheng Wang zyytlz.wz@163.com Link: https://lore.kernel.org/r/20230413033422.28003-1-zyytlz.wz@163.com Acked-by: Manish Rangankar mrangankar@marvell.com Reviewed-by: Mike Christie michael.christie@oracle.com Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/scsi/qedi/qedi_main.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c index f2ee49756df8d..45d3595541820 100644 --- a/drivers/scsi/qedi/qedi_main.c +++ b/drivers/scsi/qedi/qedi_main.c @@ -2450,6 +2450,9 @@ static void __qedi_remove(struct pci_dev *pdev, int mode) qedi_ops->ll2->stop(qedi->cdev); }
+ cancel_delayed_work_sync(&qedi->recovery_work); + cancel_delayed_work_sync(&qedi->board_disable_work); + qedi_free_iscsi_pf_param(qedi);
rval = qedi_ops->common->update_drv_state(qedi->cdev, false);
From: Rodrigo Siqueira Rodrigo.Siqueira@amd.com
[ Upstream commit 822b84ecfc646da0f87fd947fa00dc3be5e45ecc ]
When the commit fff7eb56b376 ("drm/amd/display: Don't set dram clock change requirement for SubVP") was merged, we missed some parts associated with the MCLK switch. This commit adds all the missing parts.
Fixes: fff7eb56b376 ("drm/amd/display: Don't set dram clock change requirement for SubVP") Reviewed-by: Aurabindo Pillai aurabindo.pillai@amd.com Signed-off-by: Rodrigo Siqueira Rodrigo.Siqueira@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- .../gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c | 1 + .../drm/amd/display/dc/dcn32/dcn32_resource.c | 2 +- .../drm/amd/display/dc/dml/dcn30/dcn30_fpu.c | 18 +++++++++++++++++- 3 files changed, 19 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c index 30d15a94f720d..578a715040ac3 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c +++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c @@ -992,6 +992,7 @@ void dcn32_init_hw(struct dc *dc) if (dc->ctx->dmub_srv) { dc_dmub_srv_query_caps_cmd(dc->ctx->dmub_srv->dmub); dc->caps.dmub_caps.psr = dc->ctx->dmub_srv->dmub->feature_caps.psr; + dc->caps.dmub_caps.mclk_sw = dc->ctx->dmub_srv->dmub->feature_caps.fw_assisted_mclk_switch; } }
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c index 6187aba1362b8..41c3620546c2b 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c +++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c @@ -2021,7 +2021,7 @@ int dcn32_populate_dml_pipes_from_context( // In general cases we want to keep the dram clock change requirement // (prefer configs that support MCLK switch). Only override to false // for SubVP - if (subvp_in_use) + if (context->bw_ctx.bw.dcn.clk.fw_based_mclk_switching || subvp_in_use) context->bw_ctx.dml.soc.dram_clock_change_requirement_final = false; else context->bw_ctx.dml.soc.dram_clock_change_requirement_final = true; diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn30/dcn30_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn30/dcn30_fpu.c index 4fa6363647937..fdfb19337ea6e 100644 --- a/drivers/gpu/drm/amd/display/dc/dml/dcn30/dcn30_fpu.c +++ b/drivers/gpu/drm/amd/display/dc/dml/dcn30/dcn30_fpu.c @@ -368,7 +368,9 @@ void dcn30_fpu_update_soc_for_wm_a(struct dc *dc, struct dc_state *context) dc_assert_fp_enabled();
if (dc->clk_mgr->bw_params->wm_table.nv_entries[WM_A].valid) { - context->bw_ctx.dml.soc.dram_clock_change_latency_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_A].dml_input.pstate_latency_us; + if (!context->bw_ctx.bw.dcn.clk.fw_based_mclk_switching || + context->bw_ctx.dml.soc.dram_clock_change_latency_us == 0) + context->bw_ctx.dml.soc.dram_clock_change_latency_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_A].dml_input.pstate_latency_us; context->bw_ctx.dml.soc.sr_enter_plus_exit_time_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_A].dml_input.sr_enter_plus_exit_time_us; context->bw_ctx.dml.soc.sr_exit_time_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_A].dml_input.sr_exit_time_us; } @@ -520,6 +522,20 @@ void dcn30_fpu_calculate_wm_and_dlg( pipe_idx++; }
+ // WA: restrict FPO to use first non-strobe mode (NV24 BW issue) + if (context->bw_ctx.bw.dcn.clk.fw_based_mclk_switching && + dc->dml.soc.num_chans <= 4 && + context->bw_ctx.dml.vba.DRAMSpeed <= 1700 && + context->bw_ctx.dml.vba.DRAMSpeed >= 1500) { + + for (i = 0; i < dc->dml.soc.num_states; i++) { + if (dc->dml.soc.clock_limits[i].dram_speed_mts > 1700) { + context->bw_ctx.dml.vba.DRAMSpeed = dc->dml.soc.clock_limits[i].dram_speed_mts; + break; + } + } + } + dcn20_calculate_dlg_params(dc, context, pipes, pipe_cnt, vlevel);
if (!pstate_en)
From: Hersen Wu hersenxs.wu@amd.com
[ Upstream commit dd24662d9dfbad281bbf030f06d68c7938fa0c66 ]
[Why&How] We were not returning -EINVAL on DSC atomic check fail. Add it.
Fixes: 71be4b16d39a ("drm/amd/display: dsc validate fail not pass to atomic check") Reviewed-by: Aurabindo Pillai aurabindo.pillai@amd.com Signed-off-by: Hersen Wu hersenxs.wu@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 1 + drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c | 1 + 2 files changed, 2 insertions(+)
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c index 58fdd39f5bde9..324cfaeb744cd 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c @@ -9859,6 +9859,7 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev, ret = compute_mst_dsc_configs_for_state(state, dm_state->context, vars); if (ret) { DRM_DEBUG_DRIVER("compute_mst_dsc_configs_for_state() failed\n"); + ret = -EINVAL; goto fail; }
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c index 60dd88666437d..994a37003217d 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c @@ -1375,6 +1375,7 @@ int pre_validate_dsc(struct drm_atomic_state *state, ret = pre_compute_mst_dsc_configs_for_state(state, local_dc_state, vars); if (ret != 0) { DRM_INFO_ONCE("pre_compute_mst_dsc_configs_for_state() failed\n"); + ret = -EINVAL; goto clean_exit; }
From: Aurabindo Pillai aurabindo.pillai@amd.com
[ Upstream commit d1c5c3e252b8a911a524e6ee33b82aca81397745 ]
[Why&How] Fix CLK MGR early initialization and add logging.
Fixes: 265280b99822 ("drm/amd/display: add CLKMGR changes for DCN32/321") Reviewed-by: Leo Li sunpeng.li@amd.com Reviewed-by: Qingqing Zhuo qingqing.zhuo@amd.com Signed-off-by: Aurabindo Pillai aurabindo.pillai@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c | 5 +++++ 1 file changed, 5 insertions(+)
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c index 200fcec191861..1859b2e4a98a1 100644 --- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c +++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c @@ -719,6 +719,8 @@ void dcn32_clk_mgr_construct( struct pp_smu_funcs *pp_smu, struct dccg *dccg) { + struct clk_log_info log_info = {0}; + clk_mgr->base.ctx = ctx; clk_mgr->base.funcs = &dcn32_funcs; if (ASICREV_IS_GC_11_0_2(clk_mgr->base.ctx->asic_id.hw_internal_rev)) { @@ -752,6 +754,7 @@ void dcn32_clk_mgr_construct( clk_mgr->base.clks.ref_dtbclk_khz = 268750; }
+ /* integer part is now VCO frequency in kHz */ clk_mgr->base.dentist_vco_freq_khz = dcn32_get_vco_frequency_from_reg(clk_mgr);
@@ -759,6 +762,8 @@ void dcn32_clk_mgr_construct( if (clk_mgr->base.dentist_vco_freq_khz == 0) clk_mgr->base.dentist_vco_freq_khz = 4300000; /* Updated as per HW docs */
+ dcn32_dump_clk_registers(&clk_mgr->base.boot_snapshot, &clk_mgr->base, &log_info); + if (ctx->dc->debug.disable_dtb_ref_clk_switch && clk_mgr->base.clks.ref_dtbclk_khz != clk_mgr->base.boot_snapshot.dtbclk) { clk_mgr->base.clks.ref_dtbclk_khz = clk_mgr->base.boot_snapshot.dtbclk;
From: Cruise Hung Cruise.Hung@amd.com
[ Upstream commit 425afa0ac99a05b39e6cd00704fa0e3e925cee2b ]
[Why & How] We missed resetting OUTBOX0 mailbox r/w pointer on DMUB reset. Fix it.
Fixes: 6ecf9773a503 ("drm/amd/display: Fix DMUB outbox trace in S4 (#4465)") Signed-off-by: Cruise Hung Cruise.Hung@amd.com Acked-by: Aurabindo Pillai aurabindo.pillai@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c index a76da0131addd..b0adbf783aae9 100644 --- a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c +++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c @@ -130,6 +130,8 @@ void dmub_dcn32_reset(struct dmub_srv *dmub) REG_WRITE(DMCUB_INBOX1_WPTR, 0); REG_WRITE(DMCUB_OUTBOX1_RPTR, 0); REG_WRITE(DMCUB_OUTBOX1_WPTR, 0); + REG_WRITE(DMCUB_OUTBOX0_RPTR, 0); + REG_WRITE(DMCUB_OUTBOX0_WPTR, 0); REG_WRITE(DMCUB_SCRATCH0, 0); }
From: Aurabindo Pillai aurabindo.pillai@amd.com
[ Upstream commit 99d92eaca5d915763b240aae24669f5bf3227ecf ]
[Why & How] There's no need to clear GPINT register for DMUB when releasing it from reset. Fix that.
Fixes: ac2e555e0a7f ("drm/amd/display: Add DMCUB source files and changes for DCN32/321") Reviewed-by: Leo Li sunpeng.li@amd.com Signed-off-by: Aurabindo Pillai aurabindo.pillai@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c | 1 - 1 file changed, 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c index b0adbf783aae9..9c20516be066c 100644 --- a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c +++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c @@ -137,7 +137,6 @@ void dmub_dcn32_reset(struct dmub_srv *dmub)
void dmub_dcn32_reset_release(struct dmub_srv *dmub) { - REG_WRITE(DMCUB_GPINT_DATAIN1, 0); REG_UPDATE(MMHUBBUB_SOFT_RESET, DMUIF_SOFT_RESET, 0); REG_WRITE(DMCUB_SCRATCH15, dmub->psp_version & 0x001100FF); REG_UPDATE_2(DMCUB_CNTL, DMCUB_ENABLE, 1, DMCUB_TRACEPORT_EN, 1);
From: Aurabindo Pillai aurabindo.pillai@amd.com
[ Upstream commit 989cd3e76a4aab76fe7dd50090ac3fa501c537f6 ]
[Why&how]
Update bounding box values as per hardware spec
Fixes: 197485c69543 ("drm/amd/display: Create dcn321_fpu file") Acked-by: Leo Li sunpeng.li@amd.com Signed-off-by: Aurabindo Pillai aurabindo.pillai@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- .../amd/display/dc/dml/dcn321/dcn321_fpu.c | 24 +++++++++---------- 1 file changed, 12 insertions(+), 12 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn321/dcn321_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn321/dcn321_fpu.c index b80cef70fa60f..383a409a3f54c 100644 --- a/drivers/gpu/drm/amd/display/dc/dml/dcn321/dcn321_fpu.c +++ b/drivers/gpu/drm/amd/display/dc/dml/dcn321/dcn321_fpu.c @@ -106,16 +106,16 @@ struct _vcs_dpi_soc_bounding_box_st dcn3_21_soc = { .clock_limits = { { .state = 0, - .dcfclk_mhz = 1564.0, - .fabricclk_mhz = 400.0, - .dispclk_mhz = 2150.0, - .dppclk_mhz = 2150.0, + .dcfclk_mhz = 1434.0, + .fabricclk_mhz = 2250.0, + .dispclk_mhz = 1720.0, + .dppclk_mhz = 1720.0, .phyclk_mhz = 810.0, .phyclk_d18_mhz = 667.0, - .phyclk_d32_mhz = 625.0, + .phyclk_d32_mhz = 313.0, .socclk_mhz = 1200.0, - .dscclk_mhz = 716.667, - .dram_speed_mts = 1600.0, + .dscclk_mhz = 573.333, + .dram_speed_mts = 16000.0, .dtbclk_mhz = 1564.0, }, }, @@ -125,14 +125,14 @@ struct _vcs_dpi_soc_bounding_box_st dcn3_21_soc = { .sr_exit_z8_time_us = 285.0, .sr_enter_plus_exit_z8_time_us = 320, .writeback_latency_us = 12.0, - .round_trip_ping_latency_dcfclk_cycles = 263, + .round_trip_ping_latency_dcfclk_cycles = 207, .urgent_latency_pixel_data_only_us = 4, .urgent_latency_pixel_mixed_with_vm_data_us = 4, .urgent_latency_vm_data_only_us = 4, - .fclk_change_latency_us = 20, - .usr_retraining_latency_us = 2, - .smn_latency_us = 2, - .mall_allocated_for_dcn_mbytes = 64, + .fclk_change_latency_us = 7, + .usr_retraining_latency_us = 0, + .smn_latency_us = 0, + .mall_allocated_for_dcn_mbytes = 32, .urgent_out_of_order_return_per_channel_pixel_only_bytes = 4096, .urgent_out_of_order_return_per_channel_pixel_and_vm_bytes = 4096, .urgent_out_of_order_return_per_channel_vm_only_bytes = 4096,
From: David Howells dhowells@redhat.com
[ Upstream commit 2b5fdc0f5caa505afe34d608e2eefadadf2ee67a ]
Inside the loop in rxrpc_wait_to_be_connected(), call->error is checked to see if the loop should be exited, without first checking the call state. This is probably safe, since the call is dead anyway if call->error is set, but we should wait for the call state to have been set to completion first, lest it cause surprises on the way out.
Fix this by only accessing call->error if the call is complete. We don't actually need to access the error inside the loop as we'll do that after.
This caused the following report:
BUG: KCSAN: data-race in rxrpc_send_data / rxrpc_set_call_completion
write to 0xffff888159cf3c50 of 4 bytes by task 25673 on cpu 1:
 rxrpc_set_call_completion+0x71/0x1c0 net/rxrpc/call_state.c:22
 rxrpc_send_data_packet+0xba9/0x1650 net/rxrpc/output.c:479
 rxrpc_transmit_one+0x1e/0x130 net/rxrpc/output.c:714
 rxrpc_decant_prepared_tx net/rxrpc/call_event.c:326 [inline]
 rxrpc_transmit_some_data+0x496/0x600 net/rxrpc/call_event.c:350
 rxrpc_input_call_event+0x564/0x1220 net/rxrpc/call_event.c:464
 rxrpc_io_thread+0x307/0x1d80 net/rxrpc/io_thread.c:461
 kthread+0x1ac/0x1e0 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308

read to 0xffff888159cf3c50 of 4 bytes by task 25672 on cpu 0:
 rxrpc_send_data+0x29e/0x1950 net/rxrpc/sendmsg.c:296
 rxrpc_do_sendmsg+0xb7a/0xc20 net/rxrpc/sendmsg.c:726
 rxrpc_sendmsg+0x413/0x520 net/rxrpc/af_rxrpc.c:565
 sock_sendmsg_nosec net/socket.c:724 [inline]
 sock_sendmsg net/socket.c:747 [inline]
 ____sys_sendmsg+0x375/0x4c0 net/socket.c:2501
 ___sys_sendmsg net/socket.c:2555 [inline]
 __sys_sendmmsg+0x263/0x500 net/socket.c:2641
 __do_sys_sendmmsg net/socket.c:2670 [inline]
 __se_sys_sendmmsg net/socket.c:2667 [inline]
 __x64_sys_sendmmsg+0x57/0x60 net/socket.c:2667
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
value changed: 0x00000000 -> 0xffffffea
Fixes: 9d35d880e0e4 ("rxrpc: Move client call connection to the I/O thread") Reported-by: syzbot+ebc945fdb4acd72cba78@syzkaller.appspotmail.com Link: https://lore.kernel.org/r/000000000000e7c6d205fa10a3cd@google.com/ Signed-off-by: David Howells dhowells@redhat.com cc: Marc Dionne marc.dionne@auristor.com cc: Dmitry Vyukov dvyukov@google.com cc: "David S. Miller" davem@davemloft.net cc: Eric Dumazet edumazet@google.com cc: Jakub Kicinski kuba@kernel.org cc: Paolo Abeni pabeni@redhat.com cc: linux-afs@lists.infradead.org cc: linux-fsdevel@vger.kernel.org cc: netdev@vger.kernel.org Link: https://lore.kernel.org/r/508133.1682427395@warthog.procyon.org.uk Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/rxrpc/sendmsg.c | 12 ++++-------- 1 file changed, 4 insertions(+), 8 deletions(-)
diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c index da49fcf1c4567..6caa47d352ed6 100644 --- a/net/rxrpc/sendmsg.c +++ b/net/rxrpc/sendmsg.c @@ -50,15 +50,11 @@ static int rxrpc_wait_to_be_connected(struct rxrpc_call *call, long *timeo) _enter("%d", call->debug_id);
if (rxrpc_call_state(call) != RXRPC_CALL_CLIENT_AWAIT_CONN) - return call->error; + goto no_wait;
add_wait_queue_exclusive(&call->waitq, &myself);
for (;;) { - ret = call->error; - if (ret < 0) - break; - switch (call->interruptibility) { case RXRPC_INTERRUPTIBLE: case RXRPC_PREINTERRUPTIBLE: @@ -69,10 +65,9 @@ static int rxrpc_wait_to_be_connected(struct rxrpc_call *call, long *timeo) set_current_state(TASK_UNINTERRUPTIBLE); break; } - if (rxrpc_call_state(call) != RXRPC_CALL_CLIENT_AWAIT_CONN) { - ret = call->error; + + if (rxrpc_call_state(call) != RXRPC_CALL_CLIENT_AWAIT_CONN) break; - } if ((call->interruptibility == RXRPC_INTERRUPTIBLE || call->interruptibility == RXRPC_PREINTERRUPTIBLE) && signal_pending(current)) { @@ -85,6 +80,7 @@ static int rxrpc_wait_to_be_connected(struct rxrpc_call *call, long *timeo) remove_wait_queue(&call->waitq, &myself); __set_current_state(TASK_RUNNING);
+no_wait: if (ret == 0 && rxrpc_call_is_complete(call)) ret = call->error;
From: John Hickey jjh@daedalian.us
[ Upstream commit c23ae5091a8b3e50fe755257df020907e7c029bb ]
Commit 4fe815850bdc ("ixgbe: let the xdpdrv work with more than 64 cpus") adds support to allow XDP programs to run on systems with more than 64 CPUs by locking the XDP TX rings and indexing them using cpu % 64 (IXGBE_MAX_XDP_QS).
Upon trying out this patch on a system with more than 64 cores, the kernel panicked with an array-index-out-of-bounds at the return in ixgbe_determine_xdp_ring in ixgbe.h, which means ixgbe_determine_xdp_q_idx was just returning the cpu instead of cpu % IXGBE_MAX_XDP_QS. An example splat:
==========================================================================
UBSAN: array-index-out-of-bounds in
/var/lib/dkms/ixgbe/5.18.6+focal-1/build/src/ixgbe.h:1147:26
index 65 is out of range for type 'ixgbe_ring *[64]'
==========================================================================
BUG: kernel NULL pointer dereference, address: 0000000000000058
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
PGD 0 P4D 0
Oops: 0000 [#1] SMP NOPTI
CPU: 65 PID: 408 Comm: ksoftirqd/65 Tainted: G IOE 5.15.0-48-generic #54~20.04.1-Ubuntu
Hardware name: Dell Inc. PowerEdge R640/0W23H8, BIOS 2.5.4 01/13/2020
RIP: 0010:ixgbe_xmit_xdp_ring+0x1b/0x1c0 [ixgbe]
Code: 3b 52 d4 cf e9 42 f2 ff ff 66 0f 1f 44 00 00 0f 1f 44 00 00 55 b9 00 00 00 00 48 89 e5 41 57 41 56 41 55 41 54 53 48 83 ec 08 <44> 0f b7 47 58 0f b7 47 5a 0f b7 57 54 44 0f b7 76 08 66 41 39 c0
RSP: 0018:ffffbc3fcd88fcb0 EFLAGS: 00010282
RAX: ffff92a253260980 RBX: ffffbc3fe68b00a0 RCX: 0000000000000000
RDX: ffff928b5f659000 RSI: ffff928b5f659000 RDI: 0000000000000000
RBP: ffffbc3fcd88fce0 R08: ffff92b9dfc20580 R09: 0000000000000001
R10: 3d3d3d3d3d3d3d3d R11: 3d3d3d3d3d3d3d3d R12: 0000000000000000
R13: ffff928b2f0fa8c0 R14: ffff928b9be20050 R15: 000000000000003c
FS:  0000000000000000(0000) GS:ffff92b9dfc00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000058 CR3: 000000011dd6a002 CR4: 00000000007706e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
PKRU: 55555554
Call Trace:
 <TASK>
 ixgbe_poll+0x103e/0x1280 [ixgbe]
 ? sched_clock_cpu+0x12/0xe0
 __napi_poll+0x30/0x160
 net_rx_action+0x11c/0x270
 __do_softirq+0xda/0x2ee
 run_ksoftirqd+0x2f/0x50
 smpboot_thread_fn+0xb7/0x150
 ? sort_range+0x30/0x30
 kthread+0x127/0x150
 ? set_kthread_struct+0x50/0x50
 ret_from_fork+0x1f/0x30
 </TASK>
I think this is how it happens:
Upon loading the first XDP program on a system with more than 64 CPUs, ixgbe_xdp_locking_key is incremented in ixgbe_xdp_setup. However, immediately after this, the rings are reconfigured by ixgbe_setup_tc. ixgbe_setup_tc calls ixgbe_clear_interrupt_scheme which calls ixgbe_free_q_vectors which calls ixgbe_free_q_vector in a loop. ixgbe_free_q_vector decrements ixgbe_xdp_locking_key once per call if it is non-zero. Commenting out the decrement in ixgbe_free_q_vector stopped my system from panicking.
I suspect that, to make the original patch work, I would need to load an XDP program and then replace it in order to get ixgbe_xdp_locking_key back above 0, since ixgbe_setup_tc is only called when transitioning between XDP and non-XDP ring configurations, while ixgbe_xdp_locking_key is incremented every time ixgbe_xdp_setup is called.
Also, ixgbe_setup_tc can be called via ethtool --set-channels, so this becomes another path to decrement ixgbe_xdp_locking_key to 0 on systems with more than 64 CPUs.
Since ixgbe_xdp_locking_key only protects the XDP_TX path and is tied to the number of CPUs present, there is no reason to disable it upon unloading an XDP program. To avoid confusion, I have moved enabling ixgbe_xdp_locking_key into ixgbe_sw_init, which is part of the probe path.
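As a userspace model only (not the ixgbe code), the sketch below shows why the static key matters on a system with more than 64 CPUs: with the key enabled the ring index wraps with cpu % IXGBE_MAX_XDP_QS, and with it wrongly disabled the raw cpu id overruns the 64-entry ring array.

/*
 * Userspace model of the XDP TX ring selection. MAX_XDP_QS mirrors
 * IXGBE_MAX_XDP_QS; everything else is made up for the example.
 */
#include <stdbool.h>
#include <stdio.h>

#define MAX_XDP_QS 64

static unsigned int xdp_ring_index(unsigned int cpu, bool locking_key)
{
	if (locking_key)
		return cpu % MAX_XDP_QS;	/* shared rings, lock on xmit */
	return cpu;				/* one ring per cpu */
}

int main(void)
{
	unsigned int cpu = 65;

	/* key decremented back to 0 by ixgbe_setup_tc(): index 65 of 64 rings */
	printf("broken index: %u\n", xdp_ring_index(cpu, false));
	/* key enabled once in probe (the fix): index stays in range */
	printf("fixed index:  %u\n", xdp_ring_index(cpu, true));
	return 0;
}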
Fixes: 4fe815850bdc ("ixgbe: let the xdpdrv work with more than 64 cpus") Signed-off-by: John Hickey jjh@daedalian.us Reviewed-by: Maciej Fijalkowski maciej.fijalkowski@intel.com Tested-by: Chandan Kumar Rout chandanx.rout@intel.com (A Contingent Worker at Intel) Signed-off-by: Tony Nguyen anthony.l.nguyen@intel.com Link: https://lore.kernel.org/r/20230425170308.2522429-1-anthony.l.nguyen@intel.co... Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c | 3 --- drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 6 ++++-- 2 files changed, 4 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c index f8156fe4b1dc4..0ee943db3dc92 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c @@ -1035,9 +1035,6 @@ static void ixgbe_free_q_vector(struct ixgbe_adapter *adapter, int v_idx) adapter->q_vector[v_idx] = NULL; __netif_napi_del(&q_vector->napi);
- if (static_key_enabled(&ixgbe_xdp_locking_key)) - static_branch_dec(&ixgbe_xdp_locking_key); - /* * after a call to __netif_napi_del() napi may still be used and * ixgbe_get_stats64() might access the rings on this vector, diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c index 4507fba8747a7..03e583cf48153 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c @@ -6495,6 +6495,10 @@ static int ixgbe_sw_init(struct ixgbe_adapter *adapter, set_bit(0, adapter->fwd_bitmask); set_bit(__IXGBE_DOWN, &adapter->state);
+ /* enable locking for XDP_TX if we have more CPUs than queues */ + if (nr_cpu_ids > IXGBE_MAX_XDP_QS) + static_branch_enable(&ixgbe_xdp_locking_key); + return 0; }
@@ -10288,8 +10292,6 @@ static int ixgbe_xdp_setup(struct net_device *dev, struct bpf_prog *prog) */ if (nr_cpu_ids > IXGBE_MAX_XDP_QS * 2) return -ENOMEM; - else if (nr_cpu_ids > IXGBE_MAX_XDP_QS) - static_branch_inc(&ixgbe_xdp_locking_key);
old_prog = xchg(&adapter->xdp_prog, prog); need_reset = (!!prog != !!old_prog);
From: Geetha sowjanya gakula@marvell.com
[ Upstream commit c222b292a3568754828ffd30338d2909b14ed160 ]
For each lmac port, MCS has two MCS_TOP_SLAVE_CHANNEL_CONFIGX registers. On CN10KB both registers need to be configured for the port-level MCS bypass to work. This patch also sets the bits of the flowid/secy entries reserved for default bypass in the allocation bitmaps so that these entries can be shown in debugfs.
Fixes: bd69476e86fc ("octeontx2-af: cn10k: mcs: Install a default TCAM for normal traffic") Signed-off-by: Geetha sowjanya gakula@marvell.com Signed-off-by: Sunil Goutham sgoutham@marvell.com Reviewed-by: Leon Romanovsky leonro@nvidia.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/marvell/octeontx2/af/mcs.c | 11 ++++++++++- .../net/ethernet/marvell/octeontx2/af/rvu_debugfs.c | 5 +++-- 2 files changed, 13 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs.c b/drivers/net/ethernet/marvell/octeontx2/af/mcs.c index f68a6a0e3aa41..492baa0b594ce 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mcs.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs.c @@ -494,6 +494,9 @@ int mcs_install_flowid_bypass_entry(struct mcs *mcs)
/* Flow entry */ flow_id = mcs->hw->tcam_entries - MCS_RSRC_RSVD_CNT; + __set_bit(flow_id, mcs->rx.flow_ids.bmap); + __set_bit(flow_id, mcs->tx.flow_ids.bmap); + for (reg_id = 0; reg_id < 4; reg_id++) { reg = MCSX_CPM_RX_SLAVE_FLOWID_TCAM_MASKX(reg_id, flow_id); mcs_reg_write(mcs, reg, GENMASK_ULL(63, 0)); @@ -504,6 +507,8 @@ int mcs_install_flowid_bypass_entry(struct mcs *mcs) } /* secy */ secy_id = mcs->hw->secy_entries - MCS_RSRC_RSVD_CNT; + __set_bit(secy_id, mcs->rx.secy.bmap); + __set_bit(secy_id, mcs->tx.secy.bmap);
/* Set validate frames to NULL and enable control port */ plcy = 0x7ull; @@ -528,6 +533,7 @@ int mcs_install_flowid_bypass_entry(struct mcs *mcs) /* Enable Flowid entry */ mcs_ena_dis_flowid_entry(mcs, flow_id, MCS_RX, true); mcs_ena_dis_flowid_entry(mcs, flow_id, MCS_TX, true); + return 0; }
@@ -1325,8 +1331,11 @@ void mcs_reset_port(struct mcs *mcs, u8 port_id, u8 reset) void mcs_set_lmac_mode(struct mcs *mcs, int lmac_id, u8 mode) { u64 reg; + int id = lmac_id * 2;
- reg = MCSX_MCS_TOP_SLAVE_CHANNEL_CFG(lmac_id * 2); + reg = MCSX_MCS_TOP_SLAVE_CHANNEL_CFG(id); + mcs_reg_write(mcs, reg, (u64)mode); + reg = MCSX_MCS_TOP_SLAVE_CHANNEL_CFG((id + 1)); mcs_reg_write(mcs, reg, (u64)mode); }
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c index 26cfa501f1a11..9533b1d929604 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c @@ -497,8 +497,9 @@ static int rvu_dbg_mcs_rx_secy_stats_display(struct seq_file *filp, void *unused stats.octet_validated_cnt); seq_printf(filp, "secy%d: Pkts on disable port: %lld\n", secy_id, stats.pkt_port_disabled_cnt); - seq_printf(filp, "secy%d: Octets validated: %lld\n", secy_id, stats.pkt_badtag_cnt); - seq_printf(filp, "secy%d: Octets validated: %lld\n", secy_id, stats.pkt_nosa_cnt); + seq_printf(filp, "secy%d: Pkts with badtag: %lld\n", secy_id, stats.pkt_badtag_cnt); + seq_printf(filp, "secy%d: Pkts with no SA(sectag.tci.c=0): %lld\n", secy_id, + stats.pkt_nosa_cnt); seq_printf(filp, "secy%d: Pkts with nosaerror: %lld\n", secy_id, stats.pkt_nosaerror_cnt); seq_printf(filp, "secy%d: Tagged ctrl pkts: %lld\n", secy_id,
From: Subbaraya Sundeep sbhatta@marvell.com
[ Upstream commit b51612198603fce33d6cf57b4864e3018a1cd9b8 ]
As per a hardware erratum on CN10KB, all four TCAM_DATA and TCAM_MASK registers have to be written at once, otherwise writes to the individual registers will fail. Hence write to all TCAM_DATA registers first and then to all TCAM_MASK registers.
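A minimal userspace model of the resulting write order, with placeholder register names rather than the mcs driver's accessors: every TCAM_DATA word lands before any TCAM_MASK word, instead of interleaving data[i]/mask[i] per register.

/* Userspace model of the errata workaround; names are illustrative. */
#include <stdio.h>

static void reg_write(const char *name, int idx, unsigned long long val)
{
	printf("write %s[%d] = %#llx\n", name, idx, val);
}

static void flowid_entry_write(const unsigned long long *data,
			       const unsigned long long *mask)
{
	int i;

	for (i = 0; i < 4; i++)		/* all DATA registers first */
		reg_write("TCAM_DATA", i, data[i]);
	for (i = 0; i < 4; i++)		/* then all MASK registers */
		reg_write("TCAM_MASK", i, mask[i]);
}

int main(void)
{
	unsigned long long data[4] = { 0x1, 0x2, 0x3, 0x4 };
	unsigned long long mask[4] = { ~0ULL, ~0ULL, ~0ULL, ~0ULL };

	flowid_entry_write(data, mask);
	return 0;
}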
Fixes: cfc14181d497 ("octeontx2-af: cn10k: mcs: Manage the MCS block hardware resources") Signed-off-by: Subbaraya Sundeep sbhatta@marvell.com Signed-off-by: Sunil Kovvuri Goutham sgoutham@marvell.com Signed-off-by: Geetha sowjanya gakula@marvell.com Reviewed-by: Leon Romanovsky leonro@nvidia.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/marvell/octeontx2/af/mcs.c | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs.c b/drivers/net/ethernet/marvell/octeontx2/af/mcs.c index 492baa0b594ce..148417d633a56 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mcs.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs.c @@ -473,6 +473,8 @@ void mcs_flowid_entry_write(struct mcs *mcs, u64 *data, u64 *mask, int flow_id, for (reg_id = 0; reg_id < 4; reg_id++) { reg = MCSX_CPM_RX_SLAVE_FLOWID_TCAM_DATAX(reg_id, flow_id); mcs_reg_write(mcs, reg, data[reg_id]); + } + for (reg_id = 0; reg_id < 4; reg_id++) { reg = MCSX_CPM_RX_SLAVE_FLOWID_TCAM_MASKX(reg_id, flow_id); mcs_reg_write(mcs, reg, mask[reg_id]); } @@ -480,6 +482,8 @@ void mcs_flowid_entry_write(struct mcs *mcs, u64 *data, u64 *mask, int flow_id, for (reg_id = 0; reg_id < 4; reg_id++) { reg = MCSX_CPM_TX_SLAVE_FLOWID_TCAM_DATAX(reg_id, flow_id); mcs_reg_write(mcs, reg, data[reg_id]); + } + for (reg_id = 0; reg_id < 4; reg_id++) { reg = MCSX_CPM_TX_SLAVE_FLOWID_TCAM_MASKX(reg_id, flow_id); mcs_reg_write(mcs, reg, mask[reg_id]); }
From: Geetha sowjanya gakula@marvell.com
[ Upstream commit 65cdc2b637a5749c7dec0ce14fe2c48f1f91f671 ]
When the PTP timestamp is enabled in RPM, RPM appends an 8-byte timestamp header to all RX traffic. MCS needs to skip this 8-byte header while parsing the packet header so that the correct TCAM key is created for the lookup. This patch fixes the MCS parser configuration to skip this 8-byte header for PTP packets.
Fixes: ca7f49ff8846 ("octeontx2-af: cn10k: Introduce driver for macsec block.") Signed-off-by: Sunil Goutham sgoutham@marvell.com Signed-off-by: Geetha sowjanya gakula@marvell.com Reviewed-by: Leon Romanovsky leonro@nvidia.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- .../ethernet/marvell/octeontx2/af/mcs_reg.h | 1 + .../marvell/octeontx2/af/mcs_rvu_if.c | 37 +++++++++++++++++++ .../net/ethernet/marvell/octeontx2/af/rvu.h | 1 + .../ethernet/marvell/octeontx2/af/rvu_cgx.c | 2 + 4 files changed, 41 insertions(+)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h index c95a8b8f5eaf7..7427e3b1490f4 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h @@ -97,6 +97,7 @@ #define MCSX_PEX_TX_SLAVE_VLAN_CFGX(a) (0x46f8ull + (a) * 0x8ull) #define MCSX_PEX_TX_SLAVE_CUSTOM_TAG_REL_MODE_SEL(a) (0x788ull + (a) * 0x8ull) #define MCSX_PEX_TX_SLAVE_PORT_CONFIG(a) (0x4738ull + (a) * 0x8ull) +#define MCSX_PEX_RX_SLAVE_PORT_CFGX(a) (0x3b98ull + (a) * 0x8ull) #define MCSX_PEX_RX_SLAVE_RULE_ETYPE_CFGX(a) ({ \ u64 offset; \ \ diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c b/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c index eb25e458266ca..dfd23580e3b8e 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c @@ -11,6 +11,7 @@
#include "mcs.h" #include "rvu.h" +#include "mcs_reg.h" #include "lmac_common.h"
#define M(_name, _id, _fn_name, _req_type, _rsp_type) \ @@ -32,6 +33,42 @@ static struct _req_type __maybe_unused \ MBOX_UP_MCS_MESSAGES #undef M
+void rvu_mcs_ptp_cfg(struct rvu *rvu, u8 rpm_id, u8 lmac_id, bool ena) +{ + struct mcs *mcs; + u64 cfg; + u8 port; + + if (!rvu->mcs_blk_cnt) + return; + + /* When ptp is enabled, RPM appends 8B header for all + * RX packets. MCS PEX need to configure to skip 8B + * during packet parsing. + */ + + /* CNF10K-B */ + if (rvu->mcs_blk_cnt > 1) { + mcs = mcs_get_pdata(rpm_id); + cfg = mcs_reg_read(mcs, MCSX_PEX_RX_SLAVE_PEX_CONFIGURATION); + if (ena) + cfg |= BIT_ULL(lmac_id); + else + cfg &= ~BIT_ULL(lmac_id); + mcs_reg_write(mcs, MCSX_PEX_RX_SLAVE_PEX_CONFIGURATION, cfg); + return; + } + /* CN10KB */ + mcs = mcs_get_pdata(0); + port = (rpm_id * rvu->hw->lmac_per_cgx) + lmac_id; + cfg = mcs_reg_read(mcs, MCSX_PEX_RX_SLAVE_PORT_CFGX(port)); + if (ena) + cfg |= BIT_ULL(0); + else + cfg &= ~BIT_ULL(0); + mcs_reg_write(mcs, MCSX_PEX_RX_SLAVE_PORT_CFGX(port), cfg); +} + int rvu_mbox_handler_mcs_set_lmac_mode(struct rvu *rvu, struct mcs_set_lmac_mode *req, struct msg_rsp *rsp) diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h index f6c45cf27caf4..f0502556d127f 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h @@ -899,6 +899,7 @@ int rvu_get_hwvf(struct rvu *rvu, int pcifunc); /* CN10K MCS */ int rvu_mcs_init(struct rvu *rvu); int rvu_mcs_flr_handler(struct rvu *rvu, u16 pcifunc); +void rvu_mcs_ptp_cfg(struct rvu *rvu, u8 rpm_id, u8 lmac_id, bool ena); void rvu_mcs_exit(struct rvu *rvu);
#endif /* RVU_H */ diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c index 438b212fb54a7..83b342fa8d753 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c @@ -773,6 +773,8 @@ static int rvu_cgx_ptp_rx_cfg(struct rvu *rvu, u16 pcifunc, bool enable) /* This flag is required to clean up CGX conf if app gets killed */ pfvf->hw_rx_tstamp_en = enable;
+ /* Inform MCS about 8B RX header */ + rvu_mcs_ptp_cfg(rvu, cgx_id, lmac_id, enable); return 0; }
From: Geetha sowjanya gakula@marvell.com
[ Upstream commit b8aebeaaf9ffb1e99c642eb3751e28981f9be475 ]
On CN10KB, the MCS IP vector number and the BBE and PAB interrupt masks changed to support more block-level interrupts. To address these changes, this patch fixes the BBE and PAB interrupt handlers.
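For illustration only, a userspace model with made-up names rather than the driver's types: the shared interrupt path dispatches through a per-silicon ops table, which is how CN10KB and CNF10KB end up with different BBE/PAB handlers behind the same ISR.

#include <stdio.h>

struct mcs;

struct mcs_ops {
	void (*bbe_intr_handler)(struct mcs *mcs, unsigned long long intr);
	void (*pab_intr_handler)(struct mcs *mcs, unsigned long long intr);
};

struct mcs {
	const struct mcs_ops *ops;
	const char *name;
};

static void cn10kb_bbe(struct mcs *mcs, unsigned long long intr)
{
	printf("%s: BBE intr %#llx handled per lmac\n", mcs->name, intr);
}

static void cn10kb_pab(struct mcs *mcs, unsigned long long intr)
{
	printf("%s: PAB intr %#llx handled per lmac\n", mcs->name, intr);
}

static const struct mcs_ops cn10kb_ops = {
	.bbe_intr_handler = cn10kb_bbe,
	.pab_intr_handler = cn10kb_pab,
};

/* Shared ISR path: no chip checks, just dispatch through the ops table. */
static void mcs_ip_intr(struct mcs *mcs, unsigned long long bbe, unsigned long long pab)
{
	mcs->ops->bbe_intr_handler(mcs, bbe);
	mcs->ops->pab_intr_handler(mcs, pab);
}

int main(void)
{
	struct mcs mcs = { .ops = &cn10kb_ops, .name = "cn10kb" };

	mcs_ip_intr(&mcs, 0x6, 0x1);
	return 0;
}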
Fixes: 6c635f78c474 ("octeontx2-af: cn10k: mcs: Handle MCS block interrupts") Signed-off-by: Sunil Goutham sgoutham@marvell.com Signed-off-by: Geetha sowjanya gakula@marvell.com Reviewed-by: Leon Romanovsky leonro@nvidia.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- .../net/ethernet/marvell/octeontx2/af/mcs.c | 95 ++++++++----------- .../net/ethernet/marvell/octeontx2/af/mcs.h | 26 +++-- .../marvell/octeontx2/af/mcs_cnf10kb.c | 63 ++++++++++++ .../ethernet/marvell/octeontx2/af/mcs_reg.h | 5 +- 4 files changed, 119 insertions(+), 70 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs.c b/drivers/net/ethernet/marvell/octeontx2/af/mcs.c index 148417d633a56..c43f19dfbd744 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mcs.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs.c @@ -936,60 +936,42 @@ static void mcs_tx_misc_intr_handler(struct mcs *mcs, u64 intr) mcs_add_intr_wq_entry(mcs, &event); }
-static void mcs_bbe_intr_handler(struct mcs *mcs, u64 intr, enum mcs_direction dir) +void cn10kb_mcs_bbe_intr_handler(struct mcs *mcs, u64 intr, + enum mcs_direction dir) { - struct mcs_intr_event event = { 0 }; - int i; + u64 val, reg; + int lmac;
- if (!(intr & MCS_BBE_INT_MASK)) + if (!(intr & 0x6ULL)) return;
- event.mcs_id = mcs->mcs_id; - event.pcifunc = mcs->pf_map[0]; + if (intr & BIT_ULL(1)) + reg = (dir == MCS_RX) ? MCSX_BBE_RX_SLAVE_DFIFO_OVERFLOW_0 : + MCSX_BBE_TX_SLAVE_DFIFO_OVERFLOW_0; + else + reg = (dir == MCS_RX) ? MCSX_BBE_RX_SLAVE_PLFIFO_OVERFLOW_0 : + MCSX_BBE_TX_SLAVE_PLFIFO_OVERFLOW_0; + val = mcs_reg_read(mcs, reg);
- for (i = 0; i < MCS_MAX_BBE_INT; i++) { - if (!(intr & BIT_ULL(i))) + /* policy/data over flow occurred */ + for (lmac = 0; lmac < mcs->hw->lmac_cnt; lmac++) { + if (!(val & BIT_ULL(lmac))) continue; - - /* Lower nibble denotes data fifo overflow interrupts and - * upper nibble indicates policy fifo overflow interrupts. - */ - if (intr & 0xFULL) - event.intr_mask = (dir == MCS_RX) ? - MCS_BBE_RX_DFIFO_OVERFLOW_INT : - MCS_BBE_TX_DFIFO_OVERFLOW_INT; - else - event.intr_mask = (dir == MCS_RX) ? - MCS_BBE_RX_PLFIFO_OVERFLOW_INT : - MCS_BBE_TX_PLFIFO_OVERFLOW_INT; - - /* Notify the lmac_id info which ran into BBE fatal error */ - event.lmac_id = i & 0x3ULL; - mcs_add_intr_wq_entry(mcs, &event); + dev_warn(mcs->dev, "BEE:Policy or data overflow occurred on lmac:%d\n", lmac); } }
-static void mcs_pab_intr_handler(struct mcs *mcs, u64 intr, enum mcs_direction dir) +void cn10kb_mcs_pab_intr_handler(struct mcs *mcs, u64 intr, + enum mcs_direction dir) { - struct mcs_intr_event event = { 0 }; - int i; + int lmac;
- if (!(intr & MCS_PAB_INT_MASK)) + if (!(intr & 0xFFFFFULL)) return;
- event.mcs_id = mcs->mcs_id; - event.pcifunc = mcs->pf_map[0]; - - for (i = 0; i < MCS_MAX_PAB_INT; i++) { - if (!(intr & BIT_ULL(i))) - continue; - - event.intr_mask = (dir == MCS_RX) ? MCS_PAB_RX_CHAN_OVERFLOW_INT : - MCS_PAB_TX_CHAN_OVERFLOW_INT; - - /* Notify the lmac_id info which ran into PAB fatal error */ - event.lmac_id = i; - mcs_add_intr_wq_entry(mcs, &event); + for (lmac = 0; lmac < mcs->hw->lmac_cnt; lmac++) { + if (intr & BIT_ULL(lmac)) + dev_warn(mcs->dev, "PAB: overflow occurred on lmac:%d\n", lmac); } }
@@ -998,9 +980,8 @@ static irqreturn_t mcs_ip_intr_handler(int irq, void *mcs_irq) struct mcs *mcs = (struct mcs *)mcs_irq; u64 intr, cpm_intr, bbe_intr, pab_intr;
- /* Disable and clear the interrupt */ + /* Disable the interrupt */ mcs_reg_write(mcs, MCSX_IP_INT_ENA_W1C, BIT_ULL(0)); - mcs_reg_write(mcs, MCSX_IP_INT, BIT_ULL(0));
/* Check which block has interrupt*/ intr = mcs_reg_read(mcs, MCSX_TOP_SLAVE_INT_SUM); @@ -1047,7 +1028,7 @@ static irqreturn_t mcs_ip_intr_handler(int irq, void *mcs_irq) /* BBE RX */ if (intr & MCS_BBE_RX_INT_ENA) { bbe_intr = mcs_reg_read(mcs, MCSX_BBE_RX_SLAVE_BBE_INT); - mcs_bbe_intr_handler(mcs, bbe_intr, MCS_RX); + mcs->mcs_ops->mcs_bbe_intr_handler(mcs, bbe_intr, MCS_RX);
/* Clear the interrupt */ mcs_reg_write(mcs, MCSX_BBE_RX_SLAVE_BBE_INT_INTR_RW, 0); @@ -1057,7 +1038,7 @@ static irqreturn_t mcs_ip_intr_handler(int irq, void *mcs_irq) /* BBE TX */ if (intr & MCS_BBE_TX_INT_ENA) { bbe_intr = mcs_reg_read(mcs, MCSX_BBE_TX_SLAVE_BBE_INT); - mcs_bbe_intr_handler(mcs, bbe_intr, MCS_TX); + mcs->mcs_ops->mcs_bbe_intr_handler(mcs, bbe_intr, MCS_TX);
/* Clear the interrupt */ mcs_reg_write(mcs, MCSX_BBE_TX_SLAVE_BBE_INT_INTR_RW, 0); @@ -1067,7 +1048,7 @@ static irqreturn_t mcs_ip_intr_handler(int irq, void *mcs_irq) /* PAB RX */ if (intr & MCS_PAB_RX_INT_ENA) { pab_intr = mcs_reg_read(mcs, MCSX_PAB_RX_SLAVE_PAB_INT); - mcs_pab_intr_handler(mcs, pab_intr, MCS_RX); + mcs->mcs_ops->mcs_pab_intr_handler(mcs, pab_intr, MCS_RX);
/* Clear the interrupt */ mcs_reg_write(mcs, MCSX_PAB_RX_SLAVE_PAB_INT_INTR_RW, 0); @@ -1077,14 +1058,15 @@ static irqreturn_t mcs_ip_intr_handler(int irq, void *mcs_irq) /* PAB TX */ if (intr & MCS_PAB_TX_INT_ENA) { pab_intr = mcs_reg_read(mcs, MCSX_PAB_TX_SLAVE_PAB_INT); - mcs_pab_intr_handler(mcs, pab_intr, MCS_TX); + mcs->mcs_ops->mcs_pab_intr_handler(mcs, pab_intr, MCS_TX);
/* Clear the interrupt */ mcs_reg_write(mcs, MCSX_PAB_TX_SLAVE_PAB_INT_INTR_RW, 0); mcs_reg_write(mcs, MCSX_PAB_TX_SLAVE_PAB_INT, pab_intr); }
- /* Enable the interrupt */ + /* Clear and enable the interrupt */ + mcs_reg_write(mcs, MCSX_IP_INT, BIT_ULL(0)); mcs_reg_write(mcs, MCSX_IP_INT_ENA_W1S, BIT_ULL(0));
return IRQ_HANDLED; @@ -1166,7 +1148,7 @@ static int mcs_register_interrupts(struct mcs *mcs) return ret; }
- ret = request_irq(pci_irq_vector(mcs->pdev, MCS_INT_VEC_IP), + ret = request_irq(pci_irq_vector(mcs->pdev, mcs->hw->ip_vec), mcs_ip_intr_handler, 0, "MCS_IP", mcs); if (ret) { dev_err(mcs->dev, "MCS IP irq registration failed\n"); @@ -1185,11 +1167,11 @@ static int mcs_register_interrupts(struct mcs *mcs) mcs_reg_write(mcs, MCSX_CPM_TX_SLAVE_TX_INT_ENB, 0x7ULL); mcs_reg_write(mcs, MCSX_CPM_RX_SLAVE_RX_INT_ENB, 0x7FULL);
- mcs_reg_write(mcs, MCSX_BBE_RX_SLAVE_BBE_INT_ENB, 0xff); - mcs_reg_write(mcs, MCSX_BBE_TX_SLAVE_BBE_INT_ENB, 0xff); + mcs_reg_write(mcs, MCSX_BBE_RX_SLAVE_BBE_INT_ENB, 0xFFULL); + mcs_reg_write(mcs, MCSX_BBE_TX_SLAVE_BBE_INT_ENB, 0xFFULL);
- mcs_reg_write(mcs, MCSX_PAB_RX_SLAVE_PAB_INT_ENB, 0xff); - mcs_reg_write(mcs, MCSX_PAB_TX_SLAVE_PAB_INT_ENB, 0xff); + mcs_reg_write(mcs, MCSX_PAB_RX_SLAVE_PAB_INT_ENB, 0xFFFFFULL); + mcs_reg_write(mcs, MCSX_PAB_TX_SLAVE_PAB_INT_ENB, 0xFFFFFULL);
mcs->tx_sa_active = alloc_mem(mcs, mcs->hw->sc_entries); if (!mcs->tx_sa_active) { @@ -1200,7 +1182,7 @@ static int mcs_register_interrupts(struct mcs *mcs) return ret;
free_irq: - free_irq(pci_irq_vector(mcs->pdev, MCS_INT_VEC_IP), mcs); + free_irq(pci_irq_vector(mcs->pdev, mcs->hw->ip_vec), mcs); exit: pci_free_irq_vectors(mcs->pdev); mcs->num_vec = 0; @@ -1497,6 +1479,7 @@ void cn10kb_mcs_set_hw_capabilities(struct mcs *mcs) hw->lmac_cnt = 20; /* lmacs/ports per mcs block */ hw->mcs_x2p_intf = 5; /* x2p clabration intf */ hw->mcs_blks = 1; /* MCS blocks */ + hw->ip_vec = MCS_CN10KB_INT_VEC_IP; /* IP vector */ }
static struct mcs_ops cn10kb_mcs_ops = { @@ -1505,6 +1488,8 @@ static struct mcs_ops cn10kb_mcs_ops = { .mcs_tx_sa_mem_map_write = cn10kb_mcs_tx_sa_mem_map_write, .mcs_rx_sa_mem_map_write = cn10kb_mcs_rx_sa_mem_map_write, .mcs_flowid_secy_map = cn10kb_mcs_flowid_secy_map, + .mcs_bbe_intr_handler = cn10kb_mcs_bbe_intr_handler, + .mcs_pab_intr_handler = cn10kb_mcs_pab_intr_handler, };
static int mcs_probe(struct pci_dev *pdev, const struct pci_device_id *id) @@ -1605,7 +1590,7 @@ static void mcs_remove(struct pci_dev *pdev)
/* Set MCS to external bypass */ mcs_set_external_bypass(mcs, true); - free_irq(pci_irq_vector(pdev, MCS_INT_VEC_IP), mcs); + free_irq(pci_irq_vector(pdev, mcs->hw->ip_vec), mcs); pci_free_irq_vectors(pdev); pci_release_regions(pdev); pci_disable_device(pdev); diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs.h b/drivers/net/ethernet/marvell/octeontx2/af/mcs.h index 64dc2b80e15dd..0f89dcb764654 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mcs.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs.h @@ -43,24 +43,15 @@ /* Reserved resources for default bypass entry */ #define MCS_RSRC_RSVD_CNT 1
-/* MCS Interrupt Vector Enumeration */ -enum mcs_int_vec_e { - MCS_INT_VEC_MIL_RX_GBL = 0x0, - MCS_INT_VEC_MIL_RX_LMACX = 0x1, - MCS_INT_VEC_MIL_TX_LMACX = 0x5, - MCS_INT_VEC_HIL_RX_GBL = 0x9, - MCS_INT_VEC_HIL_RX_LMACX = 0xa, - MCS_INT_VEC_HIL_TX_GBL = 0xe, - MCS_INT_VEC_HIL_TX_LMACX = 0xf, - MCS_INT_VEC_IP = 0x13, - MCS_INT_VEC_CNT = 0x14, -}; +/* MCS Interrupt Vector */ +#define MCS_CNF10KB_INT_VEC_IP 0x13 +#define MCS_CN10KB_INT_VEC_IP 0x53
#define MCS_MAX_BBE_INT 8ULL #define MCS_BBE_INT_MASK 0xFFULL
-#define MCS_MAX_PAB_INT 4ULL -#define MCS_PAB_INT_MASK 0xFULL +#define MCS_MAX_PAB_INT 8ULL +#define MCS_PAB_INT_MASK 0xFULL
#define MCS_BBE_RX_INT_ENA BIT_ULL(0) #define MCS_BBE_TX_INT_ENA BIT_ULL(1) @@ -137,6 +128,7 @@ struct hwinfo { u8 lmac_cnt; u8 mcs_blks; unsigned long lmac_bmap; /* bitmap of enabled mcs lmac */ + u16 ip_vec; };
struct mcs { @@ -165,6 +157,8 @@ struct mcs_ops { void (*mcs_tx_sa_mem_map_write)(struct mcs *mcs, struct mcs_tx_sc_sa_map *map); void (*mcs_rx_sa_mem_map_write)(struct mcs *mcs, struct mcs_rx_sc_sa_map *map); void (*mcs_flowid_secy_map)(struct mcs *mcs, struct secy_mem_map *map, int dir); + void (*mcs_bbe_intr_handler)(struct mcs *mcs, u64 intr, enum mcs_direction dir); + void (*mcs_pab_intr_handler)(struct mcs *mcs, u64 intr, enum mcs_direction dir); };
extern struct pci_driver mcs_driver; @@ -219,6 +213,8 @@ void cn10kb_mcs_tx_sa_mem_map_write(struct mcs *mcs, struct mcs_tx_sc_sa_map *ma void cn10kb_mcs_flowid_secy_map(struct mcs *mcs, struct secy_mem_map *map, int dir); void cn10kb_mcs_rx_sa_mem_map_write(struct mcs *mcs, struct mcs_rx_sc_sa_map *map); void cn10kb_mcs_parser_cfg(struct mcs *mcs); +void cn10kb_mcs_pab_intr_handler(struct mcs *mcs, u64 intr, enum mcs_direction dir); +void cn10kb_mcs_bbe_intr_handler(struct mcs *mcs, u64 intr, enum mcs_direction dir);
/* CNF10K-B APIs */ struct mcs_ops *cnf10kb_get_mac_ops(void); @@ -229,6 +225,8 @@ void cnf10kb_mcs_rx_sa_mem_map_write(struct mcs *mcs, struct mcs_rx_sc_sa_map *m void cnf10kb_mcs_parser_cfg(struct mcs *mcs); void cnf10kb_mcs_tx_pn_thresh_reached_handler(struct mcs *mcs); void cnf10kb_mcs_tx_pn_wrapped_handler(struct mcs *mcs); +void cnf10kb_mcs_bbe_intr_handler(struct mcs *mcs, u64 intr, enum mcs_direction dir); +void cnf10kb_mcs_pab_intr_handler(struct mcs *mcs, u64 intr, enum mcs_direction dir);
/* Stats APIs */ void mcs_get_sc_stats(struct mcs *mcs, struct mcs_sc_stats *stats, int id, int dir); diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs_cnf10kb.c b/drivers/net/ethernet/marvell/octeontx2/af/mcs_cnf10kb.c index 7b62054144286..9f9b904ab2cd0 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mcs_cnf10kb.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs_cnf10kb.c @@ -13,6 +13,8 @@ static struct mcs_ops cnf10kb_mcs_ops = { .mcs_tx_sa_mem_map_write = cnf10kb_mcs_tx_sa_mem_map_write, .mcs_rx_sa_mem_map_write = cnf10kb_mcs_rx_sa_mem_map_write, .mcs_flowid_secy_map = cnf10kb_mcs_flowid_secy_map, + .mcs_bbe_intr_handler = cnf10kb_mcs_bbe_intr_handler, + .mcs_pab_intr_handler = cnf10kb_mcs_pab_intr_handler, };
struct mcs_ops *cnf10kb_get_mac_ops(void) @@ -31,6 +33,7 @@ void cnf10kb_mcs_set_hw_capabilities(struct mcs *mcs) hw->lmac_cnt = 4; /* lmacs/ports per mcs block */ hw->mcs_x2p_intf = 1; /* x2p clabration intf */ hw->mcs_blks = 7; /* MCS blocks */ + hw->ip_vec = MCS_CNF10KB_INT_VEC_IP; /* IP vector */ }
void cnf10kb_mcs_parser_cfg(struct mcs *mcs) @@ -212,3 +215,63 @@ void cnf10kb_mcs_tx_pn_wrapped_handler(struct mcs *mcs) mcs_add_intr_wq_entry(mcs, &event); } } + +void cnf10kb_mcs_bbe_intr_handler(struct mcs *mcs, u64 intr, + enum mcs_direction dir) +{ + struct mcs_intr_event event = { 0 }; + int i; + + if (!(intr & MCS_BBE_INT_MASK)) + return; + + event.mcs_id = mcs->mcs_id; + event.pcifunc = mcs->pf_map[0]; + + for (i = 0; i < MCS_MAX_BBE_INT; i++) { + if (!(intr & BIT_ULL(i))) + continue; + + /* Lower nibble denotes data fifo overflow interrupts and + * upper nibble indicates policy fifo overflow interrupts. + */ + if (intr & 0xFULL) + event.intr_mask = (dir == MCS_RX) ? + MCS_BBE_RX_DFIFO_OVERFLOW_INT : + MCS_BBE_TX_DFIFO_OVERFLOW_INT; + else + event.intr_mask = (dir == MCS_RX) ? + MCS_BBE_RX_PLFIFO_OVERFLOW_INT : + MCS_BBE_TX_PLFIFO_OVERFLOW_INT; + + /* Notify the lmac_id info which ran into BBE fatal error */ + event.lmac_id = i & 0x3ULL; + mcs_add_intr_wq_entry(mcs, &event); + } +} + +void cnf10kb_mcs_pab_intr_handler(struct mcs *mcs, u64 intr, + enum mcs_direction dir) +{ + struct mcs_intr_event event = { 0 }; + int i; + + if (!(intr & MCS_PAB_INT_MASK)) + return; + + event.mcs_id = mcs->mcs_id; + event.pcifunc = mcs->pf_map[0]; + + for (i = 0; i < MCS_MAX_PAB_INT; i++) { + if (!(intr & BIT_ULL(i))) + continue; + + event.intr_mask = (dir == MCS_RX) ? + MCS_PAB_RX_CHAN_OVERFLOW_INT : + MCS_PAB_TX_CHAN_OVERFLOW_INT; + + /* Notify the lmac_id info which ran into PAB fatal error */ + event.lmac_id = i; + mcs_add_intr_wq_entry(mcs, &event); + } +} diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h index 7427e3b1490f4..f3ab01fc363c8 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h @@ -276,7 +276,10 @@ #define MCSX_BBE_RX_SLAVE_CAL_ENTRY 0x180ull #define MCSX_BBE_RX_SLAVE_CAL_LEN 0x188ull #define MCSX_PAB_RX_SLAVE_FIFO_SKID_CFGX(a) (0x290ull + (a) * 0x40ull) - +#define MCSX_BBE_RX_SLAVE_DFIFO_OVERFLOW_0 0xe20 +#define MCSX_BBE_TX_SLAVE_DFIFO_OVERFLOW_0 0x1298 +#define MCSX_BBE_RX_SLAVE_PLFIFO_OVERFLOW_0 0xe40 +#define MCSX_BBE_TX_SLAVE_PLFIFO_OVERFLOW_0 0x12b8 #define MCSX_BBE_RX_SLAVE_BBE_INT ({ \ u64 offset; \ \
From: Subbaraya Sundeep sbhatta@marvell.com
[ Upstream commit 699af748c61574125d269db260dabbe20436d74e ]
When the system is rebooted after creating a macsec interface, the NULL pointer dereference crashes below occurred. This patch fixes those crashes by using the correct order of teardown.
[ 3324.406942] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000000 [ 3324.415726] Mem abort info: [ 3324.418510] ESR = 0x96000006 [ 3324.421557] EC = 0x25: DABT (current EL), IL = 32 bits [ 3324.426865] SET = 0, FnV = 0 [ 3324.429913] EA = 0, S1PTW = 0 [ 3324.433047] Data abort info: [ 3324.435921] ISV = 0, ISS = 0x00000006 [ 3324.439748] CM = 0, WnR = 0 .... [ 3324.575915] Call trace: [ 3324.578353] cn10k_mdo_del_secy+0x24/0x180 [ 3324.582440] macsec_common_dellink+0xec/0x120 [ 3324.586788] macsec_notify+0x17c/0x1c0 [ 3324.590529] raw_notifier_call_chain+0x50/0x70 [ 3324.594965] call_netdevice_notifiers_info+0x34/0x7c [ 3324.599921] rollback_registered_many+0x354/0x5bc [ 3324.604616] unregister_netdevice_queue+0x88/0x10c [ 3324.609399] unregister_netdev+0x20/0x30 [ 3324.613313] otx2_remove+0x8c/0x310 [ 3324.616794] pci_device_shutdown+0x30/0x70 [ 3324.620882] device_shutdown+0x11c/0x204
[ 966.664930] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000000 [ 966.673712] Mem abort info: [ 966.676497] ESR = 0x96000006 [ 966.679543] EC = 0x25: DABT (current EL), IL = 32 bits [ 966.684848] SET = 0, FnV = 0 [ 966.687895] EA = 0, S1PTW = 0 [ 966.691028] Data abort info: [ 966.693900] ISV = 0, ISS = 0x00000006 [ 966.697729] CM = 0, WnR = 0 [ 966.833467] Call trace: [ 966.835904] cn10k_mdo_stop+0x20/0xa0 [ 966.839557] macsec_dev_stop+0xe8/0x11c [ 966.843384] __dev_close_many+0xbc/0x140 [ 966.847298] dev_close_many+0x84/0x120 [ 966.851039] rollback_registered_many+0x114/0x5bc [ 966.855735] unregister_netdevice_many.part.0+0x14/0xa0 [ 966.860952] unregister_netdevice_many+0x18/0x24 [ 966.865560] macsec_notify+0x1ac/0x1c0 [ 966.869303] raw_notifier_call_chain+0x50/0x70 [ 966.873738] call_netdevice_notifiers_info+0x34/0x7c [ 966.878694] rollback_registered_many+0x354/0x5bc [ 966.883390] unregister_netdevice_queue+0x88/0x10c [ 966.888173] unregister_netdev+0x20/0x30 [ 966.892090] otx2_remove+0x8c/0x310 [ 966.895571] pci_device_shutdown+0x30/0x70 [ 966.899660] device_shutdown+0x11c/0x204 [ 966.903574] __do_sys_reboot+0x208/0x290 [ 966.907487] __arm64_sys_reboot+0x20/0x30 [ 966.911489] el0_svc_handler+0x80/0x1c0 [ 966.915316] el0_svc+0x8/0x180 [ 966.918362] Code: f9400000 f9400a64 91220014 f94b3403 (f9400060) [ 966.924448] ---[ end trace 341778e799c3d8d7 ]---
Fixes: c54ffc73601c ("octeontx2-pf: mcs: Introduce MACSEC hardware offloading") Signed-off-by: Subbaraya Sundeep sbhatta@marvell.com Signed-off-by: Sunil Goutham sgoutham@marvell.com Signed-off-by: Geetha sowjanya gakula@marvell.com Reviewed-by: Leon Romanovsky leonro@nvidia.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c index c1ea60bc2630e..3489553ad7010 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c @@ -3069,8 +3069,6 @@ static void otx2_remove(struct pci_dev *pdev) otx2_config_pause_frm(pf); }
- cn10k_mcs_free(pf); - #ifdef CONFIG_DCB /* Disable PFC config */ if (pf->pfc_en) { @@ -3084,6 +3082,7 @@ static void otx2_remove(struct pci_dev *pdev)
otx2_unregister_dl(pf); unregister_netdev(netdev); + cn10k_mcs_free(pf); otx2_sriov_disable(pf->pdev); otx2_sriov_vfcfg_cleanup(pf); if (pf->otx2_wq)
From: Subbaraya Sundeep sbhatta@marvell.com
[ Upstream commit 57d00d4364f314485092667d2a48718985515deb ]
On CN10KB silicon, a single hardware macsec block is present and offloads macsec operations for all the ethernet LMACs. A TCAM match on the macsec ethertype 0x88e5 alone at the RX side is not sufficient to distinguish all the macsec interfaces created on top of netdevs. Hence append the DMAC of the macsec interface too. Otherwise, only the first created macsec interface receives all the macsec traffic.
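For readers unfamiliar with the bit layout, here is a small standalone sketch (not part of the patch) of how a 6-byte DMAC is packed into bits 47:0 of the first 64-bit TCAM data word, mirroring the ether_addr_to_u64()/FIELD_PREP(MCS_TCAM0_MAC_DA_MASK, ...) usage in the hunk below. The helpers here are local stand-ins for the kernel macros, and the "set mask bit means don't-care" convention is an assumption made for illustration:

#include <stdint.h>
#include <stdio.h>

/* Stand-in for the kernel's GENMASK_ULL(47, 0): the low 48 bits */
#define MAC_DA_MASK ((1ULL << 48) - 1)

/* Stand-in for ether_addr_to_u64(): addr[0] becomes the most
 * significant byte of the 48-bit value.
 */
static uint64_t mac_to_u64(const uint8_t addr[6])
{
	uint64_t v = 0;

	for (int i = 0; i < 6; i++)
		v = (v << 8) | addr[i];
	return v;
}

int main(void)
{
	const uint8_t dmac[6] = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 };
	uint64_t data0 = mac_to_u64(dmac) & MAC_DA_MASK;
	uint64_t mask0 = ~MAC_DA_MASK;	/* assumed: set bits are ignored,
					 * so only the DA bits are compared */

	printf("data[0] = 0x%016llx\n", (unsigned long long)data0);
	printf("mask[0] = 0x%016llx\n", (unsigned long long)mask0);
	return 0;
}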
Fixes: c54ffc73601c ("octeontx2-pf: mcs: Introduce MACSEC hardware offloading") Signed-off-by: Subbaraya Sundeep sbhatta@marvell.com Signed-off-by: Sunil Goutham sgoutham@marvell.com Signed-off-by: Geetha sowjanya gakula@marvell.com Reviewed-by: Leon Romanovsky leonro@nvidia.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- .../net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c index 9ec5f38d38a84..f699209978fef 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c @@ -9,6 +9,7 @@ #include <net/macsec.h> #include "otx2_common.h"
+#define MCS_TCAM0_MAC_DA_MASK GENMASK_ULL(47, 0) #define MCS_TCAM0_MAC_SA_MASK GENMASK_ULL(63, 48) #define MCS_TCAM1_MAC_SA_MASK GENMASK_ULL(31, 0) #define MCS_TCAM1_ETYPE_MASK GENMASK_ULL(47, 32) @@ -237,8 +238,10 @@ static int cn10k_mcs_write_rx_flowid(struct otx2_nic *pfvf, struct cn10k_mcs_rxsc *rxsc, u8 hw_secy_id) { struct macsec_rx_sc *sw_rx_sc = rxsc->sw_rxsc; + struct macsec_secy *secy = rxsc->sw_secy; struct mcs_flowid_entry_write_req *req; struct mbox *mbox = &pfvf->mbox; + u64 mac_da; int ret;
mutex_lock(&mbox->lock); @@ -249,11 +252,16 @@ static int cn10k_mcs_write_rx_flowid(struct otx2_nic *pfvf, goto fail; }
+ mac_da = ether_addr_to_u64(secy->netdev->dev_addr); + + req->data[0] = FIELD_PREP(MCS_TCAM0_MAC_DA_MASK, mac_da); + req->mask[0] = ~0ULL; + req->mask[0] = ~MCS_TCAM0_MAC_DA_MASK; + req->data[1] = FIELD_PREP(MCS_TCAM1_ETYPE_MASK, ETH_P_MACSEC); req->mask[1] = ~0ULL; req->mask[1] &= ~MCS_TCAM1_ETYPE_MASK;
- req->mask[0] = ~0ULL; req->mask[2] = ~0ULL; req->mask[3] = ~0ULL;
From: Subbaraya Sundeep sbhatta@marvell.com
[ Upstream commit 815debbbf7b52026462c37eea3be70d6377a7a9a ]
When freeing MCS hardware resources like SecY, SC and SA, the corresponding stats need to be cleared. Otherwise, previous stats are shown in newly created macsec interfaces.
Fixes: c54ffc73601c ("octeontx2-pf: mcs: Introduce MACSEC hardware offloading") Signed-off-by: Subbaraya Sundeep sbhatta@marvell.com Signed-off-by: Sunil Goutham sgoutham@marvell.com Signed-off-by: Geetha sowjanya gakula@marvell.com Reviewed-by: Leon Romanovsky leonro@nvidia.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- .../net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c | 9 +++++++++ 1 file changed, 9 insertions(+)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c index f699209978fef..13faca9add9f4 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c @@ -150,11 +150,20 @@ static void cn10k_mcs_free_rsrc(struct otx2_nic *pfvf, enum mcs_direction dir, enum mcs_rsrc_type type, u16 hw_rsrc_id, bool all) { + struct mcs_clear_stats *clear_req; struct mbox *mbox = &pfvf->mbox; struct mcs_free_rsrc_req *req;
mutex_lock(&mbox->lock);
+ clear_req = otx2_mbox_alloc_msg_mcs_clear_stats(mbox); + if (!clear_req) + goto fail; + + clear_req->id = hw_rsrc_id; + clear_req->type = type; + clear_req->dir = dir; + req = otx2_mbox_alloc_msg_mcs_free_resources(mbox); if (!req) goto fail;
From: Subbaraya Sundeep sbhatta@marvell.com
[ Upstream commit 9bdfe61054fb2b989eb58df20bf99c0cf67e3038 ]
Macsec stats like InPktsLate and InPktsDelayed share the same counter in hardware. If SecY replay_protect is true, the counter represents InPktsLate, otherwise InPktsDelayed. This mode change was mistakenly tracked based on protect_frames instead of replay_protect. Similarly, InPktsUnchecked and InPktsOk share the same counter, and the mode change was tracked based on validate_check instead of validate_disabled. This patch fixes those problems.
Fixes: c54ffc73601c ("octeontx2-pf: mcs: Introduce MACSEC hardware offloading") Signed-off-by: Subbaraya Sundeep sbhatta@marvell.com Signed-off-by: Sunil Goutham sgoutham@marvell.com Signed-off-by: Geetha sowjanya gakula@marvell.com Reviewed-by: Leon Romanovsky leonro@nvidia.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- .../ethernet/marvell/octeontx2/nic/cn10k_macsec.c | 14 +++++++------- .../ethernet/marvell/octeontx2/nic/otx2_common.h | 2 +- 2 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c index 13faca9add9f4..3ad8d7ef20be6 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c @@ -1014,7 +1014,7 @@ static void cn10k_mcs_sync_stats(struct otx2_nic *pfvf, struct macsec_secy *secy
/* Check if sync is really needed */ if (secy->validate_frames == txsc->last_validate_frames && - secy->protect_frames == txsc->last_protect_frames) + secy->replay_protect == txsc->last_replay_protect) return;
cn10k_mcs_secy_stats(pfvf, txsc->hw_secy_id_rx, &rx_rsp, MCS_RX, true); @@ -1036,19 +1036,19 @@ static void cn10k_mcs_sync_stats(struct otx2_nic *pfvf, struct macsec_secy *secy rxsc->stats.InPktsInvalid += sc_rsp.pkt_invalid_cnt; rxsc->stats.InPktsNotValid += sc_rsp.pkt_notvalid_cnt;
- if (txsc->last_protect_frames) + if (txsc->last_replay_protect) rxsc->stats.InPktsLate += sc_rsp.pkt_late_cnt; else rxsc->stats.InPktsDelayed += sc_rsp.pkt_late_cnt;
- if (txsc->last_validate_frames == MACSEC_VALIDATE_CHECK) + if (txsc->last_validate_frames == MACSEC_VALIDATE_DISABLED) rxsc->stats.InPktsUnchecked += sc_rsp.pkt_unchecked_cnt; else rxsc->stats.InPktsOK += sc_rsp.pkt_unchecked_cnt; }
txsc->last_validate_frames = secy->validate_frames; - txsc->last_protect_frames = secy->protect_frames; + txsc->last_replay_protect = secy->replay_protect; }
static int cn10k_mdo_open(struct macsec_context *ctx) @@ -1117,7 +1117,7 @@ static int cn10k_mdo_add_secy(struct macsec_context *ctx) txsc->sw_secy = secy; txsc->encoding_sa = secy->tx_sc.encoding_sa; txsc->last_validate_frames = secy->validate_frames; - txsc->last_protect_frames = secy->protect_frames; + txsc->last_replay_protect = secy->replay_protect;
list_add(&txsc->entry, &cfg->txsc_list);
@@ -1538,12 +1538,12 @@ static int cn10k_mdo_get_rx_sc_stats(struct macsec_context *ctx) rxsc->stats.InPktsInvalid += rsp.pkt_invalid_cnt; rxsc->stats.InPktsNotValid += rsp.pkt_notvalid_cnt;
- if (secy->protect_frames) + if (secy->replay_protect) rxsc->stats.InPktsLate += rsp.pkt_late_cnt; else rxsc->stats.InPktsDelayed += rsp.pkt_late_cnt;
- if (secy->validate_frames == MACSEC_VALIDATE_CHECK) + if (secy->validate_frames == MACSEC_VALIDATE_DISABLED) rxsc->stats.InPktsUnchecked += rsp.pkt_unchecked_cnt; else rxsc->stats.InPktsOK += rsp.pkt_unchecked_cnt; diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h index 3d22cc6a2804a..f42b2b65bfd7b 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h @@ -389,7 +389,7 @@ struct cn10k_mcs_txsc { struct cn10k_txsc_stats stats; struct list_head entry; enum macsec_validation_type last_validate_frames; - bool last_protect_frames; + bool last_replay_protect; u16 hw_secy_id_tx; u16 hw_secy_id_rx; u16 hw_flow_id;
From: Subbaraya Sundeep sbhatta@marvell.com
[ Upstream commit 3c99bace4ad08ad0264285ba8ad73117560992c2 ]
After creating SecYs, SCs and SAs, a SecY can be modified to change attributes like validation mode, protect frames mode, etc. During this SecY update, the packet number is mistakenly reset to the initial user-given value. Hence do not reset the PN when updating SecY parameters.
Fixes: c54ffc73601c ("octeontx2-pf: mcs: Introduce MACSEC hardware offloading") Signed-off-by: Subbaraya Sundeep sbhatta@marvell.com Signed-off-by: Sunil Goutham sgoutham@marvell.com Signed-off-by: Geetha sowjanya gakula@marvell.com Reviewed-by: Leon Romanovsky leonro@nvidia.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- .../ethernet/marvell/octeontx2/nic/cn10k_macsec.c | 15 ++++++++++----- 1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c index 3ad8d7ef20be6..a487a98eac88c 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c @@ -1134,6 +1134,7 @@ static int cn10k_mdo_upd_secy(struct macsec_context *ctx) struct macsec_secy *secy = ctx->secy; struct macsec_tx_sa *sw_tx_sa; struct cn10k_mcs_txsc *txsc; + bool active; u8 sa_num; int err;
@@ -1141,15 +1142,19 @@ static int cn10k_mdo_upd_secy(struct macsec_context *ctx) if (!txsc) return -ENOENT;
- txsc->encoding_sa = secy->tx_sc.encoding_sa; - - sa_num = txsc->encoding_sa; - sw_tx_sa = rcu_dereference_bh(secy->tx_sc.sa[sa_num]); + /* Encoding SA got changed */ + if (txsc->encoding_sa != secy->tx_sc.encoding_sa) { + txsc->encoding_sa = secy->tx_sc.encoding_sa; + sa_num = txsc->encoding_sa; + sw_tx_sa = rcu_dereference_bh(secy->tx_sc.sa[sa_num]); + active = sw_tx_sa ? sw_tx_sa->active : false; + cn10k_mcs_link_tx_sa2sc(pfvf, secy, txsc, sa_num, active); + }
if (netif_running(secy->netdev)) { cn10k_mcs_sync_stats(pfvf, secy, txsc);
- err = cn10k_mcs_secy_tx_cfg(pfvf, secy, txsc, sw_tx_sa, sa_num); + err = cn10k_mcs_secy_tx_cfg(pfvf, secy, txsc, NULL, 0); if (err) return err; }
From: Cosmo Chou chou.cosmo@gmail.com
[ Upstream commit 6f75cd166a5a3c0bc50441faa8b8304f60522fdd ]
ncsi_channel_is_tx() determines whether a given channel should be used for Tx or not. However, when reconfiguring the channel by handling a Configuration Required AEN, the channel Tx is wrongly judged to have already been enabled, which results in the Enable Channel Network Tx command not being sent.
Clear the channel Tx enable flag before reconfiguring the channel to avoid the misjudgment.
Fixes: 8d951a75d022 ("net/ncsi: Configure multi-package, multi-channel modes with failover") Signed-off-by: Cosmo Chou chou.cosmo@gmail.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/ncsi/ncsi-aen.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/net/ncsi/ncsi-aen.c b/net/ncsi/ncsi-aen.c index b635c194f0a85..62fb1031763d1 100644 --- a/net/ncsi/ncsi-aen.c +++ b/net/ncsi/ncsi-aen.c @@ -165,6 +165,7 @@ static int ncsi_aen_handler_cr(struct ncsi_dev_priv *ndp, nc->state = NCSI_CHANNEL_INACTIVE; list_add_tail_rcu(&nc->link, &ndp->channel_queue); spin_unlock_irqrestore(&ndp->lock, flags); + nc->modes[NCSI_MODE_TX_ENABLE].enable = 0;
return ncsi_process_next_channel(ndp); }
From: Eric Dumazet edumazet@google.com
[ Upstream commit 7e692df3933628d974acb9f5b334d2b3e885e2a6 ]
David Ahern reported crashes in skb_copy_ubufs() caused by TCP tx zerocopy using hugepages, and skb length bigger than ~68 KB.
skb_copy_ubufs() assumed it could copy all payload using up to MAX_SKB_FRAGS order-0 pages.
This assumption broke when BIG TCP was able to put up to 512 KB per skb.
We did not hit this bug at Google because we use CONFIG_MAX_SKB_FRAGS=45 and limit gso_max_size to 180000.
A solution is to use higher order pages if needed.
v2: add missing __GFP_COMP, or we leak memory.
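For illustration only (this is not part of the patch), a standalone sketch of the minimum page-order computation described above, assuming 4 KiB pages and the historical default of MAX_SKB_FRAGS = 17: with a 512 KiB paged payload the loop settles on order 3 (32 KiB pages), whereas order-0 pages can only cover 17 * 4 KiB = 68 KiB.

#include <stdio.h>

#define PAGE_SIZE	4096UL	/* assumption: 4 KiB pages */
#define PAGE_SHIFT	12
#define MAX_SKB_FRAGS	17	/* assumption: the default value */

int main(void)
{
	unsigned long pagelen = 512 * 1024;	/* e.g. a BIG TCP skb */
	unsigned long psize;
	unsigned int order = 0, new_frags;

	/* Smallest order such that MAX_SKB_FRAGS pages of that order
	 * can hold the whole paged payload (same loop as the patch).
	 */
	while ((PAGE_SIZE << order) * MAX_SKB_FRAGS < pagelen)
		order++;

	psize = PAGE_SIZE << order;
	new_frags = (pagelen + psize - 1) >> (PAGE_SHIFT + order);

	printf("order=%u psize=%lu new_frags=%u\n", order, psize, new_frags);
	return 0;
}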
Fixes: 7c4e983c4f3c ("net: allow gso_max_size to exceed 65536") Reported-by: David Ahern dsahern@kernel.org Link: https://lore.kernel.org/netdev/c70000f6-baa4-4a05-46d0-4b3e0dc1ccc8@gmail.co... Signed-off-by: Eric Dumazet edumazet@google.com Cc: Xin Long lucien.xin@gmail.com Cc: Willem de Bruijn willemb@google.com Cc: Coco Li lixiaoyan@google.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/core/skbuff.c | 20 ++++++++++++++------ 1 file changed, 14 insertions(+), 6 deletions(-)
diff --git a/net/core/skbuff.c b/net/core/skbuff.c index 6f5ef18a8b772..ef3dd8f120e02 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -1608,7 +1608,7 @@ int skb_copy_ubufs(struct sk_buff *skb, gfp_t gfp_mask) { int num_frags = skb_shinfo(skb)->nr_frags; struct page *page, *head = NULL; - int i, new_frags; + int i, order, psize, new_frags; u32 d_off;
if (skb_shared(skb) || skb_unclone(skb, gfp_mask)) @@ -1617,9 +1617,17 @@ int skb_copy_ubufs(struct sk_buff *skb, gfp_t gfp_mask) if (!num_frags) goto release;
- new_frags = (__skb_pagelen(skb) + PAGE_SIZE - 1) >> PAGE_SHIFT; + /* We might have to allocate high order pages, so compute what minimum + * page order is needed. + */ + order = 0; + while ((PAGE_SIZE << order) * MAX_SKB_FRAGS < __skb_pagelen(skb)) + order++; + psize = (PAGE_SIZE << order); + + new_frags = (__skb_pagelen(skb) + psize - 1) >> (PAGE_SHIFT + order); for (i = 0; i < new_frags; i++) { - page = alloc_page(gfp_mask); + page = alloc_pages(gfp_mask | __GFP_COMP, order); if (!page) { while (head) { struct page *next = (struct page *)page_private(head); @@ -1646,11 +1654,11 @@ int skb_copy_ubufs(struct sk_buff *skb, gfp_t gfp_mask) vaddr = kmap_atomic(p);
while (done < p_len) { - if (d_off == PAGE_SIZE) { + if (d_off == psize) { d_off = 0; page = (struct page *)page_private(page); } - copy = min_t(u32, PAGE_SIZE - d_off, p_len - done); + copy = min_t(u32, psize - d_off, p_len - done); memcpy(page_address(page) + d_off, vaddr + p_off + done, copy); done += copy; @@ -1666,7 +1674,7 @@ int skb_copy_ubufs(struct sk_buff *skb, gfp_t gfp_mask)
/* skb frags point to kernel buffers */ for (i = 0; i < new_frags - 1; i++) { - __skb_fill_page_desc(skb, i, head, 0, PAGE_SIZE); + __skb_fill_page_desc(skb, i, head, 0, psize); head = (struct page *)page_private(head); } __skb_fill_page_desc(skb, new_frags - 1, head, 0, d_off);
From: Vlad Buslov vladbu@nvidia.com
[ Upstream commit da94a7781fc3c92e7df7832bc2746f4d39bc624e ]
The error handler of tcf_block_bind() frees the whole bo->cb_list on error. However, by that time the flow_block_cb instances are already in the driver list, because the driver's ndo_setup_tc() callback is called before that, further up the call chain in tcf_block_offload_cmd(). This leaves dangling pointers to freed objects in the list and causes a use-after-free[0]. Fix it by also removing the flow_block_cb instances from driver_list before deallocating them.
[0]: [ 279.868433] ================================================================== [ 279.869964] BUG: KASAN: slab-use-after-free in flow_block_cb_setup_simple+0x631/0x7c0 [ 279.871527] Read of size 8 at addr ffff888147e2bf20 by task tc/2963
[ 279.873151] CPU: 6 PID: 2963 Comm: tc Not tainted 6.3.0-rc6+ #4 [ 279.874273] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014 [ 279.876295] Call Trace: [ 279.876882] <TASK> [ 279.877413] dump_stack_lvl+0x33/0x50 [ 279.878198] print_report+0xc2/0x610 [ 279.878987] ? flow_block_cb_setup_simple+0x631/0x7c0 [ 279.879994] kasan_report+0xae/0xe0 [ 279.880750] ? flow_block_cb_setup_simple+0x631/0x7c0 [ 279.881744] ? mlx5e_tc_reoffload_flows_work+0x240/0x240 [mlx5_core] [ 279.883047] flow_block_cb_setup_simple+0x631/0x7c0 [ 279.884027] tcf_block_offload_cmd.isra.0+0x189/0x2d0 [ 279.885037] ? tcf_block_setup+0x6b0/0x6b0 [ 279.885901] ? mutex_lock+0x7d/0xd0 [ 279.886669] ? __mutex_unlock_slowpath.constprop.0+0x2d0/0x2d0 [ 279.887844] ? ingress_init+0x1c0/0x1c0 [sch_ingress] [ 279.888846] tcf_block_get_ext+0x61c/0x1200 [ 279.889711] ingress_init+0x112/0x1c0 [sch_ingress] [ 279.890682] ? clsact_init+0x2b0/0x2b0 [sch_ingress] [ 279.891701] qdisc_create+0x401/0xea0 [ 279.892485] ? qdisc_tree_reduce_backlog+0x470/0x470 [ 279.893473] tc_modify_qdisc+0x6f7/0x16d0 [ 279.894344] ? tc_get_qdisc+0xac0/0xac0 [ 279.895213] ? mutex_lock+0x7d/0xd0 [ 279.896005] ? __mutex_lock_slowpath+0x10/0x10 [ 279.896910] rtnetlink_rcv_msg+0x5fe/0x9d0 [ 279.897770] ? rtnl_calcit.isra.0+0x2b0/0x2b0 [ 279.898672] ? __sys_sendmsg+0xb5/0x140 [ 279.899494] ? do_syscall_64+0x3d/0x90 [ 279.900302] ? entry_SYSCALL_64_after_hwframe+0x46/0xb0 [ 279.901337] ? kasan_save_stack+0x2e/0x40 [ 279.902177] ? kasan_save_stack+0x1e/0x40 [ 279.903058] ? kasan_set_track+0x21/0x30 [ 279.903913] ? kasan_save_free_info+0x2a/0x40 [ 279.904836] ? ____kasan_slab_free+0x11a/0x1b0 [ 279.905741] ? kmem_cache_free+0x179/0x400 [ 279.906599] netlink_rcv_skb+0x12c/0x360 [ 279.907450] ? rtnl_calcit.isra.0+0x2b0/0x2b0 [ 279.908360] ? netlink_ack+0x1550/0x1550 [ 279.909192] ? rhashtable_walk_peek+0x170/0x170 [ 279.910135] ? kmem_cache_alloc_node+0x1af/0x390 [ 279.911086] ? _copy_from_iter+0x3d6/0xc70 [ 279.912031] netlink_unicast+0x553/0x790 [ 279.912864] ? netlink_attachskb+0x6a0/0x6a0 [ 279.913763] ? netlink_recvmsg+0x416/0xb50 [ 279.914627] netlink_sendmsg+0x7a1/0xcb0 [ 279.915473] ? netlink_unicast+0x790/0x790 [ 279.916334] ? iovec_from_user.part.0+0x4d/0x220 [ 279.917293] ? netlink_unicast+0x790/0x790 [ 279.918159] sock_sendmsg+0xc5/0x190 [ 279.918938] ____sys_sendmsg+0x535/0x6b0 [ 279.919813] ? import_iovec+0x7/0x10 [ 279.920601] ? kernel_sendmsg+0x30/0x30 [ 279.921423] ? __copy_msghdr+0x3c0/0x3c0 [ 279.922254] ? import_iovec+0x7/0x10 [ 279.923041] ___sys_sendmsg+0xeb/0x170 [ 279.923854] ? copy_msghdr_from_user+0x110/0x110 [ 279.924797] ? ___sys_recvmsg+0xd9/0x130 [ 279.925630] ? __perf_event_task_sched_in+0x183/0x470 [ 279.926656] ? ___sys_sendmsg+0x170/0x170 [ 279.927529] ? ctx_sched_in+0x530/0x530 [ 279.928369] ? update_curr+0x283/0x4f0 [ 279.929185] ? perf_event_update_userpage+0x570/0x570 [ 279.930201] ? __fget_light+0x57/0x520 [ 279.931023] ? __switch_to+0x53d/0xe70 [ 279.931846] ? sockfd_lookup_light+0x1a/0x140 [ 279.932761] __sys_sendmsg+0xb5/0x140 [ 279.933560] ? __sys_sendmsg_sock+0x20/0x20 [ 279.934436] ? 
fpregs_assert_state_consistent+0x1d/0xa0 [ 279.935490] do_syscall_64+0x3d/0x90 [ 279.936300] entry_SYSCALL_64_after_hwframe+0x46/0xb0 [ 279.937311] RIP: 0033:0x7f21c814f887 [ 279.938085] Code: 0a 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b9 0f 1f 00 f3 0f 1e fa 64 8b 04 25 18 00 00 00 85 c0 75 10 b8 2e 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 51 c3 48 83 ec 28 89 54 24 1c 48 89 74 24 10 [ 279.941448] RSP: 002b:00007fff11efd478 EFLAGS: 00000246 ORIG_RAX: 000000000000002e [ 279.942964] RAX: ffffffffffffffda RBX: 0000000064401979 RCX: 00007f21c814f887 [ 279.944337] RDX: 0000000000000000 RSI: 00007fff11efd4e0 RDI: 0000000000000003 [ 279.945660] RBP: 0000000000000000 R08: 0000000000000001 R09: 0000000000000000 [ 279.947003] R10: 00007f21c8008708 R11: 0000000000000246 R12: 0000000000000001 [ 279.948345] R13: 0000000000409980 R14: 000000000047e538 R15: 0000000000485400 [ 279.949690] </TASK>
[ 279.950706] Allocated by task 2960: [ 279.951471] kasan_save_stack+0x1e/0x40 [ 279.952338] kasan_set_track+0x21/0x30 [ 279.953165] __kasan_kmalloc+0x77/0x90 [ 279.954006] flow_block_cb_setup_simple+0x3dd/0x7c0 [ 279.955001] tcf_block_offload_cmd.isra.0+0x189/0x2d0 [ 279.956020] tcf_block_get_ext+0x61c/0x1200 [ 279.956881] ingress_init+0x112/0x1c0 [sch_ingress] [ 279.957873] qdisc_create+0x401/0xea0 [ 279.958656] tc_modify_qdisc+0x6f7/0x16d0 [ 279.959506] rtnetlink_rcv_msg+0x5fe/0x9d0 [ 279.960392] netlink_rcv_skb+0x12c/0x360 [ 279.961216] netlink_unicast+0x553/0x790 [ 279.962044] netlink_sendmsg+0x7a1/0xcb0 [ 279.962906] sock_sendmsg+0xc5/0x190 [ 279.963702] ____sys_sendmsg+0x535/0x6b0 [ 279.964534] ___sys_sendmsg+0xeb/0x170 [ 279.965343] __sys_sendmsg+0xb5/0x140 [ 279.966132] do_syscall_64+0x3d/0x90 [ 279.966908] entry_SYSCALL_64_after_hwframe+0x46/0xb0
[ 279.968407] Freed by task 2960: [ 279.969114] kasan_save_stack+0x1e/0x40 [ 279.969929] kasan_set_track+0x21/0x30 [ 279.970729] kasan_save_free_info+0x2a/0x40 [ 279.971603] ____kasan_slab_free+0x11a/0x1b0 [ 279.972483] __kmem_cache_free+0x14d/0x280 [ 279.973337] tcf_block_setup+0x29d/0x6b0 [ 279.974173] tcf_block_offload_cmd.isra.0+0x226/0x2d0 [ 279.975186] tcf_block_get_ext+0x61c/0x1200 [ 279.976080] ingress_init+0x112/0x1c0 [sch_ingress] [ 279.977065] qdisc_create+0x401/0xea0 [ 279.977857] tc_modify_qdisc+0x6f7/0x16d0 [ 279.978695] rtnetlink_rcv_msg+0x5fe/0x9d0 [ 279.979562] netlink_rcv_skb+0x12c/0x360 [ 279.980388] netlink_unicast+0x553/0x790 [ 279.981214] netlink_sendmsg+0x7a1/0xcb0 [ 279.982043] sock_sendmsg+0xc5/0x190 [ 279.982827] ____sys_sendmsg+0x535/0x6b0 [ 279.983703] ___sys_sendmsg+0xeb/0x170 [ 279.984510] __sys_sendmsg+0xb5/0x140 [ 279.985298] do_syscall_64+0x3d/0x90 [ 279.986076] entry_SYSCALL_64_after_hwframe+0x46/0xb0
[ 279.987532] The buggy address belongs to the object at ffff888147e2bf00 which belongs to the cache kmalloc-192 of size 192 [ 279.989747] The buggy address is located 32 bytes inside of freed 192-byte region [ffff888147e2bf00, ffff888147e2bfc0)
[ 279.992367] The buggy address belongs to the physical page: [ 279.993430] page:00000000550f405c refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x147e2a [ 279.995182] head:00000000550f405c order:1 entire_mapcount:0 nr_pages_mapped:0 pincount:0 [ 279.996713] anon flags: 0x200000000010200(slab|head|node=0|zone=2) [ 279.997878] raw: 0200000000010200 ffff888100042a00 0000000000000000 dead000000000001 [ 279.999384] raw: 0000000000000000 0000000000200020 00000001ffffffff 0000000000000000 [ 280.000894] page dumped because: kasan: bad access detected
[ 280.002386] Memory state around the buggy address: [ 280.003338] ffff888147e2be00: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb [ 280.004781] ffff888147e2be80: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc [ 280.006224] >ffff888147e2bf00: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb [ 280.007700] ^ [ 280.008592] ffff888147e2bf80: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc [ 280.010035] ffff888147e2c000: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb [ 280.011564] ==================================================================
Fixes: 59094b1e5094 ("net: sched: use flow block API") Signed-off-by: Vlad Buslov vladbu@nvidia.com Reviewed-by: Simon Horman simon.horman@corigine.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/sched/cls_api.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c index 668130f089034..3f37e9c10af4d 100644 --- a/net/sched/cls_api.c +++ b/net/sched/cls_api.c @@ -1484,6 +1484,7 @@ static int tcf_block_bind(struct tcf_block *block,
err_unroll: list_for_each_entry_safe(block_cb, next, &bo->cb_list, list) { + list_del(&block_cb->driver_list); if (i-- > 0) { list_del(&block_cb->list); tcf_block_playback_offloads(block, block_cb->cb,
From: Cong Wang cong.wang@bytedance.com
[ Upstream commit c88f8d5cd95fd039cff95d682b8e71100c001df0 ]
When a tunnel device is bound with the underlying device, its dev->needed_headroom needs to be updated properly. IPv4 tunnels already do the same in ip_tunnel_bind_dev(). Otherwise we may not have enough header room for the skb, especially after commit b17f709a2401 ("gue: TX support for using remote checksum offload option").
Fixes: 32b8a8e59c9c ("sit: add IPv4 over IPv4 support") Reported-by: Palash Oswal oswalpalash@gmail.com Link: https://lore.kernel.org/netdev/CAGyP=7fDcSPKu6nttbGwt7RXzE3uyYxLjCSE97J64pRx... Cc: Kuniyuki Iwashima kuniyu@amazon.com Cc: Eric Dumazet edumazet@google.com Signed-off-by: Cong Wang cong.wang@bytedance.com Reviewed-by: Eric Dumazet edumazet@google.com Reviewed-by: Kuniyuki Iwashima kuniyu@amazon.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv6/sit.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c index 70d81bba50939..3ffb6a5b1f82a 100644 --- a/net/ipv6/sit.c +++ b/net/ipv6/sit.c @@ -1095,12 +1095,13 @@ static netdev_tx_t sit_tunnel_xmit(struct sk_buff *skb,
static void ipip6_tunnel_bind_dev(struct net_device *dev) { + struct ip_tunnel *tunnel = netdev_priv(dev); + int t_hlen = tunnel->hlen + sizeof(struct iphdr); struct net_device *tdev = NULL; - struct ip_tunnel *tunnel; + int hlen = LL_MAX_HEADER; const struct iphdr *iph; struct flowi4 fl4;
- tunnel = netdev_priv(dev); iph = &tunnel->parms.iph;
if (iph->daddr) { @@ -1123,14 +1124,15 @@ static void ipip6_tunnel_bind_dev(struct net_device *dev) tdev = __dev_get_by_index(tunnel->net, tunnel->parms.link);
if (tdev && !netif_is_l3_master(tdev)) { - int t_hlen = tunnel->hlen + sizeof(struct iphdr); int mtu;
mtu = tdev->mtu - t_hlen; if (mtu < IPV6_MIN_MTU) mtu = IPV6_MIN_MTU; WRITE_ONCE(dev->mtu, mtu); + hlen = tdev->hard_header_len + tdev->needed_headroom; } + dev->needed_headroom = t_hlen + hlen; }
static void ipip6_tunnel_update(struct ip_tunnel *t, struct ip_tunnel_parm *p,
From: Andrea Mayer andrea.mayer@uniroma2.it
[ Upstream commit 46ef24c60f8ee70662968ac55325297ed4624d61 ]
On some distributions, the rp_filter is automatically set (=1) by default on a netdev basis (also on VRFs). In an SRv6 End.DT46 behavior, decapsulated IPv4 packets are routed using the table associated with the VRF bound to that tunnel. During lookup operations, the rp_filter can lead to packet loss when activated on the VRF. Therefore, we chose to make this selftest more robust by explicitly disabling the rp_filter during tests (as it is automatically set by some Linux distributions).
Fixes: 03a0b567a03d ("selftests: seg6: add selftest for SRv6 End.DT46 Behavior") Reported-by: Hangbin Liu liuhangbin@gmail.com Signed-off-by: Andrea Mayer andrea.mayer@uniroma2.it Tested-by: Hangbin Liu liuhangbin@gmail.com Reviewed-by: David Ahern dsahern@kernel.org Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- .../testing/selftests/net/srv6_end_dt46_l3vpn_test.sh | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/tools/testing/selftests/net/srv6_end_dt46_l3vpn_test.sh b/tools/testing/selftests/net/srv6_end_dt46_l3vpn_test.sh index aebaab8ce44cb..441eededa0312 100755 --- a/tools/testing/selftests/net/srv6_end_dt46_l3vpn_test.sh +++ b/tools/testing/selftests/net/srv6_end_dt46_l3vpn_test.sh @@ -292,6 +292,11 @@ setup_hs() ip netns exec ${hsname} sysctl -wq net.ipv6.conf.all.accept_dad=0 ip netns exec ${hsname} sysctl -wq net.ipv6.conf.default.accept_dad=0
+ # disable the rp_filter otherwise the kernel gets confused about how + # to route decap ipv4 packets. + ip netns exec ${rtname} sysctl -wq net.ipv4.conf.all.rp_filter=0 + ip netns exec ${rtname} sysctl -wq net.ipv4.conf.default.rp_filter=0 + ip -netns ${hsname} link add veth0 type veth peer name ${rtveth} ip -netns ${hsname} link set ${rtveth} netns ${rtname} ip -netns ${hsname} addr add ${IPv6_HS_NETWORK}::${hs}/64 dev veth0 nodad @@ -316,11 +321,6 @@ setup_hs() ip netns exec ${rtname} sysctl -wq net.ipv6.conf.${rtveth}.proxy_ndp=1 ip netns exec ${rtname} sysctl -wq net.ipv4.conf.${rtveth}.proxy_arp=1
- # disable the rp_filter otherwise the kernel gets confused about how - # to route decap ipv4 packets. - ip netns exec ${rtname} sysctl -wq net.ipv4.conf.all.rp_filter=0 - ip netns exec ${rtname} sysctl -wq net.ipv4.conf.${rtveth}.rp_filter=0 - ip netns exec ${rtname} sh -c "echo 1 > /proc/sys/net/vrf/strict_mode" }
From: Antoine Tenart atenart@kernel.org
[ Upstream commit dc6456e938e938d64ffb6383a286b2ac9790a37f ]
The skb hash comes from sk->sk_txhash when using TCP, except for some IPv6 RST packets. This is because in tcp_v6_send_reset(), when not in TIME_WAIT, the hash is taken from sk->sk_hash, while it should come from sk->sk_txhash, as those two hashes are not computed the same way.
Packetdrill script to test the above,
0 socket(..., SOCK_STREAM, IPPROTO_TCP) = 3 +0 fcntl(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0 +0 connect(3, ..., ...) = -1 EINPROGRESS (Operation now in progress)
+0 > (flowlabel 0x1) S 0:0(0) <...>
// Wrong ack seq, trigger a rst. +0 < S. 0:0(0) ack 0 win 4000
// Check the flowlabel matches prior one from SYN. +0 > (flowlabel 0x1) R 0:0(0) <...>
Fixes: 9258b8b1be2e ("ipv6: tcp: send consistent autoflowlabel in RST packets") Signed-off-by: Antoine Tenart atenart@kernel.org Reviewed-by: Eric Dumazet edumazet@google.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv6/tcp_ipv6.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c index e4da7267ed4bd..e0706c33e5472 100644 --- a/net/ipv6/tcp_ipv6.c +++ b/net/ipv6/tcp_ipv6.c @@ -1064,7 +1064,7 @@ static void tcp_v6_send_reset(const struct sock *sk, struct sk_buff *skb) if (np->repflow) label = ip6_flowlabel(ipv6h); priority = sk->sk_priority; - txhash = sk->sk_hash; + txhash = sk->sk_txhash; } if (sk->sk_state == TCP_TIME_WAIT) { label = cpu_to_be32(inet_twsk(sk)->tw_flowlabel);
From: Angelo Dureghello angelo.dureghello@timesys.com
[ Upstream commit 6686317855c6997671982d4489ccdd946f644957 ]
Add the rsvd2cpu capability for the mv88e6321 model to allow proper BPDU processing.
Signed-off-by: Angelo Dureghello angelo.dureghello@timesys.com Fixes: 51c901a775621 ("net: dsa: mv88e6xxx: distinguish Global 2 Rsvd2CPU") Reviewed-by: Andrew Lunn andrew@lunn.ch Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/dsa/mv88e6xxx/chip.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c index e57d86484a3a4..9959262ebad2c 100644 --- a/drivers/net/dsa/mv88e6xxx/chip.c +++ b/drivers/net/dsa/mv88e6xxx/chip.c @@ -5113,6 +5113,7 @@ static const struct mv88e6xxx_ops mv88e6321_ops = { .set_cpu_port = mv88e6095_g1_set_cpu_port, .set_egress_port = mv88e6095_g1_set_egress_port, .watchdog_ops = &mv88e6390_watchdog_ops, + .mgmt_rsvd2cpu = mv88e6352_g2_mgmt_rsvd2cpu, .reset = mv88e6352_g1_reset, .vtu_getnext = mv88e6185_g1_vtu_getnext, .vtu_loadpurge = mv88e6185_g1_vtu_loadpurge,
From: Maxim Korotkov korotkov.maxim.s@gmail.com
[ Upstream commit 3e46c89c74f2c38e5337d2cf44b0b551adff1cb4 ]
The variable 'history' is of type u16, so using the hweight32 macro on it appears to be a mistake; the hweight16 macro should be used instead.
Found by Linux Verification Center (linuxtesting.org) with SVACE.
Fixes: 2a81490811d0 ("writeback: implement foreign cgroup inode detection") Signed-off-by: Maxim Korotkov korotkov.maxim.s@gmail.com Reviewed-by: Jan Kara jack@suse.cz Link: https://lore.kernel.org/r/20230119104443.3002-1-korotkov.maxim.s@gmail.com Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Sasha Levin sashal@kernel.org --- fs/fs-writeback.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c index 713e2d97935ff..909150a57aebb 100644 --- a/fs/fs-writeback.c +++ b/fs/fs-writeback.c @@ -829,7 +829,7 @@ void wbc_detach_inode(struct writeback_control *wbc) * is okay. The main goal is avoiding keeping an inode on * the wrong wb for an extended period of time. */ - if (hweight32(history) > WB_FRN_HIST_THR_SLOTS) + if (hweight16(history) > WB_FRN_HIST_THR_SLOTS) inode_switch_wbs(inode, max_id); }
From: Tao Su tao1.su@linux.intel.com
[ Upstream commit 8176080d59e6d4ff9fc97ae534063073b4f7a715 ]
The kernel hangs in blkg_destroy_all() when the total number of blkgs is greater than BLKG_DESTROY_BATCH_SIZE, because destroyed blkgs are not removed from blkg_list. As a result, the size of blkg_list is the same after destroying a batch of blkgs, and the infinite 'restart' occurs.
Since a blkg should stay on the queue list until blkg_free_workfn(), skip already-destroyed blkgs when restarting a new round, which solves this kernel hang issue and preserves the original intent of restarting.
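As a side note (generic stand-ins, not the real blk-cgroup API), a small standalone sketch of the batched destroy-and-restart pattern, showing why skipping entries that were already destroyed in an earlier round lets the loop terminate:

#include <stdbool.h>
#include <stdio.h>

#define BATCH_SIZE 2	/* stand-in for BLKG_DESTROY_BATCH_SIZE */

struct item {
	bool destroyed;	/* like a blkg, it stays listed until freed later */
};

int main(void)
{
	struct item items[5] = { 0 };
	int count, i;

restart:
	count = BATCH_SIZE;
	for (i = 0; i < 5; i++) {
		/* Without this check the list never shrinks, so every
		 * round restarts after BATCH_SIZE entries, forever.
		 */
		if (items[i].destroyed)
			continue;

		items[i].destroyed = true;	/* "destroy" but keep listed */

		if (!(--count)) {
			puts("batch done, restarting");
			goto restart;
		}
	}
	puts("all destroyed");
	return 0;
}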
Reported-by: Xiangfei Ma xiangfeix.ma@intel.com Tested-by: Xiangfei Ma xiangfeix.ma@intel.com Tested-by: Farrah Chen farrah.chen@intel.com Signed-off-by: Tao Su tao1.su@linux.intel.com Fixes: f1c006f1c685 ("blk-cgroup: synchronize pd_free_fn() from blkg_free_workfn() and blkcg_deactivate_policy()") Suggested-and-reviewed-by: Yu Kuai yukuai3@huawei.com Link: https://lore.kernel.org/r/20230428045149.1310073-1-tao1.su@linux.intel.com Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Sasha Levin sashal@kernel.org --- block/blk-cgroup.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c index 9ac1efb053e08..2d8a28e4e22f7 100644 --- a/block/blk-cgroup.c +++ b/block/blk-cgroup.c @@ -501,6 +501,9 @@ static void blkg_destroy_all(struct gendisk *disk) list_for_each_entry_safe(blkg, n, &q->blkg_list, q_node) { struct blkcg *blkcg = blkg->blkcg;
+ if (hlist_unhashed(&blkg->blkcg_node)) + continue; + spin_lock(&blkcg->lock); blkg_destroy(blkg); spin_unlock(&blkcg->lock);
From: Christophe JAILLET christophe.jaillet@wanadoo.fr
[ Upstream commit 7f5390750645756bd5da2b24fac285f2654dd922 ]
The commit in Fixes has only updated the remove function and missed the error handling path of the probe.
Add the missing reset_control_assert() call.
Fixes: 65a3b6935d92 ("watchdog: dw_wdt: get reset lines from dt") Signed-off-by: Christophe JAILLET christophe.jaillet@wanadoo.fr Reviewed-by: Philipp Zabel p.zabel@pengutronix.de Reviewed-by: Guenter Roeck linux@roeck-us.net Link: https://lore.kernel.org/r/fbb650650bbb33a8fa2fd028c23157bedeed50e1.168249186... Signed-off-by: Guenter Roeck linux@roeck-us.net Signed-off-by: Wim Van Sebroeck wim@linux-watchdog.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/watchdog/dw_wdt.c | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/drivers/watchdog/dw_wdt.c b/drivers/watchdog/dw_wdt.c index 52962e8d11a6f..61af5d1332ac6 100644 --- a/drivers/watchdog/dw_wdt.c +++ b/drivers/watchdog/dw_wdt.c @@ -635,7 +635,7 @@ static int dw_wdt_drv_probe(struct platform_device *pdev)
ret = dw_wdt_init_timeouts(dw_wdt, dev); if (ret) - goto out_disable_clk; + goto out_assert_rst;
wdd = &dw_wdt->wdd; wdd->ops = &dw_wdt_ops; @@ -666,12 +666,15 @@ static int dw_wdt_drv_probe(struct platform_device *pdev)
ret = watchdog_register_device(wdd); if (ret) - goto out_disable_pclk; + goto out_assert_rst;
dw_wdt_dbgfs_init(dw_wdt);
return 0;
+out_assert_rst: + reset_control_assert(dw_wdt->rst); + out_disable_pclk: clk_disable_unprepare(dw_wdt->pclk);
From: Sia Jee Heng jeeheng.sia@starfivetech.com
[ Upstream commit a15c90b67a662c75f469822a7f95c7aaa049e28f ]
Currently the kernel_page_present() function doesn't support huge page detection, which causes it to mistakenly return false to the hibernation core.
Add huge page detection to the function to solve the problem.
Fixes: 9e953cda5cdf ("riscv: Introduce huge page support for 32/64bit kernel") Signed-off-by: Sia Jee Heng jeeheng.sia@starfivetech.com Reviewed-by: Ley Foon Tan leyfoon.tan@starfivetech.com Reviewed-by: Mason Huo mason.huo@starfivetech.com Reviewed-by: Andrew Jones ajones@ventanamicro.com Reviewed-by: Alexandre Ghiti alexghiti@rivosinc.com Link: https://lore.kernel.org/r/20230330064321.1008373-4-jeeheng.sia@starfivetech.... Signed-off-by: Palmer Dabbelt palmer@rivosinc.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/riscv/mm/pageattr.c | 8 ++++++++ 1 file changed, 8 insertions(+)
diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c index 86c56616e5dea..ea3d61de065b3 100644 --- a/arch/riscv/mm/pageattr.c +++ b/arch/riscv/mm/pageattr.c @@ -217,18 +217,26 @@ bool kernel_page_present(struct page *page) pgd = pgd_offset_k(addr); if (!pgd_present(*pgd)) return false; + if (pgd_leaf(*pgd)) + return true;
p4d = p4d_offset(pgd, addr); if (!p4d_present(*p4d)) return false; + if (p4d_leaf(*p4d)) + return true;
pud = pud_offset(p4d, addr); if (!pud_present(*pud)) return false; + if (pud_leaf(*pud)) + return true;
pmd = pmd_offset(pud, addr); if (!pmd_present(*pmd)) return false; + if (pmd_leaf(*pmd)) + return true;
pte = pte_offset_kernel(pmd, addr); return pte_present(*pte);
From: Akhil R akhilrajeev@nvidia.com
[ Upstream commit 9f855779a3874eee70e9f6be57b5f7774f14e510 ]
Update the msg->len value correctly for SMBUS block read. The discrepancy went unnoticed as msg->len is used in SMBUS transfers only when a PEC byte is added.
Fixes: d7583c8a5748 ("i2c: tegra: Add SMBus block read function") Signed-off-by: Akhil R akhilrajeev@nvidia.com Acked-by: Thierry Reding treding@nvidia.com Signed-off-by: Wolfram Sang wsa@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/i2c/busses/i2c-tegra.c | 40 +++++++++++++++++++++++----------- 1 file changed, 27 insertions(+), 13 deletions(-)
diff --git a/drivers/i2c/busses/i2c-tegra.c b/drivers/i2c/busses/i2c-tegra.c index 6aab84c8d22b4..157066f06a32d 100644 --- a/drivers/i2c/busses/i2c-tegra.c +++ b/drivers/i2c/busses/i2c-tegra.c @@ -242,9 +242,10 @@ struct tegra_i2c_hw_feature { * @is_dvc: identifies the DVC I2C controller, has a different register layout * @is_vi: identifies the VI I2C controller, has a different register layout * @msg_complete: transfer completion notifier + * @msg_buf_remaining: size of unsent data in the message buffer + * @msg_len: length of message in current transfer * @msg_err: error code for completed message * @msg_buf: pointer to current message data - * @msg_buf_remaining: size of unsent data in the message buffer * @msg_read: indicates that the transfer is a read access * @timings: i2c timings information like bus frequency * @multimaster_mode: indicates that I2C controller is in multi-master mode @@ -277,6 +278,7 @@ struct tegra_i2c_dev {
struct completion msg_complete; size_t msg_buf_remaining; + unsigned int msg_len; int msg_err; u8 *msg_buf;
@@ -1169,7 +1171,7 @@ static void tegra_i2c_push_packet_header(struct tegra_i2c_dev *i2c_dev, else i2c_writel(i2c_dev, packet_header, I2C_TX_FIFO);
- packet_header = msg->len - 1; + packet_header = i2c_dev->msg_len - 1;
if (i2c_dev->dma_mode && !i2c_dev->msg_read) *dma_buf++ = packet_header; @@ -1242,20 +1244,32 @@ static int tegra_i2c_xfer_msg(struct tegra_i2c_dev *i2c_dev, return err;
i2c_dev->msg_buf = msg->buf; + i2c_dev->msg_len = msg->len;
- /* The condition true implies smbus block read and len is already read */ - if (msg->flags & I2C_M_RECV_LEN && end_state != MSG_END_CONTINUE) - i2c_dev->msg_buf = msg->buf + 1; - - i2c_dev->msg_buf_remaining = msg->len; i2c_dev->msg_err = I2C_ERR_NONE; i2c_dev->msg_read = !!(msg->flags & I2C_M_RD); reinit_completion(&i2c_dev->msg_complete);
+ /* + * For SMBUS block read command, read only 1 byte in the first transfer. + * Adjust that 1 byte for the next transfer in the msg buffer and msg + * length. + */ + if (msg->flags & I2C_M_RECV_LEN) { + if (end_state == MSG_END_CONTINUE) { + i2c_dev->msg_len = 1; + } else { + i2c_dev->msg_buf += 1; + i2c_dev->msg_len -= 1; + } + } + + i2c_dev->msg_buf_remaining = i2c_dev->msg_len; + if (i2c_dev->msg_read) - xfer_size = msg->len; + xfer_size = i2c_dev->msg_len; else - xfer_size = msg->len + I2C_PACKET_HEADER_SIZE; + xfer_size = i2c_dev->msg_len + I2C_PACKET_HEADER_SIZE;
xfer_size = ALIGN(xfer_size, BYTES_PER_FIFO_WORD);
@@ -1295,7 +1309,7 @@ static int tegra_i2c_xfer_msg(struct tegra_i2c_dev *i2c_dev, if (!i2c_dev->msg_read) { if (i2c_dev->dma_mode) { memcpy(i2c_dev->dma_buf + I2C_PACKET_HEADER_SIZE, - msg->buf, msg->len); + msg->buf, i2c_dev->msg_len);
dma_sync_single_for_device(i2c_dev->dma_dev, i2c_dev->dma_phys, @@ -1352,7 +1366,7 @@ static int tegra_i2c_xfer_msg(struct tegra_i2c_dev *i2c_dev, i2c_dev->dma_phys, xfer_size, DMA_FROM_DEVICE);
- memcpy(i2c_dev->msg_buf, i2c_dev->dma_buf, msg->len); + memcpy(i2c_dev->msg_buf, i2c_dev->dma_buf, i2c_dev->msg_len); } }
@@ -1408,8 +1422,8 @@ static int tegra_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msgs[], ret = tegra_i2c_xfer_msg(i2c_dev, &msgs[i], MSG_END_CONTINUE); if (ret) break; - /* Set the read byte as msg len */ - msgs[i].len = msgs[i].buf[0]; + /* Set the msg length from first byte */ + msgs[i].len += msgs[i].buf[0]; dev_dbg(i2c_dev->dev, "reading %d bytes\n", msgs[i].len); } ret = tegra_i2c_xfer_msg(i2c_dev, &msgs[i], end_type);
From: Victor Nogueira victor@mojatatu.com
[ Upstream commit 526f28bd0fbdc699cda31426928802650c1528e5 ]
There are cases where the device is administratively UP, but operationally down. For example, we have a physical device (Nvidia ConnectX-6 Dx, 25Gbps) whose cable was pulled out; here is its ip link output:
5: ens2f1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000 link/ether b8:ce:f6:4b:68:35 brd ff:ff:ff:ff:ff:ff altname enp179s0f1np1
As you can see, it's administratively UP but operationally down. In this case, sending a packet to this port caused a nasty kernel hang (so nasty that we were unable to capture it). Aborting a transmit based on operational status (in addition to administrative status) fixes the issue.
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Acked-by: Jamal Hadi Salim jhs@mojatatu.com Signed-off-by: Victor Nogueira victor@mojatatu.com v1->v2: Add fixes tag v2->v3: Remove blank line between tags + add change log, suggested by Leon Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/sched/act_mirred.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/sched/act_mirred.c b/net/sched/act_mirred.c index 8037ec9b1d311..a61482c5edbe7 100644 --- a/net/sched/act_mirred.c +++ b/net/sched/act_mirred.c @@ -264,7 +264,7 @@ TC_INDIRECT_SCOPE int tcf_mirred_act(struct sk_buff *skb, goto out; }
- if (unlikely(!(dev->flags & IFF_UP))) { + if (unlikely(!(dev->flags & IFF_UP)) || !netif_carrier_ok(dev)) { net_notice_ratelimited("tc mirred to Houston: device %s is down\n", dev->name); goto out;
From: Hayes Wang hayeswang@realtek.com
[ Upstream commit 8ceda6d5a1e5402fd852e6cc59a286ce3dc545ee ]
The flow control feature becomes abnormal if the device sends a pause frame and tx/rx is disabled before sending a release frame. This causes packets to be lost.
Set PLA_RX_FIFO_FULL and PLA_RX_FIFO_EMPTY to zero before disabling tx/rx, and toggle FC_PATCH_TASK before enabling tx/rx to reset the flow control patch and timer. Then the hardware can clear the state, and flow control becomes normal again after tx/rx is enabled.
Besides, remove inline for fc_pause_on_auto() and fc_pause_off_auto().
Fixes: 195aae321c82 ("r8152: support new chips") Signed-off-by: Hayes Wang hayeswang@realtek.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/usb/r8152.c | 56 ++++++++++++++++++++++++++--------------- 1 file changed, 36 insertions(+), 20 deletions(-)
diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c index 23da1d9dafd1f..b0ce524ef1a50 100644 --- a/drivers/net/usb/r8152.c +++ b/drivers/net/usb/r8152.c @@ -5986,6 +5986,25 @@ static void rtl8153_disable(struct r8152 *tp) r8153_aldps_en(tp, true); }
+static u32 fc_pause_on_auto(struct r8152 *tp) +{ + return (ALIGN(mtu_to_size(tp->netdev->mtu), 1024) + 6 * 1024); +} + +static u32 fc_pause_off_auto(struct r8152 *tp) +{ + return (ALIGN(mtu_to_size(tp->netdev->mtu), 1024) + 14 * 1024); +} + +static void r8156_fc_parameter(struct r8152 *tp) +{ + u32 pause_on = tp->fc_pause_on ? tp->fc_pause_on : fc_pause_on_auto(tp); + u32 pause_off = tp->fc_pause_off ? tp->fc_pause_off : fc_pause_off_auto(tp); + + ocp_write_word(tp, MCU_TYPE_PLA, PLA_RX_FIFO_FULL, pause_on / 16); + ocp_write_word(tp, MCU_TYPE_PLA, PLA_RX_FIFO_EMPTY, pause_off / 16); +} + static int rtl8156_enable(struct r8152 *tp) { u32 ocp_data; @@ -5994,6 +6013,7 @@ static int rtl8156_enable(struct r8152 *tp) if (test_bit(RTL8152_UNPLUG, &tp->flags)) return -ENODEV;
+ r8156_fc_parameter(tp); set_tx_qlen(tp); rtl_set_eee_plus(tp); r8153_set_rx_early_timeout(tp); @@ -6025,9 +6045,24 @@ static int rtl8156_enable(struct r8152 *tp) ocp_write_word(tp, MCU_TYPE_USB, USB_L1_CTRL, ocp_data); }
+ ocp_data = ocp_read_word(tp, MCU_TYPE_USB, USB_FW_TASK); + ocp_data &= ~FC_PATCH_TASK; + ocp_write_word(tp, MCU_TYPE_USB, USB_FW_TASK, ocp_data); + usleep_range(1000, 2000); + ocp_data |= FC_PATCH_TASK; + ocp_write_word(tp, MCU_TYPE_USB, USB_FW_TASK, ocp_data); + return rtl_enable(tp); }
+static void rtl8156_disable(struct r8152 *tp) +{ + ocp_write_word(tp, MCU_TYPE_PLA, PLA_RX_FIFO_FULL, 0); + ocp_write_word(tp, MCU_TYPE_PLA, PLA_RX_FIFO_EMPTY, 0); + + rtl8153_disable(tp); +} + static int rtl8156b_enable(struct r8152 *tp) { u32 ocp_data; @@ -6429,25 +6464,6 @@ static void rtl8153c_up(struct r8152 *tp) r8153b_u1u2en(tp, true); }
-static inline u32 fc_pause_on_auto(struct r8152 *tp) -{ - return (ALIGN(mtu_to_size(tp->netdev->mtu), 1024) + 6 * 1024); -} - -static inline u32 fc_pause_off_auto(struct r8152 *tp) -{ - return (ALIGN(mtu_to_size(tp->netdev->mtu), 1024) + 14 * 1024); -} - -static void r8156_fc_parameter(struct r8152 *tp) -{ - u32 pause_on = tp->fc_pause_on ? tp->fc_pause_on : fc_pause_on_auto(tp); - u32 pause_off = tp->fc_pause_off ? tp->fc_pause_off : fc_pause_off_auto(tp); - - ocp_write_word(tp, MCU_TYPE_PLA, PLA_RX_FIFO_FULL, pause_on / 16); - ocp_write_word(tp, MCU_TYPE_PLA, PLA_RX_FIFO_EMPTY, pause_off / 16); -} - static void rtl8156_change_mtu(struct r8152 *tp) { u32 rx_max_size = mtu_to_size(tp->netdev->mtu); @@ -9377,7 +9393,7 @@ static int rtl_ops_init(struct r8152 *tp) case RTL_VER_10: ops->init = r8156_init; ops->enable = rtl8156_enable; - ops->disable = rtl8153_disable; + ops->disable = rtl8156_disable; ops->up = rtl8156_up; ops->down = rtl8156_down; ops->unload = rtl8153_unload;
From: Hayes Wang hayeswang@realtek.com
[ Upstream commit 61b0ad6f58e2066e054c6d4839d67974d2861a7d ]
Fix the poor throughput for 2.5G devices when changing the speed from auto mode to force mode. This patch notifies the MAC when the mode is changed.
Fixes: 195aae321c82 ("r8152: support new chips") Signed-off-by: Hayes Wang hayeswang@realtek.com Reviewed-by: Andrew Lunn andrew@lunn.ch Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/usb/r8152.c | 14 ++++++++++++++ 1 file changed, 14 insertions(+)
diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c index b0ce524ef1a50..a7665accc81c8 100644 --- a/drivers/net/usb/r8152.c +++ b/drivers/net/usb/r8152.c @@ -199,6 +199,7 @@ #define OCP_EEE_AR 0xa41a #define OCP_EEE_DATA 0xa41c #define OCP_PHY_STATUS 0xa420 +#define OCP_INTR_EN 0xa424 #define OCP_NCTL_CFG 0xa42c #define OCP_POWER_CFG 0xa430 #define OCP_EEE_CFG 0xa432 @@ -620,6 +621,9 @@ enum spd_duplex { #define PHY_STAT_LAN_ON 3 #define PHY_STAT_PWRDN 5
+/* OCP_INTR_EN */ +#define INTR_SPEED_FORCE BIT(3) + /* OCP_NCTL_CFG */ #define PGA_RETURN_EN BIT(1)
@@ -7554,6 +7558,11 @@ static void r8156_hw_phy_cfg(struct r8152 *tp) ((swap_a & 0x1f) << 8) | ((swap_a >> 8) & 0x1f)); } + + /* Notify the MAC when the speed is changed to force mode. */ + data = ocp_reg_read(tp, OCP_INTR_EN); + data |= INTR_SPEED_FORCE; + ocp_reg_write(tp, OCP_INTR_EN, data); break; default: break; @@ -7949,6 +7958,11 @@ static void r8156b_hw_phy_cfg(struct r8152 *tp) break; }
+ /* Notify the MAC when the speed is changed to force mode. */ + data = ocp_reg_read(tp, OCP_INTR_EN); + data |= INTR_SPEED_FORCE; + ocp_reg_write(tp, OCP_INTR_EN, data); + if (rtl_phy_patch_request(tp, true, true)) return;
From: Hayes Wang hayeswang@realtek.com
[ Upstream commit cce8334f4aacd9936309a002d4a4de92a07cd2c2 ]
Move the call to r8153b_rx_agg_chg_indicate() for 2.5G devices. r8153b_rx_agg_chg_indicate() has to be called after enabling tx/rx; otherwise, the coalescing settings have no effect.
Fixes: 195aae321c82 ("r8152: support new chips") Signed-off-by: Hayes Wang hayeswang@realtek.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/usb/r8152.c | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c index a7665accc81c8..059d610901d84 100644 --- a/drivers/net/usb/r8152.c +++ b/drivers/net/usb/r8152.c @@ -3027,12 +3027,16 @@ static int rtl_enable(struct r8152 *tp) ocp_write_byte(tp, MCU_TYPE_PLA, PLA_CR, ocp_data);
switch (tp->version) { - case RTL_VER_08: - case RTL_VER_09: - case RTL_VER_14: - r8153b_rx_agg_chg_indicate(tp); + case RTL_VER_01: + case RTL_VER_02: + case RTL_VER_03: + case RTL_VER_04: + case RTL_VER_05: + case RTL_VER_06: + case RTL_VER_07: break; default: + r8153b_rx_agg_chg_indicate(tp); break; }
@@ -3086,7 +3090,6 @@ static void r8153_set_rx_early_timeout(struct r8152 *tp) 640 / 8); ocp_write_word(tp, MCU_TYPE_USB, USB_RX_EXTRA_AGGR_TMR, ocp_data); - r8153b_rx_agg_chg_indicate(tp); break;
default: @@ -3120,7 +3123,6 @@ static void r8153_set_rx_early_size(struct r8152 *tp) case RTL_VER_15: ocp_write_word(tp, MCU_TYPE_USB, USB_RX_EARLY_SIZE, ocp_data / 8); - r8153b_rx_agg_chg_indicate(tp); break; default: WARN_ON_ONCE(1);
From: Andy Moreton andy.moreton@amd.com
[ Upstream commit 281900a923d4c50df109b52a22ae3cdac150159b ]
The sfc driver does not report QSFP module EEPROM contents correctly as only the first page is fetched from hardware.
Commit 0e1a2a3e6e7d ("ethtool: Add SFF-8436 and SFF-8636 max EEPROM length definitions") added ETH_MODULE_SFF_8436_MAX_LEN for the overall size of the EEPROM info, so use that to report the full EEPROM contents.
Fixes: 9b17010da57a ("sfc: Add ethtool -m support for QSFP modules") Signed-off-by: Andy Moreton andy.moreton@amd.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/sfc/mcdi_port_common.c | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ethernet/sfc/mcdi_port_common.c b/drivers/net/ethernet/sfc/mcdi_port_common.c index 899cc16710048..0ab14f3d01d4d 100644 --- a/drivers/net/ethernet/sfc/mcdi_port_common.c +++ b/drivers/net/ethernet/sfc/mcdi_port_common.c @@ -972,12 +972,15 @@ static u32 efx_mcdi_phy_module_type(struct efx_nic *efx)
/* A QSFP+ NIC may actually have an SFP+ module attached. * The ID is page 0, byte 0. + * QSFP28 is of type SFF_8636, however, this is treated + * the same by ethtool, so we can also treat them the same. */ switch (efx_mcdi_phy_get_module_eeprom_byte(efx, 0, 0)) { - case 0x3: + case 0x3: /* SFP */ return MC_CMD_MEDIA_SFP_PLUS; - case 0xc: - case 0xd: + case 0xc: /* QSFP */ + case 0xd: /* QSFP+ */ + case 0x11: /* QSFP28 */ return MC_CMD_MEDIA_QSFP_PLUS; default: return 0; @@ -1075,7 +1078,7 @@ int efx_mcdi_phy_get_module_info(struct efx_nic *efx, struct ethtool_modinfo *mo
case MC_CMD_MEDIA_QSFP_PLUS: modinfo->type = ETH_MODULE_SFF_8436; - modinfo->eeprom_len = ETH_MODULE_SFF_8436_LEN; + modinfo->eeprom_len = ETH_MODULE_SFF_8436_MAX_LEN; break;
default:
From: David Howells dhowells@redhat.com
[ Upstream commit 0d098d83c5d9e107b2df7f5e11f81492f56d2fe7 ]
The hard call timeout is specified in the RXRPC_SET_CALL_TIMEOUT cmsg in seconds, so fix the point at which sendmsg() applies it to the call to convert to jiffies from seconds, not milliseconds.
Fixes: a158bdd3247b ("rxrpc: Fix timeout of a call that hasn't yet been granted a channel") Signed-off-by: David Howells dhowells@redhat.com cc: Marc Dionne marc.dionne@auristor.com cc: "David S. Miller" davem@davemloft.net cc: Eric Dumazet edumazet@google.com cc: Jakub Kicinski kuba@kernel.org cc: Paolo Abeni pabeni@redhat.com cc: linux-afs@lists.infradead.org cc: netdev@vger.kernel.org cc: linux-kernel@vger.kernel.org Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/rxrpc/sendmsg.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c index 6caa47d352ed6..7498a77b5d397 100644 --- a/net/rxrpc/sendmsg.c +++ b/net/rxrpc/sendmsg.c @@ -699,7 +699,7 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len) fallthrough; case 1: if (p.call.timeouts.hard > 0) { - j = msecs_to_jiffies(p.call.timeouts.hard); + j = p.call.timeouts.hard * HZ; now = jiffies; j += now; WRITE_ONCE(call->expect_term_by, j);
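As an aside, the one-character-looking change above hides a factor-of-1000 error, so here is a minimal userspace sketch of the arithmetic (not the rxrpc code; HZ=250 and msecs_to_jiffies_demo() are illustrative stand-ins) showing why feeding a value expressed in seconds to a milliseconds-based conversion collapses a 30 second timeout to tens of milliseconds:

#include <stdio.h>

#define HZ 250	/* illustrative tick rate; real kernels vary */

/* userspace stand-in for the kernel's msecs_to_jiffies() */
static unsigned long msecs_to_jiffies_demo(unsigned long msecs)
{
	return (msecs * HZ) / 1000;
}

int main(void)
{
	unsigned long hard_secs = 30;	/* RXRPC_SET_CALL_TIMEOUT is given in seconds */

	/* buggy: treats the seconds value as milliseconds */
	printf("wrong: %lu jiffies (~%lu ms)\n",
	       msecs_to_jiffies_demo(hard_secs),
	       msecs_to_jiffies_demo(hard_secs) * 1000 / HZ);

	/* fixed: seconds * HZ */
	printf("right: %lu jiffies (~%lu s)\n",
	       hard_secs * HZ, (hard_secs * HZ) / HZ);
	return 0;
}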
From: David Howells dhowells@redhat.com
[ Upstream commit 0eb362d254814ce04848730bf32e75b8ee1a4d6c ]
When sendmsg() creates an rxrpc call, it queues it to wait for a connection and channel to be assigned and then waits before it can start shovelling data as the encrypted DATA packet content includes a summary of the connection parameters.
However, sendmsg() may get interrupted before a connection gets assigned and further sendmsg() calls will fail with EBUSY until an assignment is made.
Fix this so that the call can at least be aborted without failing on EBUSY. We have to be careful here as sendmsg() mustn't be allowed to start the call timer if the call doesn't yet have a connection assigned as an oops may follow shortly thereafter.
Fixes: 540b1c48c37a ("rxrpc: Fix deadlock between call creation and sendmsg/recvmsg") Reported-by: Marc Dionne marc.dionne@auristor.com Signed-off-by: David Howells dhowells@redhat.com cc: "David S. Miller" davem@davemloft.net cc: Eric Dumazet edumazet@google.com cc: Jakub Kicinski kuba@kernel.org cc: Paolo Abeni pabeni@redhat.com cc: linux-afs@lists.infradead.org cc: netdev@vger.kernel.org cc: linux-kernel@vger.kernel.org Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/rxrpc/sendmsg.c | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c index 7498a77b5d397..c1b074c17b33e 100644 --- a/net/rxrpc/sendmsg.c +++ b/net/rxrpc/sendmsg.c @@ -656,10 +656,13 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len) goto out_put_unlock; } else { switch (rxrpc_call_state(call)) { - case RXRPC_CALL_UNINITIALISED: case RXRPC_CALL_CLIENT_AWAIT_CONN: - case RXRPC_CALL_SERVER_PREALLOC: case RXRPC_CALL_SERVER_SECURING: + if (p.command == RXRPC_CMD_SEND_ABORT) + break; + fallthrough; + case RXRPC_CALL_UNINITIALISED: + case RXRPC_CALL_SERVER_PREALLOC: rxrpc_put_call(call, rxrpc_call_put_sendmsg); ret = -EBUSY; goto error_release_sock;
From: David Howells dhowells@redhat.com
[ Upstream commit db099c625b13a74d462521a46d98a8ce5b53af5d ]
afs_make_call() calls rxrpc_kernel_begin_call() to begin a call (which may get stalled in the background waiting for a connection to become available); it then calls rxrpc_kernel_set_max_life() to set the timeouts - but that starts the call timer so the call timer might then expire before we get a connection assigned - leading to the following oops if the call stalled:
BUG: kernel NULL pointer dereference, address: 0000000000000000
...
CPU: 1 PID: 5111 Comm: krxrpcio/0 Not tainted 6.3.0-rc7-build3+ #701
RIP: 0010:rxrpc_alloc_txbuf+0xc0/0x157
...
Call Trace:
 <TASK>
 rxrpc_send_ACK+0x50/0x13b
 rxrpc_input_call_event+0x16a/0x67d
 rxrpc_io_thread+0x1b6/0x45f
 ? _raw_spin_unlock_irqrestore+0x1f/0x35
 ? rxrpc_input_packet+0x519/0x519
 kthread+0xe7/0xef
 ? kthread_complete_and_exit+0x1b/0x1b
 ret_from_fork+0x22/0x30
Fix this by noting the timeouts in struct rxrpc_call when the call is created. The timer will be started when the first packet is transmitted.
It shouldn't be possible to trigger this directly from userspace through AF_RXRPC as sendmsg() will return EBUSY if the call is in the waiting-for-conn state if it dropped out of the wait due to a signal.
Fixes: 9d35d880e0e4 ("rxrpc: Move client call connection to the I/O thread") Reported-by: Marc Dionne marc.dionne@auristor.com Signed-off-by: David Howells dhowells@redhat.com cc: "David S. Miller" davem@davemloft.net cc: Eric Dumazet edumazet@google.com cc: Jakub Kicinski kuba@kernel.org cc: Paolo Abeni pabeni@redhat.com cc: linux-afs@lists.infradead.org cc: netdev@vger.kernel.org cc: linux-kernel@vger.kernel.org Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- fs/afs/afs.h | 4 ++-- fs/afs/internal.h | 2 +- fs/afs/rxrpc.c | 8 +++----- include/net/af_rxrpc.h | 21 +++++++++++---------- net/rxrpc/af_rxrpc.c | 3 +++ net/rxrpc/ar-internal.h | 1 + net/rxrpc/call_object.c | 9 ++++++++- net/rxrpc/sendmsg.c | 1 + 8 files changed, 30 insertions(+), 19 deletions(-)
diff --git a/fs/afs/afs.h b/fs/afs/afs.h index 432cb4b239614..81815724db6c9 100644 --- a/fs/afs/afs.h +++ b/fs/afs/afs.h @@ -19,8 +19,8 @@ #define AFSPATHMAX 1024 /* Maximum length of a pathname plus NUL */ #define AFSOPAQUEMAX 1024 /* Maximum length of an opaque field */
-#define AFS_VL_MAX_LIFESPAN (120 * HZ) -#define AFS_PROBE_MAX_LIFESPAN (30 * HZ) +#define AFS_VL_MAX_LIFESPAN 120 +#define AFS_PROBE_MAX_LIFESPAN 30
typedef u64 afs_volid_t; typedef u64 afs_vnodeid_t; diff --git a/fs/afs/internal.h b/fs/afs/internal.h index fd8567b98e2bb..cd23a3c5b6ace 100644 --- a/fs/afs/internal.h +++ b/fs/afs/internal.h @@ -127,7 +127,7 @@ struct afs_call { spinlock_t state_lock; int error; /* error code */ u32 abort_code; /* Remote abort ID or 0 */ - unsigned int max_lifespan; /* Maximum lifespan to set if not 0 */ + unsigned int max_lifespan; /* Maximum lifespan in secs to set if not 0 */ unsigned request_size; /* size of request data */ unsigned reply_max; /* maximum size of reply */ unsigned count2; /* count used in unmarshalling */ diff --git a/fs/afs/rxrpc.c b/fs/afs/rxrpc.c index 7817e2b860e5e..6862e3dde364b 100644 --- a/fs/afs/rxrpc.c +++ b/fs/afs/rxrpc.c @@ -334,7 +334,9 @@ void afs_make_call(struct afs_addr_cursor *ac, struct afs_call *call, gfp_t gfp) /* create a call */ rxcall = rxrpc_kernel_begin_call(call->net->socket, srx, call->key, (unsigned long)call, - tx_total_len, gfp, + tx_total_len, + call->max_lifespan, + gfp, (call->async ? afs_wake_up_async_call : afs_wake_up_call_waiter), @@ -349,10 +351,6 @@ void afs_make_call(struct afs_addr_cursor *ac, struct afs_call *call, gfp_t gfp) }
call->rxcall = rxcall; - - if (call->max_lifespan) - rxrpc_kernel_set_max_life(call->net->socket, rxcall, - call->max_lifespan); call->issue_time = ktime_get_real();
/* send the request */ diff --git a/include/net/af_rxrpc.h b/include/net/af_rxrpc.h index ba717eac0229a..73644bd42a3f9 100644 --- a/include/net/af_rxrpc.h +++ b/include/net/af_rxrpc.h @@ -40,16 +40,17 @@ typedef void (*rxrpc_user_attach_call_t)(struct rxrpc_call *, unsigned long); void rxrpc_kernel_new_call_notification(struct socket *, rxrpc_notify_new_call_t, rxrpc_discard_new_call_t); -struct rxrpc_call *rxrpc_kernel_begin_call(struct socket *, - struct sockaddr_rxrpc *, - struct key *, - unsigned long, - s64, - gfp_t, - rxrpc_notify_rx_t, - bool, - enum rxrpc_interruptibility, - unsigned int); +struct rxrpc_call *rxrpc_kernel_begin_call(struct socket *sock, + struct sockaddr_rxrpc *srx, + struct key *key, + unsigned long user_call_ID, + s64 tx_total_len, + u32 hard_timeout, + gfp_t gfp, + rxrpc_notify_rx_t notify_rx, + bool upgrade, + enum rxrpc_interruptibility interruptibility, + unsigned int debug_id); int rxrpc_kernel_send_data(struct socket *, struct rxrpc_call *, struct msghdr *, size_t, rxrpc_notify_end_tx_t); diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c index ebbd4a1c3f86e..de5bebc99a4b5 100644 --- a/net/rxrpc/af_rxrpc.c +++ b/net/rxrpc/af_rxrpc.c @@ -265,6 +265,7 @@ static int rxrpc_listen(struct socket *sock, int backlog) * @key: The security context to use (defaults to socket setting) * @user_call_ID: The ID to use * @tx_total_len: Total length of data to transmit during the call (or -1) + * @hard_timeout: The maximum lifespan of the call in sec * @gfp: The allocation constraints * @notify_rx: Where to send notifications instead of socket queue * @upgrade: Request service upgrade for call @@ -283,6 +284,7 @@ struct rxrpc_call *rxrpc_kernel_begin_call(struct socket *sock, struct key *key, unsigned long user_call_ID, s64 tx_total_len, + u32 hard_timeout, gfp_t gfp, rxrpc_notify_rx_t notify_rx, bool upgrade, @@ -313,6 +315,7 @@ struct rxrpc_call *rxrpc_kernel_begin_call(struct socket *sock, p.tx_total_len = tx_total_len; p.interruptibility = interruptibility; p.kernel = true; + p.timeouts.hard = hard_timeout;
memset(&cp, 0, sizeof(cp)); cp.local = rx->local; diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index 433060cade038..ed0ef2dbd592b 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -614,6 +614,7 @@ struct rxrpc_call { unsigned long expect_term_by; /* When we expect call termination by */ u32 next_rx_timo; /* Timeout for next Rx packet (jif) */ u32 next_req_timo; /* Timeout for next Rx request packet (jif) */ + u32 hard_timo; /* Maximum lifetime or 0 (jif) */ struct timer_list timer; /* Combined event timer */ struct work_struct destroyer; /* In-process-context destroyer */ rxrpc_notify_rx_t notify_rx; /* kernel service Rx notification function */ diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c index 7ce562f6dc8d5..80ed67f4f3a7d 100644 --- a/net/rxrpc/call_object.c +++ b/net/rxrpc/call_object.c @@ -225,6 +225,13 @@ static struct rxrpc_call *rxrpc_alloc_client_call(struct rxrpc_sock *rx, if (cp->exclusive) __set_bit(RXRPC_CALL_EXCLUSIVE, &call->flags);
+ if (p->timeouts.normal) + call->next_rx_timo = min(msecs_to_jiffies(p->timeouts.normal), 1UL); + if (p->timeouts.idle) + call->next_req_timo = min(msecs_to_jiffies(p->timeouts.idle), 1UL); + if (p->timeouts.hard) + call->hard_timo = p->timeouts.hard * HZ; + ret = rxrpc_init_client_call_security(call); if (ret < 0) { rxrpc_prefail_call(call, RXRPC_CALL_LOCAL_ERROR, ret); @@ -256,7 +263,7 @@ void rxrpc_start_call_timer(struct rxrpc_call *call) call->keepalive_at = j; call->expect_rx_by = j; call->expect_req_by = j; - call->expect_term_by = j; + call->expect_term_by = j + call->hard_timo; call->timer.expires = now; }
diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c index c1b074c17b33e..8e0b94714e849 100644 --- a/net/rxrpc/sendmsg.c +++ b/net/rxrpc/sendmsg.c @@ -651,6 +651,7 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len) if (IS_ERR(call)) return PTR_ERR(call); /* ... and we have the call lock. */ + p.call.nr_timeouts = 0; ret = 0; if (rxrpc_call_is_complete(call)) goto out_put_unlock;
From: Guo Ren guoren@linux.alibaba.com
[ Upstream commit f9c4bbddece7eff1155c70d48e3c9c2a01b9d778 ]
../arch/riscv/kernel/compat_syscall_table.c:12:41: warning: initialized field overwritten [-Woverride-init]
   12 | #define __SYSCALL(nr, call) [nr] = (call),
      |                                         ^
../include/uapi/asm-generic/unistd.h:567:1: note: in expansion of macro '__SYSCALL'
  567 | __SYSCALL(__NR_semget, sys_semget)
Fixes: 59c10c52f573 ("riscv: compat: syscall: Add compat_sys_call_table implementation") Reviewed-by: Conor Dooley conor.dooley@microchip.com Reported-by: kernel test robot lkp@intel.com Tested-by: Jisheng Zhang jszhang@kernel.org Signed-off-by: Guo Ren guoren@linux.alibaba.com Signed-off-by: Guo Ren guoren@kernel.org Signed-off-by: Drew Fustini dfustini@baylibre.com Link: https://lore.kernel.org/r/20230501223353.2833899-1-dfustini@baylibre.com Signed-off-by: Palmer Dabbelt palmer@rivosinc.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/riscv/kernel/Makefile | 1 + 1 file changed, 1 insertion(+)
diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile index 4cf303a779ab9..8d02b9d05738d 100644 --- a/arch/riscv/kernel/Makefile +++ b/arch/riscv/kernel/Makefile @@ -9,6 +9,7 @@ CFLAGS_REMOVE_patch.o = $(CC_FLAGS_FTRACE) CFLAGS_REMOVE_sbi.o = $(CC_FLAGS_FTRACE) endif CFLAGS_syscall_table.o += $(call cc-option,-Wno-override-init,) +CFLAGS_compat_syscall_table.o += $(call cc-option,-Wno-override-init,)
ifdef CONFIG_KEXEC AFLAGS_kexec_relocate.o := -mcmodel=medany $(call cc-option,-mno-relax)
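For readers unfamiliar with the warning being silenced here, below is a minimal sketch (a hypothetical two-entry table, not the real syscall table) of the designated-initializer pattern that triggers -Woverride-init: every slot is first pointed at a default handler and then selectively overridden, which is intentional, hence -Wno-override-init.

/* build: gcc -Wextra demo.c
 * -Woverride-init (enabled by -Wextra) warns about the override below,
 * which is why the kernel passes -Wno-override-init for these tables.
 */
#include <stdio.h>

typedef long (*syscall_fn_t)(void);

static long sys_ni_syscall(void) { return -38; }	/* -ENOSYS */
static long sys_demo(void)       { return 0; }

#define NR_DEMO 1

static const syscall_fn_t demo_call_table[2] = {
	/* first, point every slot at the "not implemented" handler ... */
	[0 ... 1] = sys_ni_syscall,
	/* ... then override the implemented ones; this re-initialisation
	 * is what -Woverride-init complains about.
	 */
	[NR_DEMO] = sys_demo,
};

int main(void)
{
	printf("%ld %ld\n", demo_call_table[0](), demo_call_table[NR_DEMO]());
	return 0;
}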
From: Felix Fietkau nbd@nbd.name
[ Upstream commit c6d96df9fa2c1d19525239d4262889cce594ce6c ]
Through testing I found out that hardware vlan rx offload support seems to have some hardware issues. At least when using multiple MACs and when receiving tagged packets on the secondary MAC, the hardware can sometimes start to emit wrong tags on the first MAC as well.
In order to avoid such issues, drop the feature configuration and use the offload feature only for DSA hardware untagging on MT7621/MT7622 devices where this feature works properly.
Fixes: 08666cbb7dd5 ("net: ethernet: mtk_eth_soc: add support for configuring vlan rx offload") Tested-by: Frank Wunderlich frank-w@public-files.de Signed-off-by: Felix Fietkau nbd@nbd.name Signed-off-by: Frank Wunderlich frank-w@public-files.de Tested-by: Arınç ÜNAL arinc.unal@arinc9.com Acked-by: Arınç ÜNAL arinc.unal@arinc9.com Link: https://lore.kernel.org/r/20230426172153.8352-1-linux@fw-web.de Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/mediatek/mtk_eth_soc.c | 106 ++++++++------------ drivers/net/ethernet/mediatek/mtk_eth_soc.h | 1 - 2 files changed, 40 insertions(+), 67 deletions(-)
diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c index f56d4e7d4ae5d..4671d738a37c7 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c @@ -1870,9 +1870,7 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget,
while (done < budget) { unsigned int pktlen, *rxdcsum; - bool has_hwaccel_tag = false; struct net_device *netdev; - u16 vlan_proto, vlan_tci; dma_addr_t dma_addr; u32 hash, reason; int mac = 0; @@ -2007,31 +2005,16 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget, skb_checksum_none_assert(skb); skb->protocol = eth_type_trans(skb, netdev);
- if (netdev->features & NETIF_F_HW_VLAN_CTAG_RX) { - if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) { - if (trxd.rxd3 & RX_DMA_VTAG_V2) { - vlan_proto = RX_DMA_VPID(trxd.rxd4); - vlan_tci = RX_DMA_VID(trxd.rxd4); - has_hwaccel_tag = true; - } - } else if (trxd.rxd2 & RX_DMA_VTAG) { - vlan_proto = RX_DMA_VPID(trxd.rxd3); - vlan_tci = RX_DMA_VID(trxd.rxd3); - has_hwaccel_tag = true; - } - } - /* When using VLAN untagging in combination with DSA, the * hardware treats the MTK special tag as a VLAN and untags it. */ - if (has_hwaccel_tag && netdev_uses_dsa(netdev)) { - unsigned int port = vlan_proto & GENMASK(2, 0); + if (!MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2) && + (trxd.rxd2 & RX_DMA_VTAG) && netdev_uses_dsa(netdev)) { + unsigned int port = RX_DMA_VPID(trxd.rxd3) & GENMASK(2, 0);
if (port < ARRAY_SIZE(eth->dsa_meta) && eth->dsa_meta[port]) skb_dst_set_noref(skb, ð->dsa_meta[port]->dst); - } else if (has_hwaccel_tag) { - __vlan_hwaccel_put_tag(skb, htons(vlan_proto), vlan_tci); }
if (reason == MTK_PPE_CPU_REASON_HIT_UNBIND_RATE_REACHED) @@ -2859,29 +2842,11 @@ static netdev_features_t mtk_fix_features(struct net_device *dev,
static int mtk_set_features(struct net_device *dev, netdev_features_t features) { - struct mtk_mac *mac = netdev_priv(dev); - struct mtk_eth *eth = mac->hw; netdev_features_t diff = dev->features ^ features; - int i;
if ((diff & NETIF_F_LRO) && !(features & NETIF_F_LRO)) mtk_hwlro_netdev_disable(dev);
- /* Set RX VLAN offloading */ - if (!(diff & NETIF_F_HW_VLAN_CTAG_RX)) - return 0; - - mtk_w32(eth, !!(features & NETIF_F_HW_VLAN_CTAG_RX), - MTK_CDMP_EG_CTRL); - - /* sync features with other MAC */ - for (i = 0; i < MTK_MAC_COUNT; i++) { - if (!eth->netdev[i] || eth->netdev[i] == dev) - continue; - eth->netdev[i]->features &= ~NETIF_F_HW_VLAN_CTAG_RX; - eth->netdev[i]->features |= features & NETIF_F_HW_VLAN_CTAG_RX; - } - return 0; }
@@ -3184,30 +3149,6 @@ static int mtk_open(struct net_device *dev) struct mtk_eth *eth = mac->hw; int i, err;
- if (mtk_uses_dsa(dev) && !eth->prog) { - for (i = 0; i < ARRAY_SIZE(eth->dsa_meta); i++) { - struct metadata_dst *md_dst = eth->dsa_meta[i]; - - if (md_dst) - continue; - - md_dst = metadata_dst_alloc(0, METADATA_HW_PORT_MUX, - GFP_KERNEL); - if (!md_dst) - return -ENOMEM; - - md_dst->u.port_info.port_id = i; - eth->dsa_meta[i] = md_dst; - } - } else { - /* Hardware special tag parsing needs to be disabled if at least - * one MAC does not use DSA. - */ - u32 val = mtk_r32(eth, MTK_CDMP_IG_CTRL); - val &= ~MTK_CDMP_STAG_EN; - mtk_w32(eth, val, MTK_CDMP_IG_CTRL); - } - err = phylink_of_phy_connect(mac->phylink, mac->of_node, 0); if (err) { netdev_err(dev, "%s: could not attach PHY: %d\n", __func__, @@ -3246,6 +3187,40 @@ static int mtk_open(struct net_device *dev) phylink_start(mac->phylink); netif_tx_start_all_queues(dev);
+ if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) + return 0; + + if (mtk_uses_dsa(dev) && !eth->prog) { + for (i = 0; i < ARRAY_SIZE(eth->dsa_meta); i++) { + struct metadata_dst *md_dst = eth->dsa_meta[i]; + + if (md_dst) + continue; + + md_dst = metadata_dst_alloc(0, METADATA_HW_PORT_MUX, + GFP_KERNEL); + if (!md_dst) + return -ENOMEM; + + md_dst->u.port_info.port_id = i; + eth->dsa_meta[i] = md_dst; + } + } else { + /* Hardware special tag parsing needs to be disabled if at least + * one MAC does not use DSA. + */ + u32 val = mtk_r32(eth, MTK_CDMP_IG_CTRL); + + val &= ~MTK_CDMP_STAG_EN; + mtk_w32(eth, val, MTK_CDMP_IG_CTRL); + + val = mtk_r32(eth, MTK_CDMQ_IG_CTRL); + val &= ~MTK_CDMQ_STAG_EN; + mtk_w32(eth, val, MTK_CDMQ_IG_CTRL); + + mtk_w32(eth, 0, MTK_CDMP_EG_CTRL); + } + return 0; }
@@ -3572,10 +3547,9 @@ static int mtk_hw_init(struct mtk_eth *eth) if (!MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) { val = mtk_r32(eth, MTK_CDMP_IG_CTRL); mtk_w32(eth, val | MTK_CDMP_STAG_EN, MTK_CDMP_IG_CTRL); - }
- /* Enable RX VLan Offloading */ - mtk_w32(eth, 1, MTK_CDMP_EG_CTRL); + mtk_w32(eth, 1, MTK_CDMP_EG_CTRL); + }
/* set interrupt delays based on current Net DIM sample */ mtk_dim_rx(ð->rx_dim.work); @@ -4176,7 +4150,7 @@ static int mtk_add_mac(struct mtk_eth *eth, struct device_node *np) eth->netdev[id]->hw_features |= NETIF_F_LRO;
eth->netdev[id]->vlan_features = eth->soc->hw_features & - ~(NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_CTAG_RX); + ~NETIF_F_HW_VLAN_CTAG_TX; eth->netdev[id]->features |= eth->soc->hw_features; eth->netdev[id]->ethtool_ops = &mtk_ethtool_ops;
diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h index d4b4f9eaa4419..79112bd3e952e 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h @@ -48,7 +48,6 @@ #define MTK_HW_FEATURES (NETIF_F_IP_CSUM | \ NETIF_F_RXCSUM | \ NETIF_F_HW_VLAN_CTAG_TX | \ - NETIF_F_HW_VLAN_CTAG_RX | \ NETIF_F_SG | NETIF_F_TSO | \ NETIF_F_TSO6 | \ NETIF_F_IPV6_CSUM |\
From: Radhakrishna Sripada radhakrishna.sripada@intel.com
[ Upstream commit 6ece90e3665a9b7fb2637fcca26cebd42991580b ]
CPU transcoder mask is used to iterate over the available CPU transcoders in the macro for_each_cpu_transcoder().
The macro is broken on MTL; the breakage was highlighted when per-transcoder audio state tracking was added in [1].
Add the missing CPU transcoder mask which is similar to ADL-P mask but without DSI transcoders.
[1]: https://patchwork.freedesktop.org/patch/523723/
Fixes: 7835303982d1 ("drm/i915/mtl: Add MeteorLake PCI IDs") Cc: Ville Syrjälä ville.syrjala@linux.intel.com Signed-off-by: Radhakrishna Sripada radhakrishna.sripada@intel.com Acked-by: Haridhar Kalvala haridhar.kalvala@intel.com Reviewed-by: Gustavo Sousa gustavo.sousa@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20230420221248.2511314-1-radha... (cherry picked from commit bddc18913bd44adae5c828fd514d570f43ba1576) Signed-off-by: Joonas Lahtinen joonas.lahtinen@linux.intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/i915/i915_pci.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c index 4fada7ebe8d82..36cc4fc87c48c 100644 --- a/drivers/gpu/drm/i915/i915_pci.c +++ b/drivers/gpu/drm/i915/i915_pci.c @@ -1133,6 +1133,8 @@ static const struct intel_gt_definition xelpmp_extra_gt[] = { static const struct intel_device_info mtl_info = { XE_HP_FEATURES, XE_LPDP_FEATURES, + .__runtime.cpu_transcoder_mask = BIT(TRANSCODER_A) | BIT(TRANSCODER_B) | + BIT(TRANSCODER_C) | BIT(TRANSCODER_D), /* * Real graphics IP version will be obtained from hardware GMD_ID * register. Value provided here is just for sanity checking.
From: Jeremy Sowden jeremy@azazel.net
[ Upstream commit de4773f0235acf74554f6a64ea60adc0d7b01895 ]
1. Don't hard-code pkg-config
2. Remove distro-specific default for CFLAGS
3. Use pkg-config for LDLIBS
Fixes: a50a88f026fb ("selftests: netfilter: fix a build error on openSUSE") Suggested-by: Jan Engelhardt jengelh@inai.de Signed-off-by: Jeremy Sowden jeremy@azazel.net Signed-off-by: Pablo Neira Ayuso pablo@netfilter.org Signed-off-by: Sasha Levin sashal@kernel.org --- tools/testing/selftests/netfilter/Makefile | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/netfilter/Makefile b/tools/testing/selftests/netfilter/Makefile index 4504ee07be08d..3686bfa6c58d7 100644 --- a/tools/testing/selftests/netfilter/Makefile +++ b/tools/testing/selftests/netfilter/Makefile @@ -8,8 +8,11 @@ TEST_PROGS := nft_trans_stress.sh nft_fib.sh nft_nat.sh bridge_brouter.sh \ ipip-conntrack-mtu.sh conntrack_tcp_unreplied.sh \ conntrack_vrf.sh nft_synproxy.sh rpath.sh
-CFLAGS += $(shell pkg-config --cflags libmnl 2>/dev/null || echo "-I/usr/include/libmnl") -LDLIBS = -lmnl +HOSTPKG_CONFIG := pkg-config + +CFLAGS += $(shell $(HOSTPKG_CONFIG) --cflags libmnl 2>/dev/null) +LDLIBS += $(shell $(HOSTPKG_CONFIG) --libs libmnl 2>/dev/null || echo -lmnl) + TEST_GEN_FILES = nf-queue connect_close
include ../lib.mk
From: Geetha sowjanya gakula@marvell.com
[ Upstream commit 048486f81d01db4d100af021ee2ea211d19732a0 ]
The APR table contains the lmtst base address of each PF/VF. These entries are updated by the PF/VF during device probe. The lmtst address is fetched from HW using the "TXN_REQ" and "ADDR_RSP_STS" registers. The lock protects these registers from being overwritten when multiple PFs invoke rvu_get_lmtaddr() simultaneously.
For example, if PF1 submits a request and, before it reads the response, PF2 gets scheduled and submits its own request, then PF1's response is overwritten by PF2's response.
Fixes: 893ae97214c3 ("octeontx2-af: cn10k: Support configurable LMTST regions") Signed-off-by: Geetha sowjanya gakula@marvell.com Signed-off-by: Sunil Kovvuri Goutham sgoutham@marvell.com Signed-off-by: Sai Krishna saikrishnag@marvell.com Reviewed-by: Simon Horman simon.horman@corigine.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- .../net/ethernet/marvell/octeontx2/af/rvu_cn10k.c | 13 ++++++++----- 1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c index 7dbbc115cde42..f9faa5b23bb9d 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c @@ -60,13 +60,14 @@ static int rvu_get_lmtaddr(struct rvu *rvu, u16 pcifunc, u64 iova, u64 *lmt_addr) { u64 pa, val, pf; - int err; + int err = 0;
if (!iova) { dev_err(rvu->dev, "%s Requested Null address for transulation\n", __func__); return -EINVAL; }
+ mutex_lock(&rvu->rsrc_lock); rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_SMMU_ADDR_REQ, iova); pf = rvu_get_pf(pcifunc) & 0x1F; val = BIT_ULL(63) | BIT_ULL(14) | BIT_ULL(13) | pf << 8 | @@ -76,12 +77,13 @@ static int rvu_get_lmtaddr(struct rvu *rvu, u16 pcifunc, err = rvu_poll_reg(rvu, BLKADDR_RVUM, RVU_AF_SMMU_ADDR_RSP_STS, BIT_ULL(0), false); if (err) { dev_err(rvu->dev, "%s LMTLINE iova transulation failed\n", __func__); - return err; + goto exit; } val = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_SMMU_ADDR_RSP_STS); if (val & ~0x1ULL) { dev_err(rvu->dev, "%s LMTLINE iova transulation failed err:%llx\n", __func__, val); - return -EIO; + err = -EIO; + goto exit; } /* PA[51:12] = RVU_AF_SMMU_TLN_FLIT0[57:18] * PA[11:0] = IOVA[11:0] @@ -89,8 +91,9 @@ static int rvu_get_lmtaddr(struct rvu *rvu, u16 pcifunc, pa = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_SMMU_TLN_FLIT0) >> 18; pa &= GENMASK_ULL(39, 0); *lmt_addr = (pa << 12) | (iova & 0xFFF); - - return 0; +exit: + mutex_unlock(&rvu->rsrc_lock); + return err; }
static int rvu_update_lmtaddr(struct rvu *rvu, u16 pcifunc, u64 lmt_addr)
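To make the race above concrete, here is a minimal userspace sketch of the same "request register / response register" transaction protected by a lock (pthread mutex and the variable names are illustrative stand-ins, not the rvu driver): the point is that the submit and the read of the response must happen under one critical section, otherwise a second caller can clobber both registers in between.

#include <stdio.h>
#include <pthread.h>

/* Hypothetical stand-ins for the shared request/response "registers". */
static unsigned long long addr_req;	/* written by the requester */
static unsigned long long addr_rsp;	/* filled in by "hardware" */
static pthread_mutex_t lmt_lock = PTHREAD_MUTEX_INITIALIZER;

/* Without the lock, a second caller may overwrite addr_req/addr_rsp between
 * this caller's request and its read of the response, so it would read the
 * other caller's translation.
 */
static unsigned long long translate_iova(unsigned long long iova)
{
	unsigned long long pa;

	pthread_mutex_lock(&lmt_lock);
	addr_req = iova;		/* submit request */
	addr_rsp = iova ^ 0xfff;	/* pretend hardware responded */
	pa = addr_rsp;			/* read response while still holding the lock */
	pthread_mutex_unlock(&lmt_lock);

	return pa;
}

int main(void)
{
	printf("pa = 0x%llx\n", translate_iova(0x1000));
	return 0;
}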
From: Ratheesh Kannoth rkannoth@marvell.com
[ Upstream commit c60a6b90e7890453f09e0d2163d6acadabe3415b ]
In the current driver, the NPC exact match feature was not getting enabled because the configured bit was not read properly. for_each_set_bit_from() needs the end bit to be one position past the last bit in the bitmap in order to read the NPC exact nibble enable bits properly. This patch fixes that.
Fixes: b747923afff8 ("octeontx2-af: Exact match support") Signed-off-by: Ratheesh Kannoth rkannoth@marvell.com Signed-off-by: Sunil Kovvuri Goutham sgoutham@marvell.com Signed-off-by: Sai Krishna saikrishnag@marvell.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c index 006beb5cf98dd..f15efd41972ee 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c @@ -594,8 +594,7 @@ static int npc_scan_kex(struct rvu *rvu, int blkaddr, u8 intf) */ masked_cfg = cfg & NPC_EXACT_NIBBLE; bitnr = NPC_EXACT_NIBBLE_START; - for_each_set_bit_from(bitnr, (unsigned long *)&masked_cfg, - NPC_EXACT_NIBBLE_START) { + for_each_set_bit_from(bitnr, (unsigned long *)&masked_cfg, NPC_EXACT_NIBBLE_END + 1) { npc_scan_exact_result(mcam, bitnr, key_nibble, intf); key_nibble++; }
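Since the exclusive upper bound of the iterator is the whole bug here, a minimal userspace loop approximating for_each_set_bit_from()'s semantics (a plain for loop, not the kernel macro; the bit values are illustrative) shows why passing the start position as the bound scans nothing:

#include <stdio.h>

/* Userspace approximation of for_each_set_bit_from(): visit set bits in
 * [start, size), i.e. 'size' is one past the last bit position to examine.
 */
static void scan_bits(unsigned long map, unsigned int start, unsigned int size)
{
	unsigned int bit;

	for (bit = start; bit < size; bit++)
		if (map & (1UL << bit))
			printf("  bit %u set\n", bit);
}

int main(void)
{
	unsigned long cfg = 0xf0;	/* pretend nibble-enable bits 4..7 are set */

	printf("bound = start (scans nothing, mirrors the bug):\n");
	scan_bits(cfg, 4, 4);
	printf("bound = last bit + 1 (scans bits 4..7, mirrors the fix):\n");
	scan_bits(cfg, 4, 8);
	return 0;
}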
From: Ratheesh Kannoth rkannoth@marvell.com
[ Upstream commit 60999cb83554ebcf6cfff8894bc2c3d99ea858ba ]
In the current driver, the NPC CAM and MEM table sizes are read from the wrong register field offsets. This patch fixes the field offsets so that the correct values are populated on read.
Fixes: b747923afff8 ("octeontx2-af: Exact match support") Signed-off-by: Ratheesh Kannoth rkannoth@marvell.com Signed-off-by: Sunil Kovvuri Goutham sgoutham@marvell.com Signed-off-by: Sai Krishna saikrishnag@marvell.com Reviewed-by: Simon Horman simon.horman@corigine.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c index f69102d20c903..f2d7156262ff1 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c @@ -1878,9 +1878,9 @@ int rvu_npc_exact_init(struct rvu *rvu) rvu->hw->table = table;
/* Read table size, ways and depth */ - table->mem_table.depth = FIELD_GET(GENMASK_ULL(31, 24), npc_const3); table->mem_table.ways = FIELD_GET(GENMASK_ULL(19, 16), npc_const3); - table->cam_table.depth = FIELD_GET(GENMASK_ULL(15, 0), npc_const3); + table->mem_table.depth = FIELD_GET(GENMASK_ULL(15, 0), npc_const3); + table->cam_table.depth = FIELD_GET(GENMASK_ULL(31, 24), npc_const3);
dev_dbg(rvu->dev, "%s: NPC exact match 4way_2k table(ways=%d, depth=%d)\n", __func__, table->mem_table.ways, table->cam_table.depth);
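To illustrate how swapped bit ranges silently produce wrong sizes, here is a minimal userspace sketch of field extraction from a 64-bit constant register (the GENMASK_ULL/FIELD_GET helpers are simplified stand-ins and the register value is made up; only the bit layout follows the diff above):

#include <stdio.h>
#include <stdint.h>

/* simplified stand-ins for the kernel's GENMASK_ULL()/FIELD_GET() */
#define GENMASK_ULL(h, l) (((~0ULL) << (l)) & (~0ULL >> (63 - (h))))
#define FIELD_GET_ULL(mask, reg) (((reg) & (mask)) / ((mask) & ~((mask) << 1)))

int main(void)
{
	/* hypothetical NPC_AF_CONST3 value: mem depth 2048 in bits 15:0,
	 * ways 4 in bits 19:16, cam depth 16 in bits 31:24 (layout per the fix)
	 */
	uint64_t npc_const3 = (16ULL << 24) | (4ULL << 16) | 2048ULL;

	printf("mem depth (15:0)  = %llu\n",
	       (unsigned long long)FIELD_GET_ULL(GENMASK_ULL(15, 0), npc_const3));
	printf("ways      (19:16) = %llu\n",
	       (unsigned long long)FIELD_GET_ULL(GENMASK_ULL(19, 16), npc_const3));
	printf("cam depth (31:24) = %llu\n",
	       (unsigned long long)FIELD_GET_ULL(GENMASK_ULL(31, 24), npc_const3));
	return 0;
}

Reading the depth field from bits 31:24 instead of 15:0, as the old code did, would report 16 where the hardware actually provides 2048.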
From: Ratheesh Kannoth rkannoth@marvell.com
[ Upstream commit 2a6eecc592b4d59a04d513aa25fc0f30d52100cd ]
CN10KB supports a large number of DMAC filter flows. Increase the field size to accommodate them.
Fixes: b747923afff8 ("octeontx2-af: Exact match support") Signed-off-by: Ratheesh Kannoth rkannoth@marvell.com Signed-off-by: Sunil Kovvuri Goutham sgoutham@marvell.com Signed-off-by: Sai Krishna saikrishnag@marvell.com Reviewed-by: Simon Horman simon.horman@corigine.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h index f42b2b65bfd7b..0c8fc66ade82d 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h @@ -335,11 +335,11 @@ struct otx2_flow_config { #define OTX2_PER_VF_VLAN_FLOWS 2 /* Rx + Tx per VF */ #define OTX2_VF_VLAN_RX_INDEX 0 #define OTX2_VF_VLAN_TX_INDEX 1 - u16 max_flows; - u8 dmacflt_max_flows; u32 *bmap_to_dmacindex; unsigned long *dmacflt_bmap; struct list_head flow_list; + u32 dmacflt_max_flows; + u16 max_flows; };
struct otx2_tc_info {
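A brief note on why the field width matters: a u8 counter silently truncates any count above 255. The sketch below (an arbitrary flow count of 4096 chosen purely for illustration) shows the truncation the wider field avoids.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t hw_max_flows = 4096;		/* arbitrary large count for illustration */
	uint8_t  old_field = hw_max_flows;	/* u8 keeps only the low 8 bits */
	uint32_t new_field = hw_max_flows;

	printf("u8 field stores  %u\n", old_field);	/* prints 0 */
	printf("u32 field stores %u\n", new_field);	/* prints 4096 */
	return 0;
}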
From: Hariprasad Kelam hkelam@marvell.com
[ Upstream commit cb5edce271764524b88b1a6866b3e626686d9a33 ]
Upon a physical link change, firmware reports the change to the kernel along with details such as speed, lmac_type_id, etc. The kernel derives lmac_type from the lmac_type_id received from firmware.
In a few scenarios, firmware returns an invalid lmac_type_id, which results in the kernel panic below. This patch adds the missing validation of the lmac_type_id field.
Internal error: Oops: 96000005 [#1] PREEMPT SMP
[ 35.321595] Modules linked in:
[ 35.328982] CPU: 0 PID: 31 Comm: kworker/0:1 Not tainted 5.4.210-g2e3169d8e1bc-dirty #17
[ 35.337014] Hardware name: Marvell CN103XX board (DT)
[ 35.344297] Workqueue: events work_for_cpu_fn
[ 35.352730] pstate: 40400089 (nZcv daIf +PAN -UAO)
[ 35.360267] pc : strncpy+0x10/0x30
[ 35.366595] lr : cgx_link_change_handler+0x90/0x180
Fixes: 61071a871ea6 ("octeontx2-af: Forward CGX link notifications to PFs") Signed-off-by: Hariprasad Kelam hkelam@marvell.com Signed-off-by: Sunil Kovvuri Goutham sgoutham@marvell.com Signed-off-by: Sai Krishna saikrishnag@marvell.com Reviewed-by: Simon Horman simon.horman@corigine.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/marvell/octeontx2/af/cgx.c | 8 ++++++++ 1 file changed, 8 insertions(+)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c index 724df6398bbe2..bd77152bb8d7c 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c @@ -1231,6 +1231,14 @@ static inline void link_status_user_format(u64 lstat, linfo->an = FIELD_GET(RESP_LINKSTAT_AN, lstat); linfo->fec = FIELD_GET(RESP_LINKSTAT_FEC, lstat); linfo->lmac_type_id = FIELD_GET(RESP_LINKSTAT_LMAC_TYPE, lstat); + + if (linfo->lmac_type_id >= LMAC_MODE_MAX) { + dev_err(&cgx->pdev->dev, "Unknown lmac_type_id %d reported by firmware on cgx port%d:%d", + linfo->lmac_type_id, cgx->cgx_id, lmac_id); + strncpy(linfo->lmac_type, "Unknown", LMACTYPE_STR_LEN - 1); + return; + } + lmac_string = cgx_lmactype_string[linfo->lmac_type_id]; strncpy(linfo->lmac_type, lmac_string, LMACTYPE_STR_LEN - 1); }
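The pattern being added is the classic "validate a firmware-provided index before using it as an array subscript". Below is a minimal userspace sketch of that pattern (the table contents, LMACTYPE_STR_LEN value and function names are hypothetical, not the cgx driver's):

#include <stdio.h>
#include <string.h>

#define LMACTYPE_STR_LEN 16

/* hypothetical, abbreviated lmac type table */
static const char * const lmac_type_string[] = {
	"SGMII", "XAUI", "RXAUI", "10G_R",
};
#define LMAC_MODE_MAX (sizeof(lmac_type_string) / sizeof(lmac_type_string[0]))

static void format_lmac_type(unsigned int type_id, char *out)
{
	/* firmware-controlled index: reject anything outside the table
	 * instead of dereferencing lmac_type_string[type_id] blindly
	 */
	if (type_id >= LMAC_MODE_MAX) {
		strncpy(out, "Unknown", LMACTYPE_STR_LEN - 1);
		out[LMACTYPE_STR_LEN - 1] = '\0';
		return;
	}
	strncpy(out, lmac_type_string[type_id], LMACTYPE_STR_LEN - 1);
	out[LMACTYPE_STR_LEN - 1] = '\0';
}

int main(void)
{
	char buf[LMACTYPE_STR_LEN];

	format_lmac_type(2, buf);  printf("id 2  -> %s\n", buf);
	format_lmac_type(42, buf); printf("id 42 -> %s\n", buf);
	return 0;
}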
From: Suman Ghosh sumang@marvell.com
[ Upstream commit 2075bf150ddf320df02c05e242774dc0f73be1a1 ]
During the initial design, the IPv4 ip_flag mask was set to 0xff, which results in filtering only fragments with fragment_offset == 0. As part of the fix, update the mask to 0x20 to filter all fragmented packets irrespective of the fragment_offset value.
Fixes: c672e3727989 ("octeontx2-pf: Add support to filter packet based on IP fragment") Signed-off-by: Suman Ghosh sumang@marvell.com Signed-off-by: Sunil Kovvuri Goutham sgoutham@marvell.com Signed-off-by: Sai Krishna saikrishnag@marvell.com Reviewed-by: Simon Horman simon.horman@corigine.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c index 044cc211424ed..8392f63e433fc 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c @@ -544,7 +544,7 @@ static int otx2_tc_prepare_flow(struct otx2_nic *nic, struct otx2_tc_flow *node, if (match.mask->flags & FLOW_DIS_IS_FRAGMENT) { if (ntohs(flow_spec->etype) == ETH_P_IP) { flow_spec->ip_flag = IPV4_FLAG_MORE; - flow_mask->ip_flag = 0xff; + flow_mask->ip_flag = IPV4_FLAG_MORE; req->features |= BIT_ULL(NPC_IPFRAG_IPV4); } else if (ntohs(flow_spec->etype) == ETH_P_IPV6) { flow_spec->next_header = IPPROTO_FRAGMENT;
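To make the mask change concrete, here is a minimal userspace sketch of a TCAM-style value/mask comparison on the high byte of the IPv4 flags/fragment-offset field (the byte values and helper are illustrative; only IPV4_FLAG_MORE = 0x20 is taken from the driver). With mask 0xff only a fragment whose offset bits are all zero matches; with mask 0x20 only the MF bit is compared, so later fragments match as well.

#include <stdio.h>
#include <stdint.h>

#define IPV4_FLAG_MORE 0x20	/* MF bit in the flags/frag-offset high byte */

/* TCAM-style match: compare only the bits selected by the mask */
static int matches(uint8_t pkt_byte, uint8_t key, uint8_t mask)
{
	return (pkt_byte & mask) == (key & mask);
}

int main(void)
{
	uint8_t first_frag = 0x20;	/* MF=1, offset 0 */
	uint8_t later_frag = 0x24;	/* MF=1, non-zero offset */
	uint8_t plain_df   = 0x40;	/* DF=1, not fragmented */

	printf("mask 0xff: first=%d later=%d plain=%d\n",
	       matches(first_frag, IPV4_FLAG_MORE, 0xff),
	       matches(later_frag, IPV4_FLAG_MORE, 0xff),
	       matches(plain_df,  IPV4_FLAG_MORE, 0xff));
	printf("mask 0x20: first=%d later=%d plain=%d\n",
	       matches(first_frag, IPV4_FLAG_MORE, 0x20),
	       matches(later_frag, IPV4_FLAG_MORE, 0x20),
	       matches(plain_df,  IPV4_FLAG_MORE, 0x20));
	return 0;
}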
From: Ratheesh Kannoth rkannoth@marvell.com
[ Upstream commit 406bed11fb91a0b35c26fe633d8700febaec6439 ]
1. In the previous implementation, the mask and control parameters used to generate the field hash value were not passed to the caller. As part of the fix, the secret key mbox is updated to share that information as well.
2. The earlier implementation did not consider hash reduction of both the source and destination IPv6 addresses; only the source IPv6 address was considered. This fix solves that and provides the option to hash both.
Fixes: 56d9f5fd2246 ("octeontx2-af: Use hashed field in MCAM key") Signed-off-by: Ratheesh Kannoth rkannoth@marvell.com Signed-off-by: Sunil Kovvuri Goutham sgoutham@marvell.com Signed-off-by: Sai Krishna saikrishnag@marvell.com Reviewed-by: Simon Horman simon.horman@corigine.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- .../net/ethernet/marvell/octeontx2/af/mbox.h | 16 +++++--- .../marvell/octeontx2/af/rvu_npc_hash.c | 37 ++++++++++++------- .../marvell/octeontx2/af/rvu_npc_hash.h | 6 +++ 3 files changed, 41 insertions(+), 18 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h index d2584ebb7a70c..ac90131d5b4ad 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h @@ -242,9 +242,9 @@ M(NPC_MCAM_READ_BASE_RULE, 0x6011, npc_read_base_steer_rule, \ M(NPC_MCAM_GET_STATS, 0x6012, npc_mcam_entry_stats, \ npc_mcam_get_stats_req, \ npc_mcam_get_stats_rsp) \ -M(NPC_GET_SECRET_KEY, 0x6013, npc_get_secret_key, \ - npc_get_secret_key_req, \ - npc_get_secret_key_rsp) \ +M(NPC_GET_FIELD_HASH_INFO, 0x6013, npc_get_field_hash_info, \ + npc_get_field_hash_info_req, \ + npc_get_field_hash_info_rsp) \ M(NPC_GET_FIELD_STATUS, 0x6014, npc_get_field_status, \ npc_get_field_status_req, \ npc_get_field_status_rsp) \ @@ -1517,14 +1517,20 @@ struct npc_mcam_get_stats_rsp { u8 stat_ena; /* enabled */ };
-struct npc_get_secret_key_req { +struct npc_get_field_hash_info_req { struct mbox_msghdr hdr; u8 intf; };
-struct npc_get_secret_key_rsp { +struct npc_get_field_hash_info_rsp { struct mbox_msghdr hdr; u64 secret_key[3]; +#define NPC_MAX_HASH 2 +#define NPC_MAX_HASH_MASK 2 + /* NPC_AF_INTF(0..1)_HASH(0..1)_MASK(0..1) */ + u64 hash_mask[NPC_MAX_INTF][NPC_MAX_HASH][NPC_MAX_HASH_MASK]; + /* NPC_AF_INTF(0..1)_HASH(0..1)_RESULT_CTRL */ + u64 hash_ctrl[NPC_MAX_INTF][NPC_MAX_HASH]; };
enum ptp_op { diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c index f2d7156262ff1..9ec5b3c65020b 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c @@ -110,8 +110,8 @@ static u64 npc_update_use_hash(int lt, int ld) * in KEX_LD_CFG */ cfg = KEX_LD_CFG_USE_HASH(0x1, 0x03, - ld ? 0x8 : 0x18, - 0x1, 0x0, 0x10); + ld ? 0x18 : 0x8, + 0x1, 0x0, ld ? 0x14 : 0x10); break; }
@@ -134,7 +134,6 @@ static void npc_program_mkex_hash_rx(struct rvu *rvu, int blkaddr, if (mkex_hash->lid_lt_ld_hash_en[intf][lid][lt][ld]) { u64 cfg = npc_update_use_hash(lt, ld);
- hash_cnt++; if (hash_cnt == NPC_MAX_HASH) return;
@@ -149,6 +148,8 @@ static void npc_program_mkex_hash_rx(struct rvu *rvu, int blkaddr, mkex_hash->hash_mask[intf][ld][1]); SET_KEX_LD_HASH_CTRL(intf, ld, mkex_hash->hash_ctrl[intf][ld]); + + hash_cnt++; } } } @@ -171,7 +172,6 @@ static void npc_program_mkex_hash_tx(struct rvu *rvu, int blkaddr, if (mkex_hash->lid_lt_ld_hash_en[intf][lid][lt][ld]) { u64 cfg = npc_update_use_hash(lt, ld);
- hash_cnt++; if (hash_cnt == NPC_MAX_HASH) return;
@@ -187,8 +187,6 @@ static void npc_program_mkex_hash_tx(struct rvu *rvu, int blkaddr, SET_KEX_LD_HASH_CTRL(intf, ld, mkex_hash->hash_ctrl[intf][ld]); hash_cnt++; - if (hash_cnt == NPC_MAX_HASH) - return; } } } @@ -242,8 +240,8 @@ void npc_update_field_hash(struct rvu *rvu, u8 intf, struct flow_msg *omask) { struct npc_mcam_kex_hash *mkex_hash = rvu->kpu.mkex_hash; - struct npc_get_secret_key_req req; - struct npc_get_secret_key_rsp rsp; + struct npc_get_field_hash_info_req req; + struct npc_get_field_hash_info_rsp rsp; u64 ldata[2], cfg; u32 field_hash; u8 hash_idx; @@ -254,7 +252,7 @@ void npc_update_field_hash(struct rvu *rvu, u8 intf, }
req.intf = intf; - rvu_mbox_handler_npc_get_secret_key(rvu, &req, &rsp); + rvu_mbox_handler_npc_get_field_hash_info(rvu, &req, &rsp);
for (hash_idx = 0; hash_idx < NPC_MAX_HASH; hash_idx++) { cfg = rvu_read64(rvu, blkaddr, NPC_AF_INTFX_HASHX_CFG(intf, hash_idx)); @@ -315,13 +313,13 @@ void npc_update_field_hash(struct rvu *rvu, u8 intf, } }
-int rvu_mbox_handler_npc_get_secret_key(struct rvu *rvu, - struct npc_get_secret_key_req *req, - struct npc_get_secret_key_rsp *rsp) +int rvu_mbox_handler_npc_get_field_hash_info(struct rvu *rvu, + struct npc_get_field_hash_info_req *req, + struct npc_get_field_hash_info_rsp *rsp) { u64 *secret_key = rsp->secret_key; u8 intf = req->intf; - int blkaddr; + int i, j, blkaddr;
blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0); if (blkaddr < 0) { @@ -333,6 +331,19 @@ int rvu_mbox_handler_npc_get_secret_key(struct rvu *rvu, secret_key[1] = rvu_read64(rvu, blkaddr, NPC_AF_INTFX_SECRET_KEY1(intf)); secret_key[2] = rvu_read64(rvu, blkaddr, NPC_AF_INTFX_SECRET_KEY2(intf));
+ for (i = 0; i < NPC_MAX_HASH; i++) { + for (j = 0; j < NPC_MAX_HASH_MASK; j++) { + rsp->hash_mask[NIX_INTF_RX][i][j] = + GET_KEX_LD_HASH_MASK(NIX_INTF_RX, i, j); + rsp->hash_mask[NIX_INTF_TX][i][j] = + GET_KEX_LD_HASH_MASK(NIX_INTF_TX, i, j); + } + } + + for (i = 0; i < NPC_MAX_INTF; i++) + for (j = 0; j < NPC_MAX_HASH; j++) + rsp->hash_ctrl[i][j] = GET_KEX_LD_HASH_CTRL(i, j); + return 0; }
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.h index 3efeb09c58dec..65936f4aeaacf 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.h @@ -31,6 +31,12 @@ rvu_write64(rvu, blkaddr, \ NPC_AF_INTFX_HASHX_MASKX(intf, ld, mask_idx), cfg)
+#define GET_KEX_LD_HASH_CTRL(intf, ld) \ + rvu_read64(rvu, blkaddr, NPC_AF_INTFX_HASHX_RESULT_CTRL(intf, ld)) + +#define GET_KEX_LD_HASH_MASK(intf, ld, mask_idx) \ + rvu_read64(rvu, blkaddr, NPC_AF_INTFX_HASHX_MASKX(intf, ld, mask_idx)) + #define SET_KEX_LD_HASH_CTRL(intf, ld, cfg) \ rvu_write64(rvu, blkaddr, \ NPC_AF_INTFX_HASHX_RESULT_CTRL(intf, ld), cfg)
From: Ratheesh Kannoth rkannoth@marvell.com
[ Upstream commit f66155905959076619c9c519fb099e8ae6cb6f7b ]
1. Allow field hash configuration for both source and destination IPv6.
2. Configure hardware parser based on hash extract feature enable flag for IPv6.
3. Fix IPv6 endianness issue while updating the source/destination IP address via ntuple rule.
Fixes: 56d9f5fd2246 ("octeontx2-af: Use hashed field in MCAM key") Signed-off-by: Ratheesh Kannoth rkannoth@marvell.com Signed-off-by: Sunil Kovvuri Goutham sgoutham@marvell.com Signed-off-by: Sai Krishna saikrishnag@marvell.com Reviewed-by: Simon Horman simon.horman@corigine.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- .../marvell/octeontx2/af/rvu_npc_fs.c | 23 +++-- .../marvell/octeontx2/af/rvu_npc_fs.h | 4 + .../marvell/octeontx2/af/rvu_npc_hash.c | 88 ++++++++++--------- .../marvell/octeontx2/af/rvu_npc_hash.h | 4 +- 4 files changed, 69 insertions(+), 50 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c index f15efd41972ee..952319453701b 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c @@ -13,11 +13,6 @@ #include "rvu_npc_fs.h" #include "rvu_npc_hash.h"
-#define NPC_BYTESM GENMASK_ULL(19, 16) -#define NPC_HDR_OFFSET GENMASK_ULL(15, 8) -#define NPC_KEY_OFFSET GENMASK_ULL(5, 0) -#define NPC_LDATA_EN BIT_ULL(7) - static const char * const npc_flow_names[] = { [NPC_DMAC] = "dmac", [NPC_SMAC] = "smac", @@ -442,6 +437,7 @@ static void npc_handle_multi_layer_fields(struct rvu *rvu, int blkaddr, u8 intf) static void npc_scan_ldata(struct rvu *rvu, int blkaddr, u8 lid, u8 lt, u64 cfg, u8 intf) { + struct npc_mcam_kex_hash *mkex_hash = rvu->kpu.mkex_hash; struct npc_mcam *mcam = &rvu->hw->mcam; u8 hdr, key, nr_bytes, bit_offset; u8 la_ltype, la_start; @@ -490,8 +486,21 @@ do { \ NPC_SCAN_HDR(NPC_SIP_IPV4, NPC_LID_LC, NPC_LT_LC_IP, 12, 4); NPC_SCAN_HDR(NPC_DIP_IPV4, NPC_LID_LC, NPC_LT_LC_IP, 16, 4); NPC_SCAN_HDR(NPC_IPFRAG_IPV6, NPC_LID_LC, NPC_LT_LC_IP6_EXT, 6, 1); - NPC_SCAN_HDR(NPC_SIP_IPV6, NPC_LID_LC, NPC_LT_LC_IP6, 8, 16); - NPC_SCAN_HDR(NPC_DIP_IPV6, NPC_LID_LC, NPC_LT_LC_IP6, 24, 16); + if (rvu->hw->cap.npc_hash_extract) { + if (mkex_hash->lid_lt_ld_hash_en[intf][lid][lt][0]) + NPC_SCAN_HDR(NPC_SIP_IPV6, NPC_LID_LC, NPC_LT_LC_IP6, 8, 4); + else + NPC_SCAN_HDR(NPC_SIP_IPV6, NPC_LID_LC, NPC_LT_LC_IP6, 8, 16); + + if (mkex_hash->lid_lt_ld_hash_en[intf][lid][lt][1]) + NPC_SCAN_HDR(NPC_DIP_IPV6, NPC_LID_LC, NPC_LT_LC_IP6, 24, 4); + else + NPC_SCAN_HDR(NPC_DIP_IPV6, NPC_LID_LC, NPC_LT_LC_IP6, 24, 16); + } else { + NPC_SCAN_HDR(NPC_SIP_IPV6, NPC_LID_LC, NPC_LT_LC_IP6, 8, 16); + NPC_SCAN_HDR(NPC_DIP_IPV6, NPC_LID_LC, NPC_LT_LC_IP6, 24, 16); + } + NPC_SCAN_HDR(NPC_SPORT_UDP, NPC_LID_LD, NPC_LT_LD_UDP, 0, 2); NPC_SCAN_HDR(NPC_DPORT_UDP, NPC_LID_LD, NPC_LT_LD_UDP, 2, 2); NPC_SCAN_HDR(NPC_SPORT_TCP, NPC_LID_LD, NPC_LT_LD_TCP, 0, 2); diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.h index bdd65ce56a32d..3f5c9042d10e7 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.h @@ -9,6 +9,10 @@ #define __RVU_NPC_FS_H
#define IPV6_WORDS 4 +#define NPC_BYTESM GENMASK_ULL(19, 16) +#define NPC_HDR_OFFSET GENMASK_ULL(15, 8) +#define NPC_KEY_OFFSET GENMASK_ULL(5, 0) +#define NPC_LDATA_EN BIT_ULL(7)
void npc_update_entry(struct rvu *rvu, enum key_fields type, struct mcam_entry *entry, u64 val_lo, diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c index 9ec5b3c65020b..b6e885263245c 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c @@ -78,42 +78,43 @@ static u32 rvu_npc_toeplitz_hash(const u64 *data, u64 *key, size_t data_bit_len, return hash_out; }
-u32 npc_field_hash_calc(u64 *ldata, struct npc_mcam_kex_hash *mkex_hash, - u64 *secret_key, u8 intf, u8 hash_idx) +u32 npc_field_hash_calc(u64 *ldata, struct npc_get_field_hash_info_rsp rsp, + u8 intf, u8 hash_idx) { u64 hash_key[3]; u64 data_padded[2]; u32 field_hash;
- hash_key[0] = secret_key[1] << 31; - hash_key[0] |= secret_key[2]; - hash_key[1] = secret_key[1] >> 33; - hash_key[1] |= secret_key[0] << 31; - hash_key[2] = secret_key[0] >> 33; + hash_key[0] = rsp.secret_key[1] << 31; + hash_key[0] |= rsp.secret_key[2]; + hash_key[1] = rsp.secret_key[1] >> 33; + hash_key[1] |= rsp.secret_key[0] << 31; + hash_key[2] = rsp.secret_key[0] >> 33;
- data_padded[0] = mkex_hash->hash_mask[intf][hash_idx][0] & ldata[0]; - data_padded[1] = mkex_hash->hash_mask[intf][hash_idx][1] & ldata[1]; + data_padded[0] = rsp.hash_mask[intf][hash_idx][0] & ldata[0]; + data_padded[1] = rsp.hash_mask[intf][hash_idx][1] & ldata[1]; field_hash = rvu_npc_toeplitz_hash(data_padded, hash_key, 128, 159);
- field_hash &= mkex_hash->hash_ctrl[intf][hash_idx] >> 32; - field_hash |= mkex_hash->hash_ctrl[intf][hash_idx]; + field_hash &= FIELD_GET(GENMASK(63, 32), rsp.hash_ctrl[intf][hash_idx]); + field_hash += FIELD_GET(GENMASK(31, 0), rsp.hash_ctrl[intf][hash_idx]); return field_hash; }
-static u64 npc_update_use_hash(int lt, int ld) +static u64 npc_update_use_hash(struct rvu *rvu, int blkaddr, + u8 intf, int lid, int lt, int ld) { - u64 cfg = 0; - - switch (lt) { - case NPC_LT_LC_IP6: - /* Update use_hash(bit-20) and bytesm1 (bit-16:19) - * in KEX_LD_CFG - */ - cfg = KEX_LD_CFG_USE_HASH(0x1, 0x03, - ld ? 0x18 : 0x8, - 0x1, 0x0, ld ? 0x14 : 0x10); - break; - } + u8 hdr, key; + u64 cfg; + + cfg = rvu_read64(rvu, blkaddr, NPC_AF_INTFX_LIDX_LTX_LDX_CFG(intf, lid, lt, ld)); + hdr = FIELD_GET(NPC_HDR_OFFSET, cfg); + key = FIELD_GET(NPC_KEY_OFFSET, cfg); + + /* Update use_hash(bit-20) to 'true' and + * bytesm1(bit-16:19) to '0x3' in KEX_LD_CFG + */ + cfg = KEX_LD_CFG_USE_HASH(0x1, 0x03, + hdr, 0x1, 0x0, key);
return cfg; } @@ -132,11 +133,13 @@ static void npc_program_mkex_hash_rx(struct rvu *rvu, int blkaddr, for (lt = 0; lt < NPC_MAX_LT; lt++) { for (ld = 0; ld < NPC_MAX_LD; ld++) { if (mkex_hash->lid_lt_ld_hash_en[intf][lid][lt][ld]) { - u64 cfg = npc_update_use_hash(lt, ld); + u64 cfg;
if (hash_cnt == NPC_MAX_HASH) return;
+ cfg = npc_update_use_hash(rvu, blkaddr, + intf, lid, lt, ld); /* Set updated KEX configuration */ SET_KEX_LD(intf, lid, lt, ld, cfg); /* Set HASH configuration */ @@ -170,11 +173,13 @@ static void npc_program_mkex_hash_tx(struct rvu *rvu, int blkaddr, for (lt = 0; lt < NPC_MAX_LT; lt++) { for (ld = 0; ld < NPC_MAX_LD; ld++) if (mkex_hash->lid_lt_ld_hash_en[intf][lid][lt][ld]) { - u64 cfg = npc_update_use_hash(lt, ld); + u64 cfg;
if (hash_cnt == NPC_MAX_HASH) return;
+ cfg = npc_update_use_hash(rvu, blkaddr, + intf, lid, lt, ld); /* Set updated KEX configuration */ SET_KEX_LD(intf, lid, lt, ld, cfg); /* Set HASH configuration */ @@ -268,44 +273,45 @@ void npc_update_field_hash(struct rvu *rvu, u8 intf, * is hashed to 32 bit value. */ case NPC_LT_LC_IP6: - if (features & BIT_ULL(NPC_SIP_IPV6)) { + /* ld[0] == hash_idx[0] == Source IPv6 + * ld[1] == hash_idx[1] == Destination IPv6 + */ + if ((features & BIT_ULL(NPC_SIP_IPV6)) && !hash_idx) { u32 src_ip[IPV6_WORDS];
be32_to_cpu_array(src_ip, pkt->ip6src, IPV6_WORDS); - ldata[0] = (u64)src_ip[0] << 32 | src_ip[1]; - ldata[1] = (u64)src_ip[2] << 32 | src_ip[3]; + ldata[1] = (u64)src_ip[0] << 32 | src_ip[1]; + ldata[0] = (u64)src_ip[2] << 32 | src_ip[3]; field_hash = npc_field_hash_calc(ldata, - mkex_hash, - rsp.secret_key, + rsp, intf, hash_idx); npc_update_entry(rvu, NPC_SIP_IPV6, entry, - field_hash, 0, 32, 0, intf); + field_hash, 0, + GENMASK(31, 0), 0, intf); memcpy(&opkt->ip6src, &pkt->ip6src, sizeof(pkt->ip6src)); memcpy(&omask->ip6src, &mask->ip6src, sizeof(mask->ip6src)); - break; - } - - if (features & BIT_ULL(NPC_DIP_IPV6)) { + } else if ((features & BIT_ULL(NPC_DIP_IPV6)) && hash_idx) { u32 dst_ip[IPV6_WORDS];
be32_to_cpu_array(dst_ip, pkt->ip6dst, IPV6_WORDS); - ldata[0] = (u64)dst_ip[0] << 32 | dst_ip[1]; - ldata[1] = (u64)dst_ip[2] << 32 | dst_ip[3]; + ldata[1] = (u64)dst_ip[0] << 32 | dst_ip[1]; + ldata[0] = (u64)dst_ip[2] << 32 | dst_ip[3]; field_hash = npc_field_hash_calc(ldata, - mkex_hash, - rsp.secret_key, + rsp, intf, hash_idx); npc_update_entry(rvu, NPC_DIP_IPV6, entry, - field_hash, 0, 32, 0, intf); + field_hash, 0, + GENMASK(31, 0), 0, intf); memcpy(&opkt->ip6dst, &pkt->ip6dst, sizeof(pkt->ip6dst)); memcpy(&omask->ip6dst, &mask->ip6dst, sizeof(mask->ip6dst)); } + break; } } diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.h index 65936f4aeaacf..a1c3d987b8044 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.h @@ -62,8 +62,8 @@ void npc_update_field_hash(struct rvu *rvu, u8 intf, struct flow_msg *omask); void npc_config_secret_key(struct rvu *rvu, int blkaddr); void npc_program_mkex_hash(struct rvu *rvu, int blkaddr); -u32 npc_field_hash_calc(u64 *ldata, struct npc_mcam_kex_hash *mkex_hash, - u64 *secret_key, u8 intf, u8 hash_idx); +u32 npc_field_hash_calc(u64 *ldata, struct npc_get_field_hash_info_rsp rsp, + u8 intf, u8 hash_idx);
static struct npc_mcam_kex_hash npc_mkex_hash_default __maybe_unused = { .lid_lt_ld_hash_en = {
From: Ratheesh Kannoth rkannoth@marvell.com
[ Upstream commit 5eb1b7220948a69298a436148a735f32ec325289 ]
Firmware enables PFs and allocates mbox resources for each of them. Currently the driver configures mbox resources without checking whether a PF is enabled or not, which results in a crash. This patch fixes the issue by skipping mbox initialization for disabled PFs.
Fixes: 9bdc47a6e328 ("octeontx2-af: Mbox communication support btw AF and it's VFs") Signed-off-by: Ratheesh Kannoth rkannoth@marvell.com Signed-off-by: Sunil Kovvuri Goutham sgoutham@marvell.com Signed-off-by: Sai Krishna saikrishnag@marvell.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- .../net/ethernet/marvell/octeontx2/af/mbox.c | 5 +- .../net/ethernet/marvell/octeontx2/af/mbox.h | 3 +- .../net/ethernet/marvell/octeontx2/af/rvu.c | 49 +++++++++++++++---- 3 files changed, 46 insertions(+), 11 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.c b/drivers/net/ethernet/marvell/octeontx2/af/mbox.c index 2898931d5260a..9690ac01f02c8 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.c @@ -157,7 +157,7 @@ EXPORT_SYMBOL(otx2_mbox_init); */ int otx2_mbox_regions_init(struct otx2_mbox *mbox, void **hwbase, struct pci_dev *pdev, void *reg_base, - int direction, int ndevs) + int direction, int ndevs, unsigned long *pf_bmap) { struct otx2_mbox_dev *mdev; int devid, err; @@ -169,6 +169,9 @@ int otx2_mbox_regions_init(struct otx2_mbox *mbox, void **hwbase, mbox->hwbase = hwbase[0];
for (devid = 0; devid < ndevs; devid++) { + if (!test_bit(devid, pf_bmap)) + continue; + mdev = &mbox->dev[devid]; mdev->mbase = hwbase[devid]; mdev->hwbase = hwbase[devid]; diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h index ac90131d5b4ad..d9ee56ff73b46 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h @@ -96,9 +96,10 @@ void otx2_mbox_destroy(struct otx2_mbox *mbox); int otx2_mbox_init(struct otx2_mbox *mbox, void __force *hwbase, struct pci_dev *pdev, void __force *reg_base, int direction, int ndevs); + int otx2_mbox_regions_init(struct otx2_mbox *mbox, void __force **hwbase, struct pci_dev *pdev, void __force *reg_base, - int direction, int ndevs); + int direction, int ndevs, unsigned long *bmap); void otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid); int otx2_mbox_wait_for_rsp(struct otx2_mbox *mbox, int devid); int otx2_mbox_busy_poll_for_rsp(struct otx2_mbox *mbox, int devid); diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c index 3f5e09b77d4bd..873f081c030de 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c @@ -2274,7 +2274,7 @@ static inline void rvu_afvf_mbox_up_handler(struct work_struct *work) }
static int rvu_get_mbox_regions(struct rvu *rvu, void **mbox_addr, - int num, int type) + int num, int type, unsigned long *pf_bmap) { struct rvu_hwinfo *hw = rvu->hw; int region; @@ -2286,6 +2286,9 @@ static int rvu_get_mbox_regions(struct rvu *rvu, void **mbox_addr, */ if (type == TYPE_AFVF) { for (region = 0; region < num; region++) { + if (!test_bit(region, pf_bmap)) + continue; + if (hw->cap.per_pf_mbox_regs) { bar4 = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_PFX_BAR4_ADDR(0)) + @@ -2307,6 +2310,9 @@ static int rvu_get_mbox_regions(struct rvu *rvu, void **mbox_addr, * RVU_AF_PF_BAR4_ADDR register. */ for (region = 0; region < num; region++) { + if (!test_bit(region, pf_bmap)) + continue; + if (hw->cap.per_pf_mbox_regs) { bar4 = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_PFX_BAR4_ADDR(region)); @@ -2335,20 +2341,41 @@ static int rvu_mbox_init(struct rvu *rvu, struct mbox_wq_info *mw, int err = -EINVAL, i, dir, dir_up; void __iomem *reg_base; struct rvu_work *mwork; + unsigned long *pf_bmap; void **mbox_regions; const char *name; + u64 cfg;
- mbox_regions = kcalloc(num, sizeof(void *), GFP_KERNEL); - if (!mbox_regions) + pf_bmap = bitmap_zalloc(num, GFP_KERNEL); + if (!pf_bmap) return -ENOMEM;
+ /* RVU VFs */ + if (type == TYPE_AFVF) + bitmap_set(pf_bmap, 0, num); + + if (type == TYPE_AFPF) { + /* Mark enabled PFs in bitmap */ + for (i = 0; i < num; i++) { + cfg = rvu_read64(rvu, BLKADDR_RVUM, RVU_PRIV_PFX_CFG(i)); + if (cfg & BIT_ULL(20)) + set_bit(i, pf_bmap); + } + } + + mbox_regions = kcalloc(num, sizeof(void *), GFP_KERNEL); + if (!mbox_regions) { + err = -ENOMEM; + goto free_bitmap; + } + switch (type) { case TYPE_AFPF: name = "rvu_afpf_mailbox"; dir = MBOX_DIR_AFPF; dir_up = MBOX_DIR_AFPF_UP; reg_base = rvu->afreg_base; - err = rvu_get_mbox_regions(rvu, mbox_regions, num, TYPE_AFPF); + err = rvu_get_mbox_regions(rvu, mbox_regions, num, TYPE_AFPF, pf_bmap); if (err) goto free_regions; break; @@ -2357,7 +2384,7 @@ static int rvu_mbox_init(struct rvu *rvu, struct mbox_wq_info *mw, dir = MBOX_DIR_PFVF; dir_up = MBOX_DIR_PFVF_UP; reg_base = rvu->pfreg_base; - err = rvu_get_mbox_regions(rvu, mbox_regions, num, TYPE_AFVF); + err = rvu_get_mbox_regions(rvu, mbox_regions, num, TYPE_AFVF, pf_bmap); if (err) goto free_regions; break; @@ -2388,16 +2415,19 @@ static int rvu_mbox_init(struct rvu *rvu, struct mbox_wq_info *mw, }
err = otx2_mbox_regions_init(&mw->mbox, mbox_regions, rvu->pdev, - reg_base, dir, num); + reg_base, dir, num, pf_bmap); if (err) goto exit;
err = otx2_mbox_regions_init(&mw->mbox_up, mbox_regions, rvu->pdev, - reg_base, dir_up, num); + reg_base, dir_up, num, pf_bmap); if (err) goto exit;
for (i = 0; i < num; i++) { + if (!test_bit(i, pf_bmap)) + continue; + mwork = &mw->mbox_wrk[i]; mwork->rvu = rvu; INIT_WORK(&mwork->work, mbox_handler); @@ -2406,8 +2436,7 @@ static int rvu_mbox_init(struct rvu *rvu, struct mbox_wq_info *mw, mwork->rvu = rvu; INIT_WORK(&mwork->work, mbox_up_handler); } - kfree(mbox_regions); - return 0; + goto free_regions;
exit: destroy_workqueue(mw->mbox_wq); @@ -2416,6 +2445,8 @@ static int rvu_mbox_init(struct rvu *rvu, struct mbox_wq_info *mw, iounmap((void __iomem *)mbox_regions[num]); free_regions: kfree(mbox_regions); +free_bitmap: + bitmap_free(pf_bmap); return err; }
From: Subbaraya Sundeep sbhatta@marvell.com
[ Upstream commit c926252205c424c4842dbdbe02f8e3296f623204 ]
At the stage of enabling packet I/O in otx2_open, if a mailbox timeout occurs then the interface ends up in the down state whereas hardware packet I/O is enabled. Hence also disable packet I/O before bailing out.
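For illustration only, a minimal userspace sketch of the unwind pattern the patch adds; the function names below are stand-ins, not the driver's API. If the enable step fails, it is explicitly disabled again so software and hardware state agree:

    #include <stdio.h>
    #include <errno.h>

    /* Stand-in for otx2_rxtx_enable(): pretend the mailbox reply timed out */
    static int hw_pktio_enable(int on)
    {
            return on ? -EIO : 0;
    }

    static int dev_open(void)
    {
            int err = hw_pktio_enable(1);

            if (err) {
                    /* hardware may already be passing packets: turn it back off */
                    hw_pktio_enable(0);
                    return err;
            }
            return 0;
    }

    int main(void)
    {
            printf("open returned %d\n", dev_open());
            return 0;
    }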
Fixes: 1ea0166da050 ("octeontx2-pf: Fix the device state on error") Signed-off-by: Subbaraya Sundeep sbhatta@marvell.com Signed-off-by: Sunil Kovvuri Goutham sgoutham@marvell.com Signed-off-by: Sai Krishna saikrishnag@marvell.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c index 3489553ad7010..23eee2b3d4081 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c @@ -1835,13 +1835,22 @@ int otx2_open(struct net_device *netdev) otx2_dmacflt_reinstall_flows(pf);
err = otx2_rxtx_enable(pf, true); - if (err) + /* If a mbox communication error happens at this point then interface + * will end up in a state such that it is in down state but hardware + * mcam entries are enabled to receive the packets. Hence disable the + * packet I/O. + */ + if (err == EIO) + goto err_disable_rxtx; + else if (err) goto err_tx_stop_queues;
otx2_do_set_rx_mode(pf);
return 0;
+err_disable_rxtx: + otx2_rxtx_enable(pf, false); err_tx_stop_queues: netif_tx_stop_all_queues(netdev); netif_carrier_off(netdev);
From: Subbaraya Sundeep sbhatta@marvell.com
[ Upstream commit 99ae1260fdb5f15beab8a3adfb93a9041c87a2c1 ]
When a VF device probe fails due to an error in MSIX vector allocation, the NIX and NPA LF resources were not detached. Fix this by detaching the LFs when MSIX vector allocation fails.
Fixes: 3184fb5ba96e ("octeontx2-vf: Virtual function driver support") Signed-off-by: Subbaraya Sundeep sbhatta@marvell.com Signed-off-by: Sunil Kovvuri Goutham sgoutham@marvell.com Signed-off-by: Sai Krishna saikrishnag@marvell.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c index ab126f8706c74..53366dbfbf27c 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c @@ -621,7 +621,7 @@ static int otx2vf_probe(struct pci_dev *pdev, const struct pci_device_id *id)
err = otx2vf_realloc_msix_vectors(vf); if (err) - goto err_mbox_destroy; + goto err_detach_rsrc;
err = otx2_set_real_num_queues(netdev, qcount, qcount); if (err)
From: Shannon Nelson shannon.nelson@amd.com
[ Upstream commit 3711d44fac1f80ea69ecb7315fed05b3812a7401 ]
It seems that ethtool is calling into .get_rxnfc more often with ETHTOOL_GRXCLSRLCNT, which ionic doesn't know about. We don't need to log a message about it, just return not supported.
Fixes: aa3198819bea6 ("ionic: Add RSS support") Signed-off-by: Shannon Nelson shannon.nelson@amd.com Reviewed-by: Simon Horman simon.horman@corigine.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/pensando/ionic/ionic_ethtool.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c b/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c index 01c22701482d9..d7370fb60a168 100644 --- a/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c +++ b/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c @@ -691,7 +691,7 @@ static int ionic_get_rxnfc(struct net_device *netdev, info->data = lif->nxqs; break; default: - netdev_err(netdev, "Command parameter %d is not supported\n", + netdev_dbg(netdev, "Command parameter %d is not supported\n", info->cmd); err = -EOPNOTSUPP; }
From: Ido Schimmel idosch@nvidia.com
[ Upstream commit 9ad685dbfe7e856bbf17a7177b64676d324d6ed7 ]
It is not possible to set the number of lanes when setting link modes using the legacy IOCTL ethtool interface. Since 'struct ethtool_link_ksettings' is not initialized in this path, drivers receive an uninitialized number of lanes in 'struct ethtool_link_ksettings::lanes'.
When this information is later queried from drivers, it results in the ethtool code making decisions based on uninitialized memory, leading to the following KMSAN splat [1]. In practice, this most likely only happens with the tun driver that simply returns whatever it got in the set operation.
As far as I can tell, this uninitialized memory is not leaked to user space thanks to the 'ethtool_ops->cap_link_lanes_supported' check in linkmodes_prepare_data().
Fix by initializing the structure in the IOCTL path. Did not find any more call sites that pass an uninitialized structure when calling 'ethtool_ops::set_link_ksettings()'.
[1] BUG: KMSAN: uninit-value in ethnl_update_linkmodes net/ethtool/linkmodes.c:273 [inline] BUG: KMSAN: uninit-value in ethnl_set_linkmodes+0x190b/0x19d0 net/ethtool/linkmodes.c:333 ethnl_update_linkmodes net/ethtool/linkmodes.c:273 [inline] ethnl_set_linkmodes+0x190b/0x19d0 net/ethtool/linkmodes.c:333 ethnl_default_set_doit+0x88d/0xde0 net/ethtool/netlink.c:640 genl_family_rcv_msg_doit net/netlink/genetlink.c:968 [inline] genl_family_rcv_msg net/netlink/genetlink.c:1048 [inline] genl_rcv_msg+0x141a/0x14c0 net/netlink/genetlink.c:1065 netlink_rcv_skb+0x3f8/0x750 net/netlink/af_netlink.c:2577 genl_rcv+0x40/0x60 net/netlink/genetlink.c:1076 netlink_unicast_kernel net/netlink/af_netlink.c:1339 [inline] netlink_unicast+0xf41/0x1270 net/netlink/af_netlink.c:1365 netlink_sendmsg+0x127d/0x1430 net/netlink/af_netlink.c:1942 sock_sendmsg_nosec net/socket.c:724 [inline] sock_sendmsg net/socket.c:747 [inline] ____sys_sendmsg+0xa24/0xe40 net/socket.c:2501 ___sys_sendmsg+0x2a1/0x3f0 net/socket.c:2555 __sys_sendmsg net/socket.c:2584 [inline] __do_sys_sendmsg net/socket.c:2593 [inline] __se_sys_sendmsg net/socket.c:2591 [inline] __x64_sys_sendmsg+0x36b/0x540 net/socket.c:2591 do_syscall_x64 arch/x86/entry/common.c:50 [inline] do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80 entry_SYSCALL_64_after_hwframe+0x63/0xcd
Uninit was stored to memory at: tun_get_link_ksettings+0x37/0x60 drivers/net/tun.c:3544 __ethtool_get_link_ksettings+0x17b/0x260 net/ethtool/ioctl.c:441 ethnl_set_linkmodes+0xee/0x19d0 net/ethtool/linkmodes.c:327 ethnl_default_set_doit+0x88d/0xde0 net/ethtool/netlink.c:640 genl_family_rcv_msg_doit net/netlink/genetlink.c:968 [inline] genl_family_rcv_msg net/netlink/genetlink.c:1048 [inline] genl_rcv_msg+0x141a/0x14c0 net/netlink/genetlink.c:1065 netlink_rcv_skb+0x3f8/0x750 net/netlink/af_netlink.c:2577 genl_rcv+0x40/0x60 net/netlink/genetlink.c:1076 netlink_unicast_kernel net/netlink/af_netlink.c:1339 [inline] netlink_unicast+0xf41/0x1270 net/netlink/af_netlink.c:1365 netlink_sendmsg+0x127d/0x1430 net/netlink/af_netlink.c:1942 sock_sendmsg_nosec net/socket.c:724 [inline] sock_sendmsg net/socket.c:747 [inline] ____sys_sendmsg+0xa24/0xe40 net/socket.c:2501 ___sys_sendmsg+0x2a1/0x3f0 net/socket.c:2555 __sys_sendmsg net/socket.c:2584 [inline] __do_sys_sendmsg net/socket.c:2593 [inline] __se_sys_sendmsg net/socket.c:2591 [inline] __x64_sys_sendmsg+0x36b/0x540 net/socket.c:2591 do_syscall_x64 arch/x86/entry/common.c:50 [inline] do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80 entry_SYSCALL_64_after_hwframe+0x63/0xcd
Uninit was stored to memory at: tun_set_link_ksettings+0x37/0x60 drivers/net/tun.c:3553 ethtool_set_link_ksettings+0x600/0x690 net/ethtool/ioctl.c:609 __dev_ethtool net/ethtool/ioctl.c:3024 [inline] dev_ethtool+0x1db9/0x2a70 net/ethtool/ioctl.c:3078 dev_ioctl+0xb07/0x1270 net/core/dev_ioctl.c:524 sock_do_ioctl+0x295/0x540 net/socket.c:1213 sock_ioctl+0x729/0xd90 net/socket.c:1316 vfs_ioctl fs/ioctl.c:51 [inline] __do_sys_ioctl fs/ioctl.c:870 [inline] __se_sys_ioctl+0x222/0x400 fs/ioctl.c:856 __x64_sys_ioctl+0x96/0xe0 fs/ioctl.c:856 do_syscall_x64 arch/x86/entry/common.c:50 [inline] do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80 entry_SYSCALL_64_after_hwframe+0x63/0xcd
Local variable link_ksettings created at: ethtool_set_link_ksettings+0x54/0x690 net/ethtool/ioctl.c:577 __dev_ethtool net/ethtool/ioctl.c:3024 [inline] dev_ethtool+0x1db9/0x2a70 net/ethtool/ioctl.c:3078
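For illustration only (not part of the patch), a minimal userspace sketch of why zero-initializing the request structure matters when a driver callback echoes back a field the legacy path never sets; the struct and function names here are made up:

    #include <stdio.h>

    struct link_settings {
            int speed;
            int lanes;      /* the legacy IOCTL path never sets this */
    };

    /* Stand-in for a driver callback that simply echoes the request back */
    static void driver_set_then_get(const struct link_settings *req,
                                    struct link_settings *out)
    {
            *out = *req;
    }

    int main(void)
    {
            struct link_settings zeroed = {0};      /* like the fix's "= {}" */
            struct link_settings state;

            zeroed.speed = 1000;
            driver_set_then_get(&zeroed, &state);
            /* without the zero-initialization, state.lanes would hold whatever
             * stack garbage was there, which is what KMSAN reported
             */
            printf("lanes echoed back: %d\n", state.lanes);
            return 0;
    }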
Fixes: 012ce4dd3102 ("ethtool: Extend link modes settings uAPI with lanes") Reported-and-tested-by: syzbot+ef6edd9f1baaa54d6235@syzkaller.appspotmail.com Link: https://lore.kernel.org/netdev/0000000000004bb41105fa70f361@google.com/ Reviewed-by: Danielle Ratson danieller@nvidia.com Signed-off-by: Ido Schimmel idosch@nvidia.com Reviewed-by: Leon Romanovsky leonro@nvidia.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/ethtool/ioctl.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/ethtool/ioctl.c b/net/ethtool/ioctl.c index 646b3e490c71a..f0c646a17700f 100644 --- a/net/ethtool/ioctl.c +++ b/net/ethtool/ioctl.c @@ -573,8 +573,8 @@ static int ethtool_get_link_ksettings(struct net_device *dev, static int ethtool_set_link_ksettings(struct net_device *dev, void __user *useraddr) { + struct ethtool_link_ksettings link_ksettings = {}; int err; - struct ethtool_link_ksettings link_ksettings;
ASSERT_RTNL();
From: Shannon Nelson shannon.nelson@amd.com
[ Upstream commit 4a54903ff68ddb33b6463c94b4eb37fc584ef760 ]
Add a check for NULL on the alloc return. If devlink_alloc() fails and we try to use devlink_priv() on the NULL return, the kernel gets very unhappy and panics. With this fix, the driver load will still fail, but at least it won't panic the kernel.
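For illustration only, a hedged userspace sketch of the pattern the fix follows: check the allocator's return value before handing it to a helper that dereferences it. The names below are invented, not the devlink API:

    #include <stdio.h>
    #include <stdlib.h>

    struct dl { long header[4]; char priv[]; };

    /* Stand-in for devlink_priv(): assumes a valid pointer and dereferences it */
    static void *dl_priv(struct dl *dl)
    {
            return dl->priv;
    }

    static void *alloc_with_priv(size_t priv_size)
    {
            struct dl *dl = malloc(sizeof(*dl) + priv_size);

            if (!dl)        /* the added check: fail the probe instead of crashing */
                    return NULL;
            return dl_priv(dl);
    }

    int main(void)
    {
            printf("priv = %p\n", alloc_with_priv(128));
            return 0;
    }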
Fixes: df69ba43217d ("ionic: Add basic framework for IONIC Network device driver") Signed-off-by: Shannon Nelson shannon.nelson@amd.com Reviewed-by: Simon Horman simon.horman@corigine.com Reviewed-by: Jiri Pirko jiri@nvidia.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/pensando/ionic/ionic_devlink.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_devlink.c b/drivers/net/ethernet/pensando/ionic/ionic_devlink.c index e6ff757895abb..4ec66a6be0738 100644 --- a/drivers/net/ethernet/pensando/ionic/ionic_devlink.c +++ b/drivers/net/ethernet/pensando/ionic/ionic_devlink.c @@ -61,6 +61,8 @@ struct ionic *ionic_devlink_alloc(struct device *dev) struct devlink *dl;
dl = devlink_alloc(&ionic_dl_ops, sizeof(struct ionic), dev); + if (!dl) + return NULL;
return devlink_priv(dl); }
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit 6a341729fb31b4c5df9f74f24b4b1c98410c9b87 ]
syzkaller reported a warning below [0].
We can reproduce it by sending 0-byte data from the (AF_PACKET, SOCK_PACKET) socket via some devices whose dev->hard_header_len is 0.
  struct sockaddr_pkt addr = {
          .spkt_family = AF_PACKET,
          .spkt_device = "tun0",
  };
  int fd;

  fd = socket(AF_PACKET, SOCK_PACKET, 0);
  sendto(fd, NULL, 0, 0, (struct sockaddr *)&addr, sizeof(addr));
We have a similar fix for the (AF_PACKET, SOCK_RAW) socket as commit dc633700f00f ("net/af_packet: check len when min_header_len equals to 0").
Let's add the same test for the SOCK_PACKET socket.
[0]: skb_assert_len WARNING: CPU: 1 PID: 19945 at include/linux/skbuff.h:2552 skb_assert_len include/linux/skbuff.h:2552 [inline] WARNING: CPU: 1 PID: 19945 at include/linux/skbuff.h:2552 __dev_queue_xmit+0x1f26/0x31d0 net/core/dev.c:4159 Modules linked in: CPU: 1 PID: 19945 Comm: syz-executor.0 Not tainted 6.3.0-rc7-02330-gca6270c12e20 #1 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014 RIP: 0010:skb_assert_len include/linux/skbuff.h:2552 [inline] RIP: 0010:__dev_queue_xmit+0x1f26/0x31d0 net/core/dev.c:4159 Code: 89 de e8 1d a2 85 fd 84 db 75 21 e8 64 a9 85 fd 48 c7 c6 80 2a 1f 86 48 c7 c7 c0 06 1f 86 c6 05 23 cf 27 04 01 e8 fa ee 56 fd <0f> 0b e8 43 a9 85 fd 0f b6 1d 0f cf 27 04 31 ff 89 de e8 e3 a1 85 RSP: 0018:ffff8880217af6e0 EFLAGS: 00010282 RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffffc90001133000 RDX: 0000000000040000 RSI: ffffffff81186922 RDI: 0000000000000001 RBP: ffff8880217af8b0 R08: 0000000000000001 R09: 0000000000000000 R10: 0000000000000001 R11: 0000000000000001 R12: ffff888030045640 R13: ffff8880300456b0 R14: ffff888030045650 R15: ffff888030045718 FS: 00007fc5864da640(0000) GS:ffff88806cd00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000020005740 CR3: 000000003f856003 CR4: 0000000000770ee0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 PKRU: 55555554 Call Trace: <TASK> dev_queue_xmit include/linux/netdevice.h:3085 [inline] packet_sendmsg_spkt+0xc4b/0x1230 net/packet/af_packet.c:2066 sock_sendmsg_nosec net/socket.c:724 [inline] sock_sendmsg+0x1b4/0x200 net/socket.c:747 ____sys_sendmsg+0x331/0x970 net/socket.c:2503 ___sys_sendmsg+0x11d/0x1c0 net/socket.c:2557 __sys_sendmmsg+0x18c/0x430 net/socket.c:2643 __do_sys_sendmmsg net/socket.c:2672 [inline] __se_sys_sendmmsg net/socket.c:2669 [inline] __x64_sys_sendmmsg+0x9c/0x100 net/socket.c:2669 do_syscall_x64 arch/x86/entry/common.c:50 [inline] do_syscall_64+0x3c/0x90 arch/x86/entry/common.c:80 entry_SYSCALL_64_after_hwframe+0x72/0xdc RIP: 0033:0x7fc58791de5d Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 73 9f 1b 00 f7 d8 64 89 01 48 RSP: 002b:00007fc5864d9cc8 EFLAGS: 00000246 ORIG_RAX: 0000000000000133 RAX: ffffffffffffffda RBX: 00000000004bbf80 RCX: 00007fc58791de5d RDX: 0000000000000001 RSI: 0000000020005740 RDI: 0000000000000004 RBP: 00000000004bbf80 R08: 0000000000000000 R09: 0000000000000000 R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000 R13: 000000000000000b R14: 00007fc58797e530 R15: 0000000000000000 </TASK> ---[ end trace 0000000000000000 ]--- skb len=0 headroom=16 headlen=0 tailroom=304 mac=(16,0) net=(16,-1) trans=-1 shinfo(txflags=0 nr_frags=0 gso(size=0 type=0 segs=0)) csum(0x0 ip_summed=0 complete_sw=0 valid=0 level=0) hash(0x0 sw=0 l4=0) proto=0x0000 pkttype=0 iif=0 dev name=sit0 feat=0x00000006401d7869 sk family=17 type=10 proto=0
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Reported-by: syzbot syzkaller@googlegroups.com Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Reviewed-by: Willem de Bruijn willemb@google.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/packet/af_packet.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c index 1259b34a28ebe..cc7a42ba94f93 100644 --- a/net/packet/af_packet.c +++ b/net/packet/af_packet.c @@ -2036,7 +2036,7 @@ static int packet_sendmsg_spkt(struct socket *sock, struct msghdr *msg, goto retry; }
- if (!dev_validate_header(dev, skb->data, len)) { + if (!dev_validate_header(dev, skb->data, len) || !skb->len) { err = -EINVAL; goto out_unlock; }
From: Chia-I Wu olvaffe@gmail.com
[ Upstream commit 2397e3d8d2e120355201a8310b61929f5a8bd2c0 ]
mgr->ctx_handles should be protected by mgr->lock.
v2: improve commit message v3: add a Fixes tag
Signed-off-by: Chia-I Wu olvaffe@gmail.com Reviewed-by: Christian König christian.koenig@amd.com Fixes: 52c6a62c64fa ("drm/amdgpu: add interface for editing a foreign process's priority v3") Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c index e9b45089a28a6..863b2a34b2d64 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c @@ -38,6 +38,7 @@ static int amdgpu_sched_process_priority_override(struct amdgpu_device *adev, { struct fd f = fdget(fd); struct amdgpu_fpriv *fpriv; + struct amdgpu_ctx_mgr *mgr; struct amdgpu_ctx *ctx; uint32_t id; int r; @@ -51,8 +52,11 @@ static int amdgpu_sched_process_priority_override(struct amdgpu_device *adev, return r; }
- idr_for_each_entry(&fpriv->ctx_mgr.ctx_handles, ctx, id) + mgr = &fpriv->ctx_mgr; + mutex_lock(&mgr->lock); + idr_for_each_entry(&mgr->ctx_handles, ctx, id) amdgpu_ctx_priority_override(ctx, priority); + mutex_unlock(&mgr->lock);
fdput(f); return 0;
From: Ruliang Lin u202112092@hust.edu.cn
[ Upstream commit 0d727e1856ef22dd9337199430258cb64cbbc658 ]
Smatch complains that: snd_usb_caiaq_input_init() warn: missing error code 'ret'
This patch adds a new case to handle the situation where the device does not support any input methods in the `snd_usb_caiaq_input_init` function. It returns an `-EINVAL` error code to indicate that no input methods are supported on the device.
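For illustration only, a small userspace sketch of the pattern Smatch flags: jumping to a shared exit label without setting the error code first makes the function return 0 ("success"). The names below are invented:

    #include <stdio.h>
    #include <errno.h>

    static int init_input(int supported)
    {
            int ret = 0;

            switch (supported) {
            case 1:
                    /* set up the input device */
                    break;
            default:
                    /* without this assignment the function would return 0 */
                    ret = -EINVAL;
                    goto exit_free;
            }
            return 0;

    exit_free:
            /* free whatever was allocated before the switch */
            return ret;
    }

    int main(void)
    {
            printf("unsupported device: %d\n", init_input(0));
            return 0;
    }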
Fixes: 523f1dce3743 ("[ALSA] Add Native Instrument usb audio device support") Signed-off-by: Ruliang Lin u202112092@hust.edu.cn Reviewed-by: Dongliang Mu dzm91@hust.edu.cn Acked-by: Daniel Mack daniel@zonque.org Link: https://lore.kernel.org/r/20230504065054.3309-1-u202112092@hust.edu.cn Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Sasha Levin sashal@kernel.org --- sound/usb/caiaq/input.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/sound/usb/caiaq/input.c b/sound/usb/caiaq/input.c index 1e2cf2f08eecd..84f26dce7f5d0 100644 --- a/sound/usb/caiaq/input.c +++ b/sound/usb/caiaq/input.c @@ -804,6 +804,7 @@ int snd_usb_caiaq_input_init(struct snd_usb_caiaqdev *cdev)
default: /* no input methods supported on this device */ + ret = -EINVAL; goto exit_free_idev; }
From: Claudio Imbrenda imbrenda@linux.ibm.com
[ Upstream commit 292a7d6fca33df70ca4b8e9b0d0e74adf87582dc ]
On machines without the Destroy Secure Configuration Fast UVC, the topmost level of page tables is set aside and freed asynchronously as the last step of the asynchronous teardown.
Each gmap has a host_to_guest radix tree mapping host (userspace) addresses (with 1M granularity) to gmap segment table entries (pmds).
If a guest is smaller than 2GB, the topmost level of page tables is the segment table (i.e. there are only 2 levels). Replacing it means that the pointers in the host_to_guest mapping would become stale and cause all kinds of nasty issues.
This patch fixes the issue by disallowing asynchronous teardown for guests with only 2 levels of page tables. Userspace should (and already does) try using the normal destroy if the asynchronous one fails.
Update s390_replace_asce so it refuses to replace segment type ASCEs. This is still needed in case the normal destroy VM fails.
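For illustration only, a sketch of the type check being added: mask out the type bits of the ASCE and refuse the operation for the segment type. The constant values below are invented for the example, not the s390 definitions:

    #include <stdio.h>
    #include <stdint.h>
    #include <errno.h>

    #define ASCE_TYPE_MASK    0x0cu   /* illustrative layout only */
    #define ASCE_TYPE_SEGMENT 0x00u
    #define ASCE_TYPE_REGION3 0x04u

    static int replace_asce(uint64_t asce)
    {
            /* a segment-type ASCE has no higher level that can be swapped out */
            if ((asce & ASCE_TYPE_MASK) == ASCE_TYPE_SEGMENT)
                    return -EINVAL;
            return 0;       /* would allocate and install the copy here */
    }

    int main(void)
    {
            printf("segment-type: %d, region3-type: %d\n",
                   replace_asce(ASCE_TYPE_SEGMENT), replace_asce(ASCE_TYPE_REGION3));
            return 0;
    }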
Fixes: fb491d5500a7 ("KVM: s390: pv: asynchronous destroy for reboot") Reviewed-by: Marc Hartmayer mhartmay@linux.ibm.com Reviewed-by: Janosch Frank frankja@linux.ibm.com Signed-off-by: Claudio Imbrenda imbrenda@linux.ibm.com Message-Id: 20230421085036.52511-2-imbrenda@linux.ibm.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/s390/kvm/pv.c | 5 +++++ arch/s390/mm/gmap.c | 7 +++++++ 2 files changed, 12 insertions(+)
diff --git a/arch/s390/kvm/pv.c b/arch/s390/kvm/pv.c index e032ebbf51b97..3ce5f4351156a 100644 --- a/arch/s390/kvm/pv.c +++ b/arch/s390/kvm/pv.c @@ -314,6 +314,11 @@ int kvm_s390_pv_set_aside(struct kvm *kvm, u16 *rc, u16 *rrc) */ if (kvm->arch.pv.set_aside) return -EINVAL; + + /* Guest with segment type ASCE, refuse to destroy asynchronously */ + if ((kvm->arch.gmap->asce & _ASCE_TYPE_MASK) == _ASCE_TYPE_SEGMENT) + return -EINVAL; + priv = kzalloc(sizeof(*priv), GFP_KERNEL); if (!priv) return -ENOMEM; diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c index 74e1d873dce05..784fc6cbddb1a 100644 --- a/arch/s390/mm/gmap.c +++ b/arch/s390/mm/gmap.c @@ -2830,6 +2830,9 @@ EXPORT_SYMBOL_GPL(s390_unlist_old_asce); * s390_replace_asce - Try to replace the current ASCE of a gmap with a copy * @gmap: the gmap whose ASCE needs to be replaced * + * If the ASCE is a SEGMENT type then this function will return -EINVAL, + * otherwise the pointers in the host_to_guest radix tree will keep pointing + * to the wrong pages, causing use-after-free and memory corruption. * If the allocation of the new top level page table fails, the ASCE is not * replaced. * In any case, the old ASCE is always removed from the gmap CRST list. @@ -2844,6 +2847,10 @@ int s390_replace_asce(struct gmap *gmap)
s390_unlist_old_asce(gmap);
+ /* Replacing segment type ASCEs would cause serious issues */ + if ((gmap->asce & _ASCE_TYPE_MASK) == _ASCE_TYPE_SEGMENT) + return -EINVAL; + page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER); if (!page) return -ENOMEM;
From: Claudio Imbrenda imbrenda@linux.ibm.com
[ Upstream commit c148dc8e2fa403be501612ee409db866eeed35c0 ]
Fix a potential race in gmap_make_secure() and remove the last user of follow_page() without FOLL_GET.
The old code is locking something it doesn't have a reference to, and as explained by Jason and David in this discussion: https://lore.kernel.org/linux-mm/Y9J4P%2FRNvY1Ztn0Q@nvidia.com/ it can lead to all kinds of bad things, including the page getting unmapped (MADV_DONTNEED), freed, or reallocated as a larger folio, in which case the unlock_page() would target the wrong bit. There is also another race involving FOLL_WRITE, between the follow_page() and the get_locked_pte().
The main point is to remove the last use of follow_page() without FOLL_GET or FOLL_PIN, removing the races can be considered a nice bonus.
Link: https://lore.kernel.org/linux-mm/Y9J4P%2FRNvY1Ztn0Q@nvidia.com/ Suggested-by: Jason Gunthorpe jgg@nvidia.com Fixes: 214d9bbcd3a6 ("s390/mm: provide memory management functions for protected KVM guests") Reviewed-by: Jason Gunthorpe jgg@nvidia.com Signed-off-by: Claudio Imbrenda imbrenda@linux.ibm.com Message-Id: 20230428092753.27913-2-imbrenda@linux.ibm.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/s390/kernel/uv.c | 32 +++++++++++--------------------- 1 file changed, 11 insertions(+), 21 deletions(-)
diff --git a/arch/s390/kernel/uv.c b/arch/s390/kernel/uv.c index 9f18a4af9c131..cb2ee06df286c 100644 --- a/arch/s390/kernel/uv.c +++ b/arch/s390/kernel/uv.c @@ -192,21 +192,10 @@ static int expected_page_refs(struct page *page) return res; }
-static int make_secure_pte(pte_t *ptep, unsigned long addr, - struct page *exp_page, struct uv_cb_header *uvcb) +static int make_page_secure(struct page *page, struct uv_cb_header *uvcb) { - pte_t entry = READ_ONCE(*ptep); - struct page *page; int expected, cc = 0;
- if (!pte_present(entry)) - return -ENXIO; - if (pte_val(entry) & _PAGE_INVALID) - return -ENXIO; - - page = pte_page(entry); - if (page != exp_page) - return -ENXIO; if (PageWriteback(page)) return -EAGAIN; expected = expected_page_refs(page); @@ -304,17 +293,18 @@ int gmap_make_secure(struct gmap *gmap, unsigned long gaddr, void *uvcb) goto out;
rc = -ENXIO; - page = follow_page(vma, uaddr, FOLL_WRITE); - if (IS_ERR_OR_NULL(page)) - goto out; - - lock_page(page); ptep = get_locked_pte(gmap->mm, uaddr, &ptelock); - if (should_export_before_import(uvcb, gmap->mm)) - uv_convert_from_secure(page_to_phys(page)); - rc = make_secure_pte(ptep, uaddr, page, uvcb); + if (pte_present(*ptep) && !(pte_val(*ptep) & _PAGE_INVALID) && pte_write(*ptep)) { + page = pte_page(*ptep); + rc = -EAGAIN; + if (trylock_page(page)) { + if (should_export_before_import(uvcb, gmap->mm)) + uv_convert_from_secure(page_to_phys(page)); + rc = make_page_secure(page, uvcb); + unlock_page(page); + } + } pte_unmap_unlock(ptep, ptelock); - unlock_page(page); out: mmap_read_unlock(gmap->mm);
From: Arınç ÜNAL arinc.unal@arinc9.com
[ Upstream commit 37c218d8021e36e226add4bab93d071d30fe0704 ]
The multi-chip module MT7530 switch with a 40 MHz oscillator on the MT7621AT, MT7621DAT, and MT7621ST SoCs forwards corrupt frames using trgmii.
This is caused by the assumption that the MT7621 SoCs have a 150 MHz PLL, hence using the ncpo1 value 0x0780.
My testing shows this value works on the Unielec U7621-06; Bartel's testing shows it won't work on the Hi-Link HLK-MT7621A and Netgear WAC104. All devices tested have got 40 MHz oscillators.
Using the value for the 125 MHz PLL, 0x0640, works on all boards at hand. The definitions for the 125 MHz PLL exist in the Banana Pi BPI-R2 BSP source code whilst the ones for the 150 MHz PLL don't.
Forwarding frames using trgmii on the MCM MT7530 switch with a 25 MHz oscillator on the said MT7621 SoCs works fine because the ncpo1 value defined for it is for 125 MHz PLL.
Change the 150 MHz PLL comment to 125 MHz PLL, and use the 125 MHz PLL ncpo1 values for both oscillator frequencies.
Link: https://github.com/BPI-SINOVOIP/BPI-R2-bsp/blob/81d24bbce7d99524d0771a8bdb2d... Fixes: 7ef6f6f8d237 ("net: dsa: mt7530: Add MT7621 TRGMII mode support") Tested-by: Bartel Eerdekens bartel.eerdekens@constell8.be Signed-off-by: Arınç ÜNAL arinc.unal@arinc9.com Reviewed-by: Florian Fainelli f.fainelli@gmail.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/dsa/mt7530.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c index 326f992536a7e..f61e6e89339e1 100644 --- a/drivers/net/dsa/mt7530.c +++ b/drivers/net/dsa/mt7530.c @@ -446,9 +446,9 @@ mt7530_pad_clk_setup(struct dsa_switch *ds, phy_interface_t interface) else ssc_delta = 0x87; if (priv->id == ID_MT7621) { - /* PLL frequency: 150MHz: 1.2GBit */ + /* PLL frequency: 125MHz: 1.0GBit */ if (xtal == HWTRAP_XTAL_40MHZ) - ncpo1 = 0x0780; + ncpo1 = 0x0640; if (xtal == HWTRAP_XTAL_25MHZ) ncpo1 = 0x0a00; } else { /* PLL frequency: 250MHz: 2.0Gbit */
From: Daniel Golle daniel@makrotopia.org
[ Upstream commit 7f54cc9772ced2d76ac11832f0ada43798443ac9 ]
MT7988 shares a significant part of the setup function with MT7531. Split those parts off into a shared function which is also going to be used by mt7988_setup.
Signed-off-by: Daniel Golle daniel@makrotopia.org Reviewed-by: Andrew Lunn andrew@lunn.ch Signed-off-by: David S. Miller davem@davemloft.net Stable-dep-of: 120a56b01bee ("net: dsa: mt7530: fix network connectivity with multiple CPU ports") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/dsa/mt7530.c | 99 ++++++++++++++++++++++------------------ 1 file changed, 55 insertions(+), 44 deletions(-)
diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c index f61e6e89339e1..54babe1f2b751 100644 --- a/drivers/net/dsa/mt7530.c +++ b/drivers/net/dsa/mt7530.c @@ -2312,12 +2312,65 @@ mt7530_setup(struct dsa_switch *ds) return 0; }
+static int +mt7531_setup_common(struct dsa_switch *ds) +{ + struct mt7530_priv *priv = ds->priv; + struct dsa_port *cpu_dp; + int ret, i; + + /* BPDU to CPU port */ + dsa_switch_for_each_cpu_port(cpu_dp, ds) { + mt7530_rmw(priv, MT7531_CFC, MT7531_CPU_PMAP_MASK, + BIT(cpu_dp->index)); + break; + } + mt7530_rmw(priv, MT753X_BPC, MT753X_BPDU_PORT_FW_MASK, + MT753X_BPDU_CPU_ONLY); + + /* Enable and reset MIB counters */ + mt7530_mib_reset(ds); + + for (i = 0; i < MT7530_NUM_PORTS; i++) { + /* Disable forwarding by default on all ports */ + mt7530_rmw(priv, MT7530_PCR_P(i), PCR_MATRIX_MASK, + PCR_MATRIX_CLR); + + /* Disable learning by default on all ports */ + mt7530_set(priv, MT7530_PSC_P(i), SA_DIS); + + mt7530_set(priv, MT7531_DBG_CNT(i), MT7531_DIS_CLR); + + if (dsa_is_cpu_port(ds, i)) { + ret = mt753x_cpu_port_enable(ds, i); + if (ret) + return ret; + } else { + mt7530_port_disable(ds, i); + + /* Set default PVID to 0 on all user ports */ + mt7530_rmw(priv, MT7530_PPBV1_P(i), G0_PORT_VID_MASK, + G0_PORT_VID_DEF); + } + + /* Enable consistent egress tag */ + mt7530_rmw(priv, MT7530_PVC_P(i), PVC_EG_TAG_MASK, + PVC_EG_TAG(MT7530_VLAN_EG_CONSISTENT)); + } + + /* Flush the FDB table */ + ret = mt7530_fdb_cmd(priv, MT7530_FDB_FLUSH, NULL); + if (ret < 0) + return ret; + + return 0; +} + static int mt7531_setup(struct dsa_switch *ds) { struct mt7530_priv *priv = ds->priv; struct mt7530_dummy_poll p; - struct dsa_port *cpu_dp; u32 val, id; int ret, i;
@@ -2395,44 +2448,7 @@ mt7531_setup(struct dsa_switch *ds) mt7531_ind_c45_phy_write(priv, MT753X_CTRL_PHY_ADDR, MDIO_MMD_VEND2, CORE_PLL_GROUP4, val);
- /* BPDU to CPU port */ - dsa_switch_for_each_cpu_port(cpu_dp, ds) { - mt7530_rmw(priv, MT7531_CFC, MT7531_CPU_PMAP_MASK, - BIT(cpu_dp->index)); - break; - } - mt7530_rmw(priv, MT753X_BPC, MT753X_BPDU_PORT_FW_MASK, - MT753X_BPDU_CPU_ONLY); - - /* Enable and reset MIB counters */ - mt7530_mib_reset(ds); - - for (i = 0; i < MT7530_NUM_PORTS; i++) { - /* Disable forwarding by default on all ports */ - mt7530_rmw(priv, MT7530_PCR_P(i), PCR_MATRIX_MASK, - PCR_MATRIX_CLR); - - /* Disable learning by default on all ports */ - mt7530_set(priv, MT7530_PSC_P(i), SA_DIS); - - mt7530_set(priv, MT7531_DBG_CNT(i), MT7531_DIS_CLR); - - if (dsa_is_cpu_port(ds, i)) { - ret = mt753x_cpu_port_enable(ds, i); - if (ret) - return ret; - } else { - mt7530_port_disable(ds, i); - - /* Set default PVID to 0 on all user ports */ - mt7530_rmw(priv, MT7530_PPBV1_P(i), G0_PORT_VID_MASK, - G0_PORT_VID_DEF); - } - - /* Enable consistent egress tag */ - mt7530_rmw(priv, MT7530_PVC_P(i), PVC_EG_TAG_MASK, - PVC_EG_TAG(MT7530_VLAN_EG_CONSISTENT)); - } + mt7531_setup_common(ds);
/* Setup VLAN ID 0 for VLAN-unaware bridges */ ret = mt7530_setup_vlan0(priv); @@ -2442,11 +2458,6 @@ mt7531_setup(struct dsa_switch *ds) ds->assisted_learning_on_cpu_port = true; ds->mtu_enforcement_ingress = true;
- /* Flush the FDB table */ - ret = mt7530_fdb_cmd(priv, MT7530_FDB_FLUSH, NULL); - if (ret < 0) - return ret; - return 0; }
From: Arınç ÜNAL arinc.unal@arinc9.com
[ Upstream commit 120a56b01beed51ab5956a734adcfd2760307107 ]
On mt753x_cpu_port_enable() there's code that enables flooding for the CPU port only. Since mt753x_cpu_port_enable() runs twice when both CPU ports are enabled, port 6 becomes the only port to forward the frames to. But port 5 is the active port, so no frames received from the user ports will be forwarded to port 5 which breaks network connectivity.
Each bit of the BC_FFP, UNM_FFP, and UNU_FFP fields represents a port. Fix this issue by setting the bit that corresponds to the CPU port without overwriting the other bits.
Clear the bits beforehand only for the MT7531 switch. According to the documents MT7621 Giga Switch Programming Guide v0.3 and MT7531 Reference Manual for Development Board v1.0, after reset, the BC_FFP, UNM_FFP, and UNU_FFP bits are set to 1 for MT7531, 0 for MT7530.
The commit 5e5502e012b8 ("net: dsa: mt7530: fix roaming from DSA user ports") silently changed the method to set the bits on the MT7530_MFC. Instead of clearing the relevant bits before mt7530_cpu_port_enable() which runs under a for loop, the commit started doing it on mt7530_cpu_port_enable().
Back then, this didn't really matter as only a single CPU port could be used since the CPU port number was hardcoded. The driver was later changed with commit 1f9a6abecf53 ("net: dsa: mt7530: get cpu-port via dp->cpu_dp instead of constant") to retrieve the CPU port via dp->cpu_dp. With that, this silent change became an issue when using multiple CPU ports.
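For illustration only, a userspace sketch of the difference between overwriting the flood-port field on every call (old behaviour) and OR-ing in one port's bit per call (the fix). The register layout below is made up for the example:

    #include <stdio.h>
    #include <stdint.h>

    #define FLOOD_PORTS_MASK 0xffu  /* one bit per port, ports 0..7 */

    static uint32_t mfc_reg;        /* stands in for MT7530_MFC */

    /* Old behaviour: clear the whole field, then write only this port's bit */
    static void enable_flood_overwrite(int port)
    {
            mfc_reg = (mfc_reg & ~FLOOD_PORTS_MASK) | (1u << port);
    }

    /* Fixed behaviour: set only this port's bit, keep the other ports' bits */
    static void enable_flood_set(int port)
    {
            mfc_reg |= 1u << port;
    }

    int main(void)
    {
            mfc_reg = 0;
            enable_flood_overwrite(5);
            enable_flood_overwrite(6);      /* clobbers port 5's bit */
            printf("overwrite: 0x%02x (port 5 lost)\n",
                   (unsigned int)(mfc_reg & FLOOD_PORTS_MASK));

            mfc_reg = 0;
            enable_flood_set(5);
            enable_flood_set(6);            /* both CPU ports keep flooding enabled */
            printf("set:       0x%02x\n",
                   (unsigned int)(mfc_reg & FLOOD_PORTS_MASK));
            return 0;
    }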
Fixes: 5e5502e012b8 ("net: dsa: mt7530: fix roaming from DSA user ports") Signed-off-by: Arınç ÜNAL arinc.unal@arinc9.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/dsa/mt7530.c | 10 +++++++--- 1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c index 54babe1f2b751..69b6a9265e4e4 100644 --- a/drivers/net/dsa/mt7530.c +++ b/drivers/net/dsa/mt7530.c @@ -1015,9 +1015,9 @@ mt753x_cpu_port_enable(struct dsa_switch *ds, int port) mt7530_write(priv, MT7530_PVC_P(port), PORT_SPEC_TAG);
- /* Disable flooding by default */ - mt7530_rmw(priv, MT7530_MFC, BC_FFP_MASK | UNM_FFP_MASK | UNU_FFP_MASK, - BC_FFP(BIT(port)) | UNM_FFP(BIT(port)) | UNU_FFP(BIT(port))); + /* Enable flooding on the CPU port */ + mt7530_set(priv, MT7530_MFC, BC_FFP(BIT(port)) | UNM_FFP(BIT(port)) | + UNU_FFP(BIT(port)));
/* Set CPU port number */ if (priv->id == ID_MT7621) @@ -2331,6 +2331,10 @@ mt7531_setup_common(struct dsa_switch *ds) /* Enable and reset MIB counters */ mt7530_mib_reset(ds);
+ /* Disable flooding on all ports */ + mt7530_clear(priv, MT7530_MFC, BC_FFP_MASK | UNM_FFP_MASK | + UNU_FFP_MASK); + for (i = 0; i < MT7530_NUM_PORTS; i++) { /* Disable forwarding by default on all ports */ mt7530_rmw(priv, MT7530_PCR_P(i), PCR_MATRIX_MASK,
From: Michal Swiatkowski michal.swiatkowski@linux.intel.com
[ Upstream commit 9f699b71c2f31c51bd1483a20e1c8ddc5986a8c9 ]
VF to VF traffic shouldn't go outside the host. To enforce this, set only the loopback enable bit for all ingress-type rules added via the tc tool.
Fixes: 0d08a441fb1a ("ice: ndo_setup_tc implementation for PF") Reported-by: Sujai Buvaneswaran Sujai.Buvaneswaran@intel.com Signed-off-by: Michal Swiatkowski michal.swiatkowski@linux.intel.com Tested-by: George Kuruvinakunnel george.kuruvinakunnel@intel.com Reviewed-by: Simon Horman simon.horman@corigine.com Signed-off-by: Tony Nguyen anthony.l.nguyen@intel.com Reviewed-by: Leon Romanovsky leonro@nvidia.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/intel/ice/ice_tc_lib.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/intel/ice/ice_tc_lib.c b/drivers/net/ethernet/intel/ice/ice_tc_lib.c index ce72d512eddf9..a9db9bdd72629 100644 --- a/drivers/net/ethernet/intel/ice/ice_tc_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_tc_lib.c @@ -693,17 +693,18 @@ ice_eswitch_add_tc_fltr(struct ice_vsi *vsi, struct ice_tc_flower_fltr *fltr) * results into order of switch rule evaluation. */ rule_info.priority = 7; + rule_info.flags_info.act_valid = true;
if (fltr->direction == ICE_ESWITCH_FLTR_INGRESS) { rule_info.sw_act.flag |= ICE_FLTR_RX; rule_info.sw_act.src = hw->pf_id; rule_info.rx = true; + rule_info.flags_info.act = ICE_SINGLE_ACT_LB_ENABLE; } else { rule_info.sw_act.flag |= ICE_FLTR_TX; rule_info.sw_act.src = vsi->idx; rule_info.rx = false; rule_info.flags_info.act = ICE_SINGLE_ACT_LAN_ENABLE; - rule_info.flags_info.act_valid = true; }
/* specify the cookie as filter_rule_id */
From: Wenliang Wang wangwenliang.1995@bytedance.com
[ Upstream commit f8bb5104394560e29017c25bcade4c6b7aabd108 ]
For the multi-queue and large ring-size use case, the following error occurred in free_unused_bufs(): rcu: INFO: rcu_sched self-detected stall on CPU.
Fixes: 986a4f4d452d ("virtio_net: multiqueue support") Signed-off-by: Wenliang Wang wangwenliang.1995@bytedance.com Acked-by: Michael S. Tsirkin mst@redhat.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/virtio_net.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 0644069592211..259d54b229bf1 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -3411,12 +3411,14 @@ static void free_unused_bufs(struct virtnet_info *vi) struct virtqueue *vq = vi->sq[i].vq; while ((buf = virtqueue_detach_unused_buf(vq)) != NULL) virtnet_sq_free_unused_buf(vq, buf); + cond_resched(); }
for (i = 0; i < vi->max_queue_pairs; i++) { struct virtqueue *vq = vi->rq[i].vq; while ((buf = virtqueue_detach_unused_buf(vq)) != NULL) virtnet_rq_free_unused_buf(vq, buf); + cond_resched(); } }
From: Wei Fang wei.fang@nxp.com
[ Upstream commit 299efdc2380aac588557f4d0b2ce7bee05bd0cf2 ]
We should check whether the current SFI (Stream Filter Instance) table is full before creating a new SFI entry. However, the previous logic checked the handle by mistake, which might lead to unpredictable behavior.
Fixes: 888ae5a3952b ("net: enetc: add tc flower psfp offload driver") Signed-off-by: Wei Fang wei.fang@nxp.com Reviewed-by: Leon Romanovsky leonro@nvidia.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/freescale/enetc/enetc_qos.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/freescale/enetc/enetc_qos.c b/drivers/net/ethernet/freescale/enetc/enetc_qos.c index fcebb54224c09..a8539a8554a13 100644 --- a/drivers/net/ethernet/freescale/enetc/enetc_qos.c +++ b/drivers/net/ethernet/freescale/enetc/enetc_qos.c @@ -1255,7 +1255,7 @@ static int enetc_psfp_parse_clsflower(struct enetc_ndev_priv *priv, int index;
index = enetc_get_free_index(priv); - if (sfi->handle < 0) { + if (index < 0) { NL_SET_ERR_MSG_MOD(extack, "No Stream Filter resource!"); err = -ENOSPC; goto free_fmi;
From: Shenwei Wang shenwei.wang@nxp.com
[ Upstream commit 26312c685ae0bca61e06ac75ee158b1e69546415 ]
In the current xdp_xmit implementation, if any single frame fails to transmit due to insufficient buffer descriptors, the function nevertheless reports success in sending all frames. This results in erroneously indicating that frames were transmitted when in fact they were dropped.
This patch fixes the issue by ensuring the return value properly indicates the actual number of frames successfully transmitted, rather than potentially reporting success for all frames when some could not be transmitted.
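For illustration only, a minimal userspace sketch of the fixed accounting: stop at the first frame that cannot be queued and report only how many actually went out. xmit_one() is a stand-in, not the driver function:

    #include <stdio.h>

    /* Stand-in for fec_enet_txq_xmit_frame(): the ring has room for 3 frames */
    static int xmit_one(int i)
    {
            return i < 3 ? 0 : -1;  /* non-zero means "no descriptors left" */
    }

    static int xdp_xmit(int num_frames)
    {
            int i, sent_frames = 0;

            for (i = 0; i < num_frames; i++) {
                    if (xmit_one(i) != 0)
                            break;
                    sent_frames++;
            }
            return sent_frames;     /* previously num_frames was always returned */
    }

    int main(void)
    {
            printf("sent %d of 5 frames\n", xdp_xmit(5));
            return 0;
    }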
Fixes: 6d6b39f180b8 ("net: fec: add initial XDP support") Signed-off-by: Gagandeep Singh g.singh@nxp.com Signed-off-by: Shenwei Wang shenwei.wang@nxp.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/freescale/fec_main.c | 13 +++++++++---- 1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c index 2341597408d12..5fd3b41319827 100644 --- a/drivers/net/ethernet/freescale/fec_main.c +++ b/drivers/net/ethernet/freescale/fec_main.c @@ -3737,7 +3737,8 @@ static int fec_enet_txq_xmit_frame(struct fec_enet_private *fep, entries_free = fec_enet_get_free_txdesc_num(txq); if (entries_free < MAX_SKB_FRAGS + 1) { netdev_err(fep->netdev, "NOT enough BD for SG!\n"); - return NETDEV_TX_OK; + xdp_return_frame(frame); + return NETDEV_TX_BUSY; }
/* Fill in a Tx ring entry */ @@ -3795,6 +3796,7 @@ static int fec_enet_xdp_xmit(struct net_device *dev, struct fec_enet_private *fep = netdev_priv(dev); struct fec_enet_priv_tx_q *txq; int cpu = smp_processor_id(); + unsigned int sent_frames = 0; struct netdev_queue *nq; unsigned int queue; int i; @@ -3805,8 +3807,11 @@ static int fec_enet_xdp_xmit(struct net_device *dev,
__netif_tx_lock(nq, cpu);
- for (i = 0; i < num_frames; i++) - fec_enet_txq_xmit_frame(fep, txq, frames[i]); + for (i = 0; i < num_frames; i++) { + if (fec_enet_txq_xmit_frame(fep, txq, frames[i]) != 0) + break; + sent_frames++; + }
/* Make sure the update to bdp and tx_skbuff are performed. */ wmb(); @@ -3816,7 +3821,7 @@ static int fec_enet_xdp_xmit(struct net_device *dev,
__netif_tx_unlock(nq);
- return num_frames; + return sent_frames; }
static const struct net_device_ops fec_netdev_ops = {
From: Florian Fainelli f.fainelli@gmail.com
[ Upstream commit 93e0401e0fc0c54b0ac05b687cd135c2ac38187c ]
The call to phy_stop() races with the later call to phy_disconnect(), resulting in concurrent phy_suspend() calls being run from different CPUs. The final call to phy_disconnect() ensures that the PHY is stopped and suspended, too.
Fixes: c96e731c93ff ("net: bcmgenet: connect and disconnect from the PHY state machine") Signed-off-by: Florian Fainelli f.fainelli@gmail.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/broadcom/genet/bcmgenet.c | 1 - 1 file changed, 1 deletion(-)
diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c index d937daa8ee883..f28ffc31df220 100644 --- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c +++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c @@ -3465,7 +3465,6 @@ static void bcmgenet_netif_stop(struct net_device *dev) /* Disable MAC transmit. TX DMA disabled must be done before this */ umac_enable_set(priv, CMD_TX_EN, false);
- phy_stop(dev->phydev); bcmgenet_disable_rx_napi(priv); bcmgenet_intr_disable(priv);
On 5/15/2023 9:26 AM, Greg Kroah-Hartman wrote:
From: Florian Fainelli f.fainelli@gmail.com
[ Upstream commit 93e0401e0fc0c54b0ac05b687cd135c2ac38187c ]
The call to phy_stop() races with the later call to phy_disconnect(), resulting in concurrent phy_suspend() calls being run from different CPUs. The final call to phy_disconnect() ensures that the PHY is stopped and suspended, too.
Fixes: c96e731c93ff ("net: bcmgenet: connect and disconnect from the PHY state machine") Signed-off-by: Florian Fainelli f.fainelli@gmail.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org
Please drop this patch until https://lore.kernel.org/lkml/20230515025608.2587012-1-f.fainelli@gmail.com/ is merged, thanks!
On Mon, May 15, 2023 at 08:42:31PM -0700, Florian Fainelli wrote:
On 5/15/2023 9:26 AM, Greg Kroah-Hartman wrote:
From: Florian Fainelli f.fainelli@gmail.com
[ Upstream commit 93e0401e0fc0c54b0ac05b687cd135c2ac38187c ]
The call to phy_stop() races with the later call to phy_disconnect(), resulting in concurrent phy_suspend() calls being run from different CPUs. The final call to phy_disconnect() ensures that the PHY is stopped and suspended, too.
Fixes: c96e731c93ff ("net: bcmgenet: connect and disconnect from the PHY state machine") Signed-off-by: Florian Fainelli f.fainelli@gmail.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org
Please drop this patch until https://lore.kernel.org/lkml/20230515025608.2587012-1-f.fainelli@gmail.com/ is merged, thanks!
Now dropped from all queues. Let us know when this fix is merged so we can queue them both up.
thanks,
greg k-h
From: Kan Liang kan.liang@linux.intel.com
[ Upstream commit 07d85ba9d04e1ebd282f656a29ddf08c5b7b32a2 ]
Hundreds of "read LOST count failed" error messages may be displayed when the below command is launched.
perf record -e '{cpu/mem-loads-aux/,cpu/event=0xcd,umask=0x1/}:S' -a
According to the commit 89e3106fa25fb1b6 ("libperf: Handle read format in perf_evsel__read()"), the PERF_FORMAT_GROUP is only available for the leader. However, record__read_lost_samples() goes through every entry of an evlist, which includes both leaders and members. The member events error out and trigger the error message. Since there may be hundreds of CPUs on a server, the message will be printed hundreds of times, which is very annoying.
The message itself is correct, but the pr_err() is overkill. Other error messages in record__read_lost_samples() are all pr_debug. To make the output format consistent, change the pr_err("read LOST count failed\n"); to pr_debug("read LOST count failed\n");. Users can still get the message via the -v option.
Fixes: e3a23261ad06d598 ("perf record: Read and inject LOST_SAMPLES events") Signed-off-by: Kan Liang kan.liang@linux.intel.com Acked-by: Namhyung Kim namhyung@kernel.org Cc: Ian Rogers irogers@google.com Cc: Ingo Molnar mingo@redhat.com Cc: Jiri Olsa jolsa@kernel.org Cc: Namhyung Kim namhyung@kernel.org Cc: Peter Zijlstra peterz@infradead.org Link: https://lore.kernel.org/r/20230301150413.27011-1-kan.liang@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- tools/perf/builtin-record.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c index 8374117e66f6e..be7c0c29d15b0 100644 --- a/tools/perf/builtin-record.c +++ b/tools/perf/builtin-record.c @@ -1866,7 +1866,7 @@ static void __record__read_lost_samples(struct record *rec, struct evsel *evsel, int id_hdr_size;
if (perf_evsel__read(&evsel->core, cpu_idx, thread_idx, &count) < 0) { - pr_err("read LOST count failed\n"); + pr_debug("read LOST count failed\n"); return; }
From: Ian Rogers irogers@google.com
[ Upstream commit 7a9b223ca0761a7c7c72e569b86b84a907aa0f92 ]
Add a build target to echo the python/perf.so's name from Makefile.perf. Use it in tests/make so the correct target is built and tested for.
Fixes: caec54705adb73b0 ("perf build: Fix python/perf.so library's name") Signed-off-by: Ian Rogers irogers@google.com Cc: Adrian Hunter adrian.hunter@intel.com Cc: Alexander Shishkin alexander.shishkin@linux.intel.com Cc: Andres Freund andres@anarazel.de Cc: Ian Rogers irogers@google.com Cc: Ingo Molnar mingo@redhat.com Cc: Jiri Olsa jolsa@kernel.org Cc: Leo Yan leo.yan@linaro.org Cc: Mark Rutland mark.rutland@arm.com Cc: Martin Liška mliska@suse.cz Cc: Namhyung Kim namhyung@kernel.org Cc: Nathan Chancellor nathan@kernel.org Cc: Nick Desaulniers ndesaulniers@google.com Cc: Pavithra Gurushankar gpavithrasha@gmail.com Cc: Peter Zijlstra peterz@infradead.org Cc: Quentin Monnet quentin@isovalent.com Cc: Roberto Sassu roberto.sassu@huawei.com Cc: Stephane Eranian eranian@google.com Cc: Tiezhu Yang yangtiezhu@loongson.cn Cc: Tom Rix trix@redhat.com Cc: Yang Jihong yangjihong1@huawei.com Cc: llvm@lists.linux.dev Link: https://lore.kernel.org/r/20230311065753.3012826-2-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- tools/perf/Makefile.perf | 7 +++++-- tools/perf/tests/make | 5 +++-- 2 files changed, 8 insertions(+), 4 deletions(-)
diff --git a/tools/perf/Makefile.perf b/tools/perf/Makefile.perf index b7d9c42062300..cc2b0ace54bac 100644 --- a/tools/perf/Makefile.perf +++ b/tools/perf/Makefile.perf @@ -647,13 +647,16 @@ all: shell_compatibility_test $(ALL_PROGRAMS) $(LANG_BINDINGS) $(OTHER_PROGRAMS) # Create python binding output directory if not already present _dummy := $(shell [ -d '$(OUTPUT)python' ] || mkdir -p '$(OUTPUT)python')
-$(OUTPUT)python/perf$(PYTHON_EXTENSION_SUFFIX): $(PYTHON_EXT_SRCS) $(PYTHON_EXT_DEPS) $(LIBPERF) +$(OUTPUT)python/perf$(PYTHON_EXTENSION_SUFFIX): $(PYTHON_EXT_SRCS) $(PYTHON_EXT_DEPS) $(LIBPERF) $(LIBSUBCMD) $(QUIET_GEN)LDSHARED="$(CC) -pthread -shared" \ CFLAGS='$(CFLAGS)' LDFLAGS='$(LDFLAGS)' \ $(PYTHON_WORD) util/setup.py \ --quiet build_ext; \ cp $(PYTHON_EXTBUILD_LIB)perf*.so $(OUTPUT)python/
+python_perf_target: + @echo "Target is: $(OUTPUT)python/perf$(PYTHON_EXTENSION_SUFFIX)" + please_set_SHELL_PATH_to_a_more_modern_shell: $(Q)$$(:)
@@ -1151,7 +1154,7 @@ FORCE: .PHONY: all install clean config-clean strip install-gtk .PHONY: shell_compatibility_test please_set_SHELL_PATH_to_a_more_modern_shell .PHONY: .FORCE-PERF-VERSION-FILE TAGS tags cscope FORCE prepare -.PHONY: archheaders +.PHONY: archheaders python_perf_target
endif # force_fixdep
diff --git a/tools/perf/tests/make b/tools/perf/tests/make index 009d6efb673ce..deb37fb982e97 100644 --- a/tools/perf/tests/make +++ b/tools/perf/tests/make @@ -62,10 +62,11 @@ lib = lib endif
has = $(shell which $1 2>/dev/null) +python_perf_so := $(shell $(MAKE) python_perf_target|grep "Target is:"|awk '{print $$3}')
# standard single make variable specified make_clean_all := clean all -make_python_perf_so := python/perf.so +make_python_perf_so := $(python_perf_so) make_debug := DEBUG=1 make_no_libperl := NO_LIBPERL=1 make_no_libpython := NO_LIBPYTHON=1 @@ -204,7 +205,7 @@ test_make_doc := $(test_ok) test_make_help_O := $(test_ok) test_make_doc_O := $(test_ok)
-test_make_python_perf_so := test -f $(PERF_O)/python/perf.so +test_make_python_perf_so := test -f $(PERF_O)/$(python_perf_so)
test_make_perf_o := test -f $(PERF_O)/perf.o test_make_util_map_o := test -f $(PERF_O)/util/map.o
From: Roman Lozko lozko.roma@gmail.com
[ Upstream commit 1f64cfdebfe0494264271e8d7a3a47faf5f58ec7 ]
Integers are not converted to floats during division in Python 2, which results in incorrect IPC values. Fix this by switching to the new division behavior.
Fixes: a483e64c0b62e93a ("perf scripting python: intel-pt-events.py: Add --insn-trace and --src-trace") Signed-off-by: Roman Lozko lozko.roma@gmail.com Acked-by: Adrian Hunter adrian.hunter@intel.com Cc: Adrian Hunter adrian.hunter@intel.com Link: https://lore.kernel.org/r/20230310150445.2925841-1-lozko.roma@gmail.com Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- tools/perf/scripts/python/intel-pt-events.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/perf/scripts/python/intel-pt-events.py b/tools/perf/scripts/python/intel-pt-events.py index 08862a2582f44..1c76368f13c1a 100644 --- a/tools/perf/scripts/python/intel-pt-events.py +++ b/tools/perf/scripts/python/intel-pt-events.py @@ -11,7 +11,7 @@ # FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for # more details.
-from __future__ import print_function +from __future__ import division, print_function
import io import os
From: Adrian Hunter adrian.hunter@intel.com
[ Upstream commit 80c3a7d9f20401169283b5670dbb8d7ac07a1d55 ]
Python scripting can be used without libtraceevent. In particular, scripting for Intel PT does not use tracepoints, and so does not need libtraceevent support.
Alter the build and employ conditional compilation to allow Python scripting without libtraceevent.
Example:
Before:
  $ ldd `which perf` | grep -i python
  $ ldd `which perf` | grep -i libtraceevent
  $ perf record -e intel_pt//u uname
  Linux
  [ perf record: Woken up 1 times to write data ]
  [ perf record: Captured and wrote 0.031 MB perf.data ]
  $ perf script intel-pt-events.py |& head -3
  Error: Couldn't find script `intel-pt-events.py'
See perf script -l for available scripts.
After:
  $ ldd `which perf` | grep -i python
          libpython3.10.so.1.0 => /lib/x86_64-linux-gnu/libpython3.10.so.1.0 (0x00007f4bac400000)
  $ ldd `which perf` | grep -i libtraceevent
  $ perf script intel-pt-events.py | head
  Intel PT Branch Trace, Power Events, Event Trace and PTWRITE
    Switch In  8021/8021 [000] 11234.097713404  0/0
    perf-exec  8021/8021 [000] 11234.098041726  psb  offset: 0x0  0 [unknown] ([unknown])
    perf-exec  8021/8021 [000] 11234.098041726  cbr  45  freq: 4505 MHz (161%)  0 [unknown] ([unknown])
        uname  8021/8021 [000] 11234.098082170  branches:uH  tr strt  0 [unknown] ([unknown]) => 7f3a8b9422b0 _start+0x0 (/usr/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2)
        uname  8021/8021 [000] 11234.098082379  branches:uH  tr end   7f3a8b9422b0 _start+0x0 (/usr/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2) => 0 [unknown] ([unknown])
        uname  8021/8021 [000] 11234.098083629  branches:uH  tr strt  0 [unknown] ([unknown]) => 7f3a8b9422b0 _start+0x0 (/usr/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2)
        uname  8021/8021 [000] 11234.098083629  branches:uH  call     7f3a8b9422b3 _start+0x3 (/usr/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2) => 7f3a8b943050 _dl_start+0x0 (/usr/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2)
        uname  8021/8021 [000] 11234.098083837  branches:uH  tr end   7f3a8b943060 _dl_start+0x10 (/usr/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2) => 0 [unknown] ([unknown])  IPC: 0.01 (9/938)
        uname  8021/8021 [000] 11234.098084670  branches:uH  tr strt  0 [unknown] ([unknown]) => 7f3a8b943060 _dl_start+0x10 (/usr/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2)
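For illustration only, a minimal sketch of the conditional-compilation pattern the patch applies: when the optional library is missing, build a stub that tells the user the feature is unavailable instead of dropping the whole file. The macro name matches the patch; everything else is invented:

    #include <stdio.h>

    #ifdef HAVE_LIBTRACEEVENT
    static void process_tracepoint(void)
    {
            printf("full tracepoint handling (libtraceevent present)\n");
    }
    #else
    static void process_tracepoint(void)
    {
            fprintf(stderr, "Tracepoint events are not supported without libtraceevent.\n");
    }
    #endif

    int main(void)
    {
            /* compile with -DHAVE_LIBTRACEEVENT to take the full path */
            process_tracepoint();
            return 0;
    }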
Fixes: 378ef0f5d9d7f465 ("perf build: Use libtraceevent from the system") Signed-off-by: Adrian Hunter adrian.hunter@intel.com Cc: Ian Rogers irogers@google.com Cc: Jiri Olsa jolsa@kernel.org Cc: Namhyung Kim namhyung@kernel.org Link: https://lore.kernel.org/r/20230315084321.14563-1-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- tools/perf/Build | 2 +- tools/perf/builtin-script.c | 2 +- tools/perf/scripts/Build | 4 +- .../perf/scripts/python/Perf-Trace-Util/Build | 2 +- .../scripts/python/Perf-Trace-Util/Context.c | 4 + tools/perf/util/Build | 2 +- tools/perf/util/scripting-engines/Build | 2 +- .../scripting-engines/trace-event-python.c | 75 +++++++++++++------ tools/perf/util/trace-event-scripting.c | 9 ++- 9 files changed, 72 insertions(+), 30 deletions(-)
diff --git a/tools/perf/Build b/tools/perf/Build index 6dd67e5022955..aa76236228349 100644 --- a/tools/perf/Build +++ b/tools/perf/Build @@ -56,6 +56,6 @@ CFLAGS_builtin-report.o += -DDOCDIR="BUILD_STR($(srcdir_SQ)/Documentation)" perf-y += util/ perf-y += arch/ perf-y += ui/ -perf-$(CONFIG_LIBTRACEEVENT) += scripts/ +perf-y += scripts/
gtk-y += ui/gtk/ diff --git a/tools/perf/builtin-script.c b/tools/perf/builtin-script.c index 69394ac0a20dc..2f185012b9dda 100644 --- a/tools/perf/builtin-script.c +++ b/tools/perf/builtin-script.c @@ -2288,8 +2288,8 @@ static void setup_scripting(void) { #ifdef HAVE_LIBTRACEEVENT setup_perl_scripting(); - setup_python_scripting(); #endif + setup_python_scripting(); }
static int flush_scripting(void) diff --git a/tools/perf/scripts/Build b/tools/perf/scripts/Build index 68d4b54574adb..7d8e2e57faac5 100644 --- a/tools/perf/scripts/Build +++ b/tools/perf/scripts/Build @@ -1,2 +1,4 @@ -perf-$(CONFIG_LIBPERL) += perl/Perf-Trace-Util/ +ifeq ($(CONFIG_LIBTRACEEVENT),y) + perf-$(CONFIG_LIBPERL) += perl/Perf-Trace-Util/ +endif perf-$(CONFIG_LIBPYTHON) += python/Perf-Trace-Util/ diff --git a/tools/perf/scripts/python/Perf-Trace-Util/Build b/tools/perf/scripts/python/Perf-Trace-Util/Build index d5fed4e426179..7d0e33ce6aba4 100644 --- a/tools/perf/scripts/python/Perf-Trace-Util/Build +++ b/tools/perf/scripts/python/Perf-Trace-Util/Build @@ -1,3 +1,3 @@ -perf-$(CONFIG_LIBTRACEEVENT) += Context.o +perf-y += Context.o
CFLAGS_Context.o += $(PYTHON_EMBED_CCOPTS) -Wno-redundant-decls -Wno-strict-prototypes -Wno-unused-parameter -Wno-nested-externs diff --git a/tools/perf/scripts/python/Perf-Trace-Util/Context.c b/tools/perf/scripts/python/Perf-Trace-Util/Context.c index 895f5fc239653..b0d449f41650f 100644 --- a/tools/perf/scripts/python/Perf-Trace-Util/Context.c +++ b/tools/perf/scripts/python/Perf-Trace-Util/Context.c @@ -59,6 +59,7 @@ static struct scripting_context *get_scripting_context(PyObject *args) return get_args(args, "context", NULL); }
+#ifdef HAVE_LIBTRACEEVENT static PyObject *perf_trace_context_common_pc(PyObject *obj, PyObject *args) { struct scripting_context *c = get_scripting_context(args); @@ -90,6 +91,7 @@ static PyObject *perf_trace_context_common_lock_depth(PyObject *obj,
return Py_BuildValue("i", common_lock_depth(c)); } +#endif
static PyObject *perf_sample_insn(PyObject *obj, PyObject *args) { @@ -178,12 +180,14 @@ static PyObject *perf_sample_srccode(PyObject *obj, PyObject *args) }
static PyMethodDef ContextMethods[] = { +#ifdef HAVE_LIBTRACEEVENT { "common_pc", perf_trace_context_common_pc, METH_VARARGS, "Get the common preempt count event field value."}, { "common_flags", perf_trace_context_common_flags, METH_VARARGS, "Get the common flags event field value."}, { "common_lock_depth", perf_trace_context_common_lock_depth, METH_VARARGS, "Get the common lock depth event field value."}, +#endif { "perf_sample_insn", perf_sample_insn, METH_VARARGS, "Get the machine code instruction."}, { "perf_set_itrace_options", perf_set_itrace_options, diff --git a/tools/perf/util/Build b/tools/perf/util/Build index 79b9498886a20..fa87597398780 100644 --- a/tools/perf/util/Build +++ b/tools/perf/util/Build @@ -78,7 +78,7 @@ perf-y += pmu-bison.o perf-y += pmu-hybrid.o perf-y += svghelper.o perf-$(CONFIG_LIBTRACEEVENT) += trace-event-info.o -perf-$(CONFIG_LIBTRACEEVENT) += trace-event-scripting.o +perf-y += trace-event-scripting.o perf-$(CONFIG_LIBTRACEEVENT) += trace-event.o perf-$(CONFIG_LIBTRACEEVENT) += trace-event-parse.o perf-$(CONFIG_LIBTRACEEVENT) += trace-event-read.o diff --git a/tools/perf/util/scripting-engines/Build b/tools/perf/util/scripting-engines/Build index 2c96aa3cc1ec8..c220fec970324 100644 --- a/tools/perf/util/scripting-engines/Build +++ b/tools/perf/util/scripting-engines/Build @@ -1,7 +1,7 @@ ifeq ($(CONFIG_LIBTRACEEVENT),y) perf-$(CONFIG_LIBPERL) += trace-event-perl.o - perf-$(CONFIG_LIBPYTHON) += trace-event-python.o endif +perf-$(CONFIG_LIBPYTHON) += trace-event-python.o
CFLAGS_trace-event-perl.o += $(PERL_EMBED_CCOPTS) -Wno-redundant-decls -Wno-strict-prototypes -Wno-unused-parameter -Wno-shadow -Wno-nested-externs -Wno-undef -Wno-switch-default -Wno-bad-function-cast -Wno-declaration-after-statement -Wno-switch-enum
diff --git a/tools/perf/util/scripting-engines/trace-event-python.c b/tools/perf/util/scripting-engines/trace-event-python.c index e930f5f1f36d2..53c32b75c0cab 100644 --- a/tools/perf/util/scripting-engines/trace-event-python.c +++ b/tools/perf/util/scripting-engines/trace-event-python.c @@ -30,7 +30,9 @@ #include <linux/bitmap.h> #include <linux/compiler.h> #include <linux/time64.h> +#ifdef HAVE_LIBTRACEEVENT #include <traceevent/event-parse.h> +#endif
#include "../build-id.h" #include "../counts.h" @@ -87,18 +89,21 @@ PyMODINIT_FUNC initperf_trace_context(void); PyMODINIT_FUNC PyInit_perf_trace_context(void); #endif
+#ifdef HAVE_LIBTRACEEVENT #define TRACE_EVENT_TYPE_MAX \ ((1 << (sizeof(unsigned short) * 8)) - 1)
static DECLARE_BITMAP(events_defined, TRACE_EVENT_TYPE_MAX);
-#define MAX_FIELDS 64 #define N_COMMON_FIELDS 7
-extern struct scripting_context *scripting_context; - static char *cur_field_name; static int zero_flag_atom; +#endif + +#define MAX_FIELDS 64 + +extern struct scripting_context *scripting_context;
static PyObject *main_module, *main_dict;
@@ -153,6 +158,26 @@ static PyObject *get_handler(const char *handler_name) return handler; }
+static void call_object(PyObject *handler, PyObject *args, const char *die_msg) +{ + PyObject *retval; + + retval = PyObject_CallObject(handler, args); + if (retval == NULL) + handler_call_die(die_msg); + Py_DECREF(retval); +} + +static void try_call_object(const char *handler_name, PyObject *args) +{ + PyObject *handler; + + handler = get_handler(handler_name); + if (handler) + call_object(handler, args, handler_name); +} + +#ifdef HAVE_LIBTRACEEVENT static int get_argument_count(PyObject *handler) { int arg_count = 0; @@ -181,25 +206,6 @@ static int get_argument_count(PyObject *handler) return arg_count; }
-static void call_object(PyObject *handler, PyObject *args, const char *die_msg) -{ - PyObject *retval; - - retval = PyObject_CallObject(handler, args); - if (retval == NULL) - handler_call_die(die_msg); - Py_DECREF(retval); -} - -static void try_call_object(const char *handler_name, PyObject *args) -{ - PyObject *handler; - - handler = get_handler(handler_name); - if (handler) - call_object(handler, args, handler_name); -} - static void define_value(enum tep_print_arg_type field_type, const char *ev_name, const char *field_name, @@ -379,6 +385,7 @@ static PyObject *get_field_numeric_entry(struct tep_event *event, obj = list; return obj; } +#endif
static const char *get_dsoname(struct map *map) { @@ -906,6 +913,7 @@ static PyObject *get_perf_sample_dict(struct perf_sample *sample, return dict; }
+#ifdef HAVE_LIBTRACEEVENT static void python_process_tracepoint(struct perf_sample *sample, struct evsel *evsel, struct addr_location *al, @@ -1037,6 +1045,16 @@ static void python_process_tracepoint(struct perf_sample *sample,
Py_DECREF(t); } +#else +static void python_process_tracepoint(struct perf_sample *sample __maybe_unused, + struct evsel *evsel __maybe_unused, + struct addr_location *al __maybe_unused, + struct addr_location *addr_al __maybe_unused) +{ + fprintf(stderr, "Tracepoint events are not supported because " + "perf is not linked with libtraceevent.\n"); +} +#endif
static PyObject *tuple_new(unsigned int sz) { @@ -1967,6 +1985,7 @@ static int python_stop_script(void) return 0; }
+#ifdef HAVE_LIBTRACEEVENT static int python_generate_script(struct tep_handle *pevent, const char *outfile) { int i, not_first, count, nr_events; @@ -2157,6 +2176,18 @@ static int python_generate_script(struct tep_handle *pevent, const char *outfile
return 0; } +#else +static int python_generate_script(struct tep_handle *pevent __maybe_unused, + const char *outfile __maybe_unused) +{ + fprintf(stderr, "Generating Python perf-script is not supported." + " Install libtraceevent and rebuild perf to enable it.\n" + "For example:\n # apt install libtraceevent-dev (ubuntu)" + "\n # yum install libtraceevent-devel (Fedora)" + "\n etc.\n"); + return -1; +} +#endif
struct scripting_ops python_scripting_ops = { .name = "Python", diff --git a/tools/perf/util/trace-event-scripting.c b/tools/perf/util/trace-event-scripting.c index 56175c53f9af7..bd0000300c774 100644 --- a/tools/perf/util/trace-event-scripting.c +++ b/tools/perf/util/trace-event-scripting.c @@ -9,7 +9,9 @@ #include <stdlib.h> #include <string.h> #include <errno.h> +#ifdef HAVE_LIBTRACEEVENT #include <traceevent/event-parse.h> +#endif
#include "debug.h" #include "trace-event.h" @@ -27,10 +29,11 @@ void scripting_context__update(struct scripting_context *c, struct addr_location *addr_al) { c->event_data = sample->raw_data; + c->pevent = NULL; +#ifdef HAVE_LIBTRACEEVENT if (evsel->tp_format) c->pevent = evsel->tp_format->tep; - else - c->pevent = NULL; +#endif c->event = event; c->sample = sample; c->evsel = evsel; @@ -122,6 +125,7 @@ void setup_python_scripting(void) } #endif
+#ifdef HAVE_LIBTRACEEVENT static void print_perl_unsupported_msg(void) { fprintf(stderr, "Perl scripting not supported." @@ -186,3 +190,4 @@ void setup_perl_scripting(void) register_perl_scripting(&perl_scripting_ops); } #endif +#endif
From: Namhyung Kim namhyung@kernel.org
[ Upstream commit 6094c7744bb0563e833e81d8df8513f9a4e7a257 ]
The earlier commit f0cdde28fecc0d7f ("perf hist: Improve srcfile sort key performance") updated the srcfile logic but missed changing the ->cmp() callback, which is called for every sample.
It should use the same logic as the srcline sort key to speed up the processing, because it would return the same information repeatedly for the same address. The real processing will be done in sort__srcfile_collapse().
Fixes: f0cdde28fecc0d7f ("perf hist: Improve srcfile sort key performance") Signed-off-by: Namhyung Kim namhyung@kernel.org Cc: Adrian Hunter adrian.hunter@intel.com Cc: Ian Rogers irogers@google.com Cc: Ingo Molnar mingo@kernel.org Cc: Jiri Olsa jolsa@kernel.org Cc: Namhyung Kim namhyung@kernel.org Cc: Peter Zijlstra peterz@infradead.org Link: https://lore.kernel.org/r/20230323025005.191239-1-namhyung@kernel.org Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- tools/perf/util/sort.c | 7 +------ 1 file changed, 1 insertion(+), 6 deletions(-)
diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c index 37662cdec5eef..2bf3f88276972 100644 --- a/tools/perf/util/sort.c +++ b/tools/perf/util/sort.c @@ -603,12 +603,7 @@ static char *hist_entry__get_srcfile(struct hist_entry *e) static int64_t sort__srcfile_cmp(struct hist_entry *left, struct hist_entry *right) { - if (!left->srcfile) - left->srcfile = hist_entry__get_srcfile(left); - if (!right->srcfile) - right->srcfile = hist_entry__get_srcfile(right); - - return strcmp(right->srcfile, left->srcfile); + return sort__srcline_cmp(left, right); }
static int64_t
From: Thomas Richter tmricht@linux.ibm.com
[ Upstream commit eb2feb68cb7d404288493c41480843bc9f404789 ]
Commit 7f76b31130680fb3 ("perf list: Add IBM z16 event description for s390") contains the verbal description for the z16 extended counter set.
However, some entries of the public description contain UTF-8 characters, which breaks the build on some distros.
Fix this and remove the UTF-8 characters.
Fixes: 7f76b31130680fb3 ("perf list: Add IBM z16 event description for s390") Reported-by: Arnaldo Carvalho de Melo acme@redhat.com Suggested-by: Heiko Carstens hca@linux.ibm.com Signed-off-by: Thomas Richter tmricht@linux.ibm.com Tested-by: Arnaldo Carvalho de Melo acme@redhat.com Cc: Sumanth Korikkar sumanthk@linux.ibm.com Cc: Sven Schnelle svens@linux.ibm.com Cc: Thomas Richter tmricht@linux.ibm.com Cc: Vasily Gorbik gor@linux.ibm.com Link: https://lore.kernel.org/r/ZBwkl77/I31AQk12@osiris Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- tools/perf/pmu-events/arch/s390/cf_z16/extended.json | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/tools/perf/pmu-events/arch/s390/cf_z16/extended.json b/tools/perf/pmu-events/arch/s390/cf_z16/extended.json index c306190fc06f2..c2b10ec1c6e01 100644 --- a/tools/perf/pmu-events/arch/s390/cf_z16/extended.json +++ b/tools/perf/pmu-events/arch/s390/cf_z16/extended.json @@ -95,28 +95,28 @@ "EventCode": "145", "EventName": "DCW_REQ", "BriefDescription": "Directory Write Level 1 Data Cache from Cache", - "PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from the requestor’s Level-2 cache." + "PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from the requestors Level-2 cache." }, { "Unit": "CPU-M-CF", "EventCode": "146", "EventName": "DCW_REQ_IV", "BriefDescription": "Directory Write Level 1 Data Cache from Cache with Intervention", - "PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from the requestor’s Level-2 cache with intervention." + "PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from the requestors Level-2 cache with intervention." }, { "Unit": "CPU-M-CF", "EventCode": "147", "EventName": "DCW_REQ_CHIP_HIT", "BriefDescription": "Directory Write Level 1 Data Cache from Cache with Chip HP Hit", - "PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from the requestor’s Level-2 cache after using chip level horizontal persistence, Chip-HP hit." + "PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from the requestors Level-2 cache after using chip level horizontal persistence, Chip-HP hit." }, { "Unit": "CPU-M-CF", "EventCode": "148", "EventName": "DCW_REQ_DRAWER_HIT", "BriefDescription": "Directory Write Level 1 Data Cache from Cache with Drawer HP Hit", - "PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from the requestor’s Level-2 cache after using drawer level horizontal persistence, Drawer-HP hit." + "PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from the requestors Level-2 cache after using drawer level horizontal persistence, Drawer-HP hit." }, { "Unit": "CPU-M-CF", @@ -284,7 +284,7 @@ "EventCode": "172", "EventName": "ICW_REQ_DRAWER_HIT", "BriefDescription": "Directory Write Level 1 Instruction Cache from Cache with Drawer HP Hit", - "PublicDescription": "A directory write to the Level-1 Instruction cache directory where the returned cache line was sourced from the requestor’s Level-2 cache using drawer level horizontal persistence, Drawer-HP hit." + "PublicDescription": "A directory write to the Level-1 Instruction cache directory where the returned cache line was sourced from the requestors Level-2 cache using drawer level horizontal persistence, Drawer-HP hit." }, { "Unit": "CPU-M-CF",
From: Patrice Duroux patrice.duroux@gmail.com
[ Upstream commit 9835b742ac3ee16dee361e7ccda8022f99d1cd94 ]
The correct redirection operator is 2>&1, not 2&>1.
Fixes: ade1d0307b2fb3d9 ("perf offcpu: Update offcpu test for child process") Signed-off-by: Patrice Duroux patrice.duroux@gmail.com Acked-by: Ian Rogers irogers@google.com Cc: Namhyung Kim namhyung@kernel.org Link: https://lore.kernel.org/r/20230303193058.21274-1-patrice.duroux@gmail.com Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- tools/perf/tests/shell/record_offcpu.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/perf/tests/shell/record_offcpu.sh b/tools/perf/tests/shell/record_offcpu.sh index e01973d4e0fba..f062ae9a95e1a 100755 --- a/tools/perf/tests/shell/record_offcpu.sh +++ b/tools/perf/tests/shell/record_offcpu.sh @@ -65,7 +65,7 @@ test_offcpu_child() {
# perf bench sched messaging creates 400 processes if ! perf record --off-cpu -e dummy -o ${perfdata} -- \ - perf bench sched messaging -g 10 > /dev/null 2&>1 + perf bench sched messaging -g 10 > /dev/null 2>&1 then echo "Child task off-cpu test [Failed record]" err=1
From: Yang Jihong yangjihong1@huawei.com
[ Upstream commit ecd4960d908e27e40b63a7046df2f942c148c6f6 ]
If no target is specified for the 'latency' subcommand, the execution fails because -1 (an invalid value) is written to the set_ftrace_pid tracefs file. Make system-wide the default target, which is the same as the default behavior of the 'trace' subcommand.
Before the fix:
# perf ftrace latency -T schedule
failed to set ftrace pid
After the fix:
# perf ftrace latency -T schedule
^C#  DURATION      |      COUNT | GRAPH                                    |
     0 - 1    us   |          0 |                                          |
     1 - 2    us   |          0 |                                          |
     2 - 4    us   |          0 |                                          |
     4 - 8    us   |       2828 | ####                                     |
     8 - 16   us   |      23953 | ######################################## |
    16 - 32   us   |        408 |                                          |
    32 - 64   us   |        318 |                                          |
    64 - 128  us   |          4 |                                          |
   128 - 256  us   |          3 |                                          |
   256 - 512  us   |          0 |                                          |
   512 - 1024 us   |          1 |                                          |
     1 - 2    ms   |          4 |                                          |
     2 - 4    ms   |          0 |                                          |
     4 - 8    ms   |          0 |                                          |
     8 - 16   ms   |          0 |                                          |
    16 - 32   ms   |          0 |                                          |
    32 - 64   ms   |          0 |                                          |
    64 - 128  ms   |          0 |                                          |
   128 - 256  ms   |          4 |                                          |
   256 - 512  ms   |          2 |                                          |
   512 - 1024 ms   |          0 |                                          |
     1 - ...   s   |          0 |                                          |
Fixes: 53be50282269b46c ("perf ftrace: Add 'latency' subcommand") Signed-off-by: Yang Jihong yangjihong1@huawei.com Acked-by: Namhyung Kim namhyung@kernel.org Cc: Adrian Hunter adrian.hunter@intel.com Cc: Alexander Shishkin alexander.shishkin@linux.intel.com Cc: Ian Rogers irogers@google.com Cc: Ingo Molnar mingo@redhat.com Cc: Jiri Olsa jolsa@kernel.org Cc: Mark Rutland mark.rutland@arm.com Cc: Peter Zijlstra peterz@infradead.org Link: https://lore.kernel.org/r/20230324032702.109964-1-yangjihong1@huawei.com Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- tools/perf/builtin-ftrace.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/tools/perf/builtin-ftrace.c b/tools/perf/builtin-ftrace.c index d7fe00f66b831..fb1b66ef2e167 100644 --- a/tools/perf/builtin-ftrace.c +++ b/tools/perf/builtin-ftrace.c @@ -1228,10 +1228,12 @@ int cmd_ftrace(int argc, const char **argv) goto out_delete_filters; }
+ /* Make system wide (-a) the default target. */ + if (!argc && target__none(&ftrace.target)) + ftrace.target.system_wide = true; + switch (subcmd) { case PERF_FTRACE_TRACE: - if (!argc && target__none(&ftrace.target)) - ftrace.target.system_wide = true; cmd_func = __cmd_ftrace; break; case PERF_FTRACE_LATENCY:
From: Kajol Jain kjain@linux.ibm.com
[ Upstream commit 5d9df8731c0941f3add30f96745a62586a0c9d52 ]
Commit 3c22ba5243040c13 ("perf vendor events powerpc: Update POWER9 events") added and updated POWER9 PMU JSON events. However, some of the JSON events which are part of the other.json and pipeline.json files contain UTF-8 characters in their brief description. Having UTF-8 characters can break the perf build on some distros.
Fix this issue by removing the UTF-8 characters from other.json and pipeline.json files.
Result without the fix:
[command]# file -i pmu-events/arch/powerpc/power9/*
pmu-events/arch/powerpc/power9/cache.json:           application/json; charset=us-ascii
pmu-events/arch/powerpc/power9/floating-point.json:  application/json; charset=us-ascii
pmu-events/arch/powerpc/power9/frontend.json:        application/json; charset=us-ascii
pmu-events/arch/powerpc/power9/marked.json:          application/json; charset=us-ascii
pmu-events/arch/powerpc/power9/memory.json:          application/json; charset=us-ascii
pmu-events/arch/powerpc/power9/metrics.json:         application/json; charset=us-ascii
pmu-events/arch/powerpc/power9/nest_metrics.json:    application/json; charset=us-ascii
pmu-events/arch/powerpc/power9/other.json:           application/json; charset=utf-8
pmu-events/arch/powerpc/power9/pipeline.json:        application/json; charset=utf-8
pmu-events/arch/powerpc/power9/pmc.json:             application/json; charset=us-ascii
pmu-events/arch/powerpc/power9/translation.json:     application/json; charset=us-ascii
[command]#
Result with the fix:
[command]# file -i pmu-events/arch/powerpc/power9/*
pmu-events/arch/powerpc/power9/cache.json:           application/json; charset=us-ascii
pmu-events/arch/powerpc/power9/floating-point.json:  application/json; charset=us-ascii
pmu-events/arch/powerpc/power9/frontend.json:        application/json; charset=us-ascii
pmu-events/arch/powerpc/power9/marked.json:          application/json; charset=us-ascii
pmu-events/arch/powerpc/power9/memory.json:          application/json; charset=us-ascii
pmu-events/arch/powerpc/power9/metrics.json:         application/json; charset=us-ascii
pmu-events/arch/powerpc/power9/nest_metrics.json:    application/json; charset=us-ascii
pmu-events/arch/powerpc/power9/other.json:           application/json; charset=us-ascii
pmu-events/arch/powerpc/power9/pipeline.json:        application/json; charset=us-ascii
pmu-events/arch/powerpc/power9/pmc.json:             application/json; charset=us-ascii
pmu-events/arch/powerpc/power9/translation.json:     application/json; charset=us-ascii
[command]#
Fixes: 3c22ba5243040c13 ("perf vendor events powerpc: Update POWER9 events") Reported-by: Arnaldo Carvalho de Melo acme@kernel.com Signed-off-by: Kajol Jain kjain@linux.ibm.com Acked-by: Ian Rogers irogers@google.com Tested-by: Arnaldo Carvalho de Melo acme@redhat.com Cc: Athira Rajeev atrajeev@linux.vnet.ibm.com Cc: Disha Goel disgoel@linux.ibm.com Cc: Jiri Olsa jolsa@kernel.org Cc: Madhavan Srinivasan maddy@linux.ibm.com Cc: Sukadev Bhattiprolu sukadev@linux.vnet.ibm.com Cc: linuxppc-dev@lists.ozlabs.org Link: https://lore.kernel.org/lkml/ZBxP77deq7ikTxwG@kernel.org/ Link: https://lore.kernel.org/r/20230328112908.113158-1-kjain@linux.ibm.com Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- tools/perf/pmu-events/arch/powerpc/power9/other.json | 4 ++-- tools/perf/pmu-events/arch/powerpc/power9/pipeline.json | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/tools/perf/pmu-events/arch/powerpc/power9/other.json b/tools/perf/pmu-events/arch/powerpc/power9/other.json index 3f69422c21f99..f10bd554521a0 100644 --- a/tools/perf/pmu-events/arch/powerpc/power9/other.json +++ b/tools/perf/pmu-events/arch/powerpc/power9/other.json @@ -1417,7 +1417,7 @@ { "EventCode": "0x45054", "EventName": "PM_FMA_CMPL", - "BriefDescription": "two flops operation completed (fmadd, fnmadd, fmsub, fnmsub) Scalar instructions only. " + "BriefDescription": "two flops operation completed (fmadd, fnmadd, fmsub, fnmsub) Scalar instructions only." }, { "EventCode": "0x201E8", @@ -2017,7 +2017,7 @@ { "EventCode": "0xC0BC", "EventName": "PM_LSU_FLUSH_OTHER", - "BriefDescription": "Other LSU flushes including: Sync (sync ack from L2 caused search of LRQ for oldest snooped load, This will either signal a Precise Flush of the oldest snooped loa or a Flush Next PPC); Data Valid Flush Next (several cases of this, one example is store and reload are lined up such that a store-hit-reload scenario exists and the CDF has already launched and has gotten bad/stale data); Bad Data Valid Flush Next (might be a few cases of this, one example is a larxa (D$ hit) return data and dval but can't allocate to LMQ (LMQ full or other reason). Already gave dval but can't watch it for snoop_hit_larx. Need to take the “bad dval” back and flush all younger ops)" + "BriefDescription": "Other LSU flushes including: Sync (sync ack from L2 caused search of LRQ for oldest snooped load, This will either signal a Precise Flush of the oldest snooped loa or a Flush Next PPC); Data Valid Flush Next (several cases of this, one example is store and reload are lined up such that a store-hit-reload scenario exists and the CDF has already launched and has gotten bad/stale data); Bad Data Valid Flush Next (might be a few cases of this, one example is a larxa (D$ hit) return data and dval but can't allocate to LMQ (LMQ full or other reason). Already gave dval but can't watch it for snoop_hit_larx. Need to take the 'bad dval' back and flush all younger ops)" }, { "EventCode": "0x5094", diff --git a/tools/perf/pmu-events/arch/powerpc/power9/pipeline.json b/tools/perf/pmu-events/arch/powerpc/power9/pipeline.json index d0265f255de2b..723bffa41c448 100644 --- a/tools/perf/pmu-events/arch/powerpc/power9/pipeline.json +++ b/tools/perf/pmu-events/arch/powerpc/power9/pipeline.json @@ -442,7 +442,7 @@ { "EventCode": "0x4D052", "EventName": "PM_2FLOP_CMPL", - "BriefDescription": "DP vector version of fmul, fsub, fcmp, fsel, fabs, fnabs, fres ,fsqrte, fneg " + "BriefDescription": "DP vector version of fmul, fsub, fcmp, fsel, fabs, fnabs, fres ,fsqrte, fneg" }, { "EventCode": "0x1F142",
From: Arnaldo Carvalho de Melo acme@redhat.com
[ Upstream commit 57f14b5ae1a97537f2abd2828ee7212cada7036e ]
An audit showed just this one problem with zfree(), fix it.
Fixes: 9fbc61f832ebf432 ("perf pmu: Add support for PMU capabilities") Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- tools/perf/util/pmu.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c index 2bdeb89352e7a..be49be366c05c 100644 --- a/tools/perf/util/pmu.c +++ b/tools/perf/util/pmu.c @@ -1833,7 +1833,7 @@ static int perf_pmu__new_caps(struct list_head *list, char *name, char *value) return 0;
free_name: - zfree(caps->name); + zfree(&caps->name); free_caps: free(caps);
From: Markus Elfring Markus.Elfring@web.de
[ Upstream commit c160118a90d4acf335993d8d59b02ae2147a524e ]
The addresses of two data structure members were determined before the corresponding null pointer checks in the implementation of the function “sort__sym_from_cmp”.
Thus avoid the risk of undefined behaviour by removing the extra initialisations of the local variables “from_l” and “from_r” (also because they are reassigned with the same value after this pointer check anyway).
This issue was detected by using the Coccinelle software.
Fixes: 1b9e97a2a95e4941 ("perf tools: Fix report -F symbol_from for data without branch info") Signed-off-by: elfring@users.sourceforge.net Acked-by: Ian Rogers irogers@google.com Cc: Adrian Hunter adrian.hunter@intel.com Cc: Alexander Shishkin alexander.shishkin@linux.intel.com Cc: Andi Kleen ak@linux.intel.com Cc: German Gomez german.gomez@arm.com Cc: Ingo Molnar mingo@redhat.com Cc: Jiri Olsa jolsa@kernel.org Cc: Kan Liang kan.liang@linux.intel.com Cc: Mark Rutland mark.rutland@arm.com Cc: Namhyung Kim namhyung@kernel.org Link: https://lore.kernel.org/cocci/54a21fea-64e3-de67-82ef-d61b90ffad05@web.de/ Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- tools/perf/util/sort.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c index 2bf3f88276972..22808643ab725 100644 --- a/tools/perf/util/sort.c +++ b/tools/perf/util/sort.c @@ -966,8 +966,7 @@ static int hist_entry__dso_to_filter(struct hist_entry *he, int type, static int64_t sort__sym_from_cmp(struct hist_entry *left, struct hist_entry *right) { - struct addr_map_symbol *from_l = &left->branch_info->from; - struct addr_map_symbol *from_r = &right->branch_info->from; + struct addr_map_symbol *from_l, *from_r;
if (!left->branch_info || !right->branch_info) return cmp_null(left->branch_info, right->branch_info);
From: James Clark james.clark@arm.com
[ Upstream commit 449067f3fc9f340da54e383738286881e6634d0b ]
In this context, timeless refers to the trace data rather than the perf event data. But when detecting whether there are timestamps in the trace data or not, the presence of a timestamp flag on any perf event is used.
Since commit f42c0ce573df ("perf record: Always get text_poke events with --kcore option"), timestamps are added to a tracking event when --kcore is used, which breaks this detection mechanism. Fix it by detecting whether trace timestamps exist by looking at the ETM config flags. This would always have been a more accurate way of doing it anyway.
This fixes the following error message when using --kcore with Coresight:
$ perf record --kcore -e cs_etm// --per-thread
$ perf report
The perf.data/data data has no samples!
Fixes: f42c0ce573df79d1 ("perf record: Always get text_poke events with --kcore option") Reported-by: Yang Shi shy828301@gmail.com Signed-off-by: James Clark james.clark@arm.com Cc: Adrian Hunter adrian.hunter@intel.com Cc: Alexander Shishkin alexander.shishkin@linux.intel.com Cc: Ian Rogers irogers@google.com Cc: Ingo Molnar mingo@redhat.com Cc: Jiri Olsa jolsa@kernel.org Cc: John Garry john.g.garry@oracle.com Cc: Leo Yan leo.yan@linaro.org Cc: Mark Rutland mark.rutland@arm.com Cc: Mathieu Poirier mathieu.poirier@linaro.org Cc: Mike Leach mike.leach@linaro.org Cc: Namhyung Kim namhyung@kernel.org Cc: Peter Zijlstra peterz@infradead.org Cc: Suzuki Poulouse suzuki.poulose@arm.com Cc: Will Deacon will@kernel.org Cc: coresight@lists.linaro.org Cc: denik@google.com Cc: linux-arm-kernel@lists.infradead.org Link: https://lore.kernel.org/lkml/CAHbLzkrJQTrYBtPkf=jf3OpQ-yBcJe7XkvQstX9j2frz4W... Link: https://lore.kernel.org/r/20230424134748.228137-2-james.clark@arm.com Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- tools/perf/util/cs-etm.c | 30 ++++++++++++++++++------------ 1 file changed, 18 insertions(+), 12 deletions(-)
diff --git a/tools/perf/util/cs-etm.c b/tools/perf/util/cs-etm.c index 33303d03c2fa4..2f327986090e1 100644 --- a/tools/perf/util/cs-etm.c +++ b/tools/perf/util/cs-etm.c @@ -2488,26 +2488,29 @@ static int cs_etm__process_auxtrace_event(struct perf_session *session, return 0; }
-static bool cs_etm__is_timeless_decoding(struct cs_etm_auxtrace *etm) +static int cs_etm__setup_timeless_decoding(struct cs_etm_auxtrace *etm) { struct evsel *evsel; struct evlist *evlist = etm->session->evlist; - bool timeless_decoding = true;
/* Override timeless mode with user input from --itrace=Z */ - if (etm->synth_opts.timeless_decoding) - return true; + if (etm->synth_opts.timeless_decoding) { + etm->timeless_decoding = true; + return 0; + }
/* - * Circle through the list of event and complain if we find one - * with the time bit set. + * Find the cs_etm evsel and look at what its timestamp setting was */ - evlist__for_each_entry(evlist, evsel) { - if ((evsel->core.attr.sample_type & PERF_SAMPLE_TIME)) - timeless_decoding = false; - } + evlist__for_each_entry(evlist, evsel) + if (cs_etm__evsel_is_auxtrace(etm->session, evsel)) { + etm->timeless_decoding = + !(evsel->core.attr.config & BIT(ETM_OPT_TS)); + return 0; + }
- return timeless_decoding; + pr_err("CS ETM: Couldn't find ETM evsel\n"); + return -EINVAL; }
/* @@ -2884,7 +2887,6 @@ int cs_etm__process_auxtrace_info_full(union perf_event *event, etm->snapshot_mode = (ptr[CS_ETM_SNAPSHOT] != 0); etm->metadata = metadata; etm->auxtrace_type = auxtrace_info->type; - etm->timeless_decoding = cs_etm__is_timeless_decoding(etm);
etm->auxtrace.process_event = cs_etm__process_event; etm->auxtrace.process_auxtrace_event = cs_etm__process_auxtrace_event; @@ -2894,6 +2896,10 @@ int cs_etm__process_auxtrace_info_full(union perf_event *event, etm->auxtrace.evsel_is_auxtrace = cs_etm__evsel_is_auxtrace; session->auxtrace = &etm->auxtrace;
+ err = cs_etm__setup_timeless_decoding(etm); + if (err) + return err; + etm->unknown_thread = thread__new(999999999, 999999999); if (!etm->unknown_thread) { err = -ENOMEM;
From: Christophe JAILLET christophe.jaillet@wanadoo.fr
[ Upstream commit 8fd91151ebcb21b3f2f2bf158ac6092192550b2b ]
SS_ENCRYPTION is (0 << 7 = 0), so the test can never be true. Use a direct comparison to SS_ENCRYPTION instead.
The same kind of test is already done the same way in sun8i_ss_run_task().
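As a rough illustration (not part of the patch), with SS_ENCRYPTION being 0 as quoted above, a bitwise AND can never be non-zero, while an equality comparison distinguishes the two directions as intended:

	#define SS_ENCRYPTION	(0 << 7)	/* == 0, per the commit message */

	static int is_encrypt_bitwise(unsigned int op_dir)
	{
		return op_dir & SS_ENCRYPTION;	/* x & 0 is always 0: dead code */
	}

	static int is_encrypt_equality(unsigned int op_dir)
	{
		return op_dir == SS_ENCRYPTION;	/* true exactly for encrypt requests */
	}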
Fixes: 359e893e8af4 ("crypto: sun8i-ss - rework handling of IV") Signed-off-by: Christophe JAILLET christophe.jaillet@wanadoo.fr Signed-off-by: Herbert Xu herbert@gondor.apana.org.au Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c index 902f6be057ec6..e97fb203690ae 100644 --- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c +++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c @@ -151,7 +151,7 @@ static int sun8i_ss_setup_ivs(struct skcipher_request *areq) } rctx->p_iv[i] = a; /* we need to setup all others IVs only in the decrypt way */ - if (rctx->op_dir & SS_ENCRYPTION) + if (rctx->op_dir == SS_ENCRYPTION) return 0; todo = min(len, sg_dma_len(sg)); len -= todo;
From: Herbert Xu herbert@gondor.apana.org.au
[ Upstream commit c35e03eaece71101ff6cbf776b86403860ac8cc3 ]
The crypto completion function currently takes a pointer to a struct crypto_async_request object. However, in reality the API does not allow the use of any part of the object apart from the data field. For example, ahash/shash will create a fake object on the stack to pass along a different data field.
This leads to potential bugs where the user may try to dereference or otherwise use the crypto_async_request object.
This patch adds some temporary scaffolding so that the completion function can take a void * instead. Once affected users have been converted this can be removed.
The helper crypto_request_complete will remain even after the conversion is complete. It should be used instead of calling the completion function directly.
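As a hedged sketch (not part of this patch), a converted completion callback could look roughly like the following; the context structure and the names my_ctx/my_done are purely illustrative. Only the registered data pointer is touched, retrieved via crypto_get_completion_data():

	struct my_ctx {
		struct completion done;
		int err;
	};

	static void my_done(crypto_completion_data_t *req, int err)
	{
		struct my_ctx *ctx = crypto_get_completion_data(req);

		if (err == -EINPROGRESS)
			return;		/* a backlogged request has now started */

		ctx->err = err;
		complete(&ctx->done);	/* driver-specific wake-up */
	}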
Signed-off-by: Herbert Xu herbert@gondor.apana.org.au Reviewed-by: Giovanni Cabiddu giovanni.cabiddu@intel.com Signed-off-by: Herbert Xu herbert@gondor.apana.org.au Stable-dep-of: 4140aafcff16 ("crypto: engine - fix crypto_queue backlog handling") Signed-off-by: Sasha Levin sashal@kernel.org --- include/crypto/algapi.h | 7 +++++++ include/linux/crypto.h | 6 ++++++ 2 files changed, 13 insertions(+)
diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h index 61b327206b557..1fd81e74a174f 100644 --- a/include/crypto/algapi.h +++ b/include/crypto/algapi.h @@ -302,4 +302,11 @@ enum { CRYPTO_MSG_ALG_LOADED, };
+static inline void crypto_request_complete(struct crypto_async_request *req, + int err) +{ + crypto_completion_t complete = req->complete; + complete(req, err); +} + #endif /* _CRYPTO_ALGAPI_H */ diff --git a/include/linux/crypto.h b/include/linux/crypto.h index 5d1e961f810ec..b18f6e669fb10 100644 --- a/include/linux/crypto.h +++ b/include/linux/crypto.h @@ -176,6 +176,7 @@ struct crypto_async_request; struct crypto_tfm; struct crypto_type;
+typedef struct crypto_async_request crypto_completion_data_t; typedef void (*crypto_completion_t)(struct crypto_async_request *req, int err);
/** @@ -595,6 +596,11 @@ struct crypto_wait { /* * Async ops completion helper functioons */ +static inline void *crypto_get_completion_data(crypto_completion_data_t *req) +{ + return req->data; +} + void crypto_req_done(struct crypto_async_request *req, int err);
static inline int crypto_wait_req(int err, struct crypto_wait *wait)
From: Herbert Xu herbert@gondor.apana.org.au
[ Upstream commit 6909823d47c17cba84e9244d04050b5db8d53789 ]
Use the crypto_request_complete helper instead of calling the completion function directly.
Signed-off-by: Herbert Xu herbert@gondor.apana.org.au Stable-dep-of: 4140aafcff16 ("crypto: engine - fix crypto_queue backlog handling") Signed-off-by: Sasha Levin sashal@kernel.org --- crypto/crypto_engine.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/crypto/crypto_engine.c b/crypto/crypto_engine.c index bb8e77077f020..48c15f4079bb8 100644 --- a/crypto/crypto_engine.c +++ b/crypto/crypto_engine.c @@ -54,7 +54,7 @@ static void crypto_finalize_request(struct crypto_engine *engine, } } lockdep_assert_in_softirq(); - req->complete(req, err); + crypto_request_complete(req, err);
kthread_queue_work(engine->kworker, &engine->pump_requests); } @@ -130,7 +130,7 @@ static void crypto_pump_requests(struct crypto_engine *engine, engine->cur_req = async_req;
if (backlog) - backlog->complete(backlog, -EINPROGRESS); + crypto_request_complete(backlog, -EINPROGRESS);
if (engine->busy) was_busy = true; @@ -214,7 +214,7 @@ static void crypto_pump_requests(struct crypto_engine *engine, }
req_err_2: - async_req->complete(async_req, ret); + crypto_request_complete(async_req, ret);
retry: /* If retry mechanism is supported, send new requests to engine */
From: Olivier Bacon olivierb89@gmail.com
[ Upstream commit 4140aafcff167b5b9e8dae6a1709a6de7cac6f74 ]
CRYPTO_TFM_REQ_MAY_BACKLOG tells the crypto driver that it should internally backlog requests until the crypto hw's queue becomes full. At that point, crypto_engine backlogs the request and returns -EBUSY. A calling driver such as dm-crypt then waits until the complete() function is called with a status of -EINPROGRESS before sending a new request.
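For context, here is a rough sketch of the caller-side convention described above, assuming the usual CRYPTO_TFM_REQ_MAY_BACKLOG semantics (this is illustrative, not dm-crypt's actual code; ctx->restart is a hypothetical completion):

	ret = crypto_skcipher_encrypt(req);
	switch (ret) {
	case -EBUSY:
		/* Queue full: the request was backlogged.  Wait until the
		 * completion callback reports -EINPROGRESS before submitting
		 * more work. */
		wait_for_completion(&ctx->restart);
		break;
	case -EINPROGRESS:	/* accepted, completion callback will run later */
	case 0:			/* completed synchronously */
		break;
	default:
		/* a real error */
		break;
	}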
The problem lies in the call to complete() with a value of -EINPROGRESS that is made when a backlog item is present on the queue. The call is done before the successful execution of the crypto request. In the case that do_one_request() returns < 0 and retry support is available, the request is put back in the queue. This leads upper drivers to send a new request even though the queue is still full.
The problem can be reproduced by doing a large dd into a crypto dm-crypt device. This is pretty easy to see when using the Freescale CAAM crypto driver with SWIOTLB DMA. Since the actual number of requests that can be held in the queue is unlimited, we get I/O errors and DMA allocation failures.
The fix is to call complete with a value of -EINPROGRESS only if the request is not enqueued back in crypto_queue. This is done by calling complete() later in the code. In order to delay the decision, crypto_queue is modified to correctly set the backlog pointer when a request is enqueued back.
Fixes: 6a89f492f8e5 ("crypto: engine - support for parallel requests based on retry mechanism") Co-developed-by: Sylvain Ouellet souellet@genetec.com Signed-off-by: Sylvain Ouellet souellet@genetec.com Signed-off-by: Olivier Bacon obacon@genetec.com Signed-off-by: Herbert Xu herbert@gondor.apana.org.au Signed-off-by: Sasha Levin sashal@kernel.org --- crypto/algapi.c | 3 +++ crypto/crypto_engine.c | 6 +++--- 2 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/crypto/algapi.c b/crypto/algapi.c index 9de0677b3643d..60b98d2c400e3 100644 --- a/crypto/algapi.c +++ b/crypto/algapi.c @@ -963,6 +963,9 @@ EXPORT_SYMBOL_GPL(crypto_enqueue_request); void crypto_enqueue_request_head(struct crypto_queue *queue, struct crypto_async_request *request) { + if (unlikely(queue->qlen >= queue->max_qlen)) + queue->backlog = queue->backlog->prev; + queue->qlen++; list_add(&request->list, &queue->list); } diff --git a/crypto/crypto_engine.c b/crypto/crypto_engine.c index 48c15f4079bb8..50bac2ab55f17 100644 --- a/crypto/crypto_engine.c +++ b/crypto/crypto_engine.c @@ -129,9 +129,6 @@ static void crypto_pump_requests(struct crypto_engine *engine, if (!engine->retry_support) engine->cur_req = async_req;
- if (backlog) - crypto_request_complete(backlog, -EINPROGRESS); - if (engine->busy) was_busy = true; else @@ -217,6 +214,9 @@ static void crypto_pump_requests(struct crypto_engine *engine, crypto_request_complete(async_req, ret);
retry: + if (backlog) + crypto_request_complete(backlog, -EINPROGRESS); + /* If retry mechanism is supported, send new requests to engine */ if (engine->retry_support) { spin_lock_irqsave(&engine->queue_lock, flags);
From: Yang Jihong yangjihong1@huawei.com
[ Upstream commit 1511e4696acb715a4fe48be89e1e691daec91c0e ]
In elf_read_build_id(), if a GNU build_id is found, the function should return the size of the data actually copied. If descsz is greater than BUILD_ID_SIZE, an out-of-bounds data access may occur in write_buildid().
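A rough worked example of the overflow (the concrete sizes are assumptions for illustration, taking BUILD_ID_SIZE to be 20 bytes as for SHA-1 build IDs):

	size_t size   = 20;			/* room available in bf            */
	size_t descsz = 32;			/* size advertised by the note     */
	size_t sz     = min(size, descsz);	/* 20 bytes are actually copied    */

	memcpy(bf, ptr, sz);
	memset(bf + sz, 0, size - sz);		/* nothing to zero in this case    */
	err = descsz;	/* old code: callers such as write_buildid() now believe
			 * 32 bytes of bf are valid and read 12 bytes past the
			 * end of the buffer; returning sz instead fixes this    */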
Fixes: be96ea8ffa788dcc ("perf symbols: Fix issue with binaries using 16-bytes buildids (v2)") Reported-by: Will Ochowicz Will.Ochowicz@genusplc.com Signed-off-by: Yang Jihong yangjihong1@huawei.com Tested-by: Will Ochowicz Will.Ochowicz@genusplc.com Acked-by: Adrian Hunter adrian.hunter@intel.com Cc: Alexander Shishkin alexander.shishkin@linux.intel.com Cc: Ian Rogers irogers@google.com Cc: Ingo Molnar mingo@redhat.com Cc: Jiri Olsa jolsa@kernel.org Cc: Leo Yan leo.yan@linaro.org Cc: Mark Rutland mark.rutland@arm.com Cc: Namhyung Kim namhyung@kernel.org Cc: Peter Zijlstra peterz@infradead.org Cc: Stephane Eranian eranian@google.com Link: https://lore.kernel.org/lkml/CWLP265MB49702F7BA3D6D8F13E4B1A719C649@CWLP265M... Link: https://lore.kernel.org/r/20230427012841.231729-1-yangjihong1@huawei.com Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- tools/perf/util/symbol-elf.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c index 96767d1b3f1c2..714fd9d0b51ef 100644 --- a/tools/perf/util/symbol-elf.c +++ b/tools/perf/util/symbol-elf.c @@ -581,7 +581,7 @@ static int elf_read_build_id(Elf *elf, void *bf, size_t size) size_t sz = min(size, descsz); memcpy(bf, ptr, sz); memset(bf + sz, 0, size - sz); - err = descsz; + err = sz; break; } }
From: Yang Jihong yangjihong1@huawei.com
[ Upstream commit 9b86c49710eec7b4fbb78a0232b2dd0972a2b576 ]
When is_valid_tracepoint() returns 1, put_events_file() needs to be called to free `dir_path`.
Fixes: 25a7d914274de386 ("perf parse-events: Use get/put_events_file()") Signed-off-by: Yang Jihong yangjihong1@huawei.com Cc: Adrian Hunter adrian.hunter@intel.com Cc: Alexander Shishkin alexander.shishkin@linux.intel.com Cc: Ian Rogers irogers@google.com Cc: Ingo Molnar mingo@redhat.com Cc: Jiri Olsa jolsa@kernel.org Cc: Mark Rutland mark.rutland@arm.com Cc: Namhyung Kim namhyung@kernel.org Cc: Peter Zijlstra peterz@infradead.org Link: https://lore.kernel.org/r/20230421025953.173826-1-yangjihong1@huawei.com Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- tools/perf/util/tracepoint.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/tools/perf/util/tracepoint.c b/tools/perf/util/tracepoint.c index 89ef56c433110..92dd8b455b902 100644 --- a/tools/perf/util/tracepoint.c +++ b/tools/perf/util/tracepoint.c @@ -50,6 +50,7 @@ int is_valid_tracepoint(const char *event_string) sys_dirent->d_name, evt_dirent->d_name); if (!strcmp(evt_path, event_string)) { closedir(evt_dir); + put_events_file(dir_path); closedir(sys_dir); return 1; }
From: Dmitrii Dolgov 9erthalion6@gmail.com
[ Upstream commit ecc68ee216c6c5b2f84915e1441adf436f1b019b ]
It seems that perf stat -b <prog id> doesn't produce any results:
$ perf stat -e cycles -b 4 -I 10000 -vvv
Control descriptor is not initialized
cycles: 0 0 0
             time             counts unit events
     10.007641640      <not supported>      cycles
Looks like this happens because fentry/fexit progs are getting loaded, but the corresponding perf event is not enabled and not added into the events bpf map. I think there is some mixing up between two types of bpf support, one for bperf and one for bpf_profiler. Both are identified via evsel__is_bpf(), based on which perf events are enabled, but for the latter (bpf_profiler) a perf event is required. Using evsel__is_bperf() to check only for bperf produces the expected results:
$ perf stat -e cycles -b 4 -I 10000 -vvv
Control descriptor is not initialized
------------------------------------------------------------
perf_event_attr:
  size                             136
  sample_type                      IDENTIFIER
  read_format                      TOTAL_TIME_ENABLED|TOTAL_TIME_RUNNING
  disabled                         1
  exclude_guest                    1
------------------------------------------------------------
sys_perf_event_open: pid -1  cpu 0  group_fd -1  flags 0x8 = 3
------------------------------------------------------------
[...perf_event_attr for other CPUs...]
------------------------------------------------------------
cycles: 309426 169009 169009
             time             counts unit events
     10.010091271             309426      cycles
The final numbers correspond (at least in the level of magnitude) to the same metric obtained via bpftool.
Fixes: 112cb56164bc2108 ("perf stat: Introduce config stat.bpf-counter-events") Reviewed-by: Song Liu song@kernel.org Signed-off-by: Dmitrii Dolgov 9erthalion6@gmail.com Tested-by: Song Liu song@kernel.org Cc: Ian Rogers irogers@google.com Cc: Ingo Molnar mingo@redhat.com Cc: Jiri Olsa jolsa@kernel.org Cc: Namhyung Kim namhyung@kernel.org Cc: Song Liu song@kernel.org Link: https://lore.kernel.org/r/20230412182316.11628-1-9erthalion6@gmail.com Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- tools/perf/builtin-stat.c | 4 ++-- tools/perf/util/evsel.h | 5 +++++ 2 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c index 387dc9c9e7bee..682db49eef4cb 100644 --- a/tools/perf/builtin-stat.c +++ b/tools/perf/builtin-stat.c @@ -773,7 +773,7 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx) counter->reset_group = false; if (bpf_counter__load(counter, &target)) return -1; - if (!evsel__is_bpf(counter)) + if (!(evsel__is_bperf(counter))) all_counters_use_bpf = false; }
@@ -789,7 +789,7 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
if (counter->reset_group || counter->errored) continue; - if (evsel__is_bpf(counter)) + if (evsel__is_bperf(counter)) continue; try_again: if (create_perf_stat_counter(counter, &stat_config, &target, diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h index d572be41b9608..2899c97d997cd 100644 --- a/tools/perf/util/evsel.h +++ b/tools/perf/util/evsel.h @@ -269,6 +269,11 @@ static inline bool evsel__is_bpf(struct evsel *evsel) return evsel->bpf_counter_ops != NULL; }
+static inline bool evsel__is_bperf(struct evsel *evsel) +{ + return evsel->bpf_counter_ops != NULL && list_empty(&evsel->bpf_counter_list); +} + #define EVSEL__MAX_ALIASES 8
extern const char *const evsel__hw_cache[PERF_COUNT_HW_CACHE_MAX][EVSEL__MAX_ALIASES];
From: David Matlack dmatlack@google.com
[ Upstream commit 3af15ff47c4df3af7b36ea8315f43c6b0af49253 ]
Change tdp_mmu to a read-only parameter and drop the per-VM tdp_mmu_enabled. For 32-bit KVM, make tdp_mmu_enabled a macro that is always false so that the compiler can continue omitting calls to the TDP MMU.
The TDP MMU was introduced in 5.10 and has been enabled by default since 5.15. At this point there are no known functionality gaps between the TDP MMU and the shadow MMU, and the TDP MMU uses less memory and scales better with the number of vCPUs. In other words, there is no good reason to disable the TDP MMU on a live system.
Purposely do not drop tdp_mmu=N support (i.e. do not force 64-bit KVM to always use the TDP MMU) since tdp_mmu=N is still used to get test coverage of KVM's shadow MMU TDP support, which is used in 32-bit KVM.
Signed-off-by: David Matlack dmatlack@google.com Reviewed-by: Kai Huang kai.huang@intel.com Signed-off-by: Paolo Bonzini pbonzini@redhat.com Message-Id: 20220921173546.2674386-2-dmatlack@google.com Signed-off-by: Paolo Bonzini pbonzini@redhat.com Stable-dep-of: edbdb43fc96b ("KVM: x86: Preserve TDP MMU roots until they are explicitly invalidated") Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/include/asm/kvm_host.h | 13 ++------- arch/x86/kvm/mmu.h | 6 ++-- arch/x86/kvm/mmu/mmu.c | 51 ++++++++++++++++++++++----------- arch/x86/kvm/mmu/tdp_mmu.c | 9 ++---- 4 files changed, 41 insertions(+), 38 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 24480b4f1c575..adc3149c833a9 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1342,21 +1342,12 @@ struct kvm_arch { struct task_struct *nx_huge_page_recovery_thread;
#ifdef CONFIG_X86_64 - /* - * Whether the TDP MMU is enabled for this VM. This contains a - * snapshot of the TDP MMU module parameter from when the VM was - * created and remains unchanged for the life of the VM. If this is - * true, TDP MMU handler functions will run for various MMU - * operations. - */ - bool tdp_mmu_enabled; - /* The number of TDP MMU pages across all roots. */ atomic64_t tdp_mmu_pages;
/* - * List of kvm_mmu_page structs being used as roots. - * All kvm_mmu_page structs in the list should have + * List of struct kvm_mmu_pages being used as roots. + * All struct kvm_mmu_pages in the list should have * tdp_mmu_page set. * * For reads, this list is protected by: diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h index 59804be91b5b0..0f38b78ab04b7 100644 --- a/arch/x86/kvm/mmu.h +++ b/arch/x86/kvm/mmu.h @@ -254,14 +254,14 @@ static inline bool kvm_shadow_root_allocated(struct kvm *kvm) }
#ifdef CONFIG_X86_64 -static inline bool is_tdp_mmu_enabled(struct kvm *kvm) { return kvm->arch.tdp_mmu_enabled; } +extern bool tdp_mmu_enabled; #else -static inline bool is_tdp_mmu_enabled(struct kvm *kvm) { return false; } +#define tdp_mmu_enabled false #endif
static inline bool kvm_memslots_have_rmaps(struct kvm *kvm) { - return !is_tdp_mmu_enabled(kvm) || kvm_shadow_root_allocated(kvm); + return !tdp_mmu_enabled || kvm_shadow_root_allocated(kvm); }
static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index ce135539145fd..583979755bd4f 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -99,6 +99,13 @@ module_param_named(flush_on_reuse, force_flush_and_sync_on_reuse, bool, 0644); */ bool tdp_enabled = false;
+bool __ro_after_init tdp_mmu_allowed; + +#ifdef CONFIG_X86_64 +bool __read_mostly tdp_mmu_enabled = true; +module_param_named(tdp_mmu, tdp_mmu_enabled, bool, 0444); +#endif + static int max_huge_page_level __read_mostly; static int tdp_root_level __read_mostly; static int max_tdp_level __read_mostly; @@ -1293,7 +1300,7 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm, { struct kvm_rmap_head *rmap_head;
- if (is_tdp_mmu_enabled(kvm)) + if (tdp_mmu_enabled) kvm_tdp_mmu_clear_dirty_pt_masked(kvm, slot, slot->base_gfn + gfn_offset, mask, true);
@@ -1326,7 +1333,7 @@ static void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm, { struct kvm_rmap_head *rmap_head;
- if (is_tdp_mmu_enabled(kvm)) + if (tdp_mmu_enabled) kvm_tdp_mmu_clear_dirty_pt_masked(kvm, slot, slot->base_gfn + gfn_offset, mask, false);
@@ -1409,7 +1416,7 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm, } }
- if (is_tdp_mmu_enabled(kvm)) + if (tdp_mmu_enabled) write_protected |= kvm_tdp_mmu_write_protect_gfn(kvm, slot, gfn, min_level);
@@ -1572,7 +1579,7 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range) if (kvm_memslots_have_rmaps(kvm)) flush = kvm_handle_gfn_range(kvm, range, kvm_zap_rmap);
- if (is_tdp_mmu_enabled(kvm)) + if (tdp_mmu_enabled) flush = kvm_tdp_mmu_unmap_gfn_range(kvm, range, flush);
return flush; @@ -1585,7 +1592,7 @@ bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range) if (kvm_memslots_have_rmaps(kvm)) flush = kvm_handle_gfn_range(kvm, range, kvm_set_pte_rmap);
- if (is_tdp_mmu_enabled(kvm)) + if (tdp_mmu_enabled) flush |= kvm_tdp_mmu_set_spte_gfn(kvm, range);
return flush; @@ -1660,7 +1667,7 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range) if (kvm_memslots_have_rmaps(kvm)) young = kvm_handle_gfn_range(kvm, range, kvm_age_rmap);
- if (is_tdp_mmu_enabled(kvm)) + if (tdp_mmu_enabled) young |= kvm_tdp_mmu_age_gfn_range(kvm, range);
return young; @@ -1673,7 +1680,7 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range) if (kvm_memslots_have_rmaps(kvm)) young = kvm_handle_gfn_range(kvm, range, kvm_test_age_rmap);
- if (is_tdp_mmu_enabled(kvm)) + if (tdp_mmu_enabled) young |= kvm_tdp_mmu_test_age_gfn(kvm, range);
return young; @@ -3610,7 +3617,7 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu) if (r < 0) goto out_unlock;
- if (is_tdp_mmu_enabled(vcpu->kvm)) { + if (tdp_mmu_enabled) { root = kvm_tdp_mmu_get_vcpu_root_hpa(vcpu); mmu->root.hpa = root; } else if (shadow_root_level >= PT64_ROOT_4LEVEL) { @@ -5743,6 +5750,9 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level, tdp_root_level = tdp_forced_root_level; max_tdp_level = tdp_max_root_level;
+#ifdef CONFIG_X86_64 + tdp_mmu_enabled = tdp_mmu_allowed && tdp_enabled; +#endif /* * max_huge_page_level reflects KVM's MMU capabilities irrespective * of kernel support, e.g. KVM may be capable of using 1GB pages when @@ -5990,7 +6000,7 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm) * write and in the same critical section as making the reload request, * e.g. before kvm_zap_obsolete_pages() could drop mmu_lock and yield. */ - if (is_tdp_mmu_enabled(kvm)) + if (tdp_mmu_enabled) kvm_tdp_mmu_invalidate_all_roots(kvm);
/* @@ -6015,7 +6025,7 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm) * Deferring the zap until the final reference to the root is put would * lead to use-after-free. */ - if (is_tdp_mmu_enabled(kvm)) + if (tdp_mmu_enabled) kvm_tdp_mmu_zap_invalidated_roots(kvm); }
@@ -6127,7 +6137,7 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
flush = kvm_rmap_zap_gfn_range(kvm, gfn_start, gfn_end);
- if (is_tdp_mmu_enabled(kvm)) { + if (tdp_mmu_enabled) { for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) flush = kvm_tdp_mmu_zap_leafs(kvm, i, gfn_start, gfn_end, true, flush); @@ -6160,7 +6170,7 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm, write_unlock(&kvm->mmu_lock); }
- if (is_tdp_mmu_enabled(kvm)) { + if (tdp_mmu_enabled) { read_lock(&kvm->mmu_lock); kvm_tdp_mmu_wrprot_slot(kvm, memslot, start_level); read_unlock(&kvm->mmu_lock); @@ -6403,7 +6413,7 @@ void kvm_mmu_try_split_huge_pages(struct kvm *kvm, u64 start, u64 end, int target_level) { - if (!is_tdp_mmu_enabled(kvm)) + if (!tdp_mmu_enabled) return;
if (kvm_memslots_have_rmaps(kvm)) @@ -6424,7 +6434,7 @@ void kvm_mmu_slot_try_split_huge_pages(struct kvm *kvm, u64 start = memslot->base_gfn; u64 end = start + memslot->npages;
- if (!is_tdp_mmu_enabled(kvm)) + if (!tdp_mmu_enabled) return;
if (kvm_memslots_have_rmaps(kvm)) { @@ -6507,7 +6517,7 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm, write_unlock(&kvm->mmu_lock); }
- if (is_tdp_mmu_enabled(kvm)) { + if (tdp_mmu_enabled) { read_lock(&kvm->mmu_lock); kvm_tdp_mmu_zap_collapsible_sptes(kvm, slot); read_unlock(&kvm->mmu_lock); @@ -6542,7 +6552,7 @@ void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm, write_unlock(&kvm->mmu_lock); }
- if (is_tdp_mmu_enabled(kvm)) { + if (tdp_mmu_enabled) { read_lock(&kvm->mmu_lock); kvm_tdp_mmu_clear_dirty_slot(kvm, memslot); read_unlock(&kvm->mmu_lock); @@ -6577,7 +6587,7 @@ void kvm_mmu_zap_all(struct kvm *kvm)
kvm_mmu_commit_zap_page(kvm, &invalid_list);
- if (is_tdp_mmu_enabled(kvm)) + if (tdp_mmu_enabled) kvm_tdp_mmu_zap_all(kvm);
write_unlock(&kvm->mmu_lock); @@ -6742,6 +6752,13 @@ void __init kvm_mmu_x86_module_init(void) if (nx_huge_pages == -1) __set_nx_huge_pages(get_nx_auto_mode());
+ /* + * Snapshot userspace's desire to enable the TDP MMU. Whether or not the + * TDP MMU is actually enabled is determined in kvm_configure_mmu() + * when the vendor module is loaded. + */ + tdp_mmu_allowed = tdp_mmu_enabled; + kvm_mmu_spte_module_init(); }
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index d6df38d371a00..03511e83050fa 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -10,23 +10,18 @@ #include <asm/cmpxchg.h> #include <trace/events/kvm.h>
-static bool __read_mostly tdp_mmu_enabled = true; -module_param_named(tdp_mmu, tdp_mmu_enabled, bool, 0644); - /* Initializes the TDP MMU for the VM, if enabled. */ int kvm_mmu_init_tdp_mmu(struct kvm *kvm) { struct workqueue_struct *wq;
- if (!tdp_enabled || !READ_ONCE(tdp_mmu_enabled)) + if (!tdp_mmu_enabled) return 0;
wq = alloc_workqueue("kvm", WQ_UNBOUND|WQ_MEM_RECLAIM|WQ_CPU_INTENSIVE, 0); if (!wq) return -ENOMEM;
- /* This should not be changed for the lifetime of the VM. */ - kvm->arch.tdp_mmu_enabled = true; INIT_LIST_HEAD(&kvm->arch.tdp_mmu_roots); spin_lock_init(&kvm->arch.tdp_mmu_pages_lock); kvm->arch.tdp_mmu_zap_wq = wq; @@ -47,7 +42,7 @@ static __always_inline bool kvm_lockdep_assert_mmu_lock_held(struct kvm *kvm,
void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm) { - if (!kvm->arch.tdp_mmu_enabled) + if (!tdp_mmu_enabled) return;
/* Also waits for any queued work items. */
From: David Matlack dmatlack@google.com
[ Upstream commit 991c8047b740f192a057d5f22df2f91f087cdb72 ]
Move kvm_mmu_{init,uninit}_tdp_mmu() behind tdp_mmu_enabled. This makes these functions consistent with the rest of the calls into the TDP MMU from mmu.c, and is now possible since tdp_mmu_enabled is only modified when the x86 vendor module is loaded, i.e. it will never change during the lifetime of a VM.
This change also enables removing the stub definitions for 32-bit KVM, as the compiler will just optimize the calls out like it does for all the other TDP MMU functions.
No functional change intended.
Signed-off-by: David Matlack dmatlack@google.com Reviewed-by: Isaku Yamahata isaku.yamahata@intel.com Signed-off-by: Paolo Bonzini pbonzini@redhat.com Message-Id: 20220921173546.2674386-3-dmatlack@google.com Signed-off-by: Paolo Bonzini pbonzini@redhat.com Stable-dep-of: edbdb43fc96b ("KVM: x86: Preserve TDP MMU roots until they are explicitly invalidated") Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/kvm/mmu/mmu.c | 11 +++++++---- arch/x86/kvm/mmu/tdp_mmu.c | 6 ------ arch/x86/kvm/mmu/tdp_mmu.h | 7 +++---- 3 files changed, 10 insertions(+), 14 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 583979755bd4f..8666e8ff48a6e 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -6051,9 +6051,11 @@ int kvm_mmu_init_vm(struct kvm *kvm) INIT_LIST_HEAD(&kvm->arch.possible_nx_huge_pages); spin_lock_init(&kvm->arch.mmu_unsync_pages_lock);
- r = kvm_mmu_init_tdp_mmu(kvm); - if (r < 0) - return r; + if (tdp_mmu_enabled) { + r = kvm_mmu_init_tdp_mmu(kvm); + if (r < 0) + return r; + }
node->track_write = kvm_mmu_pte_write; node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot; @@ -6083,7 +6085,8 @@ void kvm_mmu_uninit_vm(struct kvm *kvm)
kvm_page_track_unregister_notifier(kvm, node);
- kvm_mmu_uninit_tdp_mmu(kvm); + if (tdp_mmu_enabled) + kvm_mmu_uninit_tdp_mmu(kvm);
mmu_free_vm_memory_caches(kvm); } diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 03511e83050fa..7e5952e95d3bf 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -15,9 +15,6 @@ int kvm_mmu_init_tdp_mmu(struct kvm *kvm) { struct workqueue_struct *wq;
- if (!tdp_mmu_enabled) - return 0; - wq = alloc_workqueue("kvm", WQ_UNBOUND|WQ_MEM_RECLAIM|WQ_CPU_INTENSIVE, 0); if (!wq) return -ENOMEM; @@ -42,9 +39,6 @@ static __always_inline bool kvm_lockdep_assert_mmu_lock_held(struct kvm *kvm,
void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm) { - if (!tdp_mmu_enabled) - return; - /* Also waits for any queued work items. */ destroy_workqueue(kvm->arch.tdp_mmu_zap_wq);
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h index d3714200b932a..e4ab2dac269d6 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.h +++ b/arch/x86/kvm/mmu/tdp_mmu.h @@ -7,6 +7,9 @@
#include "spte.h"
+int kvm_mmu_init_tdp_mmu(struct kvm *kvm); +void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm); + hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu);
__must_check static inline bool kvm_tdp_mmu_get_root(struct kvm_mmu_page *root) @@ -68,8 +71,6 @@ u64 *kvm_tdp_mmu_fast_pf_get_last_sptep(struct kvm_vcpu *vcpu, u64 addr, u64 *spte);
#ifdef CONFIG_X86_64 -int kvm_mmu_init_tdp_mmu(struct kvm *kvm); -void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm); static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return sp->tdp_mmu_page; }
static inline bool is_tdp_mmu(struct kvm_mmu *mmu) @@ -89,8 +90,6 @@ static inline bool is_tdp_mmu(struct kvm_mmu *mmu) return sp && is_tdp_mmu_page(sp) && sp->root_count; } #else -static inline int kvm_mmu_init_tdp_mmu(struct kvm *kvm) { return 0; } -static inline void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm) {} static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return false; } static inline bool is_tdp_mmu(struct kvm_mmu *mmu) { return false; } #endif
From: Sean Christopherson seanjc@google.com
[ Upstream commit aeb568a1a6041e3d69def54046747bbd989bc4ed ]
Use is_tdp_mmu_page() instead of querying sp->tdp_mmu_page directly so that all users benefit if KVM ever finds a way to optimize the logic.
No functional change intended.
Signed-off-by: Sean Christopherson seanjc@google.com Message-Id: 20221012181702.3663607-10-seanjc@google.com Signed-off-by: Paolo Bonzini pbonzini@redhat.com Stable-dep-of: edbdb43fc96b ("KVM: x86: Preserve TDP MMU roots until they are explicitly invalidated") Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/kvm/mmu/mmu.c | 2 +- arch/x86/kvm/mmu/tdp_mmu.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 8666e8ff48a6e..dcca08a08bd0c 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -1942,7 +1942,7 @@ static bool is_obsolete_sp(struct kvm *kvm, struct kvm_mmu_page *sp) return true;
/* TDP MMU pages do not use the MMU generation. */ - return !sp->tdp_mmu_page && + return !is_tdp_mmu_page(sp) && unlikely(sp->mmu_valid_gen != kvm->arch.mmu_valid_gen); }
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 7e5952e95d3bf..cc1fb9a656201 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -133,7 +133,7 @@ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root, if (!refcount_dec_and_test(&root->tdp_mmu_root_count)) return;
- WARN_ON(!root->tdp_mmu_page); + WARN_ON(!is_tdp_mmu_page(root));
/* * The root now has refcount=0. It is valid, but readers already
From: Sean Christopherson seanjc@google.com
[ Upstream commit edbdb43fc96b11b3bfa531be306a1993d9fe89ec ]
Preserve TDP MMU roots until they are explicitly invalidated by gifting the TDP MMU itself a reference to a root when it is allocated. Keeping a reference in the TDP MMU fixes a flaw where the TDP MMU exhibits terrible performance, and can potentially even soft-hang a vCPU, if a vCPU frequently unloads its roots, e.g. when KVM is emulating SMI+RSM.
When KVM emulates something that invalidates _all_ TLB entries, e.g. SMI and RSM, KVM unloads all of the vCPU's roots (KVM keeps a small per-vCPU cache of previous roots). Unloading roots is a simple way to ensure KVM flushes and synchronizes all roots for the vCPU, as KVM flushes and syncs when allocating a "new" root (from the vCPU's perspective).
In the shadow MMU, KVM keeps track of all shadow pages, roots included, in a per-VM hash table. Unloading a shadow MMU root just wipes it from the per-vCPU cache; the root is still tracked in the per-VM hash table. When KVM loads a "new" root for the vCPU, KVM will find the old, unloaded root in the per-VM hash table.
Unlike the shadow MMU, the TDP MMU doesn't track "inactive" roots in a per-VM structure, where "active" in this case means a root is either in-use or cached as a previous root by at least one vCPU. When a TDP MMU root becomes inactive, i.e. the last vCPU reference to the root is put, KVM immediately frees the root (asterisk on "immediately" as the actual freeing may be done by a worker, but for all intents and purposes the root is gone).
The TDP MMU behavior is especially problematic for 1-vCPU setups, as unloading all roots effectively frees all roots. The issue is mitigated to some degree in multi-vCPU setups as a different vCPU usually holds a reference to an unloaded root and thus keeps the root alive, allowing the vCPU to reuse its old root after unloading (with a flush+sync).
The TDP MMU flaw has been known for some time, as until very recently, KVM's handling of CR0.WP also triggered unloading of all roots. The CR0.WP toggling scenario was eventually addressed by not unloading roots when _only_ CR0.WP is toggled, but such an approach doesn't Just Work for emulating SMM as KVM must emulate a full TLB flush on entry and exit to/from SMM. Given that the shadow MMU plays nice with unloading roots at will, teaching the TDP MMU to do the same is far less complex than modifying KVM to track which roots need to be flushed before reuse.
Note, preserving all possible TDP MMU roots is not a concern with respect to memory consumption. Now that the role for direct MMUs doesn't include information about the guest, e.g. CR0.PG, CR0.WP, CR4.SMEP, etc., there are _at most_ six possible roots (where "guest_mode" here means L2):
1. 4-level !SMM !guest_mode
2. 4-level SMM !guest_mode
3. 5-level !SMM !guest_mode
4. 5-level SMM !guest_mode
5. 4-level !SMM guest_mode
6. 5-level !SMM guest_mode
And because each vCPU can track 4 valid roots, a VM can already have all 6 root combinations live at any given time. Not to mention that, in practice, no sane VMM will advertise different guest.MAXPHYADDR values across vCPUs, i.e. KVM won't ever use both 4-level and 5-level roots for a single VM. Furthermore, the vast majority of modern hypervisors will utilize EPT/NPT when available, thus the guest_mode=true cases are also unlikely to be utilized.
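To make the lifetime rule concrete, here is a stripped-down sketch of the refcounting scheme described above, using hypothetical demo_* names rather than the actual KVM code (the real implementation is in the diff below):

#include <linux/refcount.h>
#include <linux/slab.h>

struct demo_root {
        refcount_t refcount;
        bool invalid;
};

static void demo_root_put(struct demo_root *root)
{
        /* The final reference can only be dropped once the root is invalid. */
        if (refcount_dec_and_test(&root->refcount))
                kfree(root);
}

static struct demo_root *demo_root_alloc(void)
{
        struct demo_root *root = kzalloc(sizeof(*root), GFP_KERNEL);

        if (!root)
                return NULL;
        /* One reference for the vCPU, one gifted to the MMU itself. */
        refcount_set(&root->refcount, 2);
        return root;
}

static void demo_root_invalidate(struct demo_root *root)
{
        /* Invalidation is the only point where the MMU's own reference is put. */
        root->invalid = true;
        demo_root_put(root);
}

Unloading a vCPU's reference now leaves the root alive (the refcount drops to 1), so a later reload can reuse it after a flush+sync instead of rebuilding it from scratch.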
Reported-by: Jeremi Piotrowski jpiotrowski@linux.microsoft.com Link: https://lore.kernel.org/all/959c5bce-beb5-b463-7158-33fc4a4f910c@linux.micro... Link: https://lkml.kernel.org/r/20220209170020.1775368-1-pbonzini%40redhat.com Link: https://lore.kernel.org/all/20230322013731.102955-1-minipli@grsecurity.net Link: https://lore.kernel.org/all/000000000000a0bc2b05f9dd7fab@google.com Link: https://lore.kernel.org/all/000000000000eca0b905fa0f7756@google.com Cc: Ben Gardon bgardon@google.com Cc: David Matlack dmatlack@google.com Cc: stable@vger.kernel.org Tested-by: Jeremi Piotrowski jpiotrowski@linux.microsoft.com Link: https://lore.kernel.org/r/20230426220323.3079789-1-seanjc@google.com Signed-off-by: Sean Christopherson seanjc@google.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/kvm/mmu/tdp_mmu.c | 121 +++++++++++++++++-------------------- 1 file changed, 56 insertions(+), 65 deletions(-)
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index cc1fb9a656201..c649a333792b8 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -39,7 +39,17 @@ static __always_inline bool kvm_lockdep_assert_mmu_lock_held(struct kvm *kvm,
void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm) { - /* Also waits for any queued work items. */ + /* + * Invalidate all roots, which besides the obvious, schedules all roots + * for zapping and thus puts the TDP MMU's reference to each root, i.e. + * ultimately frees all roots. + */ + kvm_tdp_mmu_invalidate_all_roots(kvm); + + /* + * Destroying a workqueue also first flushes the workqueue, i.e. no + * need to invoke kvm_tdp_mmu_zap_invalidated_roots(). + */ destroy_workqueue(kvm->arch.tdp_mmu_zap_wq);
WARN_ON(atomic64_read(&kvm->arch.tdp_mmu_pages)); @@ -115,16 +125,6 @@ static void tdp_mmu_schedule_zap_root(struct kvm *kvm, struct kvm_mmu_page *root queue_work(kvm->arch.tdp_mmu_zap_wq, &root->tdp_mmu_async_work); }
-static inline bool kvm_tdp_root_mark_invalid(struct kvm_mmu_page *page) -{ - union kvm_mmu_page_role role = page->role; - role.invalid = true; - - /* No need to use cmpxchg, only the invalid bit can change. */ - role.word = xchg(&page->role.word, role.word); - return role.invalid; -} - void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root, bool shared) { @@ -133,45 +133,12 @@ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root, if (!refcount_dec_and_test(&root->tdp_mmu_root_count)) return;
- WARN_ON(!is_tdp_mmu_page(root)); - /* - * The root now has refcount=0. It is valid, but readers already - * cannot acquire a reference to it because kvm_tdp_mmu_get_root() - * rejects it. This remains true for the rest of the execution - * of this function, because readers visit valid roots only - * (except for tdp_mmu_zap_root_work(), which however - * does not acquire any reference itself). - * - * Even though there are flows that need to visit all roots for - * correctness, they all take mmu_lock for write, so they cannot yet - * run concurrently. The same is true after kvm_tdp_root_mark_invalid, - * since the root still has refcount=0. - * - * However, tdp_mmu_zap_root can yield, and writers do not expect to - * see refcount=0 (see for example kvm_tdp_mmu_invalidate_all_roots()). - * So the root temporarily gets an extra reference, going to refcount=1 - * while staying invalid. Readers still cannot acquire any reference; - * but writers are now allowed to run if tdp_mmu_zap_root yields and - * they might take an extra reference if they themselves yield. - * Therefore, when the reference is given back by the worker, - * there is no guarantee that the refcount is still 1. If not, whoever - * puts the last reference will free the page, but they will not have to - * zap the root because a root cannot go from invalid to valid. + * The TDP MMU itself holds a reference to each root until the root is + * explicitly invalidated, i.e. the final reference should be never be + * put for a valid root. */ - if (!kvm_tdp_root_mark_invalid(root)) { - refcount_set(&root->tdp_mmu_root_count, 1); - - /* - * Zapping the root in a worker is not just "nice to have"; - * it is required because kvm_tdp_mmu_invalidate_all_roots() - * skips already-invalid roots. If kvm_tdp_mmu_put_root() did - * not add the root to the workqueue, kvm_tdp_mmu_zap_all_fast() - * might return with some roots not zapped yet. - */ - tdp_mmu_schedule_zap_root(kvm, root); - return; - } + KVM_BUG_ON(!is_tdp_mmu_page(root) || !root->role.invalid, kvm);
spin_lock(&kvm->arch.tdp_mmu_pages_lock); list_del_rcu(&root->link); @@ -319,7 +286,14 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu) root = tdp_mmu_alloc_sp(vcpu); tdp_mmu_init_sp(root, NULL, 0, role);
- refcount_set(&root->tdp_mmu_root_count, 1); + /* + * TDP MMU roots are kept until they are explicitly invalidated, either + * by a memslot update or by the destruction of the VM. Initialize the + * refcount to two; one reference for the vCPU, and one reference for + * the TDP MMU itself, which is held until the root is invalidated and + * is ultimately put by tdp_mmu_zap_root_work(). + */ + refcount_set(&root->tdp_mmu_root_count, 2);
spin_lock(&kvm->arch.tdp_mmu_pages_lock); list_add_rcu(&root->link, &kvm->arch.tdp_mmu_roots); @@ -1022,32 +996,49 @@ void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm) /* * Mark each TDP MMU root as invalid to prevent vCPUs from reusing a root that * is about to be zapped, e.g. in response to a memslots update. The actual - * zapping is performed asynchronously, so a reference is taken on all roots. - * Using a separate workqueue makes it easy to ensure that the destruction is - * performed before the "fast zap" completes, without keeping a separate list - * of invalidated roots; the list is effectively the list of work items in - * the workqueue. - * - * Get a reference even if the root is already invalid, the asynchronous worker - * assumes it was gifted a reference to the root it processes. Because mmu_lock - * is held for write, it should be impossible to observe a root with zero refcount, - * i.e. the list of roots cannot be stale. + * zapping is performed asynchronously. Using a separate workqueue makes it + * easy to ensure that the destruction is performed before the "fast zap" + * completes, without keeping a separate list of invalidated roots; the list is + * effectively the list of work items in the workqueue. * - * This has essentially the same effect for the TDP MMU - * as updating mmu_valid_gen does for the shadow MMU. + * Note, the asynchronous worker is gifted the TDP MMU's reference. + * See kvm_tdp_mmu_get_vcpu_root_hpa(). */ void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm) { struct kvm_mmu_page *root;
- lockdep_assert_held_write(&kvm->mmu_lock); - list_for_each_entry(root, &kvm->arch.tdp_mmu_roots, link) { - if (!root->role.invalid && - !WARN_ON_ONCE(!kvm_tdp_mmu_get_root(root))) { + /* + * mmu_lock must be held for write to ensure that a root doesn't become + * invalid while there are active readers (invalidating a root while + * there are active readers may or may not be problematic in practice, + * but it's uncharted territory and not supported). + * + * Waive the assertion if there are no users of @kvm, i.e. the VM is + * being destroyed after all references have been put, or if no vCPUs + * have been created (which means there are no roots), i.e. the VM is + * being destroyed in an error path of KVM_CREATE_VM. + */ + if (IS_ENABLED(CONFIG_PROVE_LOCKING) && + refcount_read(&kvm->users_count) && kvm->created_vcpus) + lockdep_assert_held_write(&kvm->mmu_lock); + + /* + * As above, mmu_lock isn't held when destroying the VM! There can't + * be other references to @kvm, i.e. nothing else can invalidate roots + * or be consuming roots, but walking the list of roots does need to be + * guarded against roots being deleted by the asynchronous zap worker. + */ + rcu_read_lock(); + + list_for_each_entry_rcu(root, &kvm->arch.tdp_mmu_roots, link) { + if (!root->role.invalid) { root->role.invalid = true; tdp_mmu_schedule_zap_root(kvm, root); } } + + rcu_read_unlock(); }
/*
From: Dawei Li set_pte_at@outlook.com
[ Upstream commit 1d9c4172110e645b383ff13eee759728d74f1a5d ]
Two operations on a channel use the connection as the indexing key to look up the channel:

1. lookup_chann_list(), possibly at high frequency.
2. ksmbd_chann_del().

In that case a linear search over a list can hurt performance.

Implement sess->ksmbd_chann_list as an xarray.
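In other words, the list walk keyed on the connection is replaced with xarray lookups using the connection pointer as the index. A minimal sketch of that pattern with hypothetical demo_* types (the real conversion is in the diff below):

#include <linux/slab.h>
#include <linux/xarray.h>

struct demo_conn { int id; };
struct demo_chann { struct demo_conn *conn; };

static DEFINE_XARRAY(demo_chann_list);

/* Store the channel with the connection pointer itself as the index. */
static int demo_chann_add(struct demo_conn *conn, struct demo_chann *chann)
{
        return xa_err(xa_store(&demo_chann_list, (unsigned long)conn,
                               chann, GFP_KERNEL));
}

static struct demo_chann *demo_chann_lookup(struct demo_conn *conn)
{
        return xa_load(&demo_chann_list, (unsigned long)conn);
}

static void demo_chann_del(struct demo_conn *conn)
{
        kfree(xa_erase(&demo_chann_list, (unsigned long)conn));
}

The xarray also serializes stores and erases internally, which is what allows the patch to drop the per-session chann_lock.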
Signed-off-by: Dawei Li set_pte_at@outlook.com Acked-by: Namjae Jeon linkinjeon@kernel.org Signed-off-by: Steve French stfrench@microsoft.com Stable-dep-of: f5c779b7ddbd ("ksmbd: fix racy issue from session setup and logoff") Signed-off-by: Sasha Levin sashal@kernel.org --- fs/ksmbd/mgmt/user_session.c | 61 ++++++++++++++---------------------- fs/ksmbd/mgmt/user_session.h | 4 +-- fs/ksmbd/smb2pdu.c | 34 +++----------------- 3 files changed, 30 insertions(+), 69 deletions(-)
diff --git a/fs/ksmbd/mgmt/user_session.c b/fs/ksmbd/mgmt/user_session.c index 92b1603b5abeb..a2b128dedcfcf 100644 --- a/fs/ksmbd/mgmt/user_session.c +++ b/fs/ksmbd/mgmt/user_session.c @@ -30,15 +30,15 @@ struct ksmbd_session_rpc {
static void free_channel_list(struct ksmbd_session *sess) { - struct channel *chann, *tmp; + struct channel *chann; + unsigned long index;
- write_lock(&sess->chann_lock); - list_for_each_entry_safe(chann, tmp, &sess->ksmbd_chann_list, - chann_list) { - list_del(&chann->chann_list); + xa_for_each(&sess->ksmbd_chann_list, index, chann) { + xa_erase(&sess->ksmbd_chann_list, index); kfree(chann); } - write_unlock(&sess->chann_lock); + + xa_destroy(&sess->ksmbd_chann_list); }
static void __session_rpc_close(struct ksmbd_session *sess, @@ -190,21 +190,15 @@ int ksmbd_session_register(struct ksmbd_conn *conn,
static int ksmbd_chann_del(struct ksmbd_conn *conn, struct ksmbd_session *sess) { - struct channel *chann, *tmp; - - write_lock(&sess->chann_lock); - list_for_each_entry_safe(chann, tmp, &sess->ksmbd_chann_list, - chann_list) { - if (chann->conn == conn) { - list_del(&chann->chann_list); - kfree(chann); - write_unlock(&sess->chann_lock); - return 0; - } - } - write_unlock(&sess->chann_lock); + struct channel *chann; + + chann = xa_erase(&sess->ksmbd_chann_list, (long)conn); + if (!chann) + return -ENOENT;
- return -ENOENT; + kfree(chann); + + return 0; }
void ksmbd_sessions_deregister(struct ksmbd_conn *conn) @@ -234,7 +228,7 @@ void ksmbd_sessions_deregister(struct ksmbd_conn *conn) return;
sess_destroy: - if (list_empty(&sess->ksmbd_chann_list)) { + if (xa_empty(&sess->ksmbd_chann_list)) { xa_erase(&conn->sessions, sess->id); ksmbd_session_destroy(sess); } @@ -320,6 +314,9 @@ static struct ksmbd_session *__session_create(int protocol) struct ksmbd_session *sess; int ret;
+ if (protocol != CIFDS_SESSION_FLAG_SMB2) + return NULL; + sess = kzalloc(sizeof(struct ksmbd_session), GFP_KERNEL); if (!sess) return NULL; @@ -329,30 +326,20 @@ static struct ksmbd_session *__session_create(int protocol)
set_session_flag(sess, protocol); xa_init(&sess->tree_conns); - INIT_LIST_HEAD(&sess->ksmbd_chann_list); + xa_init(&sess->ksmbd_chann_list); INIT_LIST_HEAD(&sess->rpc_handle_list); sess->sequence_number = 1; - rwlock_init(&sess->chann_lock); - - switch (protocol) { - case CIFDS_SESSION_FLAG_SMB2: - ret = __init_smb2_session(sess); - break; - default: - ret = -EINVAL; - break; - }
+ ret = __init_smb2_session(sess); if (ret) goto error;
ida_init(&sess->tree_conn_ida);
- if (protocol == CIFDS_SESSION_FLAG_SMB2) { - down_write(&sessions_table_lock); - hash_add(sessions_table, &sess->hlist, sess->id); - up_write(&sessions_table_lock); - } + down_write(&sessions_table_lock); + hash_add(sessions_table, &sess->hlist, sess->id); + up_write(&sessions_table_lock); + return sess;
error: diff --git a/fs/ksmbd/mgmt/user_session.h b/fs/ksmbd/mgmt/user_session.h index 8934b8ee275ba..44a3c67b2bd92 100644 --- a/fs/ksmbd/mgmt/user_session.h +++ b/fs/ksmbd/mgmt/user_session.h @@ -21,7 +21,6 @@ struct ksmbd_file_table; struct channel { __u8 smb3signingkey[SMB3_SIGN_KEY_SIZE]; struct ksmbd_conn *conn; - struct list_head chann_list; };
struct preauth_session { @@ -50,8 +49,7 @@ struct ksmbd_session { char sess_key[CIFS_KEY_SIZE];
struct hlist_node hlist; - rwlock_t chann_lock; - struct list_head ksmbd_chann_list; + struct xarray ksmbd_chann_list; struct xarray tree_conns; struct ida tree_conn_ida; struct list_head rpc_handle_list; diff --git a/fs/ksmbd/smb2pdu.c b/fs/ksmbd/smb2pdu.c index decaef3592f43..fe70d36df735b 100644 --- a/fs/ksmbd/smb2pdu.c +++ b/fs/ksmbd/smb2pdu.c @@ -74,14 +74,7 @@ static inline bool check_session_id(struct ksmbd_conn *conn, u64 id)
struct channel *lookup_chann_list(struct ksmbd_session *sess, struct ksmbd_conn *conn) { - struct channel *chann; - - list_for_each_entry(chann, &sess->ksmbd_chann_list, chann_list) { - if (chann->conn == conn) - return chann; - } - - return NULL; + return xa_load(&sess->ksmbd_chann_list, (long)conn); }
/** @@ -592,6 +585,7 @@ static void destroy_previous_session(struct ksmbd_conn *conn, struct ksmbd_session *prev_sess = ksmbd_session_lookup_slowpath(id); struct ksmbd_user *prev_user; struct channel *chann; + long index;
if (!prev_sess) return; @@ -605,10 +599,8 @@ static void destroy_previous_session(struct ksmbd_conn *conn, return;
prev_sess->state = SMB2_SESSION_EXPIRED; - write_lock(&prev_sess->chann_lock); - list_for_each_entry(chann, &prev_sess->ksmbd_chann_list, chann_list) + xa_for_each(&prev_sess->ksmbd_chann_list, index, chann) chann->conn->status = KSMBD_SESS_EXITING; - write_unlock(&prev_sess->chann_lock); }
/** @@ -1521,19 +1513,14 @@ static int ntlm_authenticate(struct ksmbd_work *work)
binding_session: if (conn->dialect >= SMB30_PROT_ID) { - read_lock(&sess->chann_lock); chann = lookup_chann_list(sess, conn); - read_unlock(&sess->chann_lock); if (!chann) { chann = kmalloc(sizeof(struct channel), GFP_KERNEL); if (!chann) return -ENOMEM;
chann->conn = conn; - INIT_LIST_HEAD(&chann->chann_list); - write_lock(&sess->chann_lock); - list_add(&chann->chann_list, &sess->ksmbd_chann_list); - write_unlock(&sess->chann_lock); + xa_store(&sess->ksmbd_chann_list, (long)conn, chann, GFP_KERNEL); } }
@@ -1608,19 +1595,14 @@ static int krb5_authenticate(struct ksmbd_work *work) }
if (conn->dialect >= SMB30_PROT_ID) { - read_lock(&sess->chann_lock); chann = lookup_chann_list(sess, conn); - read_unlock(&sess->chann_lock); if (!chann) { chann = kmalloc(sizeof(struct channel), GFP_KERNEL); if (!chann) return -ENOMEM;
chann->conn = conn; - INIT_LIST_HEAD(&chann->chann_list); - write_lock(&sess->chann_lock); - list_add(&chann->chann_list, &sess->ksmbd_chann_list); - write_unlock(&sess->chann_lock); + xa_store(&sess->ksmbd_chann_list, (long)conn, chann, GFP_KERNEL); } }
@@ -8434,14 +8416,11 @@ int smb3_check_sign_req(struct ksmbd_work *work) if (le16_to_cpu(hdr->Command) == SMB2_SESSION_SETUP_HE) { signing_key = work->sess->smb3signingkey; } else { - read_lock(&work->sess->chann_lock); chann = lookup_chann_list(work->sess, conn); if (!chann) { - read_unlock(&work->sess->chann_lock); return 0; } signing_key = chann->smb3signingkey; - read_unlock(&work->sess->chann_lock); }
if (!signing_key) { @@ -8501,14 +8480,11 @@ void smb3_set_sign_rsp(struct ksmbd_work *work) le16_to_cpu(hdr->Command) == SMB2_SESSION_SETUP_HE) { signing_key = work->sess->smb3signingkey; } else { - read_lock(&work->sess->chann_lock); chann = lookup_chann_list(work->sess, work->conn); if (!chann) { - read_unlock(&work->sess->chann_lock); return; } signing_key = chann->smb3signingkey; - read_unlock(&work->sess->chann_lock); }
if (!signing_key)
From: Namjae Jeon linkinjeon@kernel.org
[ Upstream commit f5c779b7ddbda30866cf2a27c63e34158f858c73 ]
This racy issue is triggered by sending concurrent session setup and logoff requests. This patch does not set the connection status to KSMBD_SESS_GOOD if the state is KSMBD_SESS_NEED_RECONNECT in session setup, and re-looks up the session in logoff to validate whether the session has been deleted.
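Part of the fix below is to funnel every read and write of conn->status through READ_ONCE()/WRITE_ONCE() helpers that take the connection rather than the work item. The accessor pattern, roughly (generic demo_* names, not the exact ksmbd helpers):

#include <linux/compiler.h>

enum demo_status { DEMO_NEW, DEMO_GOOD, DEMO_NEED_RECONNECT, DEMO_EXITING };

struct demo_conn { int status; };

static inline bool demo_conn_good(struct demo_conn *conn)
{
        /* READ_ONCE() keeps a racing status update from being torn or cached. */
        return READ_ONCE(conn->status) == DEMO_GOOD;
}

static inline void demo_conn_set_good(struct demo_conn *conn)
{
        WRITE_ONCE(conn->status, DEMO_GOOD);
}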
Cc: stable@vger.kernel.org Reported-by: zdi-disclosures@trendmicro.com # ZDI-CAN-20481, ZDI-CAN-20590, ZDI-CAN-20596 Signed-off-by: Namjae Jeon linkinjeon@kernel.org Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/ksmbd/connection.c | 14 ++++---- fs/ksmbd/connection.h | 39 ++++++++++++--------- fs/ksmbd/mgmt/user_session.c | 1 + fs/ksmbd/server.c | 3 +- fs/ksmbd/smb2pdu.c | 67 +++++++++++++++++++++++------------- fs/ksmbd/transport_tcp.c | 2 +- 6 files changed, 77 insertions(+), 49 deletions(-)
diff --git a/fs/ksmbd/connection.c b/fs/ksmbd/connection.c index b8f9d627f241d..3cb88853d6932 100644 --- a/fs/ksmbd/connection.c +++ b/fs/ksmbd/connection.c @@ -56,7 +56,7 @@ struct ksmbd_conn *ksmbd_conn_alloc(void) return NULL;
conn->need_neg = true; - conn->status = KSMBD_SESS_NEW; + ksmbd_conn_set_new(conn); conn->local_nls = load_nls("utf8"); if (!conn->local_nls) conn->local_nls = load_nls_default(); @@ -149,12 +149,12 @@ int ksmbd_conn_try_dequeue_request(struct ksmbd_work *work) return ret; }
-static void ksmbd_conn_lock(struct ksmbd_conn *conn) +void ksmbd_conn_lock(struct ksmbd_conn *conn) { mutex_lock(&conn->srv_mutex); }
-static void ksmbd_conn_unlock(struct ksmbd_conn *conn) +void ksmbd_conn_unlock(struct ksmbd_conn *conn) { mutex_unlock(&conn->srv_mutex); } @@ -245,7 +245,7 @@ bool ksmbd_conn_alive(struct ksmbd_conn *conn) if (!ksmbd_server_running()) return false;
- if (conn->status == KSMBD_SESS_EXITING) + if (ksmbd_conn_exiting(conn)) return false;
if (kthread_should_stop()) @@ -305,7 +305,7 @@ int ksmbd_conn_handler_loop(void *p) pdu_size = get_rfc1002_len(hdr_buf); ksmbd_debug(CONN, "RFC1002 header %u bytes\n", pdu_size);
- if (conn->status == KSMBD_SESS_GOOD) + if (ksmbd_conn_good(conn)) max_allowed_pdu_size = SMB3_MAX_MSGSIZE + conn->vals->max_write_size; else @@ -314,7 +314,7 @@ int ksmbd_conn_handler_loop(void *p) if (pdu_size > max_allowed_pdu_size) { pr_err_ratelimited("PDU length(%u) excceed maximum allowed pdu size(%u) on connection(%d)\n", pdu_size, max_allowed_pdu_size, - conn->status); + READ_ONCE(conn->status)); break; }
@@ -418,7 +418,7 @@ static void stop_sessions(void) if (task) ksmbd_debug(CONN, "Stop session handler %s/%d\n", task->comm, task_pid_nr(task)); - conn->status = KSMBD_SESS_EXITING; + ksmbd_conn_set_exiting(conn); if (t->ops->shutdown) { read_unlock(&conn_list_lock); t->ops->shutdown(t); diff --git a/fs/ksmbd/connection.h b/fs/ksmbd/connection.h index 0e3a848defaf3..98bb5f199fa24 100644 --- a/fs/ksmbd/connection.h +++ b/fs/ksmbd/connection.h @@ -162,6 +162,8 @@ void ksmbd_conn_init_server_callbacks(struct ksmbd_conn_ops *ops); int ksmbd_conn_handler_loop(void *p); int ksmbd_conn_transport_init(void); void ksmbd_conn_transport_destroy(void); +void ksmbd_conn_lock(struct ksmbd_conn *conn); +void ksmbd_conn_unlock(struct ksmbd_conn *conn);
/* * WARNING @@ -169,43 +171,48 @@ void ksmbd_conn_transport_destroy(void); * This is a hack. We will move status to a proper place once we land * a multi-sessions support. */ -static inline bool ksmbd_conn_good(struct ksmbd_work *work) +static inline bool ksmbd_conn_good(struct ksmbd_conn *conn) { - return work->conn->status == KSMBD_SESS_GOOD; + return READ_ONCE(conn->status) == KSMBD_SESS_GOOD; }
-static inline bool ksmbd_conn_need_negotiate(struct ksmbd_work *work) +static inline bool ksmbd_conn_need_negotiate(struct ksmbd_conn *conn) { - return work->conn->status == KSMBD_SESS_NEED_NEGOTIATE; + return READ_ONCE(conn->status) == KSMBD_SESS_NEED_NEGOTIATE; }
-static inline bool ksmbd_conn_need_reconnect(struct ksmbd_work *work) +static inline bool ksmbd_conn_need_reconnect(struct ksmbd_conn *conn) { - return work->conn->status == KSMBD_SESS_NEED_RECONNECT; + return READ_ONCE(conn->status) == KSMBD_SESS_NEED_RECONNECT; }
-static inline bool ksmbd_conn_exiting(struct ksmbd_work *work) +static inline bool ksmbd_conn_exiting(struct ksmbd_conn *conn) { - return work->conn->status == KSMBD_SESS_EXITING; + return READ_ONCE(conn->status) == KSMBD_SESS_EXITING; }
-static inline void ksmbd_conn_set_good(struct ksmbd_work *work) +static inline void ksmbd_conn_set_new(struct ksmbd_conn *conn) { - work->conn->status = KSMBD_SESS_GOOD; + WRITE_ONCE(conn->status, KSMBD_SESS_NEW); }
-static inline void ksmbd_conn_set_need_negotiate(struct ksmbd_work *work) +static inline void ksmbd_conn_set_good(struct ksmbd_conn *conn) { - work->conn->status = KSMBD_SESS_NEED_NEGOTIATE; + WRITE_ONCE(conn->status, KSMBD_SESS_GOOD); }
-static inline void ksmbd_conn_set_need_reconnect(struct ksmbd_work *work) +static inline void ksmbd_conn_set_need_negotiate(struct ksmbd_conn *conn) { - work->conn->status = KSMBD_SESS_NEED_RECONNECT; + WRITE_ONCE(conn->status, KSMBD_SESS_NEED_NEGOTIATE); }
-static inline void ksmbd_conn_set_exiting(struct ksmbd_work *work) +static inline void ksmbd_conn_set_need_reconnect(struct ksmbd_conn *conn) { - work->conn->status = KSMBD_SESS_EXITING; + WRITE_ONCE(conn->status, KSMBD_SESS_NEED_RECONNECT); +} + +static inline void ksmbd_conn_set_exiting(struct ksmbd_conn *conn) +{ + WRITE_ONCE(conn->status, KSMBD_SESS_EXITING); } #endif /* __CONNECTION_H__ */ diff --git a/fs/ksmbd/mgmt/user_session.c b/fs/ksmbd/mgmt/user_session.c index a2b128dedcfcf..69b85a98e2c35 100644 --- a/fs/ksmbd/mgmt/user_session.c +++ b/fs/ksmbd/mgmt/user_session.c @@ -324,6 +324,7 @@ static struct ksmbd_session *__session_create(int protocol) if (ksmbd_init_file_table(&sess->file_table)) goto error;
+ sess->state = SMB2_SESSION_IN_PROGRESS; set_session_flag(sess, protocol); xa_init(&sess->tree_conns); xa_init(&sess->ksmbd_chann_list); diff --git a/fs/ksmbd/server.c b/fs/ksmbd/server.c index cd8a873347a79..dc76d7cf241f0 100644 --- a/fs/ksmbd/server.c +++ b/fs/ksmbd/server.c @@ -93,7 +93,8 @@ static inline int check_conn_state(struct ksmbd_work *work) { struct smb_hdr *rsp_hdr;
- if (ksmbd_conn_exiting(work) || ksmbd_conn_need_reconnect(work)) { + if (ksmbd_conn_exiting(work->conn) || + ksmbd_conn_need_reconnect(work->conn)) { rsp_hdr = work->response_buf; rsp_hdr->Status.CifsError = STATUS_CONNECTION_DISCONNECTED; return 1; diff --git a/fs/ksmbd/smb2pdu.c b/fs/ksmbd/smb2pdu.c index fe70d36df735b..ae0610c95e33c 100644 --- a/fs/ksmbd/smb2pdu.c +++ b/fs/ksmbd/smb2pdu.c @@ -247,7 +247,7 @@ int init_smb2_neg_rsp(struct ksmbd_work *work)
rsp = smb2_get_msg(work->response_buf);
- WARN_ON(ksmbd_conn_good(work)); + WARN_ON(ksmbd_conn_good(conn));
rsp->StructureSize = cpu_to_le16(65); ksmbd_debug(SMB, "conn->dialect 0x%x\n", conn->dialect); @@ -277,7 +277,7 @@ int init_smb2_neg_rsp(struct ksmbd_work *work) rsp->SecurityMode |= SMB2_NEGOTIATE_SIGNING_REQUIRED_LE; conn->use_spnego = true;
- ksmbd_conn_set_need_negotiate(work); + ksmbd_conn_set_need_negotiate(conn); return 0; }
@@ -567,7 +567,7 @@ int smb2_check_user_session(struct ksmbd_work *work) cmd == SMB2_SESSION_SETUP_HE) return 0;
- if (!ksmbd_conn_good(work)) + if (!ksmbd_conn_good(conn)) return -EINVAL;
sess_id = le64_to_cpu(req_hdr->SessionId); @@ -600,7 +600,7 @@ static void destroy_previous_session(struct ksmbd_conn *conn,
prev_sess->state = SMB2_SESSION_EXPIRED; xa_for_each(&prev_sess->ksmbd_chann_list, index, chann) - chann->conn->status = KSMBD_SESS_EXITING; + ksmbd_conn_set_exiting(chann->conn); }
/** @@ -1067,7 +1067,7 @@ int smb2_handle_negotiate(struct ksmbd_work *work)
ksmbd_debug(SMB, "Received negotiate request\n"); conn->need_neg = false; - if (ksmbd_conn_good(work)) { + if (ksmbd_conn_good(conn)) { pr_err("conn->tcp_status is already in CifsGood State\n"); work->send_no_response = 1; return rc; @@ -1222,7 +1222,7 @@ int smb2_handle_negotiate(struct ksmbd_work *work) }
conn->srv_sec_mode = le16_to_cpu(rsp->SecurityMode); - ksmbd_conn_set_need_negotiate(work); + ksmbd_conn_set_need_negotiate(conn);
err_out: if (rc < 0) @@ -1645,6 +1645,7 @@ int smb2_sess_setup(struct ksmbd_work *work) rsp->SecurityBufferLength = 0; inc_rfc1001_len(work->response_buf, 9);
+ ksmbd_conn_lock(conn); if (!req->hdr.SessionId) { sess = ksmbd_smb2_session_create(); if (!sess) { @@ -1692,6 +1693,12 @@ int smb2_sess_setup(struct ksmbd_work *work) goto out_err; }
+ if (ksmbd_conn_need_reconnect(conn)) { + rc = -EFAULT; + sess = NULL; + goto out_err; + } + if (ksmbd_session_lookup(conn, sess_id)) { rc = -EACCES; goto out_err; @@ -1716,12 +1723,20 @@ int smb2_sess_setup(struct ksmbd_work *work) rc = -ENOENT; goto out_err; } + + if (sess->state == SMB2_SESSION_EXPIRED) { + rc = -EFAULT; + goto out_err; + } + + if (ksmbd_conn_need_reconnect(conn)) { + rc = -EFAULT; + sess = NULL; + goto out_err; + } } work->sess = sess;
- if (sess->state == SMB2_SESSION_EXPIRED) - sess->state = SMB2_SESSION_IN_PROGRESS; - negblob_off = le16_to_cpu(req->SecurityBufferOffset); negblob_len = le16_to_cpu(req->SecurityBufferLength); if (negblob_off < offsetof(struct smb2_sess_setup_req, Buffer) || @@ -1751,8 +1766,10 @@ int smb2_sess_setup(struct ksmbd_work *work) goto out_err; }
- ksmbd_conn_set_good(work); - sess->state = SMB2_SESSION_VALID; + if (!ksmbd_conn_need_reconnect(conn)) { + ksmbd_conn_set_good(conn); + sess->state = SMB2_SESSION_VALID; + } kfree(sess->Preauth_HashValue); sess->Preauth_HashValue = NULL; } else if (conn->preferred_auth_mech == KSMBD_AUTH_NTLMSSP) { @@ -1774,8 +1791,10 @@ int smb2_sess_setup(struct ksmbd_work *work) if (rc) goto out_err;
- ksmbd_conn_set_good(work); - sess->state = SMB2_SESSION_VALID; + if (!ksmbd_conn_need_reconnect(conn)) { + ksmbd_conn_set_good(conn); + sess->state = SMB2_SESSION_VALID; + } if (conn->binding) { struct preauth_session *preauth_sess;
@@ -1843,14 +1862,13 @@ int smb2_sess_setup(struct ksmbd_work *work) if (sess->user && sess->user->flags & KSMBD_USER_FLAG_DELAY_SESSION) try_delay = true;
- xa_erase(&conn->sessions, sess->id); - ksmbd_session_destroy(sess); - work->sess = NULL; + sess->state = SMB2_SESSION_EXPIRED; if (try_delay) ssleep(5); } }
+ ksmbd_conn_unlock(conn); return rc; }
@@ -2075,21 +2093,24 @@ int smb2_session_logoff(struct ksmbd_work *work) { struct ksmbd_conn *conn = work->conn; struct smb2_logoff_rsp *rsp = smb2_get_msg(work->response_buf); - struct ksmbd_session *sess = work->sess; + struct ksmbd_session *sess; + struct smb2_logoff_req *req = smb2_get_msg(work->request_buf);
rsp->StructureSize = cpu_to_le16(4); inc_rfc1001_len(work->response_buf, 4);
ksmbd_debug(SMB, "request\n");
- /* setting CifsExiting here may race with start_tcp_sess */ - ksmbd_conn_set_need_reconnect(work); + ksmbd_conn_set_need_reconnect(conn); ksmbd_close_session_fds(work); ksmbd_conn_wait_idle(conn);
+ /* + * Re-lookup session to validate if session is deleted + * while waiting request complete + */ + sess = ksmbd_session_lookup(conn, le64_to_cpu(req->hdr.SessionId)); if (ksmbd_tree_conn_session_logoff(sess)) { - struct smb2_logoff_req *req = smb2_get_msg(work->request_buf); - ksmbd_debug(SMB, "Invalid tid %d\n", req->hdr.Id.SyncId.TreeId); rsp->hdr.Status = STATUS_NETWORK_NAME_DELETED; smb2_set_err_rsp(work); @@ -2101,9 +2122,7 @@ int smb2_session_logoff(struct ksmbd_work *work)
ksmbd_free_user(sess->user); sess->user = NULL; - - /* let start_tcp_sess free connection info now */ - ksmbd_conn_set_need_negotiate(work); + ksmbd_conn_set_need_negotiate(conn); return 0; }
diff --git a/fs/ksmbd/transport_tcp.c b/fs/ksmbd/transport_tcp.c index 20e85e2701f26..eff7a1d793f00 100644 --- a/fs/ksmbd/transport_tcp.c +++ b/fs/ksmbd/transport_tcp.c @@ -333,7 +333,7 @@ static int ksmbd_tcp_readv(struct tcp_transport *t, struct kvec *iov_orig, if (length == -EINTR) { total_read = -ESHUTDOWN; break; - } else if (conn->status == KSMBD_SESS_NEED_RECONNECT) { + } else if (ksmbd_conn_need_reconnect(conn)) { total_read = -EAGAIN; break; } else if (length == -ERESTARTSYS || length == -EAGAIN) {
From: Namjae Jeon linkinjeon@kernel.org
[ Upstream commit b096d97f47326b1e2dbdef1c91fab69ffda54d17 ]
ksmbd adds a 5 second delay on session setup to avoid dictionary attacks, but the delay can be bypassed by using asynchronous requests. This patch blocks all requests on the current connection while the delay on session setup failure is applied.
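Blocking works because, while the connection sits in the reconnect state for the duration of the delay, the per-request state check rejects everything else queued on that connection, so asynchronous requests cannot slip past the sleep. A rough sketch of such a gate, with hypothetical demo_* names (the real check is check_conn_state() in server.c):

#include <linux/compiler.h>
#include <linux/errno.h>

enum demo_status { DEMO_GOOD, DEMO_NEED_RECONNECT, DEMO_EXITING };

struct demo_conn { int status; };

static int demo_check_conn_state(struct demo_conn *conn)
{
        int status = READ_ONCE(conn->status);

        /* Requests queued on a penalized or dying connection are rejected. */
        if (status == DEMO_NEED_RECONNECT || status == DEMO_EXITING)
                return -ECONNABORTED;
        return 0;
}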
Cc: stable@vger.kernel.org Reported-by: zdi-disclosures@trendmicro.com # ZDI-CAN-20482 Signed-off-by: Namjae Jeon linkinjeon@kernel.org Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/ksmbd/smb2pdu.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/fs/ksmbd/smb2pdu.c b/fs/ksmbd/smb2pdu.c index ae0610c95e33c..51e95ea37195b 100644 --- a/fs/ksmbd/smb2pdu.c +++ b/fs/ksmbd/smb2pdu.c @@ -1863,8 +1863,11 @@ int smb2_sess_setup(struct ksmbd_work *work) try_delay = true;
sess->state = SMB2_SESSION_EXPIRED; - if (try_delay) + if (try_delay) { + ksmbd_conn_set_need_reconnect(conn); ssleep(5); + ksmbd_conn_set_need_negotiate(conn); + } } }
From: Namjae Jeon linkinjeon@kernel.org
[ Upstream commit ea174a91893956450510945a0c5d1a10b5323656 ]
A client can indefinitely send SMB2 session setup requests with the SessionId set to 0, thus indefinitely spawning new sessions and causing unbounded memory usage. This patch limits the number of sessions by using an expiration timeout and the session state.
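The enforcement mechanism is a per-session timestamp plus a sweep that destroys any session that never became valid or has sat idle past the timeout. A condensed sketch with hypothetical demo_* names (the real code is ksmbd_expire_session() in the diff below):

#include <linux/jiffies.h>
#include <linux/slab.h>
#include <linux/xarray.h>

#define DEMO_SESSION_TIMEOUT    (10 * HZ)

struct demo_sess {
        unsigned long last_active;
        bool valid;
};

static void demo_expire_sessions(struct xarray *sessions)
{
        struct demo_sess *sess;
        unsigned long id;

        xa_for_each(sessions, id, sess) {
                /* Drop sessions that never became valid or went idle too long. */
                if (!sess->valid ||
                    time_after(jiffies, sess->last_active + DEMO_SESSION_TIMEOUT)) {
                        xa_erase(sessions, id);
                        kfree(sess);
                }
        }
}

Running the sweep from session setup (as the patch does from ksmbd_session_register()) bounds the number of half-established sessions a single connection can accumulate.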
Cc: stable@vger.kernel.org Reported-by: zdi-disclosures@trendmicro.com # ZDI-CAN-20478 Signed-off-by: Namjae Jeon linkinjeon@kernel.org Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/ksmbd/mgmt/user_session.c | 68 ++++++++++++++++++++---------------- fs/ksmbd/mgmt/user_session.h | 1 + fs/ksmbd/smb2pdu.c | 1 + fs/ksmbd/smb2pdu.h | 2 ++ 4 files changed, 41 insertions(+), 31 deletions(-)
diff --git a/fs/ksmbd/mgmt/user_session.c b/fs/ksmbd/mgmt/user_session.c index 69b85a98e2c35..b809f7987b9f4 100644 --- a/fs/ksmbd/mgmt/user_session.c +++ b/fs/ksmbd/mgmt/user_session.c @@ -174,70 +174,73 @@ static struct ksmbd_session *__session_lookup(unsigned long long id) struct ksmbd_session *sess;
hash_for_each_possible(sessions_table, sess, hlist, id) { - if (id == sess->id) + if (id == sess->id) { + sess->last_active = jiffies; return sess; + } } return NULL; }
+static void ksmbd_expire_session(struct ksmbd_conn *conn) +{ + unsigned long id; + struct ksmbd_session *sess; + + xa_for_each(&conn->sessions, id, sess) { + if (sess->state != SMB2_SESSION_VALID || + time_after(jiffies, + sess->last_active + SMB2_SESSION_TIMEOUT)) { + xa_erase(&conn->sessions, sess->id); + ksmbd_session_destroy(sess); + continue; + } + } +} + int ksmbd_session_register(struct ksmbd_conn *conn, struct ksmbd_session *sess) { sess->dialect = conn->dialect; memcpy(sess->ClientGUID, conn->ClientGUID, SMB2_CLIENT_GUID_SIZE); + ksmbd_expire_session(conn); return xa_err(xa_store(&conn->sessions, sess->id, sess, GFP_KERNEL)); }
-static int ksmbd_chann_del(struct ksmbd_conn *conn, struct ksmbd_session *sess) +static void ksmbd_chann_del(struct ksmbd_conn *conn, struct ksmbd_session *sess) { struct channel *chann;
chann = xa_erase(&sess->ksmbd_chann_list, (long)conn); if (!chann) - return -ENOENT; + return;
kfree(chann); - - return 0; }
void ksmbd_sessions_deregister(struct ksmbd_conn *conn) { struct ksmbd_session *sess; + unsigned long id;
- if (conn->binding) { - int bkt; - - down_write(&sessions_table_lock); - hash_for_each(sessions_table, bkt, sess, hlist) { - if (!ksmbd_chann_del(conn, sess)) { - up_write(&sessions_table_lock); - goto sess_destroy; - } + xa_for_each(&conn->sessions, id, sess) { + ksmbd_chann_del(conn, sess); + if (xa_empty(&sess->ksmbd_chann_list)) { + xa_erase(&conn->sessions, sess->id); + ksmbd_session_destroy(sess); } - up_write(&sessions_table_lock); - } else { - unsigned long id; - - xa_for_each(&conn->sessions, id, sess) { - if (!ksmbd_chann_del(conn, sess)) - goto sess_destroy; - } - } - - return; - -sess_destroy: - if (xa_empty(&sess->ksmbd_chann_list)) { - xa_erase(&conn->sessions, sess->id); - ksmbd_session_destroy(sess); } }
struct ksmbd_session *ksmbd_session_lookup(struct ksmbd_conn *conn, unsigned long long id) { - return xa_load(&conn->sessions, id); + struct ksmbd_session *sess; + + sess = xa_load(&conn->sessions, id); + if (sess) + sess->last_active = jiffies; + return sess; }
struct ksmbd_session *ksmbd_session_lookup_slowpath(unsigned long long id) @@ -246,6 +249,8 @@ struct ksmbd_session *ksmbd_session_lookup_slowpath(unsigned long long id)
down_read(&sessions_table_lock); sess = __session_lookup(id); + if (sess) + sess->last_active = jiffies; up_read(&sessions_table_lock);
return sess; @@ -324,6 +329,7 @@ static struct ksmbd_session *__session_create(int protocol) if (ksmbd_init_file_table(&sess->file_table)) goto error;
+ sess->last_active = jiffies; sess->state = SMB2_SESSION_IN_PROGRESS; set_session_flag(sess, protocol); xa_init(&sess->tree_conns); diff --git a/fs/ksmbd/mgmt/user_session.h b/fs/ksmbd/mgmt/user_session.h index 44a3c67b2bd92..51f38e5b61abb 100644 --- a/fs/ksmbd/mgmt/user_session.h +++ b/fs/ksmbd/mgmt/user_session.h @@ -59,6 +59,7 @@ struct ksmbd_session { __u8 smb3signingkey[SMB3_SIGN_KEY_SIZE];
struct ksmbd_file_table file_table; + unsigned long last_active; };
static inline int test_session_flag(struct ksmbd_session *sess, int bit) diff --git a/fs/ksmbd/smb2pdu.c b/fs/ksmbd/smb2pdu.c index 51e95ea37195b..d6423f2fae6d9 100644 --- a/fs/ksmbd/smb2pdu.c +++ b/fs/ksmbd/smb2pdu.c @@ -1862,6 +1862,7 @@ int smb2_sess_setup(struct ksmbd_work *work) if (sess->user && sess->user->flags & KSMBD_USER_FLAG_DELAY_SESSION) try_delay = true;
+ sess->last_active = jiffies; sess->state = SMB2_SESSION_EXPIRED; if (try_delay) { ksmbd_conn_set_need_reconnect(conn); diff --git a/fs/ksmbd/smb2pdu.h b/fs/ksmbd/smb2pdu.h index 0c8a770fe3189..df05c9b2504d4 100644 --- a/fs/ksmbd/smb2pdu.h +++ b/fs/ksmbd/smb2pdu.h @@ -61,6 +61,8 @@ struct preauth_integrity_info { #define SMB2_SESSION_IN_PROGRESS BIT(0) #define SMB2_SESSION_VALID BIT(1)
+#define SMB2_SESSION_TIMEOUT (10 * HZ) + struct create_durable_req_v2 { struct create_context ccontext; __u8 Name[8];
From: Namjae Jeon linkinjeon@kernel.org
[ Upstream commit abcc506a9a71976a8b4c9bf3ee6efd13229c1e19 ]
When an SMB client sends concurrent SMB2 close and logoff requests over a multichannel connection, it can cause a racy issue: the logoff request frees the tcon, which can lead to use-after-free issues in the SMB2 close handler. When receiving a logoff request with multichannel, ksmbd should wait until all remaining requests complete, including those on the current connection, and then mark the session expired.
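Concretely, the logoff path now flags every connection carrying the session and waits for their in-flight request counters to drain before tearing the session down. A simplified sketch of that wait, with hypothetical demo_* names (the real code is ksmbd_conn_wait_idle() in the diff below; it additionally skips connections that are already being released):

#include <linux/atomic.h>
#include <linux/list.h>
#include <linux/wait.h>

struct demo_conn {
        struct list_head conns_list;
        atomic_t req_running;
        wait_queue_head_t req_running_q;
};

static void demo_wait_idle(struct list_head *conn_list, struct demo_conn *curr)
{
        struct demo_conn *conn;

        /* On the current connection only the logoff request itself may remain. */
        wait_event(curr->req_running_q, atomic_read(&curr->req_running) < 2);

        /* Wait for every other connection bound to the session to go idle. */
        list_for_each_entry(conn, conn_list, conns_list) {
                if (conn == curr)
                        continue;
                if (atomic_read(&conn->req_running))
                        wait_event(conn->req_running_q,
                                   atomic_read(&conn->req_running) == 0);
        }
}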
Cc: stable@vger.kernel.org Reported-by: zdi-disclosures@trendmicro.com # ZDI-CAN-20796 ZDI-CAN-20595 Signed-off-by: Namjae Jeon linkinjeon@kernel.org Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/ksmbd/connection.c | 54 +++++++++++++++++++++++++++--------- fs/ksmbd/connection.h | 19 +++++++++++-- fs/ksmbd/mgmt/tree_connect.c | 3 ++ fs/ksmbd/mgmt/user_session.c | 36 ++++++++++++++++++++---- fs/ksmbd/smb2pdu.c | 21 +++++++------- 5 files changed, 101 insertions(+), 32 deletions(-)
diff --git a/fs/ksmbd/connection.c b/fs/ksmbd/connection.c index 3cb88853d6932..e3312fbf4c090 100644 --- a/fs/ksmbd/connection.c +++ b/fs/ksmbd/connection.c @@ -20,7 +20,7 @@ static DEFINE_MUTEX(init_lock); static struct ksmbd_conn_ops default_conn_ops;
LIST_HEAD(conn_list); -DEFINE_RWLOCK(conn_list_lock); +DECLARE_RWSEM(conn_list_lock);
/** * ksmbd_conn_free() - free resources of the connection instance @@ -32,9 +32,9 @@ DEFINE_RWLOCK(conn_list_lock); */ void ksmbd_conn_free(struct ksmbd_conn *conn) { - write_lock(&conn_list_lock); + down_write(&conn_list_lock); list_del(&conn->conns_list); - write_unlock(&conn_list_lock); + up_write(&conn_list_lock);
xa_destroy(&conn->sessions); kvfree(conn->request_buf); @@ -84,9 +84,9 @@ struct ksmbd_conn *ksmbd_conn_alloc(void) spin_lock_init(&conn->llist_lock); INIT_LIST_HEAD(&conn->lock_list);
- write_lock(&conn_list_lock); + down_write(&conn_list_lock); list_add(&conn->conns_list, &conn_list); - write_unlock(&conn_list_lock); + up_write(&conn_list_lock); return conn; }
@@ -95,7 +95,7 @@ bool ksmbd_conn_lookup_dialect(struct ksmbd_conn *c) struct ksmbd_conn *t; bool ret = false;
- read_lock(&conn_list_lock); + down_read(&conn_list_lock); list_for_each_entry(t, &conn_list, conns_list) { if (memcmp(t->ClientGUID, c->ClientGUID, SMB2_CLIENT_GUID_SIZE)) continue; @@ -103,7 +103,7 @@ bool ksmbd_conn_lookup_dialect(struct ksmbd_conn *c) ret = true; break; } - read_unlock(&conn_list_lock); + up_read(&conn_list_lock); return ret; }
@@ -159,9 +159,37 @@ void ksmbd_conn_unlock(struct ksmbd_conn *conn) mutex_unlock(&conn->srv_mutex); }
-void ksmbd_conn_wait_idle(struct ksmbd_conn *conn) +void ksmbd_all_conn_set_status(u64 sess_id, u32 status) { + struct ksmbd_conn *conn; + + down_read(&conn_list_lock); + list_for_each_entry(conn, &conn_list, conns_list) { + if (conn->binding || xa_load(&conn->sessions, sess_id)) + WRITE_ONCE(conn->status, status); + } + up_read(&conn_list_lock); +} + +void ksmbd_conn_wait_idle(struct ksmbd_conn *conn, u64 sess_id) +{ + struct ksmbd_conn *bind_conn; + wait_event(conn->req_running_q, atomic_read(&conn->req_running) < 2); + + down_read(&conn_list_lock); + list_for_each_entry(bind_conn, &conn_list, conns_list) { + if (bind_conn == conn) + continue; + + if ((bind_conn->binding || xa_load(&bind_conn->sessions, sess_id)) && + !ksmbd_conn_releasing(bind_conn) && + atomic_read(&bind_conn->req_running)) { + wait_event(bind_conn->req_running_q, + atomic_read(&bind_conn->req_running) == 0); + } + } + up_read(&conn_list_lock); }
int ksmbd_conn_write(struct ksmbd_work *work) @@ -362,10 +390,10 @@ int ksmbd_conn_handler_loop(void *p) }
out: + ksmbd_conn_set_releasing(conn); /* Wait till all reference dropped to the Server object*/ wait_event(conn->r_count_q, atomic_read(&conn->r_count) == 0);
- if (IS_ENABLED(CONFIG_UNICODE)) utf8_unload(conn->um); unload_nls(conn->local_nls); @@ -409,7 +437,7 @@ static void stop_sessions(void) struct ksmbd_transport *t;
again: - read_lock(&conn_list_lock); + down_read(&conn_list_lock); list_for_each_entry(conn, &conn_list, conns_list) { struct task_struct *task;
@@ -420,12 +448,12 @@ static void stop_sessions(void) task->comm, task_pid_nr(task)); ksmbd_conn_set_exiting(conn); if (t->ops->shutdown) { - read_unlock(&conn_list_lock); + up_read(&conn_list_lock); t->ops->shutdown(t); - read_lock(&conn_list_lock); + down_read(&conn_list_lock); } } - read_unlock(&conn_list_lock); + up_read(&conn_list_lock);
if (!list_empty(&conn_list)) { schedule_timeout_interruptible(HZ / 10); /* 100ms */ diff --git a/fs/ksmbd/connection.h b/fs/ksmbd/connection.h index 98bb5f199fa24..ad8dfaa48ffb3 100644 --- a/fs/ksmbd/connection.h +++ b/fs/ksmbd/connection.h @@ -26,7 +26,8 @@ enum { KSMBD_SESS_GOOD, KSMBD_SESS_EXITING, KSMBD_SESS_NEED_RECONNECT, - KSMBD_SESS_NEED_NEGOTIATE + KSMBD_SESS_NEED_NEGOTIATE, + KSMBD_SESS_RELEASING };
struct ksmbd_stats { @@ -140,10 +141,10 @@ struct ksmbd_transport { #define KSMBD_TCP_PEER_SOCKADDR(c) ((struct sockaddr *)&((c)->peer_addr))
extern struct list_head conn_list; -extern rwlock_t conn_list_lock; +extern struct rw_semaphore conn_list_lock;
bool ksmbd_conn_alive(struct ksmbd_conn *conn); -void ksmbd_conn_wait_idle(struct ksmbd_conn *conn); +void ksmbd_conn_wait_idle(struct ksmbd_conn *conn, u64 sess_id); struct ksmbd_conn *ksmbd_conn_alloc(void); void ksmbd_conn_free(struct ksmbd_conn *conn); bool ksmbd_conn_lookup_dialect(struct ksmbd_conn *c); @@ -191,6 +192,11 @@ static inline bool ksmbd_conn_exiting(struct ksmbd_conn *conn) return READ_ONCE(conn->status) == KSMBD_SESS_EXITING; }
+static inline bool ksmbd_conn_releasing(struct ksmbd_conn *conn) +{ + return READ_ONCE(conn->status) == KSMBD_SESS_RELEASING; +} + static inline void ksmbd_conn_set_new(struct ksmbd_conn *conn) { WRITE_ONCE(conn->status, KSMBD_SESS_NEW); @@ -215,4 +221,11 @@ static inline void ksmbd_conn_set_exiting(struct ksmbd_conn *conn) { WRITE_ONCE(conn->status, KSMBD_SESS_EXITING); } + +static inline void ksmbd_conn_set_releasing(struct ksmbd_conn *conn) +{ + WRITE_ONCE(conn->status, KSMBD_SESS_RELEASING); +} + +void ksmbd_all_conn_set_status(u64 sess_id, u32 status); #endif /* __CONNECTION_H__ */ diff --git a/fs/ksmbd/mgmt/tree_connect.c b/fs/ksmbd/mgmt/tree_connect.c index f19de20c2960c..f07a05f376513 100644 --- a/fs/ksmbd/mgmt/tree_connect.c +++ b/fs/ksmbd/mgmt/tree_connect.c @@ -137,6 +137,9 @@ int ksmbd_tree_conn_session_logoff(struct ksmbd_session *sess) struct ksmbd_tree_connect *tc; unsigned long id;
+ if (!sess) + return -EINVAL; + xa_for_each(&sess->tree_conns, id, tc) ret |= ksmbd_tree_conn_disconnect(sess, tc); xa_destroy(&sess->tree_conns); diff --git a/fs/ksmbd/mgmt/user_session.c b/fs/ksmbd/mgmt/user_session.c index b809f7987b9f4..ea4b56d570fbb 100644 --- a/fs/ksmbd/mgmt/user_session.c +++ b/fs/ksmbd/mgmt/user_session.c @@ -153,10 +153,6 @@ void ksmbd_session_destroy(struct ksmbd_session *sess) if (!sess) return;
- down_write(&sessions_table_lock); - hash_del(&sess->hlist); - up_write(&sessions_table_lock); - if (sess->user) ksmbd_free_user(sess->user);
@@ -187,15 +183,18 @@ static void ksmbd_expire_session(struct ksmbd_conn *conn) unsigned long id; struct ksmbd_session *sess;
+ down_write(&sessions_table_lock); xa_for_each(&conn->sessions, id, sess) { if (sess->state != SMB2_SESSION_VALID || time_after(jiffies, sess->last_active + SMB2_SESSION_TIMEOUT)) { xa_erase(&conn->sessions, sess->id); + hash_del(&sess->hlist); ksmbd_session_destroy(sess); continue; } } + up_write(&sessions_table_lock); }
int ksmbd_session_register(struct ksmbd_conn *conn, @@ -207,15 +206,16 @@ int ksmbd_session_register(struct ksmbd_conn *conn, return xa_err(xa_store(&conn->sessions, sess->id, sess, GFP_KERNEL)); }
-static void ksmbd_chann_del(struct ksmbd_conn *conn, struct ksmbd_session *sess) +static int ksmbd_chann_del(struct ksmbd_conn *conn, struct ksmbd_session *sess) { struct channel *chann;
chann = xa_erase(&sess->ksmbd_chann_list, (long)conn); if (!chann) - return; + return -ENOENT;
kfree(chann); + return 0; }
void ksmbd_sessions_deregister(struct ksmbd_conn *conn) @@ -223,13 +223,37 @@ void ksmbd_sessions_deregister(struct ksmbd_conn *conn) struct ksmbd_session *sess; unsigned long id;
+ down_write(&sessions_table_lock); + if (conn->binding) { + int bkt; + struct hlist_node *tmp; + + hash_for_each_safe(sessions_table, bkt, tmp, sess, hlist) { + if (!ksmbd_chann_del(conn, sess) && + xa_empty(&sess->ksmbd_chann_list)) { + hash_del(&sess->hlist); + ksmbd_session_destroy(sess); + } + } + } + xa_for_each(&conn->sessions, id, sess) { + unsigned long chann_id; + struct channel *chann; + + xa_for_each(&sess->ksmbd_chann_list, chann_id, chann) { + if (chann->conn != conn) + ksmbd_conn_set_exiting(chann->conn); + } + ksmbd_chann_del(conn, sess); if (xa_empty(&sess->ksmbd_chann_list)) { xa_erase(&conn->sessions, sess->id); + hash_del(&sess->hlist); ksmbd_session_destroy(sess); } } + up_write(&sessions_table_lock); }
struct ksmbd_session *ksmbd_session_lookup(struct ksmbd_conn *conn, diff --git a/fs/ksmbd/smb2pdu.c b/fs/ksmbd/smb2pdu.c index d6423f2fae6d9..53badff17efaa 100644 --- a/fs/ksmbd/smb2pdu.c +++ b/fs/ksmbd/smb2pdu.c @@ -2099,21 +2099,22 @@ int smb2_session_logoff(struct ksmbd_work *work) struct smb2_logoff_rsp *rsp = smb2_get_msg(work->response_buf); struct ksmbd_session *sess; struct smb2_logoff_req *req = smb2_get_msg(work->request_buf); + u64 sess_id = le64_to_cpu(req->hdr.SessionId);
rsp->StructureSize = cpu_to_le16(4); inc_rfc1001_len(work->response_buf, 4);
ksmbd_debug(SMB, "request\n");
- ksmbd_conn_set_need_reconnect(conn); + ksmbd_all_conn_set_status(sess_id, KSMBD_SESS_NEED_RECONNECT); ksmbd_close_session_fds(work); - ksmbd_conn_wait_idle(conn); + ksmbd_conn_wait_idle(conn, sess_id);
/* * Re-lookup session to validate if session is deleted * while waiting request complete */ - sess = ksmbd_session_lookup(conn, le64_to_cpu(req->hdr.SessionId)); + sess = ksmbd_session_lookup_all(conn, sess_id); if (ksmbd_tree_conn_session_logoff(sess)) { ksmbd_debug(SMB, "Invalid tid %d\n", req->hdr.Id.SyncId.TreeId); rsp->hdr.Status = STATUS_NETWORK_NAME_DELETED; @@ -2126,7 +2127,7 @@ int smb2_session_logoff(struct ksmbd_work *work)
ksmbd_free_user(sess->user); sess->user = NULL; - ksmbd_conn_set_need_negotiate(conn); + ksmbd_all_conn_set_status(sess_id, KSMBD_SESS_NEED_NEGOTIATE); return 0; }
@@ -6958,7 +6959,7 @@ int smb2_lock(struct ksmbd_work *work)
nolock = 1; /* check locks in connection list */ - read_lock(&conn_list_lock); + down_read(&conn_list_lock); list_for_each_entry(conn, &conn_list, conns_list) { spin_lock(&conn->llist_lock); list_for_each_entry_safe(cmp_lock, tmp2, &conn->lock_list, clist) { @@ -6975,7 +6976,7 @@ int smb2_lock(struct ksmbd_work *work) list_del(&cmp_lock->flist); list_del(&cmp_lock->clist); spin_unlock(&conn->llist_lock); - read_unlock(&conn_list_lock); + up_read(&conn_list_lock);
locks_free_lock(cmp_lock->fl); kfree(cmp_lock); @@ -6997,7 +6998,7 @@ int smb2_lock(struct ksmbd_work *work) cmp_lock->start > smb_lock->start && cmp_lock->start < smb_lock->end) { spin_unlock(&conn->llist_lock); - read_unlock(&conn_list_lock); + up_read(&conn_list_lock); pr_err("previous lock conflict with zero byte lock range\n"); goto out; } @@ -7006,7 +7007,7 @@ int smb2_lock(struct ksmbd_work *work) smb_lock->start > cmp_lock->start && smb_lock->start < cmp_lock->end) { spin_unlock(&conn->llist_lock); - read_unlock(&conn_list_lock); + up_read(&conn_list_lock); pr_err("current lock conflict with zero byte lock range\n"); goto out; } @@ -7017,14 +7018,14 @@ int smb2_lock(struct ksmbd_work *work) cmp_lock->end >= smb_lock->end)) && !cmp_lock->zero_len && !smb_lock->zero_len) { spin_unlock(&conn->llist_lock); - read_unlock(&conn_list_lock); + up_read(&conn_list_lock); pr_err("Not allow lock operation on exclusive lock range\n"); goto out; } } spin_unlock(&conn->llist_lock); } - read_unlock(&conn_list_lock); + up_read(&conn_list_lock); out_check_cl: if (smb_lock->fl->fl_type == F_UNLCK && nolock) { pr_err("Try to unlock nolocked range\n");
From: Johannes Berg johannes.berg@intel.com
[ Upstream commit 457d7fb03e6c3d73fbb509bd85fc4b02d1ab405e ]
If we do get multiple notifications from the firmware, then we might have allocated 'notif' but never free it. Fix that by checking for duplicates before the allocation.
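Generically, the pattern being fixed is "check the already-received flag before allocating again", so a duplicate notification cannot leak the earlier allocation. A hedged sketch with hypothetical demo_* names (the actual one-line fix is in the diff below):

#include <linux/bits.h>
#include <linux/errno.h>
#include <linux/slab.h>

#define DEMO_NOTIF_RECEIVED     BIT(0)

struct demo_state {
        unsigned int notif_received;
        void *notif;
};

static int demo_handle_notif(struct demo_state *st, size_t len)
{
        /* A duplicate notification must not allocate (and leak) a new buffer. */
        if (st->notif_received & DEMO_NOTIF_RECEIVED)
                return 0;

        st->notif = kzalloc(len, GFP_ATOMIC);
        if (!st->notif)
                return -ENOMEM;

        st->notif_received |= DEMO_NOTIF_RECEIVED;
        return 0;
}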
Fixes: 4da46a06d443 ("wifi: iwlwifi: mvm: Add support for wowlan info notification") Signed-off-by: Johannes Berg johannes.berg@intel.com Signed-off-by: Gregory Greenman gregory.greenman@intel.com Link: https://lore.kernel.org/r/20230418122405.116758321cc4.I8bdbcbb38c89ac637eaa2... Signed-off-by: Johannes Berg johannes.berg@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/intel/iwlwifi/mvm/d3.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c index 29f75948ab00c..fe2de813fbf49 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c @@ -2715,6 +2715,7 @@ static bool iwl_mvm_wait_d3_notif(struct iwl_notif_wait_data *notif_wait, break; }
+ d3_data->notif_received |= IWL_D3_NOTIF_WOWLAN_INFO; len = iwl_rx_packet_payload_len(pkt); iwl_mvm_parse_wowlan_info_notif(mvm, notif, d3_data->status,
From: Shyam Prasad N sprasad@microsoft.com
[ Upstream commit 2f0e4f0342201fe2228fcc2301cc2b42ae04b8e3 ]
We had a couple of checks for the session in cifs_tree_connect and cifs_mark_open_files_invalid which were unnecessary, and those were done under ses_lock. Change that to use tc_lock too.
Signed-off-by: Shyam Prasad N sprasad@microsoft.com Reviewed-by: Paulo Alcantara (SUSE) pc@manguebit.com Signed-off-by: Steve French stfrench@microsoft.com Stable-dep-of: 6be2ea33a409 ("cifs: avoid potential races when handling multiple dfs tcons") Signed-off-by: Sasha Levin sashal@kernel.org --- fs/cifs/connect.c | 10 +++++++--- fs/cifs/dfs.c | 10 +++++++--- fs/cifs/dfs_cache.c | 2 +- fs/cifs/file.c | 8 ++++---- 4 files changed, 19 insertions(+), 11 deletions(-)
diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c index 87527512c2660..af491ae70678a 100644 --- a/fs/cifs/connect.c +++ b/fs/cifs/connect.c @@ -4119,9 +4119,13 @@ int cifs_tree_connect(const unsigned int xid, struct cifs_tcon *tcon, const stru
/* only send once per connect */ spin_lock(&tcon->tc_lock); - if (tcon->ses->ses_status != SES_GOOD || - (tcon->status != TID_NEW && - tcon->status != TID_NEED_TCON)) { + if (tcon->status != TID_NEW && + tcon->status != TID_NEED_TCON) { + spin_unlock(&tcon->tc_lock); + return -EHOSTDOWN; + } + + if (tcon->status == TID_GOOD) { spin_unlock(&tcon->tc_lock); return 0; } diff --git a/fs/cifs/dfs.c b/fs/cifs/dfs.c index 4c392bde24066..f02f8d3b92ee8 100644 --- a/fs/cifs/dfs.c +++ b/fs/cifs/dfs.c @@ -571,9 +571,13 @@ int cifs_tree_connect(const unsigned int xid, struct cifs_tcon *tcon, const stru
/* only send once per connect */ spin_lock(&tcon->tc_lock); - if (tcon->ses->ses_status != SES_GOOD || - (tcon->status != TID_NEW && - tcon->status != TID_NEED_TCON)) { + if (tcon->status != TID_NEW && + tcon->status != TID_NEED_TCON) { + spin_unlock(&tcon->tc_lock); + return -EHOSTDOWN; + } + + if (tcon->status == TID_GOOD) { spin_unlock(&tcon->tc_lock); return 0; } diff --git a/fs/cifs/dfs_cache.c b/fs/cifs/dfs_cache.c index 9ccaa0c7ac943..6557d7b2798a0 100644 --- a/fs/cifs/dfs_cache.c +++ b/fs/cifs/dfs_cache.c @@ -1191,7 +1191,7 @@ static int __refresh_tcon(const char *path, struct cifs_tcon *tcon, bool force_r }
spin_lock(&ipc->tc_lock); - if (ses->ses_status != SES_GOOD || ipc->status != TID_GOOD) { + if (ipc->status != TID_GOOD) { spin_unlock(&ipc->tc_lock); cifs_dbg(FYI, "%s: skip cache refresh due to disconnected ipc\n", __func__); goto out; diff --git a/fs/cifs/file.c b/fs/cifs/file.c index bef7c335ccc6e..d037366fcc5ee 100644 --- a/fs/cifs/file.c +++ b/fs/cifs/file.c @@ -48,13 +48,13 @@ cifs_mark_open_files_invalid(struct cifs_tcon *tcon) struct list_head *tmp1;
/* only send once per connect */ - spin_lock(&tcon->ses->ses_lock); - if ((tcon->ses->ses_status != SES_GOOD) || (tcon->status != TID_NEED_RECON)) { - spin_unlock(&tcon->ses->ses_lock); + spin_lock(&tcon->tc_lock); + if (tcon->status != TID_NEED_RECON) { + spin_unlock(&tcon->tc_lock); return; } tcon->status = TID_IN_FILES_INVALIDATE; - spin_unlock(&tcon->ses->ses_lock); + spin_unlock(&tcon->tc_lock);
/* list all files open on tree connection and mark them invalid */ spin_lock(&tcon->open_file_lock);
From: Paulo Alcantara pc@manguebit.com
[ Upstream commit 6be2ea33a4093402252724a00c4af8033725184c ]
Now that a DFS tcon manages its own list of DFS referrals and sessions, there is no point in having a single worker to refresh referrals of all DFS tcons. Make it faster and less prone to race conditions when having several mounts by queueing a worker per DFS tcon that will take care of refreshing only the DFS referrals related to it.
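The shape of the change is a classic per-object delayed work: each DFS tcon gets its own delayed_work, initialized at tcon creation, queued with the cache TTL as the period, and cancelled synchronously when the tcon is put. A minimal sketch of that pattern with hypothetical demo_* names (the real hooks are in connect.c and dfs.c below):

#include <linux/jiffies.h>
#include <linux/kernel.h>
#include <linux/workqueue.h>

struct demo_tcon {
        struct delayed_work refresh_work;
};

static void demo_refresh_fn(struct work_struct *work)
{
        struct demo_tcon *tcon = container_of(to_delayed_work(work),
                                              struct demo_tcon, refresh_work);

        /* ... refresh only this tcon's referrals, then re-arm ... */
        queue_delayed_work(system_wq, &tcon->refresh_work, 300 * HZ);
}

static void demo_tcon_init(struct demo_tcon *tcon)
{
        INIT_DELAYED_WORK(&tcon->refresh_work, demo_refresh_fn);
        queue_delayed_work(system_wq, &tcon->refresh_work, 300 * HZ);
}

static void demo_tcon_put(struct demo_tcon *tcon)
{
        /* Make sure no refresh can run after the tcon is gone. */
        cancel_delayed_work_sync(&tcon->refresh_work);
}

Tying the worker's lifetime to the tcon is what removes the shared refresh_task and its global TTL bookkeeping.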
Cc: stable@vger.kernel.org # v6.2+ Signed-off-by: Paulo Alcantara (SUSE) pc@manguebit.com Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/cifs/cifsglob.h | 2 +- fs/cifs/connect.c | 7 ++- fs/cifs/dfs.c | 4 ++ fs/cifs/dfs_cache.c | 137 +++++++++++++++++++------------------------- fs/cifs/dfs_cache.h | 9 +++ 5 files changed, 80 insertions(+), 79 deletions(-)
diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h index ea216e9d0f944..e6d12a6563887 100644 --- a/fs/cifs/cifsglob.h +++ b/fs/cifs/cifsglob.h @@ -1244,8 +1244,8 @@ struct cifs_tcon { struct cached_fids *cfids; /* BB add field for back pointer to sb struct(s)? */ #ifdef CONFIG_CIFS_DFS_UPCALL - struct list_head ulist; /* cache update list */ struct list_head dfs_ses_list; + struct delayed_work dfs_cache_work; #endif struct delayed_work query_interfaces; /* query interfaces workqueue job */ }; diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c index af491ae70678a..d71c2fb117c9e 100644 --- a/fs/cifs/connect.c +++ b/fs/cifs/connect.c @@ -2386,6 +2386,9 @@ cifs_put_tcon(struct cifs_tcon *tcon)
/* cancel polling of interfaces */ cancel_delayed_work_sync(&tcon->query_interfaces); +#ifdef CONFIG_CIFS_DFS_UPCALL + cancel_delayed_work_sync(&tcon->dfs_cache_work); +#endif
if (tcon->use_witness) { int rc; @@ -2633,7 +2636,9 @@ cifs_get_tcon(struct cifs_ses *ses, struct smb3_fs_context *ctx) queue_delayed_work(cifsiod_wq, &tcon->query_interfaces, (SMB_INTERFACE_POLL_INTERVAL * HZ)); } - +#ifdef CONFIG_CIFS_DFS_UPCALL + INIT_DELAYED_WORK(&tcon->dfs_cache_work, dfs_cache_refresh); +#endif spin_lock(&cifs_tcp_ses_lock); list_add(&tcon->tcon_list, &ses->tcon_list); spin_unlock(&cifs_tcp_ses_lock); diff --git a/fs/cifs/dfs.c b/fs/cifs/dfs.c index f02f8d3b92ee8..a93dbca1411b2 100644 --- a/fs/cifs/dfs.c +++ b/fs/cifs/dfs.c @@ -157,6 +157,8 @@ static int get_dfs_conn(struct cifs_mount_ctx *mnt_ctx, const char *ref_path, co rc = cifs_is_path_remote(mnt_ctx); }
+ dfs_cache_noreq_update_tgthint(ref_path + 1, tit); + if (rc == -EREMOTE && is_refsrv) { rc2 = add_root_smb_session(mnt_ctx); if (rc2) @@ -259,6 +261,8 @@ static int __dfs_mount_share(struct cifs_mount_ctx *mnt_ctx) if (list_empty(&tcon->dfs_ses_list)) { list_replace_init(&mnt_ctx->dfs_ses_list, &tcon->dfs_ses_list); + queue_delayed_work(dfscache_wq, &tcon->dfs_cache_work, + dfs_cache_get_ttl() * HZ); } else { dfs_put_root_smb_sessions(&mnt_ctx->dfs_ses_list); } diff --git a/fs/cifs/dfs_cache.c b/fs/cifs/dfs_cache.c index 6557d7b2798a0..1513b2709889b 100644 --- a/fs/cifs/dfs_cache.c +++ b/fs/cifs/dfs_cache.c @@ -20,12 +20,14 @@ #include "cifs_unicode.h" #include "smb2glob.h" #include "dns_resolve.h" +#include "dfs.h"
#include "dfs_cache.h"
-#define CACHE_HTABLE_SIZE 32 -#define CACHE_MAX_ENTRIES 64 -#define CACHE_MIN_TTL 120 /* 2 minutes */ +#define CACHE_HTABLE_SIZE 32 +#define CACHE_MAX_ENTRIES 64 +#define CACHE_MIN_TTL 120 /* 2 minutes */ +#define CACHE_DEFAULT_TTL 300 /* 5 minutes */
#define IS_DFS_INTERLINK(v) (((v) & DFSREF_REFERRAL_SERVER) && !((v) & DFSREF_STORAGE_SERVER))
@@ -50,10 +52,9 @@ struct cache_entry { };
static struct kmem_cache *cache_slab __read_mostly; -static struct workqueue_struct *dfscache_wq __read_mostly; +struct workqueue_struct *dfscache_wq;
-static int cache_ttl; -static DEFINE_SPINLOCK(cache_ttl_lock); +atomic_t dfs_cache_ttl;
static struct nls_table *cache_cp;
@@ -65,10 +66,6 @@ static atomic_t cache_count; static struct hlist_head cache_htable[CACHE_HTABLE_SIZE]; static DECLARE_RWSEM(htable_rw_lock);
-static void refresh_cache_worker(struct work_struct *work); - -static DECLARE_DELAYED_WORK(refresh_task, refresh_cache_worker); - /** * dfs_cache_canonical_path - get a canonical DFS path * @@ -290,7 +287,9 @@ int dfs_cache_init(void) int rc; int i;
- dfscache_wq = alloc_workqueue("cifs-dfscache", WQ_FREEZABLE | WQ_UNBOUND, 1); + dfscache_wq = alloc_workqueue("cifs-dfscache", + WQ_UNBOUND|WQ_FREEZABLE|WQ_MEM_RECLAIM, + 0); if (!dfscache_wq) return -ENOMEM;
@@ -306,6 +305,7 @@ int dfs_cache_init(void) INIT_HLIST_HEAD(&cache_htable[i]);
atomic_set(&cache_count, 0); + atomic_set(&dfs_cache_ttl, CACHE_DEFAULT_TTL); cache_cp = load_nls("utf8"); if (!cache_cp) cache_cp = load_nls_default(); @@ -480,6 +480,7 @@ static struct cache_entry *add_cache_entry_locked(struct dfs_info3_param *refs, int rc; struct cache_entry *ce; unsigned int hash; + int ttl;
WARN_ON(!rwsem_is_locked(&htable_rw_lock));
@@ -496,15 +497,8 @@ static struct cache_entry *add_cache_entry_locked(struct dfs_info3_param *refs, if (IS_ERR(ce)) return ce;
- spin_lock(&cache_ttl_lock); - if (!cache_ttl) { - cache_ttl = ce->ttl; - queue_delayed_work(dfscache_wq, &refresh_task, cache_ttl * HZ); - } else { - cache_ttl = min_t(int, cache_ttl, ce->ttl); - mod_delayed_work(dfscache_wq, &refresh_task, cache_ttl * HZ); - } - spin_unlock(&cache_ttl_lock); + ttl = min_t(int, atomic_read(&dfs_cache_ttl), ce->ttl); + atomic_set(&dfs_cache_ttl, ttl);
hlist_add_head(&ce->hlist, &cache_htable[hash]); dump_ce(ce); @@ -616,7 +610,6 @@ static struct cache_entry *lookup_cache_entry(const char *path) */ void dfs_cache_destroy(void) { - cancel_delayed_work_sync(&refresh_task); unload_nls(cache_cp); flush_cache_ents(); kmem_cache_destroy(cache_slab); @@ -1142,6 +1135,7 @@ static bool target_share_equal(struct TCP_Server_Info *server, const char *s1, c * target shares in @refs. */ static void mark_for_reconnect_if_needed(struct TCP_Server_Info *server, + const char *path, struct dfs_cache_tgt_list *old_tl, struct dfs_cache_tgt_list *new_tl) { @@ -1153,8 +1147,10 @@ static void mark_for_reconnect_if_needed(struct TCP_Server_Info *server, nit = dfs_cache_get_next_tgt(new_tl, nit)) { if (target_share_equal(server, dfs_cache_get_tgt_name(oit), - dfs_cache_get_tgt_name(nit))) + dfs_cache_get_tgt_name(nit))) { + dfs_cache_noreq_update_tgthint(path, nit); return; + } } }
@@ -1162,13 +1158,28 @@ static void mark_for_reconnect_if_needed(struct TCP_Server_Info *server, cifs_signal_cifsd_for_reconnect(server, true); }
+static bool is_ses_good(struct cifs_ses *ses) +{ + struct TCP_Server_Info *server = ses->server; + struct cifs_tcon *tcon = ses->tcon_ipc; + bool ret; + + spin_lock(&ses->ses_lock); + spin_lock(&ses->chan_lock); + ret = !cifs_chan_needs_reconnect(ses, server) && + ses->ses_status == SES_GOOD && + !tcon->need_reconnect; + spin_unlock(&ses->chan_lock); + spin_unlock(&ses->ses_lock); + return ret; +} + /* Refresh dfs referral of tcon and mark it for reconnect if needed */ -static int __refresh_tcon(const char *path, struct cifs_tcon *tcon, bool force_refresh) +static int __refresh_tcon(const char *path, struct cifs_ses *ses, bool force_refresh) { struct dfs_cache_tgt_list old_tl = DFS_CACHE_TGT_LIST_INIT(old_tl); struct dfs_cache_tgt_list new_tl = DFS_CACHE_TGT_LIST_INIT(new_tl); - struct cifs_ses *ses = CIFS_DFS_ROOT_SES(tcon->ses); - struct cifs_tcon *ipc = ses->tcon_ipc; + struct TCP_Server_Info *server = ses->server; bool needs_refresh = false; struct cache_entry *ce; unsigned int xid; @@ -1190,20 +1201,19 @@ static int __refresh_tcon(const char *path, struct cifs_tcon *tcon, bool force_r goto out; }
- spin_lock(&ipc->tc_lock); - if (ipc->status != TID_GOOD) { - spin_unlock(&ipc->tc_lock); - cifs_dbg(FYI, "%s: skip cache refresh due to disconnected ipc\n", __func__); + ses = CIFS_DFS_ROOT_SES(ses); + if (!is_ses_good(ses)) { + cifs_dbg(FYI, "%s: skip cache refresh due to disconnected ipc\n", + __func__); goto out; } - spin_unlock(&ipc->tc_lock);
ce = cache_refresh_path(xid, ses, path, true); if (!IS_ERR(ce)) { rc = get_targets(ce, &new_tl); up_read(&htable_rw_lock); cifs_dbg(FYI, "%s: get_targets: %d\n", __func__, rc); - mark_for_reconnect_if_needed(tcon->ses->server, &old_tl, &new_tl); + mark_for_reconnect_if_needed(server, path, &old_tl, &new_tl); }
out: @@ -1216,10 +1226,11 @@ static int __refresh_tcon(const char *path, struct cifs_tcon *tcon, bool force_r static int refresh_tcon(struct cifs_tcon *tcon, bool force_refresh) { struct TCP_Server_Info *server = tcon->ses->server; + struct cifs_ses *ses = tcon->ses;
mutex_lock(&server->refpath_lock); if (server->leaf_fullpath) - __refresh_tcon(server->leaf_fullpath + 1, tcon, force_refresh); + __refresh_tcon(server->leaf_fullpath + 1, ses, force_refresh); mutex_unlock(&server->refpath_lock); return 0; } @@ -1263,60 +1274,32 @@ int dfs_cache_remount_fs(struct cifs_sb_info *cifs_sb) return refresh_tcon(tcon, true); }
-/* - * Worker that will refresh DFS cache from all active mounts based on lowest TTL value - * from a DFS referral. - */ -static void refresh_cache_worker(struct work_struct *work) +/* Refresh all DFS referrals related to DFS tcon */ +void dfs_cache_refresh(struct work_struct *work) { struct TCP_Server_Info *server; - struct cifs_tcon *tcon, *ntcon; - struct list_head tcons; + struct dfs_root_ses *rses; + struct cifs_tcon *tcon; struct cifs_ses *ses;
- INIT_LIST_HEAD(&tcons); + tcon = container_of(work, struct cifs_tcon, dfs_cache_work.work); + ses = tcon->ses; + server = ses->server;
- spin_lock(&cifs_tcp_ses_lock); - list_for_each_entry(server, &cifs_tcp_ses_list, tcp_ses_list) { - spin_lock(&server->srv_lock); - if (!server->leaf_fullpath) { - spin_unlock(&server->srv_lock); - continue; - } - spin_unlock(&server->srv_lock); - - list_for_each_entry(ses, &server->smb_ses_list, smb_ses_list) { - if (ses->tcon_ipc) { - ses->ses_count++; - list_add_tail(&ses->tcon_ipc->ulist, &tcons); - } - list_for_each_entry(tcon, &ses->tcon_list, tcon_list) { - if (!tcon->ipc) { - tcon->tc_count++; - list_add_tail(&tcon->ulist, &tcons); - } - } - } - } - spin_unlock(&cifs_tcp_ses_lock); - - list_for_each_entry_safe(tcon, ntcon, &tcons, ulist) { - struct TCP_Server_Info *server = tcon->ses->server; - - list_del_init(&tcon->ulist); + mutex_lock(&server->refpath_lock); + if (server->leaf_fullpath) + __refresh_tcon(server->leaf_fullpath + 1, ses, false); + mutex_unlock(&server->refpath_lock);
+ list_for_each_entry(rses, &tcon->dfs_ses_list, list) { + ses = rses->ses; + server = ses->server; mutex_lock(&server->refpath_lock); if (server->leaf_fullpath) - __refresh_tcon(server->leaf_fullpath + 1, tcon, false); + __refresh_tcon(server->leaf_fullpath + 1, ses, false); mutex_unlock(&server->refpath_lock); - - if (tcon->ipc) - cifs_put_smb_ses(tcon->ses); - else - cifs_put_tcon(tcon); }
- spin_lock(&cache_ttl_lock); - queue_delayed_work(dfscache_wq, &refresh_task, cache_ttl * HZ); - spin_unlock(&cache_ttl_lock); + queue_delayed_work(dfscache_wq, &tcon->dfs_cache_work, + atomic_read(&dfs_cache_ttl) * HZ); } diff --git a/fs/cifs/dfs_cache.h b/fs/cifs/dfs_cache.h index e0d39393035a9..c6d89cd6d4fd7 100644 --- a/fs/cifs/dfs_cache.h +++ b/fs/cifs/dfs_cache.h @@ -13,6 +13,9 @@ #include <linux/uuid.h> #include "cifsglob.h"
+extern struct workqueue_struct *dfscache_wq; +extern atomic_t dfs_cache_ttl; + #define DFS_CACHE_TGT_LIST_INIT(var) { .tl_numtgts = 0, .tl_list = LIST_HEAD_INIT((var).tl_list), }
struct dfs_cache_tgt_list { @@ -42,6 +45,7 @@ int dfs_cache_get_tgt_share(char *path, const struct dfs_cache_tgt_iterator *it, char **prefix); char *dfs_cache_canonical_path(const char *path, const struct nls_table *cp, int remap); int dfs_cache_remount_fs(struct cifs_sb_info *cifs_sb); +void dfs_cache_refresh(struct work_struct *work);
static inline struct dfs_cache_tgt_iterator * dfs_cache_get_next_tgt(struct dfs_cache_tgt_list *tl, @@ -89,4 +93,9 @@ dfs_cache_get_nr_tgts(const struct dfs_cache_tgt_list *tl) return tl ? tl->tl_numtgts : 0; }
+static inline int dfs_cache_get_ttl(void) +{ + return atomic_read(&dfs_cache_ttl); +} + #endif /* _CIFS_DFS_CACHE_H */
From: Pablo Neira Ayuso pablo@netfilter.org
[ Upstream commit c3c060adc0249355411a93e61888051e6902b8a1 ]
Flowtable and netdev chains are bound to one or several netdevices. Extend netlink error reporting to specify the netdevice that triggers the error.
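For illustration, a small userspace mock of the idea (hypothetical types and names, not the netfilter API): the parser records which attribute failed so the caller can point at the exact netdevice.

  #include <stdio.h>

  struct nlattr { int type; const char *dev; };
  struct ext_ack { const struct nlattr *bad_attr; };

  static int parse_netdev_hooks(const struct nlattr *attrs, int n,
                                struct ext_ack *extack)
  {
          for (int i = 0; i < n; i++) {
                  if (!attrs[i].dev) {    /* pretend a missing name is the error */
                          extack->bad_attr = &attrs[i];
                          return -1;
                  }
          }
          return 0;
  }

  int main(void)
  {
          struct nlattr attrs[] = { { 1, "eth0" }, { 1, NULL }, { 1, "eth1" } };
          struct ext_ack extack = { 0 };

          if (parse_netdev_hooks(attrs, 3, &extack))
                  printf("hook attribute %ld is bad\n",
                         (long)(extack.bad_attr - attrs));
          return 0;
  }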
Signed-off-by: Pablo Neira Ayuso pablo@netfilter.org Stable-dep-of: 8509f62b0b07 ("netfilter: nf_tables: hit ENOENT on unexisting chain/flowtable update with missing attributes") Signed-off-by: Sasha Levin sashal@kernel.org --- net/netfilter/nf_tables_api.c | 38 ++++++++++++++++++++++------------- 1 file changed, 24 insertions(+), 14 deletions(-)
diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c index 96bc4b8ded423..4b0a84a39b19e 100644 --- a/net/netfilter/nf_tables_api.c +++ b/net/netfilter/nf_tables_api.c @@ -1951,7 +1951,8 @@ static struct nft_hook *nft_hook_list_find(struct list_head *hook_list,
static int nf_tables_parse_netdev_hooks(struct net *net, const struct nlattr *attr, - struct list_head *hook_list) + struct list_head *hook_list, + struct netlink_ext_ack *extack) { struct nft_hook *hook, *next; const struct nlattr *tmp; @@ -1965,10 +1966,12 @@ static int nf_tables_parse_netdev_hooks(struct net *net,
hook = nft_netdev_hook_alloc(net, tmp); if (IS_ERR(hook)) { + NL_SET_BAD_ATTR(extack, tmp); err = PTR_ERR(hook); goto err_hook; } if (nft_hook_list_find(hook_list, hook)) { + NL_SET_BAD_ATTR(extack, tmp); kfree(hook); err = -EEXIST; goto err_hook; @@ -2001,20 +2004,23 @@ struct nft_chain_hook {
static int nft_chain_parse_netdev(struct net *net, struct nlattr *tb[], - struct list_head *hook_list) + struct list_head *hook_list, + struct netlink_ext_ack *extack) { struct nft_hook *hook; int err;
if (tb[NFTA_HOOK_DEV]) { hook = nft_netdev_hook_alloc(net, tb[NFTA_HOOK_DEV]); - if (IS_ERR(hook)) + if (IS_ERR(hook)) { + NL_SET_BAD_ATTR(extack, tb[NFTA_HOOK_DEV]); return PTR_ERR(hook); + }
list_add_tail(&hook->list, hook_list); } else if (tb[NFTA_HOOK_DEVS]) { err = nf_tables_parse_netdev_hooks(net, tb[NFTA_HOOK_DEVS], - hook_list); + hook_list, extack); if (err < 0) return err;
@@ -2082,7 +2088,7 @@ static int nft_chain_parse_hook(struct net *net,
INIT_LIST_HEAD(&hook->list); if (nft_base_chain_netdev(family, hook->num)) { - err = nft_chain_parse_netdev(net, ha, &hook->list); + err = nft_chain_parse_netdev(net, ha, &hook->list, extack); if (err < 0) { module_put(type->owner); return err; @@ -7552,7 +7558,8 @@ static const struct nla_policy nft_flowtable_hook_policy[NFTA_FLOWTABLE_HOOK_MAX static int nft_flowtable_parse_hook(const struct nft_ctx *ctx, const struct nlattr *attr, struct nft_flowtable_hook *flowtable_hook, - struct nft_flowtable *flowtable, bool add) + struct nft_flowtable *flowtable, + struct netlink_ext_ack *extack, bool add) { struct nlattr *tb[NFTA_FLOWTABLE_HOOK_MAX + 1]; struct nft_hook *hook; @@ -7599,7 +7606,8 @@ static int nft_flowtable_parse_hook(const struct nft_ctx *ctx, if (tb[NFTA_FLOWTABLE_HOOK_DEVS]) { err = nf_tables_parse_netdev_hooks(ctx->net, tb[NFTA_FLOWTABLE_HOOK_DEVS], - &flowtable_hook->list); + &flowtable_hook->list, + extack); if (err < 0) return err; } @@ -7742,7 +7750,8 @@ static void nft_flowtable_hooks_destroy(struct list_head *hook_list) }
static int nft_flowtable_update(struct nft_ctx *ctx, const struct nlmsghdr *nlh, - struct nft_flowtable *flowtable) + struct nft_flowtable *flowtable, + struct netlink_ext_ack *extack) { const struct nlattr * const *nla = ctx->nla; struct nft_flowtable_hook flowtable_hook; @@ -7753,7 +7762,7 @@ static int nft_flowtable_update(struct nft_ctx *ctx, const struct nlmsghdr *nlh, int err;
err = nft_flowtable_parse_hook(ctx, nla[NFTA_FLOWTABLE_HOOK], - &flowtable_hook, flowtable, false); + &flowtable_hook, flowtable, extack, false); if (err < 0) return err;
@@ -7858,7 +7867,7 @@ static int nf_tables_newflowtable(struct sk_buff *skb,
nft_ctx_init(&ctx, net, skb, info->nlh, family, table, NULL, nla);
- return nft_flowtable_update(&ctx, info->nlh, flowtable); + return nft_flowtable_update(&ctx, info->nlh, flowtable, extack); }
nft_ctx_init(&ctx, net, skb, info->nlh, family, table, NULL, nla); @@ -7899,7 +7908,7 @@ static int nf_tables_newflowtable(struct sk_buff *skb, goto err3;
err = nft_flowtable_parse_hook(&ctx, nla[NFTA_FLOWTABLE_HOOK], - &flowtable_hook, flowtable, true); + &flowtable_hook, flowtable, extack, true); if (err < 0) goto err4;
@@ -7951,7 +7960,8 @@ static void nft_flowtable_hook_release(struct nft_flowtable_hook *flowtable_hook }
static int nft_delflowtable_hook(struct nft_ctx *ctx, - struct nft_flowtable *flowtable) + struct nft_flowtable *flowtable, + struct netlink_ext_ack *extack) { const struct nlattr * const *nla = ctx->nla; struct nft_flowtable_hook flowtable_hook; @@ -7961,7 +7971,7 @@ static int nft_delflowtable_hook(struct nft_ctx *ctx, int err;
err = nft_flowtable_parse_hook(ctx, nla[NFTA_FLOWTABLE_HOOK], - &flowtable_hook, flowtable, false); + &flowtable_hook, flowtable, extack, false); if (err < 0) return err;
@@ -8039,7 +8049,7 @@ static int nf_tables_delflowtable(struct sk_buff *skb, nft_ctx_init(&ctx, net, skb, info->nlh, family, table, NULL, nla);
if (nla[NFTA_FLOWTABLE_HOOK]) - return nft_delflowtable_hook(&ctx, flowtable); + return nft_delflowtable_hook(&ctx, flowtable, extack);
if (flowtable->use > 0) { NL_SET_BAD_ATTR(extack, attr);
From: Pablo Neira Ayuso pablo@netfilter.org
[ Upstream commit cdc32546632354305afdcf399a5431138a31c9e0 ]
Rename nft_flowtable_hooks_destroy() by nft_hooks_destroy() to prepare for netdev chain device updates.
Signed-off-by: Pablo Neira Ayuso pablo@netfilter.org Stable-dep-of: 8509f62b0b07 ("netfilter: nf_tables: hit ENOENT on unexisting chain/flowtable update with missing attributes") Signed-off-by: Sasha Levin sashal@kernel.org --- net/netfilter/nf_tables_api.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c index 4b0a84a39b19e..4ffafef46d2e2 100644 --- a/net/netfilter/nf_tables_api.c +++ b/net/netfilter/nf_tables_api.c @@ -7739,7 +7739,7 @@ static int nft_register_flowtable_net_hooks(struct net *net, return err; }
-static void nft_flowtable_hooks_destroy(struct list_head *hook_list) +static void nft_hooks_destroy(struct list_head *hook_list) { struct nft_hook *hook, *next;
@@ -7920,7 +7920,7 @@ static int nf_tables_newflowtable(struct sk_buff *skb, &flowtable->hook_list, flowtable); if (err < 0) { - nft_flowtable_hooks_destroy(&flowtable->hook_list); + nft_hooks_destroy(&flowtable->hook_list); goto err4; }
@@ -8695,7 +8695,7 @@ static void nft_commit_release(struct nft_trans *trans) break; case NFT_MSG_DELFLOWTABLE: if (nft_trans_flowtable_update(trans)) - nft_flowtable_hooks_destroy(&nft_trans_flowtable_hooks(trans)); + nft_hooks_destroy(&nft_trans_flowtable_hooks(trans)); else nf_tables_flowtable_destroy(nft_trans_flowtable(trans)); break; @@ -9341,7 +9341,7 @@ static void nf_tables_abort_release(struct nft_trans *trans) break; case NFT_MSG_NEWFLOWTABLE: if (nft_trans_flowtable_update(trans)) - nft_flowtable_hooks_destroy(&nft_trans_flowtable_hooks(trans)); + nft_hooks_destroy(&nft_trans_flowtable_hooks(trans)); else nf_tables_flowtable_destroy(nft_trans_flowtable(trans)); break;
From: Pablo Neira Ayuso pablo@netfilter.org
[ Upstream commit 8509f62b0b07ae8d6dec5aa9613ab1b250ff632f ]
If the user does not specify hook number and priority, then assume this is a chain/flowtable update. Therefore, report ENOENT, which provides a better hint than EINVAL. Set the extended netlink error report to refer to the chain name.
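A hedged distillation of the new rule, with hypothetical helper names (not the nf_tables code itself): a request that omits the hook number and priority is treated as an update of an existing object, so the miss is reported as ENOENT rather than the old catch-all EINVAL.

  #include <errno.h>
  #include <stdbool.h>
  #include <stdio.h>

  static int classify_request(bool have_hooknum, bool have_priority)
  {
          if (!have_hooknum || !have_priority)
                  return -ENOENT;         /* update path: object must already exist */
          return 0;                       /* full hook spec: creating a new object */
  }

  int main(void)
  {
          printf("missing hook attrs -> %d (-ENOENT)\n", classify_request(false, false));
          printf("full hook spec     -> %d\n", classify_request(true, true));
          return 0;
  }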
Fixes: 5b6743fb2c2a ("netfilter: nf_tables: skip flowtable hooknum and priority on device updates") Fixes: 5efe72698a97 ("netfilter: nf_tables: support for adding new devices to an existing netdev chain") Signed-off-by: Pablo Neira Ayuso pablo@netfilter.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/netfilter/nf_tables_api.c | 29 +++++++++++++++++------------ 1 file changed, 17 insertions(+), 12 deletions(-)
diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c index 4ffafef46d2e2..d64478af0129f 100644 --- a/net/netfilter/nf_tables_api.c +++ b/net/netfilter/nf_tables_api.c @@ -2053,8 +2053,10 @@ static int nft_chain_parse_hook(struct net *net, return err;
if (ha[NFTA_HOOK_HOOKNUM] == NULL || - ha[NFTA_HOOK_PRIORITY] == NULL) - return -EINVAL; + ha[NFTA_HOOK_PRIORITY] == NULL) { + NL_SET_BAD_ATTR(extack, nla[NFTA_CHAIN_NAME]); + return -ENOENT; + }
hook->num = ntohl(nla_get_be32(ha[NFTA_HOOK_HOOKNUM])); hook->priority = ntohl(nla_get_be32(ha[NFTA_HOOK_PRIORITY])); @@ -7556,7 +7558,7 @@ static const struct nla_policy nft_flowtable_hook_policy[NFTA_FLOWTABLE_HOOK_MAX };
static int nft_flowtable_parse_hook(const struct nft_ctx *ctx, - const struct nlattr *attr, + const struct nlattr * const nla[], struct nft_flowtable_hook *flowtable_hook, struct nft_flowtable *flowtable, struct netlink_ext_ack *extack, bool add) @@ -7568,15 +7570,18 @@ static int nft_flowtable_parse_hook(const struct nft_ctx *ctx,
INIT_LIST_HEAD(&flowtable_hook->list);
- err = nla_parse_nested_deprecated(tb, NFTA_FLOWTABLE_HOOK_MAX, attr, + err = nla_parse_nested_deprecated(tb, NFTA_FLOWTABLE_HOOK_MAX, + nla[NFTA_FLOWTABLE_HOOK], nft_flowtable_hook_policy, NULL); if (err < 0) return err;
if (add) { if (!tb[NFTA_FLOWTABLE_HOOK_NUM] || - !tb[NFTA_FLOWTABLE_HOOK_PRIORITY]) - return -EINVAL; + !tb[NFTA_FLOWTABLE_HOOK_PRIORITY]) { + NL_SET_BAD_ATTR(extack, nla[NFTA_FLOWTABLE_NAME]); + return -ENOENT; + }
hooknum = ntohl(nla_get_be32(tb[NFTA_FLOWTABLE_HOOK_NUM])); if (hooknum != NF_NETDEV_INGRESS) @@ -7761,8 +7766,8 @@ static int nft_flowtable_update(struct nft_ctx *ctx, const struct nlmsghdr *nlh, u32 flags; int err;
- err = nft_flowtable_parse_hook(ctx, nla[NFTA_FLOWTABLE_HOOK], - &flowtable_hook, flowtable, extack, false); + err = nft_flowtable_parse_hook(ctx, nla, &flowtable_hook, flowtable, + extack, false); if (err < 0) return err;
@@ -7907,8 +7912,8 @@ static int nf_tables_newflowtable(struct sk_buff *skb, if (err < 0) goto err3;
- err = nft_flowtable_parse_hook(&ctx, nla[NFTA_FLOWTABLE_HOOK], - &flowtable_hook, flowtable, extack, true); + err = nft_flowtable_parse_hook(&ctx, nla, &flowtable_hook, flowtable, + extack, true); if (err < 0) goto err4;
@@ -7970,8 +7975,8 @@ static int nft_delflowtable_hook(struct nft_ctx *ctx, struct nft_trans *trans; int err;
- err = nft_flowtable_parse_hook(ctx, nla[NFTA_FLOWTABLE_HOOK], - &flowtable_hook, flowtable, extack, false); + err = nft_flowtable_parse_hook(ctx, nla, &flowtable_hook, flowtable, + extack, false); if (err < 0) return err;
From: Borislav Petkov (AMD) bp@alien8.de
commit 9a48d604672220545d209e9996c2a1edbb5637f6 upstream.
SYM_FUNC_START_LOCAL_NOALIGN() adds an endbr leading to this layout (leaving only the last 2 bytes of the address):
3bff <zen_untrain_ret>:
3bff:       f3 0f 1e fa             endbr64
3c03:       f6                      test   $0xcc,%bl

3c04 <__x86_return_thunk>:
3c04:       c3                      ret
3c05:       cc                      int3
3c06:       0f ae e8                lfence
However, "the RET at __x86_return_thunk must be on a 64 byte boundary, for alignment within the BTB."
Use SYM_START instead.
Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Reviewed-by: Thomas Gleixner tglx@linutronix.de Cc: stable@kernel.org Signed-off-by: Linus Torvalds torvalds@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/lib/retpoline.S | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/arch/x86/lib/retpoline.S +++ b/arch/x86/lib/retpoline.S @@ -144,8 +144,8 @@ SYM_CODE_END(__x86_indirect_jump_thunk_a */ .align 64 .skip 63, 0xcc -SYM_FUNC_START_NOALIGN(zen_untrain_ret); - +SYM_START(zen_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE) + ANNOTATE_NOENDBR /* * As executed from zen_untrain_ret, this is: *
From: Filipe Manana fdmanana@suse.com
commit 6f932d4ef007d6a4ae03badcb749fbb8f49196f6 upstream.
A call to btrfs_prev_leaf() may end up returning a path that points to the same item (key) again. This happens if, while in btrfs_prev_leaf() and after we release the path, a concurrent insertion moves items from a sibling leaf into the front of the previous leaf, and an item with the computed previous key does not exist.
For example, suppose we have the two following leaves:
Leaf A
  -------------------------------------------------------------
  | ...   key (300 96 10)   key (300 96 15)   key (300 96 16) |
  -------------------------------------------------------------
          slot 20           slot 21           slot 22
Leaf B
  -------------------------------------------------------------
  | key (300 96 20)   key (300 96 21)   key (300 96 22) ...   |
  -------------------------------------------------------------
    slot 0             slot 1             slot 2
If we call btrfs_prev_leaf(), from btrfs_previous_item() for example, with a path pointing to leaf B and slot 0 and the following happens:
1) At btrfs_prev_leaf() we compute the previous key to search as: (300 96 19), which is a key that does not exist in the tree;
2) Then we call btrfs_release_path() at btrfs_prev_leaf();
3) Some other task inserts a key at leaf A, that sorts before the key at slot 20, for example it has an objectid of 299. In order to make room for the new key, the key at slot 22 is moved to the front of leaf B. This happens at push_leaf_right(), called from split_leaf().
After this leaf B now looks like:
  --------------------------------------------------------------------------------
  | key (300 96 16)   key (300 96 20)   key (300 96 21)   key (300 96 22) ...    |
  --------------------------------------------------------------------------------
    slot 0             slot 1             slot 2             slot 3
4) At btrfs_prev_leaf() we call btrfs_search_slot() for the computed previous key: (300 96 19). Since the key does not exist, btrfs_search_slot() returns 1, with a path pointing to leaf B and slot 1, the item with key (300 96 20);
5) This makes btrfs_prev_leaf() return a path that points to slot 1 of leaf B, the same key as before it was called, since the key at slot 0 of leaf B (300 96 16) is less than the computed previous key, which is (300 96 19);
6) As a consequence btrfs_previous_item() returns a path that points again to the item with key (300 96 20).
For some users of btrfs_prev_leaf() or btrfs_previous_item() this may not be a functional problem, even though it makes no sense to return a new path pointing again to the same item/key. However, for a caller such as tree-log.c:log_dir_items(), this has a bad consequence, as it can result in not logging some dir index deletions in case the directory is being logged without holding the inode's VFS lock (logging triggered while logging a child inode for example) - for the example scenario above, in case the dir index keys 17, 18 and 19 were deleted in the current transaction.
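For illustration only, a standalone sketch of the added guard, using simplified stand-ins for btrfs keys and comp_keys():

  #include <stdio.h>

  struct key { unsigned long long objectid; unsigned char type; unsigned long long offset; };

  static int comp_keys_sketch(const struct key *a, const struct key *b)
  {
          if (a->objectid != b->objectid)
                  return a->objectid < b->objectid ? -1 : 1;
          if (a->type != b->type)
                  return a->type < b->type ? -1 : 1;
          if (a->offset != b->offset)
                  return a->offset < b->offset ? -1 : 1;
          return 0;
  }

  int main(void)
  {
          /* Leaf B after the concurrent insertion moved (300 96 16) in front. */
          struct key leaf[] = {
                  { 300, 96, 16 }, { 300, 96, 20 }, { 300, 96, 21 }, { 300, 96, 22 },
          };
          struct key orig_key = { 300, 96, 20 };  /* key we started from */
          int slot = 1;                           /* where the re-search landed */

          /* Same key as before the release/re-search: step back one slot so
           * the caller does not see (300 96 20) again. */
          if (comp_keys_sketch(&leaf[slot], &orig_key) == 0 && slot > 0)
                  slot--;
          printf("previous item is (%llu %u %llu)\n",
                 leaf[slot].objectid, (unsigned)leaf[slot].type, leaf[slot].offset);
          return 0;
  }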
CC: stable@vger.kernel.org # 4.14+ Reviewed-by: Josef Bacik josef@toxicpanda.com Signed-off-by: Filipe Manana fdmanana@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/ctree.c | 32 +++++++++++++++++++++++++++++++- 1 file changed, 31 insertions(+), 1 deletion(-)
--- a/fs/btrfs/ctree.c +++ b/fs/btrfs/ctree.c @@ -4493,10 +4493,12 @@ int btrfs_del_items(struct btrfs_trans_h int btrfs_prev_leaf(struct btrfs_root *root, struct btrfs_path *path) { struct btrfs_key key; + struct btrfs_key orig_key; struct btrfs_disk_key found_key; int ret;
btrfs_item_key_to_cpu(path->nodes[0], &key, 0); + orig_key = key;
if (key.offset > 0) { key.offset--; @@ -4513,8 +4515,36 @@ int btrfs_prev_leaf(struct btrfs_root *r
btrfs_release_path(path); ret = btrfs_search_slot(NULL, root, &key, path, 0, 0); - if (ret < 0) + if (ret <= 0) return ret; + + /* + * Previous key not found. Even if we were at slot 0 of the leaf we had + * before releasing the path and calling btrfs_search_slot(), we now may + * be in a slot pointing to the same original key - this can happen if + * after we released the path, one of more items were moved from a + * sibling leaf into the front of the leaf we had due to an insertion + * (see push_leaf_right()). + * If we hit this case and our slot is > 0 and just decrement the slot + * so that the caller does not process the same key again, which may or + * may not break the caller, depending on its logic. + */ + if (path->slots[0] < btrfs_header_nritems(path->nodes[0])) { + btrfs_item_key(path->nodes[0], &found_key, path->slots[0]); + ret = comp_keys(&found_key, &orig_key); + if (ret == 0) { + if (path->slots[0] > 0) { + path->slots[0]--; + return 0; + } + /* + * At slot 0, same key as before, it means orig_key is + * the lowest, leftmost, key in the tree. We're done. + */ + return 1; + } + } + btrfs_item_key(path->nodes[0], &found_key, 0); ret = comp_keys(&found_key, &key); /*
From: Naohiro Aota naohiro.aota@wdc.com
commit 631003e2333c12cc1b52df06a707365b7363a159 upstream.
find_next_bit and find_next_zero_bit take @size as the second parameter and @offset as the third parameter. They were specified in the opposite order in btrfs_ensure_empty_zones(). Thanks to the later loop, it never failed to detect the empty zones. Fix the argument order and (maybe) return the result a bit faster.
Note: the naming is a bit confusing; size has two meanings here, the bitmap size and our range size.
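To make the argument order concrete, here is a standalone toy reimplementation (an illustration under stated assumptions, not the kernel's find_next_bit) showing why the swapped call can never equal @end:

  #include <stdio.h>

  /* Toy version with the kernel argument order:
   * find_next_bit(addr, size, offset) scans bits [offset, size). */
  static unsigned long find_next_bit_sketch(const unsigned long *addr,
                                            unsigned long size, unsigned long offset)
  {
          for (unsigned long i = offset; i < size; i++)
                  if (addr[i / (8 * sizeof(long))] & (1UL << (i % (8 * sizeof(long)))))
                          return i;
          return size;    /* "not found" is reported as @size */
  }

  int main(void)
  {
          unsigned long zones = 0;        /* all zones conventional: no bits set */
          unsigned long begin = 2, end = 10;

          /* Correct order: scan [begin, end) of the bitmap, bounded by end. */
          printf("correct: %lu (== end means none found)\n",
                 find_next_bit_sketch(&zones, end, begin));
          /* Swapped order: the scan is bounded by begin instead of end, so
           * the "== end" early return can never trigger even when it should. */
          printf("swapped: %lu (never equals end)\n",
                 find_next_bit_sketch(&zones, begin, end));
          return 0;
  }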
Fixes: 1cd6121f2a38 ("btrfs: zoned: implement zoned chunk allocator") CC: stable@vger.kernel.org # 5.15+ Signed-off-by: Naohiro Aota naohiro.aota@wdc.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/zoned.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
--- a/fs/btrfs/zoned.c +++ b/fs/btrfs/zoned.c @@ -1164,12 +1164,12 @@ int btrfs_ensure_empty_zones(struct btrf return -ERANGE;
/* All the zones are conventional */ - if (find_next_bit(zinfo->seq_zones, begin, end) == end) + if (find_next_bit(zinfo->seq_zones, end, begin) == end) return 0;
/* All the zones are sequential and empty */ - if (find_next_zero_bit(zinfo->seq_zones, begin, end) == end && - find_next_zero_bit(zinfo->empty_zones, begin, end) == end) + if (find_next_zero_bit(zinfo->seq_zones, end, begin) == end && + find_next_zero_bit(zinfo->empty_zones, end, begin) == end) return 0;
for (pos = start; pos < start + size; pos += zinfo->zone_size) {
From: Qu Wenruo wqu@suse.com
commit 64b5d5b2852661284ccbb038c697562cc56231bf upstream.
[BUG] With block-group-tree feature enabled, mounting it with clear_cache would cause the following transaction abort at mount or remount:
  BTRFS info (device dm-4): force clearing of disk cache
  BTRFS info (device dm-4): using free space tree
  BTRFS info (device dm-4): auto enabling async discard
  BTRFS info (device dm-4): clearing free space tree
  BTRFS info (device dm-4): clearing compat-ro feature flag for FREE_SPACE_TREE (0x1)
  BTRFS info (device dm-4): clearing compat-ro feature flag for FREE_SPACE_TREE_VALID (0x2)
  BTRFS error (device dm-4): block-group-tree feature requires fres-space-tree and no-holes
  BTRFS error (device dm-4): super block corruption detected before writing it to disk
  BTRFS: error (device dm-4) in write_all_supers:4288: errno=-117 Filesystem corrupted (unexpected superblock corruption detected)
  BTRFS warning (device dm-4: state E): Skipping commit of aborted transaction.
[CAUSE] For block-group-tree feature, we have an artificial dependency on free-space-tree.
This means if we detect block-group-tree without v2 cache, we consider it a corruption and cause the problem.
For clear_cache mount option, it would temporary disable v2 cache, then re-enable it.
But unfortunately for that temporary v2 cache disabled status, we refuse to write a superblock with bg tree only flag, thus leads to the above transaction abortion.
[FIX] For now, just reject the clear_cache and v1 cache mount options for block group tree. Now we get a graceful rejection rather than a transaction abort:
  BTRFS info (device dm-4): force clearing of disk cache
  BTRFS error (device dm-4): cannot disable free space tree with block-group-tree feature
  BTRFS error (device dm-4): open_ctree failed
CC: stable@vger.kernel.org # 6.1+ Signed-off-by: Qu Wenruo wqu@suse.com Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/super.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-)
--- a/fs/btrfs/super.c +++ b/fs/btrfs/super.c @@ -825,7 +825,12 @@ out: !btrfs_test_opt(info, CLEAR_CACHE)) { btrfs_err(info, "cannot disable free space tree"); ret = -EINVAL; - + } + if (btrfs_fs_compat_ro(info, BLOCK_GROUP_TREE) && + (btrfs_test_opt(info, CLEAR_CACHE) || + !btrfs_test_opt(info, FREE_SPACE_TREE))) { + btrfs_err(info, "cannot disable free space tree with block-group-tree feature"); + ret = -EINVAL; } if (!ret) ret = btrfs_check_mountopts_zoned(info);
From: xiaoshoukui xiaoshoukui@gmail.com
commit ac868bc9d136cde6e3eb5de77019a63d57a540ff upstream.
Balance as an exclusive state is compatible with paused balance and device add, which makes some things more complicated. The assertion of valid states when starting from paused balance needs to take into account two more states; the combinations can be hit when there are several threads racing to start balance and device add. This won't typically happen when the commands are started from the command line.
Scenario 1: With exclusive_operation state == BTRFS_EXCLOP_NONE.
When multiple devices are concurrently added to the same mount point and btrfs_exclop_finish() completes before the assertion in btrfs_exclop_balance(), exclusive_operation changes to the BTRFS_EXCLOP_NONE state, which leads to the assertion failure:
  fs_info->exclusive_operation == BTRFS_EXCLOP_BALANCE ||
  fs_info->exclusive_operation == BTRFS_EXCLOP_DEV_ADD,
  in fs/btrfs/ioctl.c:456

  Call Trace:
  <TASK>
   btrfs_exclop_balance+0x13c/0x310
   ? memdup_user+0xab/0xc0
   ? PTR_ERR+0x17/0x20
   btrfs_ioctl_add_dev+0x2ee/0x320
   btrfs_ioctl+0x9d5/0x10d0
   ? btrfs_ioctl_encoded_write+0xb80/0xb80
   __x64_sys_ioctl+0x197/0x210
   do_syscall_64+0x3c/0xb0
   entry_SYSCALL_64_after_hwframe+0x63/0xcd
Scenario 2: With exclusive_operation state == BTRFS_EXCLOP_BALANCE_PAUSED.
When multiple devices are concurrently added to the same mount point and one thread's btrfs_exclop_balance() finishes before the other thread reaches the assertion in btrfs_exclop_balance(), exclusive_operation changes to the BTRFS_EXCLOP_BALANCE_PAUSED state, which leads to the assertion failure:
  fs_info->exclusive_operation == BTRFS_EXCLOP_BALANCE ||
  fs_info->exclusive_operation == BTRFS_EXCLOP_DEV_ADD ||
  fs_info->exclusive_operation == BTRFS_EXCLOP_NONE,
  fs/btrfs/ioctl.c:458

  Call Trace:
  <TASK>
   btrfs_exclop_balance+0x240/0x410
   ? memdup_user+0xab/0xc0
   ? PTR_ERR+0x17/0x20
   btrfs_ioctl_add_dev+0x2ee/0x320
   btrfs_ioctl+0x9d5/0x10d0
   ? btrfs_ioctl_encoded_write+0xb80/0xb80
   __x64_sys_ioctl+0x197/0x210
   do_syscall_64+0x3c/0xb0
   entry_SYSCALL_64_after_hwframe+0x63/0xcd
An example of the failed assertion is below; it shows that the paused balance state also needs to be checked.
root@syzkaller:/home/xsk# ./repro Failed to add device /dev/vda, errno 14 Failed to add device /dev/vda, errno 14 Failed to add device /dev/vda, errno 14 Failed to add device /dev/vda, errno 14 Failed to add device /dev/vda, errno 14 Failed to add device /dev/vda, errno 14 Failed to add device /dev/vda, errno 14 Failed to add device /dev/vda, errno 14 Failed to add device /dev/vda, errno 14 [ 416.611428][ T7970] BTRFS info (device loop0): fs_info exclusive_operation: 0 Failed to add device /dev/vda, errno 14 [ 416.613973][ T7971] BTRFS info (device loop0): fs_info exclusive_operation: 3 Failed to add device /dev/vda, errno 14 [ 416.615456][ T7972] BTRFS info (device loop0): fs_info exclusive_operation: 3 Failed to add device /dev/vda, errno 14 [ 416.617528][ T7973] BTRFS info (device loop0): fs_info exclusive_operation: 3 Failed to add device /dev/vda, errno 14 [ 416.618359][ T7974] BTRFS info (device loop0): fs_info exclusive_operation: 3 Failed to add device /dev/vda, errno 14 [ 416.622589][ T7975] BTRFS info (device loop0): fs_info exclusive_operation: 3 Failed to add device /dev/vda, errno 14 [ 416.624034][ T7976] BTRFS info (device loop0): fs_info exclusive_operation: 3 Failed to add device /dev/vda, errno 14 [ 416.626420][ T7977] BTRFS info (device loop0): fs_info exclusive_operation: 3 Failed to add device /dev/vda, errno 14 [ 416.627643][ T7978] BTRFS info (device loop0): fs_info exclusive_operation: 3 Failed to add device /dev/vda, errno 14 [ 416.629006][ T7979] BTRFS info (device loop0): fs_info exclusive_operation: 3 [ 416.630298][ T7980] BTRFS info (device loop0): fs_info exclusive_operation: 3 Failed to add device /dev/vda, errno 14 Failed to add device /dev/vda, errno 14 [ 416.632787][ T7981] BTRFS info (device loop0): fs_info exclusive_operation: 3 Failed to add device /dev/vda, errno 14 [ 416.634282][ T7982] BTRFS info (device loop0): fs_info exclusive_operation: 3 Failed to add device /dev/vda, errno 14 [ 416.636202][ T7983] BTRFS info (device loop0): fs_info exclusive_operation: 3 [ 416.637012][ T7984] BTRFS info (device loop0): fs_info exclusive_operation: 1 Failed to add device /dev/vda, errno 14 [ 416.637759][ T7984] assertion failed: fs_info->exclusive_operation == BTRFS_EXCLOP_BALANCE || fs_info->exclusive_operation == BTRFS_EXCLOP_DEV_ADD || fs_info->exclusive_operation == BTRFS_EXCLOP_NONE, in fs/btrfs/ioctl.c:458 [ 416.639845][ T7984] invalid opcode: 0000 [#1] PREEMPT SMP KASAN [ 416.640485][ T7984] CPU: 0 PID: 7984 Comm: repro Not tainted 6.2.0 #7 [ 416.641172][ T7984] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.1 04/01/2014 [ 416.642090][ T7984] RIP: 0010:btrfs_assertfail+0x2c/0x2e [ 416.644423][ T7984] RSP: 0018:ffffc90003ea7e28 EFLAGS: 00010282 [ 416.645018][ T7984] RAX: 00000000000000cc RBX: 0000000000000000 RCX: 0000000000000000 [ 416.645763][ T7984] RDX: ffff88801d030000 RSI: ffffffff81637e7c RDI: fffff520007d4fb7 [ 416.646554][ T7984] RBP: ffffffff8a533de0 R08: 00000000000000cc R09: 0000000000000000 [ 416.647299][ T7984] R10: 0000000000000001 R11: 0000000000000001 R12: ffffffff8a533da0 [ 416.648041][ T7984] R13: 00000000000001ca R14: 000000005000940a R15: 0000000000000000 [ 416.648785][ T7984] FS: 00007fa2985d4640(0000) GS:ffff88802cc00000(0000) knlGS:0000000000000000 [ 416.649616][ T7984] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 416.650238][ T7984] CR2: 0000000000000000 CR3: 0000000018e5e000 CR4: 0000000000750ef0 [ 416.650980][ T7984] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [ 416.651725][ 
T7984] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 [ 416.652502][ T7984] PKRU: 55555554 [ 416.652888][ T7984] Call Trace: [ 416.653241][ T7984] <TASK> [ 416.653527][ T7984] btrfs_exclop_balance+0x240/0x410 [ 416.654036][ T7984] ? memdup_user+0xab/0xc0 [ 416.654465][ T7984] ? PTR_ERR+0x17/0x20 [ 416.654874][ T7984] btrfs_ioctl_add_dev+0x2ee/0x320 [ 416.655380][ T7984] btrfs_ioctl+0x9d5/0x10d0 [ 416.655822][ T7984] ? btrfs_ioctl_encoded_write+0xb80/0xb80 [ 416.656400][ T7984] __x64_sys_ioctl+0x197/0x210 [ 416.656874][ T7984] do_syscall_64+0x3c/0xb0 [ 416.657346][ T7984] entry_SYSCALL_64_after_hwframe+0x63/0xcd [ 416.657922][ T7984] RIP: 0033:0x4546af [ 416.660170][ T7984] RSP: 002b:00007fa2985d4150 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 [ 416.660972][ T7984] RAX: ffffffffffffffda RBX: 00007fa2985d4640 RCX: 00000000004546af [ 416.661714][ T7984] RDX: 0000000000000000 RSI: 000000005000940a RDI: 0000000000000003 [ 416.662449][ T7984] RBP: 00007fa2985d41d0 R08: 0000000000000000 R09: 00007ffee37a4c4f [ 416.663195][ T7984] R10: 0000000000000000 R11: 0000000000000246 R12: 00007fa2985d4640 [ 416.663951][ T7984] R13: 0000000000000009 R14: 000000000041b320 R15: 00007fa297dd4000 [ 416.664703][ T7984] </TASK> [ 416.665040][ T7984] Modules linked in: [ 416.665590][ T7984] ---[ end trace 0000000000000000 ]--- [ 416.666176][ T7984] RIP: 0010:btrfs_assertfail+0x2c/0x2e [ 416.668775][ T7984] RSP: 0018:ffffc90003ea7e28 EFLAGS: 00010282 [ 416.669425][ T7984] RAX: 00000000000000cc RBX: 0000000000000000 RCX: 0000000000000000 [ 416.670235][ T7984] RDX: ffff88801d030000 RSI: ffffffff81637e7c RDI: fffff520007d4fb7 [ 416.671050][ T7984] RBP: ffffffff8a533de0 R08: 00000000000000cc R09: 0000000000000000 [ 416.671867][ T7984] R10: 0000000000000001 R11: 0000000000000001 R12: ffffffff8a533da0 [ 416.672685][ T7984] R13: 00000000000001ca R14: 000000005000940a R15: 0000000000000000 [ 416.673501][ T7984] FS: 00007fa2985d4640(0000) GS:ffff88802cc00000(0000) knlGS:0000000000000000 [ 416.674425][ T7984] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 416.675114][ T7984] CR2: 0000000000000000 CR3: 0000000018e5e000 CR4: 0000000000750ef0 [ 416.675933][ T7984] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [ 416.676760][ T7984] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Link: https://lore.kernel.org/linux-btrfs/20230324031611.98986-1-xiaoshoukui@gmail... CC: stable@vger.kernel.org # 6.1+ Signed-off-by: xiaoshoukui xiaoshoukui@ruijie.com.cn Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/ioctl.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
--- a/fs/btrfs/ioctl.c +++ b/fs/btrfs/ioctl.c @@ -454,7 +454,9 @@ void btrfs_exclop_balance(struct btrfs_f case BTRFS_EXCLOP_BALANCE_PAUSED: spin_lock(&fs_info->super_lock); ASSERT(fs_info->exclusive_operation == BTRFS_EXCLOP_BALANCE || - fs_info->exclusive_operation == BTRFS_EXCLOP_DEV_ADD); + fs_info->exclusive_operation == BTRFS_EXCLOP_DEV_ADD || + fs_info->exclusive_operation == BTRFS_EXCLOP_NONE || + fs_info->exclusive_operation == BTRFS_EXCLOP_BALANCE_PAUSED); fs_info->exclusive_operation = BTRFS_EXCLOP_BALANCE_PAUSED; spin_unlock(&fs_info->super_lock); break;
From: Boris Burkov boris@bur.io
commit e7db9e5c6b9615b287d01f0231904fbc1fbde9c5 upstream.
We have observed a btrfs filesystem corruption on workloads using no-holes and encoded writes via send stream v2. The symptom is that a file appears to be truncated to the end of its last aligned extent, even though the final unaligned extent, its file extent item, and the otherwise correctly updated inode item have all been written.
So if we were writing out a 1MiB+X file via eight 128K extents and one extent of length X, i_size would be set to 1MiB, but the ninth extent, nbytes, etc. would all appear correct otherwise.
The source of the race is a narrow (one line of code) window in which a no-holes fs has read in an updated i_size, but has not yet set a shared disk_i_size variable to write. Therefore, if two ordered extents run in parallel (par for the course for receive workloads), the following sequence can play out: (following "threads" a bit loosely, since there are callbacks involved for endio but extra threads aren't needed to cause the issue)
  ENC-WR1 (second to last)                 ENC-WR2 (last)
  -------                                  -------
  btrfs_do_encoded_write
    set i_size = 1M
    submit bio B1 ending at 1M
  endio B1
    btrfs_inode_safe_disk_i_size_write
      local i_size = 1M
      falls off a cliff for some reason
                                           btrfs_do_encoded_write
                                             set i_size = 1M+X
                                             submit bio B2 ending at 1M+X
                                           endio B2
                                             btrfs_inode_safe_disk_i_size_write
                                               local i_size = 1M+X
                                               disk_i_size = 1M+X
      disk_i_size = 1M
  btrfs_delayed_update_inode               btrfs_delayed_update_inode
And the delayed inode ends up filled with nbytes=1M+X and isize=1M, and writes respect i_size and present a corrupted file missing its last extents.
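To make the interleaving above concrete, here is a sequential userspace replay with made-up sizes (X is assumed to be 4096); it is an illustration, not the kernel code:

  #include <stdio.h>

  /* Each writer copies i_size into a local before publishing it to
   * disk_i_size, mimicking the one-line window in
   * btrfs_inode_safe_disk_i_size_write(). */
  int main(void)
  {
          unsigned long long i_size, disk_i_size = 0;
          unsigned long long local1, local2;

          i_size = 1 << 20;               /* ENC-WR1: set i_size = 1M        */
          local1 = i_size;                /* ENC-WR1: reads i_size, stalls   */

          i_size = (1 << 20) + 4096;      /* ENC-WR2: set i_size = 1M+X      */
          local2 = i_size;                /* ENC-WR2: reads i_size           */
          disk_i_size = local2;           /* ENC-WR2: publishes 1M+X         */

          disk_i_size = local1;           /* ENC-WR1 resumes: overwrites 1M  */

          printf("i_size=%llu disk_i_size=%llu (stale)\n", i_size, disk_i_size);
          return 0;
  }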
Fix this by holding the inode lock in the no-holes case so that a thread can't sneak in a write to disk_i_size that gets overwritten with an out of date i_size.
Fixes: 41a2ee75aab0 ("btrfs: introduce per-inode file extent tree") CC: stable@vger.kernel.org # 5.10+ Reviewed-by: Josef Bacik josef@toxicpanda.com Signed-off-by: Boris Burkov boris@bur.io Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/file-item.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)
--- a/fs/btrfs/file-item.c +++ b/fs/btrfs/file-item.c @@ -52,13 +52,13 @@ void btrfs_inode_safe_disk_i_size_write( u64 start, end, i_size; int ret;
+ spin_lock(&inode->lock); i_size = new_i_size ?: i_size_read(&inode->vfs_inode); if (btrfs_fs_incompat(fs_info, NO_HOLES)) { inode->disk_i_size = i_size; - return; + goto out_unlock; }
- spin_lock(&inode->lock); ret = find_contiguous_extent_bit(&inode->file_extent_tree, 0, &start, &end, EXTENT_DIRTY); if (!ret && start == 0) @@ -66,6 +66,7 @@ void btrfs_inode_safe_disk_i_size_write( else i_size = 0; inode->disk_i_size = i_size; +out_unlock: spin_unlock(&inode->lock); }
From: Josef Bacik josef@toxicpanda.com
commit d246331b78cbef86237f9c22389205bc9b4e1cc1 upstream.
Boris noticed in his simple quotas testing that he was getting a leak with Sweet Tea's change to subvol create that stopped doing a transaction commit. This was just a side effect of that change.
In the delayed inode code we have an optimization that will free extra reservations if we think we can pack a dir item into an already modified leaf. Previously this wouldn't be triggered in the subvolume create case because we'd commit the transaction; it was still possible, just much harder to trigger. It could actually be triggered if we did a mkdir && subvol create with qgroups enabled.
This occurs because in btrfs_insert_delayed_dir_index(), which gets called when we're adding the dir item, we do the following:
btrfs_block_rsv_release(fs_info, trans->block_rsv, bytes, NULL);
if we're able to skip reserving space.
The problem here is that trans->block_rsv points at the temporary block rsv for the subvolume create, which has qgroup reservations in the block rsv.
This is a problem because btrfs_block_rsv_release() will do the following:
  if (block_rsv->qgroup_rsv_reserved >= block_rsv->qgroup_rsv_size) {
          qgroup_to_release = block_rsv->qgroup_rsv_reserved -
                              block_rsv->qgroup_rsv_size;
          block_rsv->qgroup_rsv_reserved = block_rsv->qgroup_rsv_size;
  }
The temporary block rsv just has ->qgroup_rsv_reserved set, ->qgroup_rsv_size == 0. The optimization in btrfs_insert_delayed_dir_index() sets ->qgroup_rsv_reserved = 0. Then later on when we call btrfs_subvolume_release_metadata() which has
  btrfs_block_rsv_release(fs_info, rsv, (u64)-1, &qgroup_to_release);
  btrfs_qgroup_convert_reserved_meta(root, qgroup_to_release);
qgroup_to_release is set to 0, and we do not convert the reserved metadata space.
The problem here is that the block rsv code has been unconditionally messing with ->qgroup_rsv_reserved, because the main place this is used is delalloc, and any time we call btrfs_block_rsv_release() we do it with qgroup_to_release set, and thus do the proper accounting.
The subvolume code is the only other code that uses the qgroup reservation stuff, but it's intermingled with the above optimization, and thus was getting its reservation freed out from underneath it and thus leaking the reserved space.
The solution is to simply not mess with the qgroup reservations if we don't have qgroup_to_release set. This works with the existing code as anything that messes with the delalloc reservations always have qgroup_to_release set. This fixes the leak that Boris was observing.
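A toy model of that accounting, with made-up numbers and a simplified stand-in for block_rsv_release_bytes(), illustrates why touching the qgroup counters only when the caller asked for the released amount avoids the leak:

  #include <stdio.h>

  /* The rsv has qgroup reservations but qgroup_rsv_size == 0, like the
   * temporary subvolume rsv described above. */
  struct rsv { long long qgroup_rsv_reserved, qgroup_rsv_size; };

  static long long release(struct rsv *r, int caller_wants_qgroup_amount)
  {
          long long to_release = 0;

          if (caller_wants_qgroup_amount &&
              r->qgroup_rsv_reserved >= r->qgroup_rsv_size) {
                  to_release = r->qgroup_rsv_reserved - r->qgroup_rsv_size;
                  r->qgroup_rsv_reserved = r->qgroup_rsv_size;
          }
          return to_release;
  }

  int main(void)
  {
          struct rsv r = { .qgroup_rsv_reserved = 16384, .qgroup_rsv_size = 0 };

          /* Optimization in btrfs_insert_delayed_dir_index(): no amount is
           * asked for, so with the fix the reservation is left alone. */
          release(&r, 0);
          /* btrfs_subvolume_release_metadata(): asks for the amount so it
           * can convert it; without the fix this would now be 0 and leak. */
          printf("qgroup bytes to convert: %lld\n", release(&r, 1));
          return 0;
  }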
Reviewed-by: Qu Wenruo wqu@suse.com CC: stable@vger.kernel.org # 5.4+ Signed-off-by: Josef Bacik josef@toxicpanda.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/block-rsv.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/fs/btrfs/block-rsv.c +++ b/fs/btrfs/block-rsv.c @@ -124,7 +124,8 @@ static u64 block_rsv_release_bytes(struc } else { num_bytes = 0; } - if (block_rsv->qgroup_rsv_reserved >= block_rsv->qgroup_rsv_size) { + if (qgroup_to_release_ret && + block_rsv->qgroup_rsv_reserved >= block_rsv->qgroup_rsv_size) { qgroup_to_release = block_rsv->qgroup_rsv_reserved - block_rsv->qgroup_rsv_size; block_rsv->qgroup_rsv_reserved = block_rsv->qgroup_rsv_size;
From: Christoph Hellwig hch@lst.de
commit c83b56d1dd87cf67492bb770c26d6f87aee70ed6 upstream.
btrfs_redirty_list_add() zeroes the buffer data and sets the EXTENT_BUFFER_NO_CHECK flag to make sure writeback is fine with a bogus header. But it does that after already marking the buffer dirty, which means that writeback could already be looking at the buffer.
Switch the order of operations around so that the buffer is only marked dirty when we're ready to write it.
Fixes: d3575156f662 ("btrfs: zoned: redirty released extent buffers") CC: stable@vger.kernel.org # 5.15+ Signed-off-by: Christoph Hellwig hch@lst.de Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/zoned.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/fs/btrfs/zoned.c +++ b/fs/btrfs/zoned.c @@ -1606,11 +1606,11 @@ void btrfs_redirty_list_add(struct btrfs !list_empty(&eb->release_list)) return;
+ memzero_extent_buffer(eb, 0, eb->len); + set_bit(EXTENT_BUFFER_NO_CHECK, &eb->bflags); set_extent_buffer_dirty(eb); set_extent_bits_nowait(&trans->dirty_pages, eb->start, eb->start + eb->len - 1, EXTENT_DIRTY); - memzero_extent_buffer(eb, 0, eb->len); - set_bit(EXTENT_BUFFER_NO_CHECK, &eb->bflags);
spin_lock(&trans->releasing_ebs_lock); list_add_tail(&eb->release_list, &trans->releasing_ebs);
From: Qu Wenruo wqu@suse.com
commit 1d6a4fc85717677e00fefffd847a50fc5928ce69 upstream.
Previously clear_cache mount option would simply disable free-space-tree feature temporarily then re-enable it to rebuild the whole free space tree.
But this is problematic for block-group-tree feature, as we have an artificial dependency on free-space-tree feature.
If we kept the existing method, then after clearing the free-space-tree feature we would flip the filesystem to read-only mode, as we would detect a super block write with block-group-tree but no free-space-tree feature.
This patch changes the behavior to properly rebuild the free space tree without disabling the feature, thus allowing the clear_cache mount option to work with block group tree.
Now we can mount a filesystem with the block-group-tree feature and the clear_cache mount option:
  $ mkfs.btrfs -O block-group-tree /dev/test/scratch1 -f
  $ sudo mount /dev/test/scratch1 /mnt/btrfs -o clear_cache
  $ sudo dmesg -t | head -n 5
  BTRFS info (device dm-1): force clearing of disk cache
  BTRFS info (device dm-1): using free space tree
  BTRFS info (device dm-1): auto enabling async discard
  BTRFS info (device dm-1): rebuilding free space tree
  BTRFS info (device dm-1): checking UUID tree
CC: stable@vger.kernel.org # 6.1+ Signed-off-by: Qu Wenruo wqu@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/disk-io.c | 25 ++++++++++++++++------ fs/btrfs/free-space-tree.c | 50 ++++++++++++++++++++++++++++++++++++++++++++- fs/btrfs/free-space-tree.h | 3 +- fs/btrfs/super.c | 3 -- 4 files changed, 70 insertions(+), 11 deletions(-)
--- a/fs/btrfs/disk-io.c +++ b/fs/btrfs/disk-io.c @@ -3306,23 +3306,34 @@ int btrfs_start_pre_rw_mount(struct btrf { int ret; const bool cache_opt = btrfs_test_opt(fs_info, SPACE_CACHE); - bool clear_free_space_tree = false; + bool rebuild_free_space_tree = false;
if (btrfs_test_opt(fs_info, CLEAR_CACHE) && btrfs_fs_compat_ro(fs_info, FREE_SPACE_TREE)) { - clear_free_space_tree = true; + rebuild_free_space_tree = true; } else if (btrfs_fs_compat_ro(fs_info, FREE_SPACE_TREE) && !btrfs_fs_compat_ro(fs_info, FREE_SPACE_TREE_VALID)) { btrfs_warn(fs_info, "free space tree is invalid"); - clear_free_space_tree = true; + rebuild_free_space_tree = true; }
- if (clear_free_space_tree) { - btrfs_info(fs_info, "clearing free space tree"); - ret = btrfs_clear_free_space_tree(fs_info); + if (rebuild_free_space_tree) { + btrfs_info(fs_info, "rebuilding free space tree"); + ret = btrfs_rebuild_free_space_tree(fs_info); if (ret) { btrfs_warn(fs_info, - "failed to clear free space tree: %d", ret); + "failed to rebuild free space tree: %d", ret); + goto out; + } + } + + if (btrfs_fs_compat_ro(fs_info, FREE_SPACE_TREE) && + !btrfs_test_opt(fs_info, FREE_SPACE_TREE)) { + btrfs_info(fs_info, "disabling free space tree"); + ret = btrfs_delete_free_space_tree(fs_info); + if (ret) { + btrfs_warn(fs_info, + "failed to disable free space tree: %d", ret); goto out; } } --- a/fs/btrfs/free-space-tree.c +++ b/fs/btrfs/free-space-tree.c @@ -1252,7 +1252,7 @@ out: return ret; }
-int btrfs_clear_free_space_tree(struct btrfs_fs_info *fs_info) +int btrfs_delete_free_space_tree(struct btrfs_fs_info *fs_info) { struct btrfs_trans_handle *trans; struct btrfs_root *tree_root = fs_info->tree_root; @@ -1295,6 +1295,54 @@ int btrfs_clear_free_space_tree(struct b abort: btrfs_abort_transaction(trans, ret); btrfs_end_transaction(trans); + return ret; +} + +int btrfs_rebuild_free_space_tree(struct btrfs_fs_info *fs_info) +{ + struct btrfs_trans_handle *trans; + struct btrfs_key key = { + .objectid = BTRFS_FREE_SPACE_TREE_OBJECTID, + .type = BTRFS_ROOT_ITEM_KEY, + .offset = 0, + }; + struct btrfs_root *free_space_root = btrfs_global_root(fs_info, &key); + struct rb_node *node; + int ret; + + trans = btrfs_start_transaction(free_space_root, 1); + if (IS_ERR(trans)) + return PTR_ERR(trans); + + set_bit(BTRFS_FS_CREATING_FREE_SPACE_TREE, &fs_info->flags); + set_bit(BTRFS_FS_FREE_SPACE_TREE_UNTRUSTED, &fs_info->flags); + + ret = clear_free_space_tree(trans, free_space_root); + if (ret) + goto abort; + + node = rb_first_cached(&fs_info->block_group_cache_tree); + while (node) { + struct btrfs_block_group *block_group; + + block_group = rb_entry(node, struct btrfs_block_group, + cache_node); + ret = populate_free_space_tree(trans, block_group); + if (ret) + goto abort; + node = rb_next(node); + } + + btrfs_set_fs_compat_ro(fs_info, FREE_SPACE_TREE); + btrfs_set_fs_compat_ro(fs_info, FREE_SPACE_TREE_VALID); + clear_bit(BTRFS_FS_CREATING_FREE_SPACE_TREE, &fs_info->flags); + + ret = btrfs_commit_transaction(trans); + clear_bit(BTRFS_FS_FREE_SPACE_TREE_UNTRUSTED, &fs_info->flags); + return ret; +abort: + btrfs_abort_transaction(trans, ret); + btrfs_end_transaction(trans); return ret; }
--- a/fs/btrfs/free-space-tree.h +++ b/fs/btrfs/free-space-tree.h @@ -18,7 +18,8 @@ struct btrfs_caching_control;
void set_free_space_tree_thresholds(struct btrfs_block_group *block_group); int btrfs_create_free_space_tree(struct btrfs_fs_info *fs_info); -int btrfs_clear_free_space_tree(struct btrfs_fs_info *fs_info); +int btrfs_delete_free_space_tree(struct btrfs_fs_info *fs_info); +int btrfs_rebuild_free_space_tree(struct btrfs_fs_info *fs_info); int load_free_space_tree(struct btrfs_caching_control *caching_ctl); int add_block_group_free_space(struct btrfs_trans_handle *trans, struct btrfs_block_group *block_group); --- a/fs/btrfs/super.c +++ b/fs/btrfs/super.c @@ -827,8 +827,7 @@ out: ret = -EINVAL; } if (btrfs_fs_compat_ro(info, BLOCK_GROUP_TREE) && - (btrfs_test_opt(info, CLEAR_CACHE) || - !btrfs_test_opt(info, FREE_SPACE_TREE))) { + !btrfs_test_opt(info, FREE_SPACE_TREE)) { btrfs_err(info, "cannot disable free space tree with block-group-tree feature"); ret = -EINVAL; }
From: Anastasia Belova abelova@astralinux.ru
commit c87f318e6f47696b4040b58f460d5c17ea0280e6 upstream.
Check the offset alignment against sectorsize instead of nodesize in print_extent_item(). The message printed on a mismatch already refers to sectorsize, which is the correct granularity, and a similar check is done elsewhere in the function.
Found by Linux Verification Center (linuxtesting.org) with SVACE.
Fixes: ea57788eb76d ("btrfs: require only sector size alignment for parent eb bytenr") CC: stable@vger.kernel.org # 4.14+ Reviewed-by: Qu Wenruo wqu@suse.com Signed-off-by: Anastasia Belova abelova@astralinux.ru Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/print-tree.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
--- a/fs/btrfs/print-tree.c +++ b/fs/btrfs/print-tree.c @@ -151,10 +151,10 @@ static void print_extent_item(struct ext pr_cont("shared data backref parent %llu count %u\n", offset, btrfs_shared_data_ref_count(eb, sref)); /* - * offset is supposed to be a tree block which - * must be aligned to nodesize. + * Offset is supposed to be a tree block which must be + * aligned to sectorsize. */ - if (!IS_ALIGNED(offset, eb->fs_info->nodesize)) + if (!IS_ALIGNED(offset, eb->fs_info->sectorsize)) pr_info( "\t\t\t(parent %llu not aligned to sectorsize %u)\n", offset, eb->fs_info->sectorsize);
From: Filipe Manana fdmanana@suse.com
commit 0004ff15ea26015a0a3a6182dca3b9d1df32e2b7 upstream.
When loading a free space cache from disk, at __load_free_space_cache(), if we fail to insert a bitmap entry, we still increment the number of total bitmaps in the btrfs_free_space_ctl structure, which is incorrect since we failed to add the bitmap entry. On error we then empty the cache by calling __btrfs_remove_free_space_cache(), which will result in getting the total bitmaps counter set to 1.
A failure to load a free space cache is not critical, so if a failure happens we just rebuild the cache by scanning the extent tree, which happens at block-group.c:caching_thread(). Yet the failure will result in having the total bitmaps of the btrfs_free_space_ctl always bigger by 1 than the number of bitmap entries we have. So fix this by having the total bitmaps counter be incremented only if we successfully added the bitmap entry.
Fixes: a67509c30079 ("Btrfs: add a io_ctl struct and helpers for dealing with the space cache") Reviewed-by: Anand Jain anand.jain@oracle.com CC: stable@vger.kernel.org # 4.4+ Signed-off-by: Filipe Manana fdmanana@suse.com Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/free-space-cache.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-)
--- a/fs/btrfs/free-space-cache.c +++ b/fs/btrfs/free-space-cache.c @@ -870,15 +870,16 @@ static int __load_free_space_cache(struc } spin_lock(&ctl->tree_lock); ret = link_free_space(ctl, e); - ctl->total_bitmaps++; - recalculate_thresholds(ctl); - spin_unlock(&ctl->tree_lock); if (ret) { + spin_unlock(&ctl->tree_lock); btrfs_err(fs_info, "Duplicate entries in free space cache, dumping"); kmem_cache_free(btrfs_free_space_cachep, e); goto free_cache; } + ctl->total_bitmaps++; + recalculate_thresholds(ctl); + spin_unlock(&ctl->tree_lock); list_add_tail(&e->list, &bitmaps); }
From: Naohiro Aota Naohiro.Aota@wdc.com
commit f84353c7c20536ea7e01eca79430eccdf3cc7348 upstream.
For data block groups, we zone finish a zone (or, just deactivate it) when seeing the last IO in btrfs_finish_ordered_io(). That is only called for IOs using ZONE_APPEND, but we use a regular WRITE command for data relocation IOs. Detect it and call btrfs_zone_finish_endio() properly.
Fixes: be1a1d7a5d24 ("btrfs: zoned: finish fully written block group") CC: stable@vger.kernel.org # 6.1+ Reviewed-by: Johannes Thumshirn johannes.thumshirn@wdc.com Signed-off-by: Naohiro Aota naohiro.aota@wdc.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/inode.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -3264,6 +3264,9 @@ int btrfs_finish_ordered_io(struct btrfs btrfs_rewrite_logical_zoned(ordered_extent); btrfs_zone_finish_endio(fs_info, ordered_extent->disk_bytenr, ordered_extent->disk_num_bytes); + } else if (btrfs_is_data_reloc_root(inode->root)) { + btrfs_zone_finish_endio(fs_info, ordered_extent->disk_bytenr, + ordered_extent->disk_num_bytes); }
btrfs_free_io_failure_record(inode, start, end);
From: Naohiro Aota Naohiro.Aota@wdc.com
commit 02ca9e6fb5f66a031df4fac508b8e477ca69e918 upstream.
When both of the superblock zones are full, we need to check which superblock is newer. The calculation of the last superblock position is wrong, as it uses the zone length and does not consider the zone capacity.
Fixes: 9658b72ef300 ("btrfs: zoned: locate superblock position using zone capacity") CC: stable@vger.kernel.org # 6.1+ Reviewed-by: Johannes Thumshirn johannes.thumshirn@wdc.com Signed-off-by: Naohiro Aota naohiro.aota@wdc.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/zoned.c | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-)
--- a/fs/btrfs/zoned.c +++ b/fs/btrfs/zoned.c @@ -121,10 +121,9 @@ static int sb_write_pointer(struct block int i;
for (i = 0; i < BTRFS_NR_SB_LOG_ZONES; i++) { - u64 bytenr; - - bytenr = ((zones[i].start + zones[i].len) - << SECTOR_SHIFT) - BTRFS_SUPER_INFO_SIZE; + u64 zone_end = (zones[i].start + zones[i].capacity) << SECTOR_SHIFT; + u64 bytenr = ALIGN_DOWN(zone_end, BTRFS_SUPER_INFO_SIZE) - + BTRFS_SUPER_INFO_SIZE;
page[i] = read_cache_page_gfp(mapping, bytenr >> PAGE_SHIFT, GFP_NOFS);
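As a hedged illustration of why using the zone length is wrong (the numbers below are hypothetical, chosen only to show the effect of the capacity being smaller than the length):

	/* A zone starting at sector 0, 256 MiB long but only 192 MiB writable. */
	u64 start = 0;
	u64 len = (256ULL * SZ_1M) >> SECTOR_SHIFT;		/* in sectors */
	u64 capacity = (192ULL * SZ_1M) >> SECTOR_SHIFT;	/* in sectors */

	/* Old calculation: lands 64 MiB beyond the writable capacity. */
	u64 old_bytenr = ((start + len) << SECTOR_SHIFT) - BTRFS_SUPER_INFO_SIZE;

	/* Fixed calculation: last superblock slot inside the zone capacity. */
	u64 zone_end = (start + capacity) << SECTOR_SHIFT;
	u64 new_bytenr = ALIGN_DOWN(zone_end, BTRFS_SUPER_INFO_SIZE) -
			 BTRFS_SUPER_INFO_SIZE;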
From: Filipe Manana fdmanana@suse.com
commit 0cad8f14d70cfeb5173dce93cafeba665a95430e upstream.
When using the logical to ino ioctl v2, if the flag to ignore offsets of file extent items (BTRFS_LOGICAL_INO_ARGS_IGNORE_OFFSET) is given, the backref walking code ends up not returning references for all file offsets of an inode that point to the given logical bytenr. This happens since kernel 6.2, commit 6ce6ba534418 ("btrfs: use a single argument for extent offset in backref walking functions") because:
1) It mistakenly skipped the search for file extent items in a leaf that point to the target extent if that flag is given. Instead it should only skip the filtering done by check_extent_in_eb() - that is, it should not avoid the calls to that function (or find_extent_in_eb(), which uses it).
2) It was also not building a list of inode extent elements (struct extent_inode_elem) if we have multiple inode references for an extent when the ignore offset flag is given to the logical to ino ioctl - it would leave a single element, only the last one that was found.
These stem from the confusing old interface for backref walking functions where we had an extent item offset argument that was a pointer to a u64 and another boolean argument that indicated if the offset should be ignored, but the pointer could be NULL. That NULL case is used by relocation, qgroup extent accounting and fiemap, simply to avoid building the inode extent list for each reference, as it's not necessary for those use cases and therefore avoids memory allocations and some computations.
Fix this by adding a boolean argument to the backref walk context structure to indicate that the inode extent list should not be built, make relocation set that argument to true and fix the backref walking logic to skip the calls to check_extent_in_eb() and find_extent_in_eb() only if this new argument is true, instead of 'ignore_extent_item_pos' being true.
A test case for fstests will be added soon, to provide coverage not only for these cases but for the logical to ino ioctl in general as well, as currently we do not have a test case for it.
Reported-by: Vladimir Panteleev git@vladimir.panteleev.md Link: https://lore.kernel.org/linux-btrfs/CAHhfkvwo=nmzrJSqZ2qMfF-rZB-ab6ahHnCD_sq... Fixes: 6ce6ba534418 ("btrfs: use a single argument for extent offset in backref walking functions") CC: stable@vger.kernel.org # 6.2+ Tested-by: Vladimir Panteleev git@vladimir.panteleev.md Signed-off-by: Filipe Manana fdmanana@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/backref.c | 19 ++++++++++--------- fs/btrfs/backref.h | 6 ++++++ fs/btrfs/relocation.c | 2 +- 3 files changed, 17 insertions(+), 10 deletions(-)
--- a/fs/btrfs/backref.c +++ b/fs/btrfs/backref.c @@ -45,7 +45,8 @@ static int check_extent_in_eb(struct btr int root_count; bool cached;
- if (!btrfs_file_extent_compression(eb, fi) && + if (!ctx->ignore_extent_item_pos && + !btrfs_file_extent_compression(eb, fi) && !btrfs_file_extent_encryption(eb, fi) && !btrfs_file_extent_other_encoding(eb, fi)) { u64 data_offset; @@ -552,7 +553,7 @@ static int add_all_parents(struct btrfs_ count++; else goto next; - if (!ctx->ignore_extent_item_pos) { + if (!ctx->skip_inode_ref_list) { ret = check_extent_in_eb(ctx, &key, eb, fi, &eie); if (ret == BTRFS_ITERATE_EXTENT_INODES_STOP || ret < 0) @@ -564,7 +565,7 @@ static int add_all_parents(struct btrfs_ eie, (void **)&old, GFP_NOFS); if (ret < 0) break; - if (!ret && !ctx->ignore_extent_item_pos) { + if (!ret && !ctx->skip_inode_ref_list) { while (old->next) old = old->next; old->next = eie; @@ -1598,7 +1599,7 @@ again: goto out; } if (ref->count && ref->parent) { - if (!ctx->ignore_extent_item_pos && !ref->inode_list && + if (!ctx->skip_inode_ref_list && !ref->inode_list && ref->level == 0) { struct btrfs_tree_parent_check check = { 0 }; struct extent_buffer *eb; @@ -1639,7 +1640,7 @@ again: (void **)&eie, GFP_NOFS); if (ret < 0) goto out; - if (!ret && !ctx->ignore_extent_item_pos) { + if (!ret && !ctx->skip_inode_ref_list) { /* * We've recorded that parent, so we must extend * its inode list here. @@ -1735,7 +1736,7 @@ int btrfs_find_all_leafs(struct btrfs_ba static int btrfs_find_all_roots_safe(struct btrfs_backref_walk_ctx *ctx) { const u64 orig_bytenr = ctx->bytenr; - const bool orig_ignore_extent_item_pos = ctx->ignore_extent_item_pos; + const bool orig_skip_inode_ref_list = ctx->skip_inode_ref_list; bool roots_ulist_allocated = false; struct ulist_iterator uiter; int ret = 0; @@ -1756,7 +1757,7 @@ static int btrfs_find_all_roots_safe(str roots_ulist_allocated = true; }
- ctx->ignore_extent_item_pos = true; + ctx->skip_inode_ref_list = true;
ULIST_ITER_INIT(&uiter); while (1) { @@ -1781,7 +1782,7 @@ static int btrfs_find_all_roots_safe(str ulist_free(ctx->refs); ctx->refs = NULL; ctx->bytenr = orig_bytenr; - ctx->ignore_extent_item_pos = orig_ignore_extent_item_pos; + ctx->skip_inode_ref_list = orig_skip_inode_ref_list;
return ret; } @@ -1885,7 +1886,7 @@ int btrfs_is_data_extent_shared(struct b walk_ctx.time_seq = elem.seq; }
- walk_ctx.ignore_extent_item_pos = true; + walk_ctx.skip_inode_ref_list = true; walk_ctx.trans = trans; walk_ctx.fs_info = fs_info; walk_ctx.refs = &ctx->refs; --- a/fs/btrfs/backref.h +++ b/fs/btrfs/backref.h @@ -60,6 +60,12 @@ struct btrfs_backref_walk_ctx { * @extent_item_pos is ignored. */ bool ignore_extent_item_pos; + /* + * If true and bytenr corresponds to a data extent, then the inode list + * (each member describing inode number, file offset and root) is not + * added to each reference added to the @refs ulist. + */ + bool skip_inode_ref_list; /* A valid transaction handle or NULL. */ struct btrfs_trans_handle *trans; /* --- a/fs/btrfs/relocation.c +++ b/fs/btrfs/relocation.c @@ -3422,7 +3422,7 @@ int add_data_references(struct reloc_con btrfs_release_path(path);
ctx.bytenr = extent_key->objectid; - ctx.ignore_extent_item_pos = true; + ctx.skip_inode_ref_list = true; ctx.fs_info = rc->extent_root->fs_info;
ret = btrfs_find_all_leafs(&ctx);
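A minimal sketch of how the two context flags differ for callers after this change (the field names are from the patch, the caller setup itself is illustrative):

	struct btrfs_backref_walk_ctx walk_ctx = { 0 };

	walk_ctx.fs_info = fs_info;
	walk_ctx.bytenr = extent_bytenr;	/* hypothetical extent to walk */

	/* Relocation and extent-sharedness checks: no per-inode list needed. */
	walk_ctx.skip_inode_ref_list = true;

	/*
	 * Logical-to-ino with BTRFS_LOGICAL_INO_ARGS_IGNORE_OFFSET instead keeps
	 * skip_inode_ref_list false and only sets:
	 *
	 *	walk_ctx.ignore_extent_item_pos = true;
	 */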
From: Pawel Witek pawel.ireneusz.witek@gmail.com
commit d66cde50c3c868af7abddafce701bb86e4a93039 upstream.
Change the type used for pcchunk->Length from u32 to u64 to match the type of the smb2_copychunk_range arguments. This fixes a problem where performing a server-side copy with the CIFS_IOC_COPYCHUNK_FILE ioctl resulted in an incomplete copy of large files while returning -EINVAL.
Fixes: 9bf0c9cd4314 ("CIFS: Fix SMB2/SMB3 Copy offload support (refcopy) for large files") Cc: stable@vger.kernel.org Signed-off-by: Pawel Witek pawel.ireneusz.witek@gmail.com Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/cifs/smb2ops.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/fs/cifs/smb2ops.c +++ b/fs/cifs/smb2ops.c @@ -1682,7 +1682,7 @@ smb2_copychunk_range(const unsigned int pcchunk->SourceOffset = cpu_to_le64(src_off); pcchunk->TargetOffset = cpu_to_le64(dest_off); pcchunk->Length = - cpu_to_le32(min_t(u32, len, tcon->max_bytes_chunk)); + cpu_to_le32(min_t(u64, len, tcon->max_bytes_chunk));
/* Request server copy to target from src identified by key */ kfree(retbuf);
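A small worked example of why the min_t() type matters here (the value of len is hypothetical; max_bytes_chunk is typically on the order of 1 MiB):

	u64 len = 0x100000001ULL;		/* 4 GiB + 1 byte still left to copy */
	u32 max_bytes_chunk = 1048576;

	/* Before: len is truncated to u32 (== 1) before the comparison. */
	u32 bad = min_t(u32, len, max_bytes_chunk);	/* == 1 */

	/* After: the comparison happens in u64 and yields the full chunk size. */
	u64 good = min_t(u64, len, max_bytes_chunk);	/* == 1048576 */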
From: Steve French stfrench@microsoft.com
commit d39fc592ef8ae9a89c5e85c8d9f760937a57d5ba upstream.
We should not be caching closed files when freeze is invoked on an fs (so we can release resources more gracefully).
Fixes xfstests generic/068 generic/390 generic/491
Reviewed-by: David Howells dhowells@redhat.com Cc: stable@vger.kernel.org Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/cifs/cifsfs.c | 15 +++++++++++++++ 1 file changed, 15 insertions(+)
--- a/fs/cifs/cifsfs.c +++ b/fs/cifs/cifsfs.c @@ -758,6 +758,20 @@ static void cifs_umount_begin(struct sup return; }
+static int cifs_freeze(struct super_block *sb) +{ + struct cifs_sb_info *cifs_sb = CIFS_SB(sb); + struct cifs_tcon *tcon; + + if (cifs_sb == NULL) + return 0; + + tcon = cifs_sb_master_tcon(cifs_sb); + + cifs_close_all_deferred_files(tcon); + return 0; +} + #ifdef CONFIG_CIFS_STATS2 static int cifs_show_stats(struct seq_file *s, struct dentry *root) { @@ -796,6 +810,7 @@ static const struct super_operations cif as opens */ .show_options = cifs_show_options, .umount_begin = cifs_umount_begin, + .freeze_fs = cifs_freeze, #ifdef CONFIG_CIFS_STATS2 .show_stats = cifs_show_stats, #endif
From: Srinivas Pandruvada srinivas.pandruvada@linux.intel.com
commit 75e406b540c3eca67625d97bbefd4e3787eafbfe upstream.
Currently, when uncore_write() returns an error, it is silently ignored. Return the error to user space when uncore_write() fails.
Fixes: 49a474c7ba51 ("platform/x86: Add support for Uncore frequency control") Signed-off-by: Srinivas Pandruvada srinivas.pandruvada@linux.intel.com Reviewed-by: Zhang Rui rui.zhang@intel.com Tested-by: Wendy Wang wendy.wang@intel.com Reviewed-by: Ilpo Järvinen ilpo.jarvinen@linux.intel.com Link: https://lore.kernel.org/r/20230418153230.679094-1-srinivas.pandruvada@linux.... Cc: stable@vger.kernel.org Signed-off-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/platform/x86/intel/uncore-frequency/uncore-frequency-common.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
--- a/drivers/platform/x86/intel/uncore-frequency/uncore-frequency-common.c +++ b/drivers/platform/x86/intel/uncore-frequency/uncore-frequency-common.c @@ -44,14 +44,18 @@ static ssize_t store_min_max_freq_khz(st int min_max) { unsigned int input; + int ret;
if (kstrtouint(buf, 10, &input)) return -EINVAL;
mutex_lock(&uncore_lock); - uncore_write(data, input, min_max); + ret = uncore_write(data, input, min_max); mutex_unlock(&uncore_lock);
+ if (ret) + return ret; + return count; }
From: Hans de Goede hdegoede@redhat.com
commit 6abfa99ce52f61a31bcfc2aaaae09006f5665495 upstream.
The Juno Computers Juno Tablet has an upside-down mounted Goodix touchscreen. Add a quirk to invert both axes to correct for this.
Link: https://junocomputers.com/us/product/juno-tablet/ Cc: stable@vger.kernel.org Signed-off-by: Hans de Goede hdegoede@redhat.com Link: https://lore.kernel.org/r/20230505210323.43177-1-hdegoede@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/platform/x86/touchscreen_dmi.c | 17 +++++++++++++++++ 1 file changed, 17 insertions(+)
--- a/drivers/platform/x86/touchscreen_dmi.c +++ b/drivers/platform/x86/touchscreen_dmi.c @@ -378,6 +378,11 @@ static const struct ts_dmi_data gdix1001 .properties = gdix1001_upside_down_props, };
+static const struct ts_dmi_data gdix1002_00_upside_down_data = { + .acpi_name = "GDIX1002:00", + .properties = gdix1001_upside_down_props, +}; + static const struct property_entry gp_electronic_t701_props[] = { PROPERTY_ENTRY_U32("touchscreen-size-x", 960), PROPERTY_ENTRY_U32("touchscreen-size-y", 640), @@ -1296,6 +1301,18 @@ const struct dmi_system_id touchscreen_d }, }, { + /* Juno Tablet */ + .driver_data = (void *)&gdix1002_00_upside_down_data, + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "Default string"), + /* Both product- and board-name being "Default string" is somewhat rare */ + DMI_MATCH(DMI_PRODUCT_NAME, "Default string"), + DMI_MATCH(DMI_BOARD_NAME, "Default string"), + /* Above matches are too generic, add partial bios-version match */ + DMI_MATCH(DMI_BIOS_VERSION, "JP2V1."), + }, + }, + { /* Mediacom WinPad 7.0 W700 (same hw as Wintron surftab 7") */ .driver_data = (void *)&trekstor_surftab_wintron70_data, .matches = {
From: Mark Pearson mpearson-lenovo@squebb.ca
commit 0c0cd3e25a5b64b541dd83ba6e032475a9d77432 upstream.
I had incorrectly thought that PSC profiles were not usable on Intel platforms, so I had blocked them in the driver initialisation. This broke platform profiles on the T490.
After discussion with the FW team PSC does work on Intel platforms and should be allowed.
Note - it's possible this may impact other platforms where PSC is advertised but needs special driver support that only Windows has. If it does, they will need fixing via quirks. Please report any issues to me so I can get them addressed - but I haven't found any problems in testing...yet
Fixes: bce6243f767f ("platform/x86: thinkpad_acpi: do not use PSC mode on Intel platforms") Link: https://bugzilla.redhat.com/show_bug.cgi?id=2177962 Cc: stable@vger.kernel.org Signed-off-by: Mark Pearson mpearson-lenovo@squebb.ca Link: https://lore.kernel.org/r/20230505132523.214338-1-mpearson-lenovo@squebb.ca Reviewed-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/platform/x86/thinkpad_acpi.c | 5 ----- 1 file changed, 5 deletions(-)
--- a/drivers/platform/x86/thinkpad_acpi.c +++ b/drivers/platform/x86/thinkpad_acpi.c @@ -10593,11 +10593,6 @@ static int tpacpi_dytc_profile_init(stru dytc_mmc_get_available = true; } } else if (dytc_capabilities & BIT(DYTC_FC_PSC)) { /* PSC MODE */ - /* Support for this only works on AMD platforms */ - if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD) { - dbg_printk(TPACPI_DBG_INIT, "PSC not support on Intel platforms\n"); - return -ENODEV; - } pr_debug("PSC is supported\n"); } else { dbg_printk(TPACPI_DBG_INIT, "No DYTC support available\n");
From: Fae faenkhauser@gmail.com
commit decab2825c3ef9b154c6f76bce40872ffb41c36f upstream.
Fixes micmute key of HP Envy X360 ey0xxx.
Signed-off-by: Fae faenkhauser@gmail.com Link: https://lore.kernel.org/r/20230425063644.11828-1-faenkhauser@gmail.com Cc: stable@vger.kernel.org Reviewed-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/platform/x86/hp/hp-wmi.c | 1 + 1 file changed, 1 insertion(+)
--- a/drivers/platform/x86/hp/hp-wmi.c +++ b/drivers/platform/x86/hp/hp-wmi.c @@ -211,6 +211,7 @@ struct bios_rfkill2_state { static const struct key_entry hp_wmi_keymap[] = { { KE_KEY, 0x02, { KEY_BRIGHTNESSUP } }, { KE_KEY, 0x03, { KEY_BRIGHTNESSDOWN } }, + { KE_KEY, 0x270, { KEY_MICMUTE } }, { KE_KEY, 0x20e6, { KEY_PROG1 } }, { KE_KEY, 0x20e8, { KEY_MEDIA } }, { KE_KEY, 0x2142, { KEY_MEDIA } },
From: Andrey Avdeev jamesstoun@gmail.com
commit 4b65f95c87c35699bc6ad540d6b9dd7f950d0924 upstream.
Add touchscreen info for the Dexp Ursus KX210i
Signed-off-by: Andrey Avdeev jamesstoun@gmail.com Link: https://lore.kernel.org/r/ZE4gRgzRQCjXFYD0@avdeevavpc Cc: stable@vger.kernel.org Reviewed-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/platform/x86/touchscreen_dmi.c | 24 ++++++++++++++++++++++++ 1 file changed, 24 insertions(+)
--- a/drivers/platform/x86/touchscreen_dmi.c +++ b/drivers/platform/x86/touchscreen_dmi.c @@ -336,6 +336,22 @@ static const struct ts_dmi_data dexp_urs .properties = dexp_ursus_7w_props, };
+static const struct property_entry dexp_ursus_kx210i_props[] = { + PROPERTY_ENTRY_U32("touchscreen-min-x", 5), + PROPERTY_ENTRY_U32("touchscreen-min-y", 2), + PROPERTY_ENTRY_U32("touchscreen-size-x", 1720), + PROPERTY_ENTRY_U32("touchscreen-size-y", 1137), + PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-dexp-ursus-kx210i.fw"), + PROPERTY_ENTRY_U32("silead,max-fingers", 10), + PROPERTY_ENTRY_BOOL("silead,home-button"), + { } +}; + +static const struct ts_dmi_data dexp_ursus_kx210i_data = { + .acpi_name = "MSSL1680:00", + .properties = dexp_ursus_kx210i_props, +}; + static const struct property_entry digma_citi_e200_props[] = { PROPERTY_ENTRY_U32("touchscreen-size-x", 1980), PROPERTY_ENTRY_U32("touchscreen-size-y", 1500), @@ -1191,6 +1207,14 @@ const struct dmi_system_id touchscreen_d }, }, { + /* DEXP Ursus KX210i */ + .driver_data = (void *)&dexp_ursus_kx210i_data, + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "INSYDE Corp."), + DMI_MATCH(DMI_PRODUCT_NAME, "S107I"), + }, + }, + { /* Digma Citi E200 */ .driver_data = (void *)&digma_citi_e200_data, .matches = {
From: Mark Pearson mpearson-lenovo@squebb.ca
commit 1684878952929e20a864af5df7b498941c750f45 upstream.
There has been a lot of confusion around which platform profiles are supported on various platforms, and it would be useful to have a debug method to override the profile mode that is selected.
I don't expect this to be used for anything other than debugging in conjunction with Lenovo engineers - but it does give a way to get a system working whilst we wait for either FW fixes or a driver fix to land upstream, if something is wonky in the mode detection logic.
Signed-off-by: Mark Pearson mpearson-lenovo@squebb.ca Link: https://lore.kernel.org/r/20230505132523.214338-2-mpearson-lenovo@squebb.ca Cc: stable@vger.kernel.org Reviewed-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/platform/x86/thinkpad_acpi.c | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+)
--- a/drivers/platform/x86/thinkpad_acpi.c +++ b/drivers/platform/x86/thinkpad_acpi.c @@ -10318,6 +10318,7 @@ static atomic_t dytc_ignore_event = ATOM static DEFINE_MUTEX(dytc_mutex); static int dytc_capabilities; static bool dytc_mmc_get_available; +static int profile_force;
static int convert_dytc_to_profile(int funcmode, int dytcmode, enum platform_profile_option *profile) @@ -10580,6 +10581,21 @@ static int tpacpi_dytc_profile_init(stru if (err) return err;
+ /* Check if user wants to override the profile selection */ + if (profile_force) { + switch (profile_force) { + case -1: + dytc_capabilities = 0; + break; + case 1: + dytc_capabilities = BIT(DYTC_FC_MMC); + break; + case 2: + dytc_capabilities = BIT(DYTC_FC_PSC); + break; + } + pr_debug("Profile selection forced: 0x%x\n", dytc_capabilities); + } if (dytc_capabilities & BIT(DYTC_FC_MMC)) { /* MMC MODE */ pr_debug("MMC is supported\n"); /* @@ -11641,6 +11657,9 @@ MODULE_PARM_DESC(uwb_state, "Initial state of the emulated UWB switch"); #endif
+module_param(profile_force, int, 0444); +MODULE_PARM_DESC(profile_force, "Force profile mode. -1=off, 1=MMC, 2=PSC"); + static void thinkpad_acpi_module_exit(void) { struct ibm_struct *ibm, *itmp;
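As a usage note, hedged and based only on the MODULE_PARM_DESC added above: since profile_force is declared 0444 it can only be set at load time, e.g. thinkpad_acpi.profile_force=2 on the kernel command line for a built-in driver or as a modprobe option when thinkpad_acpi is a module, with -1 disabling DYTC profile support, 1 forcing MMC mode and 2 forcing PSC mode.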
From: Jan Kara jack@suse.cz
commit c915d8f5918bea7c3962b09b8884ca128bfd9b0c upstream.
When inotify_freeing_mark() races with inotify_handle_inode_event() it can happen that inotify_handle_inode_event() sees that i_mark->wd got already reset to -1 and reports this value to userspace which can confuse the inotify listener. Avoid the problem by validating that wd is sensible (and pretend the mark got removed before the event got generated otherwise).
CC: stable@vger.kernel.org Fixes: 7e790dd5fc93 ("inotify: fix error paths in inotify_update_watch") Message-Id: 20230424163219.9250-1-jack@suse.cz Reported-by: syzbot+4a06d4373fd52f0b2f9c@syzkaller.appspotmail.com Reviewed-by: Amir Goldstein amir73il@gmail.com Signed-off-by: Jan Kara jack@suse.cz Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/notify/inotify/inotify_fsnotify.c | 11 +++++++++-- 1 file changed, 9 insertions(+), 2 deletions(-)
--- a/fs/notify/inotify/inotify_fsnotify.c +++ b/fs/notify/inotify/inotify_fsnotify.c @@ -65,7 +65,7 @@ int inotify_handle_inode_event(struct fs struct fsnotify_event *fsn_event; struct fsnotify_group *group = inode_mark->group; int ret; - int len = 0; + int len = 0, wd; int alloc_len = sizeof(struct inotify_event_info); struct mem_cgroup *old_memcg;
@@ -81,6 +81,13 @@ int inotify_handle_inode_event(struct fs fsn_mark);
/* + * We can be racing with mark being detached. Don't report event with + * invalid wd. + */ + wd = READ_ONCE(i_mark->wd); + if (wd == -1) + return 0; + /* * Whoever is interested in the event, pays for the allocation. Do not * trigger OOM killer in the target monitoring memcg as it may have * security repercussion. @@ -110,7 +117,7 @@ int inotify_handle_inode_event(struct fs fsn_event = &event->fse; fsnotify_init_event(fsn_event); event->mask = mask; - event->wd = i_mark->wd; + event->wd = wd; event->sync_cookie = cookie; event->name_len = len; if (len)
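A simplified sketch of the race being closed (the removal side below is paraphrased; only the event side matches the hunk above):

	/* Mark removal (e.g. via inotify_remove_from_idr()) can reset wd: */
	i_mark->wd = -1;

	/* Event generation now samples wd once and validates it: */
	wd = READ_ONCE(i_mark->wd);
	if (wd == -1)
		return 0;	/* act as if the mark was removed before the event */
	event->wd = wd;		/* never report a stale -1 to userspace */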
From: Steve French stfrench@microsoft.com
commit 716a3cf317456fa01d54398bb14ab354f50ed6a2 upstream.
xfstests generic/392 showed a problem where even after a shutdown call was made on a mount, we would still attempt to use the (now inaccessible) superblock if another mount was attempted for the same share.
Reported-by: David Howells dhowells@redhat.com Reviewed-by: David Howells dhowells@redhat.com Cc: stable@vger.kernel.org Fixes: 087f757b0129 ("cifs: add shutdown support") Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/cifs/connect.c | 7 +++++++ 1 file changed, 7 insertions(+)
--- a/fs/cifs/connect.c +++ b/fs/cifs/connect.c @@ -2754,6 +2754,13 @@ cifs_match_super(struct super_block *sb,
spin_lock(&cifs_tcp_ses_lock); cifs_sb = CIFS_SB(sb); + + /* We do not want to use a superblock that has been shutdown */ + if (CIFS_MOUNT_SHUTDOWN & cifs_sb->mnt_cifs_flags) { + spin_unlock(&cifs_tcp_ses_lock); + return 0; + } + tlink = cifs_get_tlink(cifs_sb_master_tlink(cifs_sb)); if (tlink == NULL) { /* can not match superblock if tlink were ever null */
From: Steve French stfrench@microsoft.com
commit 2cb6f968775a9fd60c90a6042b9550bcec3ea087 upstream.
In investigating a failure with xfstest generic/392 it was noticed that mounts were reusing a superblock that should already have been freed. This turned out to be related to deferred close files keeping a reference count until the closetimeo expired.
Currently the only way an fs knows that an unmount is beginning is when force unmount is called, but when this, i.e. umount_begin(), is called, all deferred close files on the share (tree connection) should be closed immediately (unless shared by another mount) to avoid using excess resources on the server and to avoid reusing a superblock which should already be freed.
In umount_begin, close all deferred close handles for that share if this is the last mount using that share on this client (ie send the SMB3 close request over the wire for those that have been already closed by the app but that we have kept a handle lease open for and have not sent closes to the server for yet).
Reported-by: David Howells dhowells@redhat.com Acked-by: Bharath SM bharathsm@microsoft.com Cc: stable@vger.kernel.org Fixes: 78c09634f7dc ("Cifs: Fix kernel oops caused by deferred close for files.") Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/cifs/cifsfs.c | 1 + 1 file changed, 1 insertion(+)
--- a/fs/cifs/cifsfs.c +++ b/fs/cifs/cifsfs.c @@ -743,6 +743,7 @@ static void cifs_umount_begin(struct sup spin_unlock(&tcon->tc_lock); spin_unlock(&cifs_tcp_ses_lock);
+ cifs_close_all_deferred_files(tcon); /* cancel_brl_requests(tcon); */ /* BB mark all brl mids as exiting */ /* cancel_notify_requests(tcon); */ if (tcon->ses && tcon->ses->server) {
From: Randy Dunlap rdunlap@infradead.org
commit 58a49ad90939386a8682e842c474a0d2c00ec39c upstream.
Fix a warning that was reported by the kernel test robot:
In file included from ../include/math-emu/soft-fp.h:27,
                 from ../arch/sh/math-emu/math.c:22:
../arch/sh/include/asm/sfp-machine.h:17: warning: "__BYTE_ORDER" redefined
   17 | #define __BYTE_ORDER __BIG_ENDIAN
In file included from ../arch/sh/math-emu/math.c:21:
../arch/sh/math-emu/sfp-util.h:71: note: this is the location of the previous definition
   71 | #define __BYTE_ORDER __LITTLE_ENDIAN
Fixes: b929926f01f2 ("sh: define __BIG_ENDIAN for math-emu") Signed-off-by: Randy Dunlap rdunlap@infradead.org Reported-by: kernel test robot lkp@intel.com Link: lore.kernel.org/r/202111121827.6v6SXtVv-lkp@intel.com Cc: John Paul Adrian Glaubitz glaubitz@physik.fu-berlin.de Cc: Yoshinori Sato ysato@users.sourceforge.jp Cc: Rich Felker dalias@libc.org Cc: linux-sh@vger.kernel.org Reviewed-by: Geert Uytterhoeven geert+renesas@glider.be Cc: stable@vger.kernel.org Reviewed-by: John Paul Adrian Glaubitz glaubitz@physik.fu-berlin.de Link: https://lore.kernel.org/r/20230306040037.20350-5-rdunlap@infradead.org Signed-off-by: John Paul Adrian Glaubitz glaubitz@physik.fu-berlin.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/sh/math-emu/sfp-util.h | 4 ---- 1 file changed, 4 deletions(-)
--- a/arch/sh/math-emu/sfp-util.h +++ b/arch/sh/math-emu/sfp-util.h @@ -67,7 +67,3 @@ } while (0)
#define abort() return 0 - -#define __BYTE_ORDER __LITTLE_ENDIAN - -
From: Randy Dunlap rdunlap@infradead.org
commit c2bd1e18c6f85c0027da2e5e7753b9bfd9f8e6dc upstream.
Fix a build error in mcount.S when CONFIG_PRINTK is not enabled. Fixes this build error:
sh2-linux-ld: arch/sh/lib/mcount.o: in function `stack_panic': (.text+0xec): undefined reference to `dump_stack'
Fixes: e460ab27b6c3 ("sh: Fix up stack overflow check with ftrace disabled.") Signed-off-by: Randy Dunlap rdunlap@infradead.org Cc: John Paul Adrian Glaubitz glaubitz@physik.fu-berlin.de Cc: Yoshinori Sato ysato@users.sourceforge.jp Cc: Rich Felker dalias@libc.org Suggested-by: Geert Uytterhoeven geert@linux-m68k.org Cc: stable@vger.kernel.org Reviewed-by: John Paul Adrian Glaubitz glaubitz@physik.fu-berlin.de Link: https://lore.kernel.org/r/20230306040037.20350-8-rdunlap@infradead.org Signed-off-by: John Paul Adrian Glaubitz glaubitz@physik.fu-berlin.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/sh/Kconfig.debug | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/sh/Kconfig.debug +++ b/arch/sh/Kconfig.debug @@ -15,7 +15,7 @@ config SH_STANDARD_BIOS
config STACK_DEBUG bool "Check for stack overflows" - depends on DEBUG_KERNEL + depends on DEBUG_KERNEL && PRINTK help This option will cause messages to be printed if free stack space drops below a certain limit. Saying Y here will add overhead to
From: Randy Dunlap rdunlap@infradead.org
commit 6cba655543c7959f8a6d2979b9d40a6a66b7ed4f upstream.
When CONFIG_OF_EARLY_FLATTREE and CONFIG_SH_DEVICE_TREE are not set, the SH3 build fails with a call to early_init_dt_scan(), so in arch/sh/kernel/setup.c and arch/sh/kernel/head_32.S, use CONFIG_OF_EARLY_FLATTREE instead of CONFIG_OF_FLATTREE.
Fixes this build error:
../arch/sh/kernel/setup.c: In function 'sh_fdt_init':
../arch/sh/kernel/setup.c:262:26: error: implicit declaration of function 'early_init_dt_scan' [-Werror=implicit-function-declaration]
  262 |         if (!dt_virt || !early_init_dt_scan(dt_virt)) {
Fixes: 03767daa1387 ("sh: fix build regression with CONFIG_OF && !CONFIG_OF_FLATTREE") Fixes: eb6b6930a70f ("sh: fix memory corruption of unflattened device tree") Signed-off-by: Randy Dunlap rdunlap@infradead.org Suggested-by: Rob Herring robh+dt@kernel.org Cc: Frank Rowand frowand.list@gmail.com Cc: devicetree@vger.kernel.org Cc: Rich Felker dalias@libc.org Cc: Yoshinori Sato ysato@users.sourceforge.jp Cc: Geert Uytterhoeven geert+renesas@glider.be Cc: John Paul Adrian Glaubitz glaubitz@physik.fu-berlin.de Cc: linux-sh@vger.kernel.org Cc: stable@vger.kernel.org Reviewed-by: John Paul Adrian Glaubitz glaubitz@physik.fu-berlin.de Link: https://lore.kernel.org/r/20230306040037.20350-4-rdunlap@infradead.org Signed-off-by: John Paul Adrian Glaubitz glaubitz@physik.fu-berlin.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/sh/kernel/head_32.S | 6 +++--- arch/sh/kernel/setup.c | 4 ++-- 2 files changed, 5 insertions(+), 5 deletions(-)
--- a/arch/sh/kernel/head_32.S +++ b/arch/sh/kernel/head_32.S @@ -64,7 +64,7 @@ ENTRY(_stext) ldc r0, r6_bank #endif
-#ifdef CONFIG_OF_FLATTREE +#ifdef CONFIG_OF_EARLY_FLATTREE mov r4, r12 ! Store device tree blob pointer in r12 #endif @@ -315,7 +315,7 @@ ENTRY(_stext) 10: #endif
-#ifdef CONFIG_OF_FLATTREE +#ifdef CONFIG_OF_EARLY_FLATTREE mov.l 8f, r0 ! Make flat device tree available early. jsr @r0 mov r12, r4 @@ -346,7 +346,7 @@ ENTRY(stack_start) 5: .long start_kernel 6: .long cpu_init 7: .long init_thread_union -#if defined(CONFIG_OF_FLATTREE) +#if defined(CONFIG_OF_EARLY_FLATTREE) 8: .long sh_fdt_init #endif
--- a/arch/sh/kernel/setup.c +++ b/arch/sh/kernel/setup.c @@ -244,7 +244,7 @@ void __init __weak plat_early_device_set { }
-#ifdef CONFIG_OF_FLATTREE +#ifdef CONFIG_OF_EARLY_FLATTREE void __ref sh_fdt_init(phys_addr_t dt_phys) { static int done = 0; @@ -326,7 +326,7 @@ void __init setup_arch(char **cmdline_p) /* Let earlyprintk output early console messages */ sh_early_platform_driver_probe("earlyprintk", 1, 1);
-#ifdef CONFIG_OF_FLATTREE +#ifdef CONFIG_OF_EARLY_FLATTREE #ifdef CONFIG_USE_BUILTIN_DTB unflatten_and_copy_device_tree(); #else
From: Randy Dunlap rdunlap@infradead.org
commit d1155e4132de712a9d3066e2667ceaad39a539c5 upstream.
__setup() handlers should return 1 to obsolete_checksetup() in init/main.c to indicate that the boot option has been handled. A return of 0 causes the boot option/value to be listed as an Unknown kernel parameter and added to init's (limited) argument or environment strings. Also, error return codes don't mean anything to obsolete_checksetup() -- only non-zero (usually 1) or zero. So return 1 from nmi_debug_setup().
Fixes: 1e1030dccb10 ("sh: nmi_debug support.") Signed-off-by: Randy Dunlap rdunlap@infradead.org Reported-by: Igor Zhbanov izh1979@gmail.com Link: lore.kernel.org/r/64644a2f-4a20-bab3-1e15-3b2cdd0defe3@omprussia.ru Cc: John Paul Adrian Glaubitz glaubitz@physik.fu-berlin.de Cc: Yoshinori Sato ysato@users.sourceforge.jp Cc: Rich Felker dalias@libc.org Cc: linux-sh@vger.kernel.org Cc: stable@vger.kernel.org Reviewed-by: John Paul Adrian Glaubitz glaubitz@physik.fu-berlin.de Link: https://lore.kernel.org/r/20230306040037.20350-3-rdunlap@infradead.org Signed-off-by: John Paul Adrian Glaubitz glaubitz@physik.fu-berlin.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/sh/kernel/nmi_debug.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/arch/sh/kernel/nmi_debug.c +++ b/arch/sh/kernel/nmi_debug.c @@ -49,7 +49,7 @@ static int __init nmi_debug_setup(char * register_die_notifier(&nmi_debug_nb);
if (*str != '=') - return 0; + return 1;
for (p = str + 1; *p; p = sep + 1) { sep = strchr(p, ','); @@ -70,6 +70,6 @@ static int __init nmi_debug_setup(char * break; }
- return 0; + return 1; } __setup("nmi_debug", nmi_debug_setup);
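A generic sketch of the convention being enforced here (the handler below is hypothetical, not from the patch):

	static int __init example_setup(char *str)
	{
		/* parse str ... */
		return 1;	/* 1 = option consumed; returning 0 would make
				 * init/main.c treat it as an unknown parameter
				 * and pass it to init's argv/envp */
	}
	__setup("example_opt=", example_setup);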
From: Luis Chamberlain mcgrof@kernel.org
commit 67ff32289acad9ed338cd9f2351b44939e55163e upstream.
Update the docs for __register_sysctl_table() to make it clear that no child entries can be passed. When child is set, these are non-leaf entries in the ctl table and sysctl treats them as directories. The point of __register_sysctl_table() is to deal only with directories that are not part of the ctl table in which they may reside, to be simple and avoid recursion.
While at it, hint towards using long on extra1 and extra2 later.
Cc: stable@vger.kernel.org # v5.17 Cc: Christian Brauner brauner@kernel.org Cc: Kefeng Wang wangkefeng.wang@huawei.com Signed-off-by: Luis Chamberlain mcgrof@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/proc/proc_sysctl.c | 14 +++++++++++--- 1 file changed, 11 insertions(+), 3 deletions(-)
--- a/fs/proc/proc_sysctl.c +++ b/fs/proc/proc_sysctl.c @@ -1287,7 +1287,7 @@ out: * __register_sysctl_table - register a leaf sysctl table * @set: Sysctl tree to register on * @path: The path to the directory the sysctl table is in. - * @table: the top-level table structure + * @table: the top-level table structure without any child * * Register a sysctl table hierarchy. @table should be a filled in ctl_table * array. A completely 0 filled entry terminates the table. @@ -1308,9 +1308,12 @@ out: * proc_handler - the text handler routine (described below) * * extra1, extra2 - extra pointers usable by the proc handler routines + * XXX: we should eventually modify these to use long min / max [0] + * [0] https://lkml.kernel.org/87zgpte9o4.fsf@email.froward.int.ebiederm.org * * Leaf nodes in the sysctl tree will be represented by a single file - * under /proc; non-leaf nodes will be represented by directories. + * under /proc; non-leaf nodes (where child is not NULL) are not allowed, + * sysctl_check_table() verifies this. * * There must be a proc_handler routine for any terminal nodes. * Several default handlers are available to cover common cases - @@ -1352,7 +1355,7 @@ struct ctl_table_header *__register_sysc
spin_lock(&sysctl_lock); dir = &set->dir; - /* Reference moved down the diretory tree get_subdir */ + /* Reference moved down the directory tree get_subdir */ dir->header.nreg++; spin_unlock(&sysctl_lock);
@@ -1369,6 +1372,11 @@ struct ctl_table_header *__register_sysc if (namelen == 0) continue;
+ /* + * namelen ensures if name is "foo/bar/yay" only foo is + * registered first. We traverse as if using mkdir -p and + * return a ctl_dir for the last directory entry. + */ dir = get_subdir(dir, name, namelen); if (IS_ERR(dir)) goto fail;
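A minimal, hypothetical example of the leaf-only usage the updated comment describes (the names are illustrative; the table is a filled-in ctl_table array with a zero-filled terminator and no child entries):

	static int example_value;

	static struct ctl_table example_table[] = {
		{
			.procname	= "value",
			.data		= &example_value,
			.maxlen		= sizeof(int),
			.mode		= 0644,
			.proc_handler	= proc_dointvec,
		},
		{ }	/* a completely zero-filled entry terminates the table */
	};

Registering this with register_sysctl("dev/example", example_table) would create /proc/sys/dev/example/value, with the intermediate directories coming from the path rather than from child entries.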
From: Luis Chamberlain mcgrof@kernel.org
commit 1dc8689e4cc651e21566e10206a84c4006e81fb1 upstream.
Expand documentation to clarify:
o that paths don't need to exist for the new API callers
o clarify that we *require* callers to keep the memory of the table around during the lifetime of the sysctls
o annotate routines we are trying to deprecate and later remove
Cc: stable@vger.kernel.org # v5.17 Cc: Christian Brauner brauner@kernel.org Cc: Kefeng Wang wangkefeng.wang@huawei.com Signed-off-by: Luis Chamberlain mcgrof@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/proc/proc_sysctl.c | 25 ++++++++++++++++++++----- 1 file changed, 20 insertions(+), 5 deletions(-)
--- a/fs/proc/proc_sysctl.c +++ b/fs/proc/proc_sysctl.c @@ -1287,7 +1287,10 @@ out: * __register_sysctl_table - register a leaf sysctl table * @set: Sysctl tree to register on * @path: The path to the directory the sysctl table is in. - * @table: the top-level table structure without any child + * @table: the top-level table structure without any child. This table + * should not be free'd after registration. So it should not be + * used on stack. It can either be a global or dynamically allocated + * by the caller and free'd later after sysctl unregistration. * * Register a sysctl table hierarchy. @table should be a filled in ctl_table * array. A completely 0 filled entry terminates the table. @@ -1402,8 +1405,15 @@ fail:
/** * register_sysctl - register a sysctl table - * @path: The path to the directory the sysctl table is in. - * @table: the table structure + * @path: The path to the directory the sysctl table is in. If the path + * doesn't exist we will create it for you. + * @table: the table structure. The calller must ensure the life of the @table + * will be kept during the lifetime use of the syctl. It must not be freed + * until unregister_sysctl_table() is called with the given returned table + * with this registration. If your code is non modular then you don't need + * to call unregister_sysctl_table() and can instead use something like + * register_sysctl_init() which does not care for the result of the syctl + * registration. * * Register a sysctl table. @table should be a filled in ctl_table * array. A completely 0 filled entry terminates the table. @@ -1419,8 +1429,11 @@ EXPORT_SYMBOL(register_sysctl);
/** * __register_sysctl_init() - register sysctl table to path - * @path: path name for sysctl base - * @table: This is the sysctl table that needs to be registered to the path + * @path: path name for sysctl base. If that path doesn't exist we will create + * it for you. + * @table: This is the sysctl table that needs to be registered to the path. + * The caller must ensure the life of the @table will be kept during the + * lifetime use of the sysctl. * @table_name: The name of sysctl table, only used for log printing when * registration fails * @@ -1565,6 +1578,7 @@ out: * * Register a sysctl table hierarchy. @table should be a filled in ctl_table * array. A completely 0 filled entry terminates the table. + * We are slowly deprecating this call so avoid its use. * * See __register_sysctl_table for more details. */ @@ -1636,6 +1650,7 @@ err_register_leaves: * * Register a sysctl table hierarchy. @table should be a filled in ctl_table * array. A completely 0 filled entry terminates the table. + * We are slowly deprecating this caller so avoid future uses of it. * * See __register_sysctl_paths for more details. */
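And a hedged sketch of the two lifetime patterns the expanded comments call out, building on the illustrative example_table above:

	/* Modular code: keep the header and unregister before the table goes away. */
	static struct ctl_table_header *example_header;

	static int __init example_init(void)
	{
		example_header = register_sysctl("dev/example", example_table);
		return example_header ? 0 : -ENOMEM;
	}

	static void __exit example_exit(void)
	{
		unregister_sysctl_table(example_header);
	}

	/*
	 * Non-modular code can instead call
	 * register_sysctl_init("dev/example", example_table) once and never
	 * unregister; failures there are only logged.
	 */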
From: Mathieu Poirier mathieu.poirier@linaro.org
commit ccadca5baf5124a880f2bb50ed1ec265415f025b upstream.
Function of_phandle_iterator_next() calls of_node_put() on the last device_node it iterated over, but when the loop exits prematurely it has to be called explicitly.
Fixes: 13140de09cc2 ("remoteproc: stm32: add an ST stm32_rproc driver") Cc: stable@vger.kernel.org Signed-off-by: Mathieu Poirier mathieu.poirier@linaro.org Reviewed-by: Arnaud Pouliquen arnaud.pouliquen@foss.st.com Link: https://lore.kernel.org/r/20230320221826.2728078-2-mathieu.poirier@linaro.or... Signed-off-by: Mathieu Poirier mathieu.poirier@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/remoteproc/stm32_rproc.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
--- a/drivers/remoteproc/stm32_rproc.c +++ b/drivers/remoteproc/stm32_rproc.c @@ -223,11 +223,13 @@ static int stm32_rproc_prepare(struct rp while (of_phandle_iterator_next(&it) == 0) { rmem = of_reserved_mem_lookup(it.node); if (!rmem) { + of_node_put(it.node); dev_err(dev, "unable to acquire memory-region\n"); return -EINVAL; }
if (stm32_rproc_pa_to_da(rproc, rmem->base, &da) < 0) { + of_node_put(it.node); dev_err(dev, "memory region not valid %pa\n", &rmem->base); return -EINVAL; @@ -254,8 +256,10 @@ static int stm32_rproc_prepare(struct rp it.node->name); }
- if (!mem) + if (!mem) { + of_node_put(it.node); return -ENOMEM; + }
rproc_add_carveout(rproc, mem); index++;
From: Mathieu Poirier mathieu.poirier@linaro.org
commit 8a74918948b40317a5b5bab9739d13dcb5de2784 upstream.
Function of_phandle_iterator_next() calls of_node_put() on the last device_node it iterated over, but when the loop exits prematurely it has to be called explicitly.
Fixes: 3df52ed7f269 ("remoteproc: st: add reserved memory support") Cc: stable@vger.kernel.org Signed-off-by: Mathieu Poirier mathieu.poirier@linaro.org Reviewed-by: Arnaud Pouliquen arnaud.pouliquen@foss.st.com Link: https://lore.kernel.org/r/20230320221826.2728078-3-mathieu.poirier@linaro.or... Signed-off-by: Mathieu Poirier mathieu.poirier@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/remoteproc/st_remoteproc.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
--- a/drivers/remoteproc/st_remoteproc.c +++ b/drivers/remoteproc/st_remoteproc.c @@ -129,6 +129,7 @@ static int st_rproc_parse_fw(struct rpro while (of_phandle_iterator_next(&it) == 0) { rmem = of_reserved_mem_lookup(it.node); if (!rmem) { + of_node_put(it.node); dev_err(dev, "unable to acquire memory-region\n"); return -EINVAL; } @@ -150,8 +151,10 @@ static int st_rproc_parse_fw(struct rpro it.node->name); }
- if (!mem) + if (!mem) { + of_node_put(it.node); return -ENOMEM; + }
rproc_add_carveout(rproc, mem); index++;
From: Mathieu Poirier mathieu.poirier@linaro.org
commit e0e01de8ee146986872e54e8365f4b4654819412 upstream.
Function of_phandle_iterator_next() calls of_node_put() on the last device_node it iterated over, but when the loop exits prematurely it has to be called explicitly.
Fixes: ec0e5549f358 ("remoteproc: imx_dsp_rproc: Add remoteproc driver for DSP on i.MX") Cc: stable@vger.kernel.org Signed-off-by: Mathieu Poirier mathieu.poirier@linaro.org Acked-by: Shengjiu Wang shengjiu.wang@gmail.com Link: https://lore.kernel.org/r/20230320221826.2728078-6-mathieu.poirier@linaro.or... Signed-off-by: Mathieu Poirier mathieu.poirier@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/remoteproc/imx_dsp_rproc.c | 12 +++++++++--- 1 file changed, 9 insertions(+), 3 deletions(-)
--- a/drivers/remoteproc/imx_dsp_rproc.c +++ b/drivers/remoteproc/imx_dsp_rproc.c @@ -627,15 +627,19 @@ static int imx_dsp_rproc_add_carveout(st
rmem = of_reserved_mem_lookup(it.node); if (!rmem) { + of_node_put(it.node); dev_err(dev, "unable to acquire memory-region\n"); return -EINVAL; }
- if (imx_dsp_rproc_sys_to_da(priv, rmem->base, rmem->size, &da)) + if (imx_dsp_rproc_sys_to_da(priv, rmem->base, rmem->size, &da)) { + of_node_put(it.node); return -EINVAL; + }
cpu_addr = devm_ioremap_wc(dev, rmem->base, rmem->size); if (!cpu_addr) { + of_node_put(it.node); dev_err(dev, "failed to map memory %p\n", &rmem->base); return -ENOMEM; } @@ -644,10 +648,12 @@ static int imx_dsp_rproc_add_carveout(st mem = rproc_mem_entry_init(dev, (void __force *)cpu_addr, (dma_addr_t)rmem->base, rmem->size, da, NULL, NULL, it.node->name);
- if (mem) + if (mem) { rproc_coredump_add_segment(rproc, da, rmem->size); - else + } else { + of_node_put(it.node); return -ENOMEM; + }
rproc_add_carveout(rproc, mem); }
From: Mathieu Poirier mathieu.poirier@linaro.org
commit 5ef074e805ecfd9a16dbb7b6b88bbfa8abad7054 upstream.
Function of_phandle_iterator_next() calls of_node_put() on the last device_node it iterated over, but when the loop exits prematurely it has to be called explicitly.
Fixes: b29b4249f8f0 ("remoteproc: imx_rproc: add i.MX specific parse fw hook") Cc: stable@vger.kernel.org Signed-off-by: Mathieu Poirier mathieu.poirier@linaro.org Reviewed-by: Peng Fan peng.fan@nxp.com Link: https://lore.kernel.org/r/20230320221826.2728078-5-mathieu.poirier@linaro.or... Signed-off-by: Mathieu Poirier mathieu.poirier@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/remoteproc/imx_rproc.c | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-)
--- a/drivers/remoteproc/imx_rproc.c +++ b/drivers/remoteproc/imx_rproc.c @@ -541,6 +541,7 @@ static int imx_rproc_prepare(struct rpro
rmem = of_reserved_mem_lookup(it.node); if (!rmem) { + of_node_put(it.node); dev_err(priv->dev, "unable to acquire memory-region\n"); return -EINVAL; } @@ -553,10 +554,12 @@ static int imx_rproc_prepare(struct rpro imx_rproc_mem_alloc, imx_rproc_mem_release, it.node->name);
- if (mem) + if (mem) { rproc_coredump_add_segment(rproc, da, rmem->size); - else + } else { + of_node_put(it.node); return -ENOMEM; + }
rproc_add_carveout(rproc, mem); }
From: Mathieu Poirier mathieu.poirier@linaro.org
commit f8bae637d3d5e082b4ced71e28b16eb3ee0683c1 upstream.
Function of_phandle_iterator_next() calls of_node_put() on the last device_node it iterated over, but when the loop exits prematurely it has to be called explicitly.
Fixes: 285892a74f13 ("remoteproc: Add Renesas rcar driver") Cc: stable@vger.kernel.org Signed-off-by: Mathieu Poirier mathieu.poirier@linaro.org Link: https://lore.kernel.org/r/20230320221826.2728078-4-mathieu.poirier@linaro.or... Signed-off-by: Mathieu Poirier mathieu.poirier@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/remoteproc/rcar_rproc.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-)
--- a/drivers/remoteproc/rcar_rproc.c +++ b/drivers/remoteproc/rcar_rproc.c @@ -62,13 +62,16 @@ static int rcar_rproc_prepare(struct rpr
rmem = of_reserved_mem_lookup(it.node); if (!rmem) { + of_node_put(it.node); dev_err(&rproc->dev, "unable to acquire memory-region\n"); return -EINVAL; }
- if (rmem->base > U32_MAX) + if (rmem->base > U32_MAX) { + of_node_put(it.node); return -EINVAL; + }
/* No need to translate pa to da, R-Car use same map */ da = rmem->base; @@ -79,8 +82,10 @@ static int rcar_rproc_prepare(struct rpr rcar_rproc_mem_release, it.node->name);
- if (!mem) + if (!mem) { + of_node_put(it.node); return -ENOMEM; + }
rproc_add_carveout(rproc, mem); }
From: Luis Chamberlain mcgrof@kernel.org
commit 228b09de936395ddd740df3522ea35ae934830d8 upstream.
The relatively new docs which I added hinted that the base directories needed to be created beforehand, which is wrong, so remove that incorrect comment. Eric has hinted at this twice already [0] [1], I had just not verified it until now. Now that I've verified it, update the docs to relax the context described.
[0] https://lkml.kernel.org/r/875ys0azt8.fsf@email.froward.int.ebiederm.org [1] https://lkml.kernel.org/r/87ftbiud6s.fsf@x220.int.ebiederm.org
Cc: stable@vger.kernel.org # v5.17 Cc: Christian Brauner brauner@kernel.org Cc: Kefeng Wang wangkefeng.wang@huawei.com Suggested-by: Eric W. Biederman ebiederm@xmission.com Signed-off-by: Luis Chamberlain mcgrof@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/proc/proc_sysctl.c | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-)
--- a/fs/proc/proc_sysctl.c +++ b/fs/proc/proc_sysctl.c @@ -1445,10 +1445,7 @@ EXPORT_SYMBOL(register_sysctl); * register_sysctl() failing on init are extremely low, and so for both reasons * this function does not return any error as it is used by initialization code. * - * Context: Can only be called after your respective sysctl base path has been - * registered. So for instance, most base directories are registered early on - * init before init levels are processed through proc_sys_init() and - * sysctl_init_bases(). + * Context: if your base directory does not exist it will be created for you. */ void __init __register_sysctl_init(const char *path, struct ctl_table *table, const char *table_name)
From: Zev Weiss zev@bewilderbeest.net
commit 9dedb724446913ea7b1591b4b3d2e3e909090980 upstream.
While I'm not aware of any problems that have occurred running these at 100 MHz, the official word from ASRock is that 50 MHz is the correct speed to use, so let's be safe and use that instead.
Signed-off-by: Zev Weiss zev@bewilderbeest.net Cc: stable@vger.kernel.org Fixes: 2b81613ce417 ("ARM: dts: aspeed: Add ASRock E3C246D4I BMC") Fixes: a9a3d60b937a ("ARM: dts: aspeed: Add ASRock ROMED8HM3 BMC") Link: https://lore.kernel.org/r/20230224000400.12226-4-zev@bewilderbeest.net Signed-off-by: Joel Stanley joel@jms.id.au Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/arm/boot/dts/aspeed-bmc-asrock-e3c246d4i.dts | 2 +- arch/arm/boot/dts/aspeed-bmc-asrock-romed8hm3.dts | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-)
--- a/arch/arm/boot/dts/aspeed-bmc-asrock-e3c246d4i.dts +++ b/arch/arm/boot/dts/aspeed-bmc-asrock-e3c246d4i.dts @@ -63,7 +63,7 @@ status = "okay"; m25p,fast-read; label = "bmc"; - spi-max-frequency = <100000000>; /* 100 MHz */ + spi-max-frequency = <50000000>; /* 50 MHz */ #include "openbmc-flash-layout.dtsi" }; }; --- a/arch/arm/boot/dts/aspeed-bmc-asrock-romed8hm3.dts +++ b/arch/arm/boot/dts/aspeed-bmc-asrock-romed8hm3.dts @@ -51,7 +51,7 @@ status = "okay"; m25p,fast-read; label = "bmc"; - spi-max-frequency = <100000000>; /* 100 MHz */ + spi-max-frequency = <50000000>; /* 50 MHz */ #include "openbmc-flash-layout-64.dtsi" }; };
From: Krzysztof Kozlowski krzysztof.kozlowski@linaro.org
commit 6c950c20da38debf1ed531e0b972bd8b53d1c11f upstream.
The WM8960 Linux driver expects the clock to be named "mclk". Otherwise the clock will be ignored and not prepared/enabled by the driver.
Cc: stable@vger.kernel.org Fixes: 339b2fb36a67 ("ARM: dts: exynos: Add TOPEET itop elite based board") Link: https://lore.kernel.org/r/20230217150627.779764-3-krzysztof.kozlowski@linaro... Signed-off-by: Krzysztof Kozlowski krzysztof.kozlowski@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/arm/boot/dts/exynos4412-itop-elite.dts | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/arm/boot/dts/exynos4412-itop-elite.dts +++ b/arch/arm/boot/dts/exynos4412-itop-elite.dts @@ -182,7 +182,7 @@ compatible = "wlf,wm8960"; reg = <0x1a>; clocks = <&pmu_system_controller 0>; - clock-names = "MCLK1"; + clock-names = "mclk"; wlf,shared-lrclk; #sound-dai-cells = <0>; };
From: Krzysztof Kozlowski krzysztof.kozlowski@linaro.org
commit 665b9459bb53b8f19bd1541567e1fe9782c83c4b upstream.
The Samsung S5P/Exynos MIPI CSIS bindings and Linux driver expect the first clock name to be "csis". Otherwise the driver fails to probe.
Fixes: 94ad0f6d9278 ("ARM: dts: Add Device tree for s5pv210 SoC") Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20230212185818.43503-2-krzysztof.kozlowski@linaro.... Signed-off-by: Krzysztof Kozlowski krzysztof.kozlowski@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/arm/boot/dts/s5pv210.dtsi | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/arm/boot/dts/s5pv210.dtsi +++ b/arch/arm/boot/dts/s5pv210.dtsi @@ -566,7 +566,7 @@ interrupts = <29>; clocks = <&clocks CLK_CSIS>, <&clocks SCLK_CSIS>; - clock-names = "clk_csis", + clock-names = "csis", "sclk_csis"; bus-width = <4>; status = "disabled";
From: Zev Weiss zev@bewilderbeest.net
commit a3fd10732d276d7cf372c6746a78a1c8b6aa7541 upstream.
Turns out it's in fact not the same as the heartbeat LED.
Signed-off-by: Zev Weiss zev@bewilderbeest.net Cc: stable@vger.kernel.org # v5.18+ Fixes: a9a3d60b937a ("ARM: dts: aspeed: Add ASRock ROMED8HM3 BMC") Link: https://lore.kernel.org/r/20230224000400.12226-2-zev@bewilderbeest.net Signed-off-by: Joel Stanley joel@jms.id.au Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/arm/boot/dts/aspeed-bmc-asrock-romed8hm3.dts | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/arm/boot/dts/aspeed-bmc-asrock-romed8hm3.dts +++ b/arch/arm/boot/dts/aspeed-bmc-asrock-romed8hm3.dts @@ -31,7 +31,7 @@ };
system-fault { - gpios = <&gpio ASPEED_GPIO(Z, 2) GPIO_ACTIVE_LOW>; + gpios = <&gpio ASPEED_GPIO(Z, 2) GPIO_ACTIVE_HIGH>; panic-indicator; }; };
From: Johan Hovold johan+linaro@kernel.org
commit 0d997f95b70f98987ae031a89677c13e0e223670 upstream.
A recent commit moved enabling of runtime PM to GPU load time (first open()) but failed to update the error paths so that runtime PM is disabled if initialisation of the GPU fails. This would trigger a warning about the unbalanced disable count on the next open() attempt.
Note that pm_runtime_put_noidle() is sufficient to balance the usage count when pm_runtime_get_sync() fails (and is chosen over pm_runtime_resume_and_get() for consistency reasons).
Fixes: 4b18299b3365 ("drm/msm/adreno: Defer enabling runpm until hw_init()") Cc: stable@vger.kernel.org # 6.0 Signed-off-by: Johan Hovold johan+linaro@kernel.org Patchwork: https://patchwork.freedesktop.org/patch/524971/ Link: https://lore.kernel.org/r/20230303164807.13124-3-johan+linaro@kernel.org Signed-off-by: Rob Clark robdclark@chromium.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/msm/adreno/adreno_device.c | 16 ++++++++++++---- 1 file changed, 12 insertions(+), 4 deletions(-)
--- a/drivers/gpu/drm/msm/adreno/adreno_device.c +++ b/drivers/gpu/drm/msm/adreno/adreno_device.c @@ -440,20 +440,21 @@ struct msm_gpu *adreno_load_gpu(struct d
ret = pm_runtime_get_sync(&pdev->dev); if (ret < 0) { - pm_runtime_put_sync(&pdev->dev); + pm_runtime_put_noidle(&pdev->dev); DRM_DEV_ERROR(dev->dev, "Couldn't power up the GPU: %d\n", ret); - return NULL; + goto err_disable_rpm; }
mutex_lock(&gpu->lock); ret = msm_gpu_hw_init(gpu); mutex_unlock(&gpu->lock); - pm_runtime_put_autosuspend(&pdev->dev); if (ret) { DRM_DEV_ERROR(dev->dev, "gpu hw init failed: %d\n", ret); - return NULL; + goto err_put_rpm; }
+ pm_runtime_put_autosuspend(&pdev->dev); + #ifdef CONFIG_DEBUG_FS if (gpu->funcs->debugfs_init) { gpu->funcs->debugfs_init(gpu, dev->primary); @@ -462,6 +463,13 @@ struct msm_gpu *adreno_load_gpu(struct d #endif
return gpu; + +err_put_rpm: + pm_runtime_put_sync(&pdev->dev); +err_disable_rpm: + pm_runtime_disable(&pdev->dev); + + return NULL; }
static int find_chipid(struct device *dev, struct adreno_rev *rev)
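A condensed sketch of the balancing rule described above (generic, not the full adreno_load_gpu() code):

	ret = pm_runtime_get_sync(dev);
	if (ret < 0) {
		/* get_sync bumps the usage count even on failure ... */
		pm_runtime_put_noidle(dev);	/* ... so only drop the count */
		goto err_disable_rpm;
	}

	/* ... hardware init; on failure drop with pm_runtime_put_sync() ... */

	pm_runtime_put_autosuspend(dev);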
From: Francesco Dolcini francesco.dolcini@toradex.com
commit f435b7ef3b360d689df2ffa8326352cd07940d92 upstream.
The LT8912 DSI port supports only non-burst mode video operation with sync events and a continuous clock on the clock lane, so correct the DSI mode flags accordingly by removing the MIPI_DSI_MODE_VIDEO_BURST flag.
Cc: stable@vger.kernel.org Fixes: 30e2ae943c26 ("drm/bridge: Introduce LT8912B DSI to HDMI bridge") Signed-off-by: Francesco Dolcini francesco.dolcini@toradex.com Reviewed-by: Robert Foss rfoss@kernel.org Signed-off-by: Robert Foss rfoss@kernel.org Link: https://patchwork.freedesktop.org/patch/msgid/20230330093131.424828-1-france... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/bridge/lontium-lt8912b.c | 1 - 1 file changed, 1 deletion(-)
--- a/drivers/gpu/drm/bridge/lontium-lt8912b.c +++ b/drivers/gpu/drm/bridge/lontium-lt8912b.c @@ -504,7 +504,6 @@ static int lt8912_attach_dsi(struct lt89 dsi->format = MIPI_DSI_FMT_RGB888;
dsi->mode_flags = MIPI_DSI_MODE_VIDEO | - MIPI_DSI_MODE_VIDEO_BURST | MIPI_DSI_MODE_LPM | MIPI_DSI_MODE_NO_EOT_PACKET;
From: Chaitanya Kumar Borah chaitanya.kumar.borah@intel.com
commit 2efc8e1001acfdc143cf2d25a08a4974c322e2a8 upstream.
Replace _PLANE_INPUT_CSC_RY_GY_2_* with _PLANE_CSC_RY_GY_2_* for Plane CSC
Fixes: 6eba56f64d5d ("drm/i915/pxp: black pixels on pxp disabled")
Signed-off-by: Chaitanya Kumar Borah chaitanya.kumar.borah@intel.com Reviewed-by: Uma Shankar uma.shankar@intel.com Signed-off-by: Animesh Manna animesh.manna@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20230330150104.2923519-1-chait... (cherry picked from commit e39c76b2160bbd005587f978d29603ef790aefcd) Signed-off-by: Joonas Lahtinen joonas.lahtinen@linux.intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/i915/i915_reg.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/gpu/drm/i915/i915_reg.h +++ b/drivers/gpu/drm/i915/i915_reg.h @@ -7612,8 +7612,8 @@ enum skl_power_gate {
#define _PLANE_CSC_RY_GY_1(pipe) _PIPE(pipe, _PLANE_CSC_RY_GY_1_A, \ _PLANE_CSC_RY_GY_1_B) -#define _PLANE_CSC_RY_GY_2(pipe) _PIPE(pipe, _PLANE_INPUT_CSC_RY_GY_2_A, \ - _PLANE_INPUT_CSC_RY_GY_2_B) +#define _PLANE_CSC_RY_GY_2(pipe) _PIPE(pipe, _PLANE_CSC_RY_GY_2_A, \ + _PLANE_CSC_RY_GY_2_B) #define PLANE_CSC_COEFF(pipe, plane, index) _MMIO_PLANE(plane, \ _PLANE_CSC_RY_GY_1(pipe) + (index) * 4, \ _PLANE_CSC_RY_GY_2(pipe) + (index) * 4)
From: Johan Hovold johan+linaro@kernel.org
commit a465353b9250802f87b97123e33a17f51277f0b1 upstream.
In case of early initialisation errors and on platforms that do not use the DPU controller, the deinitialisation code can be called with the kms pointer set to NULL.
Fixes: 98659487b845 ("drm/msm: add support to take dpu snapshot") Cc: stable@vger.kernel.org # 5.14 Cc: Abhinav Kumar quic_abhinavk@quicinc.com Signed-off-by: Johan Hovold johan+linaro@kernel.org Reviewed-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org Patchwork: https://patchwork.freedesktop.org/patch/525099/ Link: https://lore.kernel.org/r/20230306100722.28485-4-johan+linaro@kernel.org Signed-off-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/msm/msm_drv.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -241,7 +241,8 @@ static int msm_drm_uninit(struct device msm_fbdev_free(ddev); #endif
- msm_disp_snapshot_destroy(ddev); + if (kms) + msm_disp_snapshot_destroy(ddev);
drm_mode_config_cleanup(ddev);
From: Johan Hovold johan+linaro@kernel.org
commit cd459c005de3e2b855a8cc7768e633ce9d018e9f upstream.
In case of early initialisation errors and on platforms that do not use the DPU controller, the deinitialisation code can be called with the kms pointer set to NULL.
Fixes: f026e431cf86 ("drm/msm: Convert to Linux IRQ interfaces") Cc: stable@vger.kernel.org # 5.14 Cc: Thomas Zimmermann tzimmermann@suse.de Signed-off-by: Johan Hovold johan+linaro@kernel.org Reviewed-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org Patchwork: https://patchwork.freedesktop.org/patch/525104/ Link: https://lore.kernel.org/r/20230306100722.28485-5-johan+linaro@kernel.org Signed-off-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/msm/msm_drv.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-)
--- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -250,9 +250,11 @@ static int msm_drm_uninit(struct device drm_bridge_remove(priv->bridges[i]); priv->num_bridges = 0;
- pm_runtime_get_sync(dev); - msm_irq_uninstall(ddev); - pm_runtime_put_sync(dev); + if (kms) { + pm_runtime_get_sync(dev); + msm_irq_uninstall(ddev); + pm_runtime_put_sync(dev); + }
if (kms && kms->funcs) kms->funcs->destroy(kms);
From: Johan Hovold johan+linaro@kernel.org
commit 214b09db61978497df24efcb3959616814bca46b upstream.
Make sure to free the DRM device also in case of early errors during bind().
Fixes: 2027e5b3413d ("drm/msm: Initialize MDSS irq domain at probe time") Cc: stable@vger.kernel.org # 5.17 Cc: Dmitry Baryshkov dmitry.baryshkov@linaro.org Signed-off-by: Johan Hovold johan+linaro@kernel.org Reviewed-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org Patchwork: https://patchwork.freedesktop.org/patch/525097/ Link: https://lore.kernel.org/r/20230306100722.28485-6-johan+linaro@kernel.org Signed-off-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/msm/msm_drv.c | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-)
--- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -443,12 +443,12 @@ static int msm_drm_init(struct device *d
ret = msm_init_vram(ddev); if (ret) - return ret; + goto err_put_dev;
/* Bind all our sub-components: */ ret = component_bind_all(dev, ddev); if (ret) - return ret; + goto err_put_dev;
dma_set_max_seg_size(dev, UINT_MAX);
@@ -543,6 +543,12 @@ static int msm_drm_init(struct device *d
err_msm_uninit: msm_drm_uninit(dev); + + return ret; + +err_put_dev: + drm_dev_put(ddev); + return ret; }
From: Johan Hovold johan+linaro@kernel.org
commit 60d476af96015891c7959f30838ae7a9749932bf upstream.
Make sure to release the VRAM buffer also in case a subcomponent fails to bind.
Fixes: d863f0c7b536 ("drm/msm: Call msm_init_vram before binding the gpu") Cc: stable@vger.kernel.org # 5.11 Cc: Craig Tatlor ctatlor97@gmail.com Signed-off-by: Johan Hovold johan+linaro@kernel.org Reviewed-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org Patchwork: https://patchwork.freedesktop.org/patch/525094/ Link: https://lore.kernel.org/r/20230306100722.28485-7-johan+linaro@kernel.org Signed-off-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/msm/msm_drv.c | 26 +++++++++++++++++++------- 1 file changed, 19 insertions(+), 7 deletions(-)
--- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -50,6 +50,8 @@ #define MSM_VERSION_MINOR 9 #define MSM_VERSION_PATCHLEVEL 0
+static void msm_deinit_vram(struct drm_device *ddev); + static const struct drm_mode_config_funcs mode_config_funcs = { .fb_create = msm_framebuffer_create, .output_poll_changed = drm_fb_helper_output_poll_changed, @@ -259,12 +261,7 @@ static int msm_drm_uninit(struct device if (kms && kms->funcs) kms->funcs->destroy(kms);
- if (priv->vram.paddr) { - unsigned long attrs = DMA_ATTR_NO_KERNEL_MAPPING; - drm_mm_takedown(&priv->vram.mm); - dma_free_attrs(dev, priv->vram.size, NULL, - priv->vram.paddr, attrs); - } + msm_deinit_vram(ddev);
component_unbind_all(dev, ddev);
@@ -402,6 +399,19 @@ static int msm_init_vram(struct drm_devi return ret; }
+static void msm_deinit_vram(struct drm_device *ddev) +{ + struct msm_drm_private *priv = ddev->dev_private; + unsigned long attrs = DMA_ATTR_NO_KERNEL_MAPPING; + + if (!priv->vram.paddr) + return; + + drm_mm_takedown(&priv->vram.mm); + dma_free_attrs(ddev->dev, priv->vram.size, NULL, priv->vram.paddr, + attrs); +} + static int msm_drm_init(struct device *dev, const struct drm_driver *drv) { struct msm_drm_private *priv = dev_get_drvdata(dev); @@ -448,7 +458,7 @@ static int msm_drm_init(struct device *d /* Bind all our sub-components: */ ret = component_bind_all(dev, ddev); if (ret) - goto err_put_dev; + goto err_deinit_vram;
dma_set_max_seg_size(dev, UINT_MAX);
@@ -546,6 +556,8 @@ err_msm_uninit:
return ret;
+err_deinit_vram: + msm_deinit_vram(ddev); err_put_dev: drm_dev_put(ddev);
From: Johan Hovold johan+linaro@kernel.org
commit ca090c837b430752038b24e56dd182010d77f6f6 upstream.
Add the missing sanity check to handle workqueue allocation failures.
Fixes: c8afe684c95c ("drm/msm: basic KMS driver for snapdragon") Cc: stable@vger.kernel.org # 3.12 Cc: Rob Clark robdclark@gmail.com Signed-off-by: Johan Hovold johan+linaro@kernel.org Reviewed-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org Patchwork: https://patchwork.freedesktop.org/patch/525102/ Link: https://lore.kernel.org/r/20230306100722.28485-8-johan+linaro@kernel.org Signed-off-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/msm/msm_drv.c | 4 ++++ 1 file changed, 4 insertions(+)
--- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -431,6 +431,10 @@ static int msm_drm_init(struct device *d priv->dev = ddev;
priv->wq = alloc_ordered_workqueue("msm", 0); + if (!priv->wq) { + ret = -ENOMEM; + goto err_put_dev; + }
INIT_LIST_HEAD(&priv->objects); mutex_init(&priv->obj_lock);
From: Johan Hovold johan+linaro@kernel.org
commit a75b49db6529b2af049eafd938fae888451c3685 upstream.
Make sure to destroy the workqueue also in case of early errors during bind (e.g. a subcomponent failing to bind).
Since commit c3b790ea07a1 ("drm: Manage drm_mode_config_init with drmm_") the mode config will be freed when the drm device is released also when using the legacy interface, but add an explicit cleanup for consistency and to facilitate backporting.
Fixes: 060530f1ea67 ("drm/msm: use componentised device support") Cc: stable@vger.kernel.org # 3.15 Cc: Rob Clark robdclark@gmail.com Signed-off-by: Johan Hovold johan+linaro@kernel.org Reviewed-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org Patchwork: https://patchwork.freedesktop.org/patch/525093/ Link: https://lore.kernel.org/r/20230306100722.28485-9-johan+linaro@kernel.org Signed-off-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/msm/msm_drv.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
--- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -457,7 +457,7 @@ static int msm_drm_init(struct device *d
ret = msm_init_vram(ddev); if (ret) - goto err_put_dev; + goto err_cleanup_mode_config;
/* Bind all our sub-components: */ ret = component_bind_all(dev, ddev); @@ -562,6 +562,9 @@ err_msm_uninit:
err_deinit_vram: msm_deinit_vram(ddev); +err_cleanup_mode_config: + drm_mode_config_cleanup(ddev); + destroy_workqueue(priv->wq); err_put_dev: drm_dev_put(ddev);
From: Hans de Goede hdegoede@redhat.com
commit c8c2969bfcba5fcba3a5b078315c1b586d927d9f upstream.
The intel_dsi_msleep() helper skips sleeping if the MIPI-sequences have a version of 3 or newer and the panel is in vid-mode.
This is based on the big comment around line 730 which starts with "Panel enable/disable sequences from the VBT spec.", where the "v3 video mode seq" column does not have any wait t# entries.
Checking the Windows driver shows that it does always honor the VBT delays independent of the version of the VBT sequences.
Commit 6fdb335f1c9c ("drm/i915/dsi: Use unconditional msleep for the panel_on_delay when there is no reset-deassert MIPI-sequence") switched to a direct msleep() instead of intel_dsi_msleep() when there is no MIPI_SEQ_DEASSERT_RESET sequence, to fix the panel on an Acer Aspire Switch 10 E SW3-016 not turning on.
And now testing on a Nextbook Ares 8A shows that panel_on_delay must always be honored, otherwise the panel will not turn on.
Instead of always using a regular msleep() only for panel_on_delay, do as Windows does and use a regular msleep() everywhere intel_dsi_msleep() is used, and drop the intel_dsi_msleep() helper.
Changes in v2: - Replace all intel_dsi_msleep() calls instead of just the intel_dsi_msleep(panel_on_delay) call
Cc: stable@vger.kernel.org Fixes: 6fdb335f1c9c ("drm/i915/dsi: Use unconditional msleep for the panel_on_delay when there is no reset-deassert MIPI-sequence") Signed-off-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Ville Syrjälä ville.syrjala@linux.intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20230425194441.68086-1-hdegoed... (cherry picked from commit fa83c12132f71302f7d4b02758dc0d46048d3f5f) Signed-off-by: Joonas Lahtinen joonas.lahtinen@linux.intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/i915/display/icl_dsi.c | 2 +- drivers/gpu/drm/i915/display/intel_dsi_vbt.c | 11 ----------- drivers/gpu/drm/i915/display/intel_dsi_vbt.h | 1 - drivers/gpu/drm/i915/display/vlv_dsi.c | 22 +++++----------------- 4 files changed, 6 insertions(+), 30 deletions(-)
--- a/drivers/gpu/drm/i915/display/icl_dsi.c +++ b/drivers/gpu/drm/i915/display/icl_dsi.c @@ -1211,7 +1211,7 @@ static void gen11_dsi_powerup_panel(stru
/* panel power on related mipi dsi vbt sequences */ intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_POWER_ON); - intel_dsi_msleep(intel_dsi, intel_dsi->panel_on_delay); + msleep(intel_dsi->panel_on_delay); intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_DEASSERT_RESET); intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_INIT_OTP); intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_DISPLAY_ON); --- a/drivers/gpu/drm/i915/display/intel_dsi_vbt.c +++ b/drivers/gpu/drm/i915/display/intel_dsi_vbt.c @@ -762,17 +762,6 @@ void intel_dsi_vbt_exec_sequence(struct gpiod_set_value_cansleep(intel_dsi->gpio_backlight, 0); }
-void intel_dsi_msleep(struct intel_dsi *intel_dsi, int msec) -{ - struct intel_connector *connector = intel_dsi->attached_connector; - - /* For v3 VBTs in vid-mode the delays are part of the VBT sequences */ - if (is_vid_mode(intel_dsi) && connector->panel.vbt.dsi.seq_version >= 3) - return; - - msleep(msec); -} - void intel_dsi_log_params(struct intel_dsi *intel_dsi) { struct drm_i915_private *i915 = to_i915(intel_dsi->base.base.dev); --- a/drivers/gpu/drm/i915/display/intel_dsi_vbt.h +++ b/drivers/gpu/drm/i915/display/intel_dsi_vbt.h @@ -16,7 +16,6 @@ void intel_dsi_vbt_gpio_init(struct inte void intel_dsi_vbt_gpio_cleanup(struct intel_dsi *intel_dsi); void intel_dsi_vbt_exec_sequence(struct intel_dsi *intel_dsi, enum mipi_seq seq_id); -void intel_dsi_msleep(struct intel_dsi *intel_dsi, int msec); void intel_dsi_log_params(struct intel_dsi *intel_dsi);
#endif /* __INTEL_DSI_VBT_H__ */ --- a/drivers/gpu/drm/i915/display/vlv_dsi.c +++ b/drivers/gpu/drm/i915/display/vlv_dsi.c @@ -783,7 +783,6 @@ static void intel_dsi_pre_enable(struct { struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder); struct intel_crtc *crtc = to_intel_crtc(pipe_config->uapi.crtc); - struct intel_connector *connector = to_intel_connector(conn_state->connector); struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); enum pipe pipe = crtc->pipe; enum port port; @@ -831,21 +830,10 @@ static void intel_dsi_pre_enable(struct if (!IS_GEMINILAKE(dev_priv)) intel_dsi_prepare(encoder, pipe_config);
+ /* Give the panel time to power-on and then deassert its reset */ intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_POWER_ON); - - /* - * Give the panel time to power-on and then deassert its reset. - * Depending on the VBT MIPI sequences version the deassert-seq - * may contain the necessary delay, intel_dsi_msleep() will skip - * the delay in that case. If there is no deassert-seq, then an - * unconditional msleep is used to give the panel time to power-on. - */ - if (connector->panel.vbt.dsi.sequence[MIPI_SEQ_DEASSERT_RESET]) { - intel_dsi_msleep(intel_dsi, intel_dsi->panel_on_delay); - intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_DEASSERT_RESET); - } else { - msleep(intel_dsi->panel_on_delay); - } + msleep(intel_dsi->panel_on_delay); + intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_DEASSERT_RESET);
if (IS_GEMINILAKE(dev_priv)) { glk_cold_boot = glk_dsi_enable_io(encoder); @@ -879,7 +867,7 @@ static void intel_dsi_pre_enable(struct msleep(20); /* XXX */ for_each_dsi_port(port, intel_dsi->ports) dpi_send_cmd(intel_dsi, TURN_ON, false, port); - intel_dsi_msleep(intel_dsi, 100); + msleep(100);
intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_DISPLAY_ON);
@@ -1007,7 +995,7 @@ static void intel_dsi_post_disable(struc /* Assert reset */ intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_ASSERT_RESET);
- intel_dsi_msleep(intel_dsi, intel_dsi->panel_off_delay); + msleep(intel_dsi->panel_off_delay); intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_POWER_OFF);
intel_dsi->panel_power_off_time = ktime_get_boottime();
From: Jaegeuk Kim jaegeuk@kernel.org
commit 043d2d00b44310f84c0593c63e51fae88c829cdd upstream.
Let's reduce the complexity of mixed use of rb_tree in victim_entry from extent_cache and discard_cmd.
This should fix arm32 memory alignment issue caused by shared rb_entry.
  [struct victim_entry]              [struct rb_entry]
  [0] struct rb_node rb_node;        [0] struct rb_node rb_node;
                                     union {
                                             struct {
                                                     unsigned int ofs;
                                                     unsigned int len;
                                             };
  [16] unsigned long long mtime;     [12]    unsigned long long key;
                                     } __packed;
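As an aside, here is a minimal user-space sketch of the alignment problem (not kernel code; rb_node_stub and the struct names are illustrative stand-ins): the packed union in the shared rb_entry leaves the 64-bit key at a 4-byte offset on a 32-bit ABI, while the dedicated victim_entry layout gives mtime its natural 8-byte alignment.

#include <stdio.h>
#include <stddef.h>

/* stand-in for struct rb_node: three pointers (12 bytes on arm32) */
struct rb_node_stub { void *a, *b, *c; };

/* old shared layout: the packed union places the 64-bit key unaligned */
struct old_rb_entry {
	struct rb_node_stub rb_node;
	union {
		struct {
			unsigned int ofs;
			unsigned int len;
		};
		unsigned long long key;
	} __attribute__((packed));
};

/* new dedicated layout: mtime gets its natural 8-byte alignment */
struct new_victim_entry {
	struct rb_node_stub rb_node;
	unsigned long long mtime;
	unsigned int segno;
};

int main(void)
{
	/* on a 32-bit ARM EABI build this should print 12 vs 16, matching
	 * the offsets shown above; on 64-bit hosts both land at 24 */
	printf("old key offset:   %zu\n", offsetof(struct old_rb_entry, key));
	printf("new mtime offset: %zu\n", offsetof(struct new_victim_entry, mtime));
	return 0;
}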
Cc: stable@vger.kernel.org Fixes: 093749e296e2 ("f2fs: support age threshold based garbage collection") Reviewed-by: Chao Yu chao@kernel.org Signed-off-by: Jaegeuk Kim jaegeuk@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/f2fs/extent_cache.c | 36 ------------ fs/f2fs/f2fs.h | 15 +---- fs/f2fs/gc.c | 139 +++++++++++++++++++++++++++++-------------------- fs/f2fs/gc.h | 14 ---- fs/f2fs/segment.c | 4 - 5 files changed, 93 insertions(+), 115 deletions(-)
--- a/fs/f2fs/extent_cache.c +++ b/fs/f2fs/extent_cache.c @@ -179,29 +179,6 @@ struct rb_entry *f2fs_lookup_rb_tree(str return re; }
-struct rb_node **f2fs_lookup_rb_tree_ext(struct f2fs_sb_info *sbi, - struct rb_root_cached *root, - struct rb_node **parent, - unsigned long long key, bool *leftmost) -{ - struct rb_node **p = &root->rb_root.rb_node; - struct rb_entry *re; - - while (*p) { - *parent = *p; - re = rb_entry(*parent, struct rb_entry, rb_node); - - if (key < re->key) { - p = &(*p)->rb_left; - } else { - p = &(*p)->rb_right; - *leftmost = false; - } - } - - return p; -} - struct rb_node **f2fs_lookup_rb_tree_for_insert(struct f2fs_sb_info *sbi, struct rb_root_cached *root, struct rb_node **parent, @@ -310,7 +287,7 @@ lookup_neighbors: }
bool f2fs_check_rb_tree_consistence(struct f2fs_sb_info *sbi, - struct rb_root_cached *root, bool check_key) + struct rb_root_cached *root) { #ifdef CONFIG_F2FS_CHECK_FS struct rb_node *cur = rb_first_cached(root), *next; @@ -327,23 +304,12 @@ bool f2fs_check_rb_tree_consistence(stru cur_re = rb_entry(cur, struct rb_entry, rb_node); next_re = rb_entry(next, struct rb_entry, rb_node);
- if (check_key) { - if (cur_re->key > next_re->key) { - f2fs_info(sbi, "inconsistent rbtree, " - "cur(%llu) next(%llu)", - cur_re->key, next_re->key); - return false; - } - goto next; - } - if (cur_re->ofs + cur_re->len > next_re->ofs) { f2fs_info(sbi, "inconsistent rbtree, cur(%u, %u) next(%u, %u)", cur_re->ofs, cur_re->len, next_re->ofs, next_re->len); return false; } -next: cur = next; } #endif --- a/fs/f2fs/f2fs.h +++ b/fs/f2fs/f2fs.h @@ -629,13 +629,8 @@ enum extent_type {
struct rb_entry { struct rb_node rb_node; /* rb node located in rb-tree */ - union { - struct { - unsigned int ofs; /* start offset of the entry */ - unsigned int len; /* length of the entry */ - }; - unsigned long long key; /* 64-bits key */ - } __packed; + unsigned int ofs; /* start offset of the entry */ + unsigned int len; /* length of the entry */ };
struct extent_info { @@ -4164,10 +4159,6 @@ void f2fs_leave_shrinker(struct f2fs_sb_ */ struct rb_entry *f2fs_lookup_rb_tree(struct rb_root_cached *root, struct rb_entry *cached_re, unsigned int ofs); -struct rb_node **f2fs_lookup_rb_tree_ext(struct f2fs_sb_info *sbi, - struct rb_root_cached *root, - struct rb_node **parent, - unsigned long long key, bool *left_most); struct rb_node **f2fs_lookup_rb_tree_for_insert(struct f2fs_sb_info *sbi, struct rb_root_cached *root, struct rb_node **parent, @@ -4178,7 +4169,7 @@ struct rb_entry *f2fs_lookup_rb_tree_ret struct rb_node ***insert_p, struct rb_node **insert_parent, bool force, bool *leftmost); bool f2fs_check_rb_tree_consistence(struct f2fs_sb_info *sbi, - struct rb_root_cached *root, bool check_key); + struct rb_root_cached *root); void f2fs_init_extent_tree(struct inode *inode); void f2fs_drop_extent_tree(struct inode *inode); void f2fs_destroy_extent_node(struct inode *inode); --- a/fs/f2fs/gc.c +++ b/fs/f2fs/gc.c @@ -392,40 +392,95 @@ static unsigned int count_bits(const uns return sum; }
-static struct victim_entry *attach_victim_entry(struct f2fs_sb_info *sbi, - unsigned long long mtime, unsigned int segno, - struct rb_node *parent, struct rb_node **p, - bool left_most) +static bool f2fs_check_victim_tree(struct f2fs_sb_info *sbi, + struct rb_root_cached *root) +{ +#ifdef CONFIG_F2FS_CHECK_FS + struct rb_node *cur = rb_first_cached(root), *next; + struct victim_entry *cur_ve, *next_ve; + + while (cur) { + next = rb_next(cur); + if (!next) + return true; + + cur_ve = rb_entry(cur, struct victim_entry, rb_node); + next_ve = rb_entry(next, struct victim_entry, rb_node); + + if (cur_ve->mtime > next_ve->mtime) { + f2fs_info(sbi, "broken victim_rbtree, " + "cur_mtime(%llu) next_mtime(%llu)", + cur_ve->mtime, next_ve->mtime); + return false; + } + cur = next; + } +#endif + return true; +} + +static struct victim_entry *__lookup_victim_entry(struct f2fs_sb_info *sbi, + unsigned long long mtime) +{ + struct atgc_management *am = &sbi->am; + struct rb_node *node = am->root.rb_root.rb_node; + struct victim_entry *ve = NULL; + + while (node) { + ve = rb_entry(node, struct victim_entry, rb_node); + + if (mtime < ve->mtime) + node = node->rb_left; + else + node = node->rb_right; + } + return ve; +} + +static struct victim_entry *__create_victim_entry(struct f2fs_sb_info *sbi, + unsigned long long mtime, unsigned int segno) { struct atgc_management *am = &sbi->am; struct victim_entry *ve;
- ve = f2fs_kmem_cache_alloc(victim_entry_slab, - GFP_NOFS, true, NULL); + ve = f2fs_kmem_cache_alloc(victim_entry_slab, GFP_NOFS, true, NULL);
ve->mtime = mtime; ve->segno = segno;
- rb_link_node(&ve->rb_node, parent, p); - rb_insert_color_cached(&ve->rb_node, &am->root, left_most); - list_add_tail(&ve->list, &am->victim_list); - am->victim_count++;
return ve; }
-static void insert_victim_entry(struct f2fs_sb_info *sbi, +static void __insert_victim_entry(struct f2fs_sb_info *sbi, unsigned long long mtime, unsigned int segno) { struct atgc_management *am = &sbi->am; - struct rb_node **p; + struct rb_root_cached *root = &am->root; + struct rb_node **p = &root->rb_root.rb_node; struct rb_node *parent = NULL; + struct victim_entry *ve; bool left_most = true;
- p = f2fs_lookup_rb_tree_ext(sbi, &am->root, &parent, mtime, &left_most); - attach_victim_entry(sbi, mtime, segno, parent, p, left_most); + /* look up rb tree to find parent node */ + while (*p) { + parent = *p; + ve = rb_entry(parent, struct victim_entry, rb_node); + + if (mtime < ve->mtime) { + p = &(*p)->rb_left; + } else { + p = &(*p)->rb_right; + left_most = false; + } + } + + ve = __create_victim_entry(sbi, mtime, segno); + + rb_link_node(&ve->rb_node, parent, p); + rb_insert_color_cached(&ve->rb_node, root, left_most); }
static void add_victim_entry(struct f2fs_sb_info *sbi, @@ -461,19 +516,7 @@ static void add_victim_entry(struct f2fs if (sit_i->dirty_max_mtime - mtime < p->age_threshold) return;
- insert_victim_entry(sbi, mtime, segno); -} - -static struct rb_node *lookup_central_victim(struct f2fs_sb_info *sbi, - struct victim_sel_policy *p) -{ - struct atgc_management *am = &sbi->am; - struct rb_node *parent = NULL; - bool left_most; - - f2fs_lookup_rb_tree_ext(sbi, &am->root, &parent, p->age, &left_most); - - return parent; + __insert_victim_entry(sbi, mtime, segno); }
static void atgc_lookup_victim(struct f2fs_sb_info *sbi, @@ -483,7 +526,6 @@ static void atgc_lookup_victim(struct f2 struct atgc_management *am = &sbi->am; struct rb_root_cached *root = &am->root; struct rb_node *node; - struct rb_entry *re; struct victim_entry *ve; unsigned long long total_time; unsigned long long age, u, accu; @@ -510,12 +552,10 @@ static void atgc_lookup_victim(struct f2
node = rb_first_cached(root); next: - re = rb_entry_safe(node, struct rb_entry, rb_node); - if (!re) + ve = rb_entry_safe(node, struct victim_entry, rb_node); + if (!ve) return;
- ve = (struct victim_entry *)re; - if (ve->mtime >= max_mtime || ve->mtime < min_mtime) goto skip;
@@ -557,8 +597,6 @@ static void atssr_lookup_victim(struct f { struct sit_info *sit_i = SIT_I(sbi); struct atgc_management *am = &sbi->am; - struct rb_node *node; - struct rb_entry *re; struct victim_entry *ve; unsigned long long age; unsigned long long max_mtime = sit_i->dirty_max_mtime; @@ -568,25 +606,22 @@ static void atssr_lookup_victim(struct f unsigned int dirty_threshold = max(am->max_candidate_count, am->candidate_ratio * am->victim_count / 100); - unsigned int cost; - unsigned int iter = 0; + unsigned int cost, iter; int stage = 0;
if (max_mtime < min_mtime) return; max_mtime += 1; next_stage: - node = lookup_central_victim(sbi, p); + iter = 0; + ve = __lookup_victim_entry(sbi, p->age); next_node: - re = rb_entry_safe(node, struct rb_entry, rb_node); - if (!re) { - if (stage == 0) - goto skip_stage; + if (!ve) { + if (stage++ == 0) + goto next_stage; return; }
- ve = (struct victim_entry *)re; - if (ve->mtime >= max_mtime || ve->mtime < min_mtime) goto skip_node;
@@ -612,24 +647,20 @@ next_node: } skip_node: if (iter < dirty_threshold) { - if (stage == 0) - node = rb_prev(node); - else if (stage == 1) - node = rb_next(node); + ve = rb_entry(stage == 0 ? rb_prev(&ve->rb_node) : + rb_next(&ve->rb_node), + struct victim_entry, rb_node); goto next_node; } -skip_stage: - if (stage < 1) { - stage++; - iter = 0; + + if (stage++ == 0) goto next_stage; - } } + static void lookup_victim_by_age(struct f2fs_sb_info *sbi, struct victim_sel_policy *p) { - f2fs_bug_on(sbi, !f2fs_check_rb_tree_consistence(sbi, - &sbi->am.root, true)); + f2fs_bug_on(sbi, !f2fs_check_victim_tree(sbi, &sbi->am.root));
if (p->gc_mode == GC_AT) atgc_lookup_victim(sbi, p); --- a/fs/f2fs/gc.h +++ b/fs/f2fs/gc.h @@ -55,20 +55,10 @@ struct gc_inode_list { struct radix_tree_root iroot; };
-struct victim_info { - unsigned long long mtime; /* mtime of section */ - unsigned int segno; /* section No. */ -}; - struct victim_entry { struct rb_node rb_node; /* rb node located in rb-tree */ - union { - struct { - unsigned long long mtime; /* mtime of section */ - unsigned int segno; /* segment No. */ - }; - struct victim_info vi; /* victim info */ - }; + unsigned long long mtime; /* mtime of section */ + unsigned int segno; /* segment No. */ struct list_head list; };
--- a/fs/f2fs/segment.c +++ b/fs/f2fs/segment.c @@ -1487,7 +1487,7 @@ retry: goto next; if (unlikely(dcc->rbtree_check)) f2fs_bug_on(sbi, !f2fs_check_rb_tree_consistence(sbi, - &dcc->root, false)); + &dcc->root)); blk_start_plug(&plug); list_for_each_entry_safe(dc, tmp, pend_list, list) { f2fs_bug_on(sbi, dc->state != D_PREP); @@ -3003,7 +3003,7 @@ next: mutex_lock(&dcc->cmd_lock); if (unlikely(dcc->rbtree_check)) f2fs_bug_on(sbi, !f2fs_check_rb_tree_consistence(sbi, - &dcc->root, false)); + &dcc->root));
dc = (struct discard_cmd *)f2fs_lookup_rb_tree_ret(&dcc->root, NULL, start,
From: Jaegeuk Kim jaegeuk@kernel.org
commit da6ea0b050fa720302b56fbb59307e7c7531a342 upstream.
We got a kernel panic if old_addr is NULL.
https://bugzilla.kernel.org/show_bug.cgi?id=217266
BUG: kernel NULL pointer dereference, address: 0000000000000000 Call Trace: <TASK> f2fs_commit_atomic_write+0x619/0x990 [f2fs a1b985b80f5babd6f3ea778384908880812bfa43] __f2fs_ioctl+0xd8e/0x4080 [f2fs a1b985b80f5babd6f3ea778384908880812bfa43] ? vfs_write+0x2ae/0x3f0 ? vfs_write+0x2ae/0x3f0 __x64_sys_ioctl+0x91/0xd0 do_syscall_64+0x5c/0x90 entry_SYSCALL_64_after_hwframe+0x72/0xdc RIP: 0033:0x7f69095fe53f
Fixes: 2f3a9ae990a7 ("f2fs: introduce trace_f2fs_replace_atomic_write_block") Cc: stable@vger.kernel.org Reviewed-by: Chao Yu chao@kernel.org Signed-off-by: Jaegeuk Kim jaegeuk@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/f2fs/segment.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/fs/f2fs/segment.c +++ b/fs/f2fs/segment.c @@ -263,7 +263,7 @@ retry: f2fs_put_dnode(&dn);
trace_f2fs_replace_atomic_write_block(inode, F2FS_I(inode)->cow_inode, - index, *old_addr, new_addr, recover); + index, old_addr ? *old_addr : 0, new_addr, recover); return 0; }
From: Jaegeuk Kim jaegeuk@kernel.org
commit d94772154e524b329a168678836745d2773a6e02 upstream.
F2FS has the same issue as ext4_rename, causing a crash revealed by xfstests/generic/707.
See also commit 0813299c586b ("ext4: Fix possible corruption when moving a directory")
CC: stable@vger.kernel.org Signed-off-by: Jaegeuk Kim jaegeuk@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/f2fs/namei.c | 16 +++++++++++++++- 1 file changed, 15 insertions(+), 1 deletion(-)
--- a/fs/f2fs/namei.c +++ b/fs/f2fs/namei.c @@ -998,12 +998,20 @@ static int f2fs_rename(struct user_names goto out; }
+ /* + * Copied from ext4_rename: we need to protect against old.inode + * directory getting converted from inline directory format into + * a normal one. + */ + if (S_ISDIR(old_inode->i_mode)) + inode_lock_nested(old_inode, I_MUTEX_NONDIR2); + err = -ENOENT; old_entry = f2fs_find_entry(old_dir, &old_dentry->d_name, &old_page); if (!old_entry) { if (IS_ERR(old_page)) err = PTR_ERR(old_page); - goto out; + goto out_unlock_old; }
if (S_ISDIR(old_inode->i_mode)) { @@ -1111,6 +1119,9 @@ static int f2fs_rename(struct user_names
f2fs_unlock_op(sbi);
+ if (S_ISDIR(old_inode->i_mode)) + inode_unlock(old_inode); + if (IS_DIRSYNC(old_dir) || IS_DIRSYNC(new_dir)) f2fs_sync_fs(sbi->sb, 1);
@@ -1125,6 +1136,9 @@ out_dir: f2fs_put_page(old_dir_page, 0); out_old: f2fs_put_page(old_page, 0); +out_unlock_old: + if (S_ISDIR(old_inode->i_mode)) + inode_unlock(old_inode); out: iput(whiteout); return err;
From: Jianmin Lv lvjianmin@loongson.cn
commit 48ce2d722f7f108f27bedddf54bee3423a57ce57 upstream.
For the dual-bridges scenario, pch_pic_acpi_init() will be called via the following path:
cpuintc_acpi_init
  acpi_cascade_irqdomain_init (in cpuintc driver)
    acpi_table_parse_madt
      eiointc_parse_madt
        eiointc_acpi_init   /* called two times, corresponding to the two
                               eiointc entries in the MADT under the
                               dual-bridges scenario */
          acpi_cascade_irqdomain_init (in eiointc driver)
            acpi_table_parse_madt
              pch_pic_parse_madt
                pch_pic_acpi_init   /* called one or two times, depending on
                                       a valid parent IRQ domain handle being
                                       found, corresponding to the two pchpic
                                       entries in the MADT parsed during
                                       eiointc_acpi_init() under the
                                       dual-bridges scenario */
During the first eiointc_acpi_init() call, pch_pic_acpi_init() will be called just once, since only one valid parent IRQ domain handle will be found for the current eiointc IRQ domain.
During the second eiointc_acpi_init() call, pch_pic_acpi_init() will be called twice, since two valid parent IRQ domain handles will be found. So pch_pic_acpi_init() must have a reasonable way to prevent creating a second, identical pch_pic IRQ domain.
To avoid the bug mentioned above, the patch matches the GSI base information of already-created pch_pic IRQ domains to check whether the target domain has already been created.
Cc: stable@vger.kernel.org Signed-off-by: Jianmin Lv lvjianmin@loongson.cn Signed-off-by: Marc Zyngier maz@kernel.org Link: https://lore.kernel.org/r/20230407083453.6305-6-lvjianmin@loongson.cn Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/irqchip/irq-loongson-pch-pic.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/drivers/irqchip/irq-loongson-pch-pic.c +++ b/drivers/irqchip/irq-loongson-pch-pic.c @@ -403,6 +403,9 @@ int __init pch_pic_acpi_init(struct irq_ int ret, vec_base; struct fwnode_handle *domain_handle;
+ if (find_pch_pic(acpi_pchpic->gsi_base) >= 0) + return 0; + vec_base = acpi_pchpic->gsi_base - GSI_MIN_PCH_IRQ;
domain_handle = irq_domain_alloc_fwnode(&acpi_pchpic->address);
From: Jianmin Lv lvjianmin@loongson.cn
commit c84efbba46901b187994558ee0edb15f7076c9a7 upstream.
With suspend/resume support for loongson-pch-pic, the syscore_ops is registered twice on dual-bridge machines where there are two pch-pic IRQ domains. Repeated registration of the same syscore_ops breaks the syscore_ops_list, so the patch corrects it.
Fixes: 1ed008a2c331 ("irqchip/loongson-pch-pic: Add suspend/resume support") Cc: stable@vger.kernel.org Signed-off-by: Jianmin Lv lvjianmin@loongson.cn Signed-off-by: Marc Zyngier maz@kernel.org Link: https://lore.kernel.org/r/20230407083453.6305-5-lvjianmin@loongson.cn Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/irqchip/irq-loongson-pch-pic.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/irqchip/irq-loongson-pch-pic.c b/drivers/irqchip/irq-loongson-pch-pic.c index 437f1af693d0..64fa67d4ee7a 100644 --- a/drivers/irqchip/irq-loongson-pch-pic.c +++ b/drivers/irqchip/irq-loongson-pch-pic.c @@ -311,7 +311,8 @@ static int pch_pic_init(phys_addr_t addr, unsigned long size, int vec_base, pch_pic_handle[nr_pics] = domain_handle; pch_pic_priv[nr_pics++] = priv;
- register_syscore_ops(&pch_pic_syscore_ops); + if (nr_pics == 1) + register_syscore_ops(&pch_pic_syscore_ops);
return 0;
From: Jianmin Lv lvjianmin@loongson.cn
commit 112eaa8fec5ea75f1be003ec55760b09a86799f8 upstream.
In pch_pic_parse_madt(), acpi_get_vec_parent() will return a NULL parent pointer for the second pch-pic domain (the one related to the second bridge) during the first call to eiointc_acpi_init(), because its parent has not been initialized yet; it will be initialized during the second call to eiointc_acpi_init(). So it is reasonable to return zero in that case, so that a failure of acpi_table_parse_madt() is avoided; otherwise acpi_cascade_irqdomain_init() would return early and the initialization of the following pch_msi domain would be skipped.
Although it does not matter much that pch_msi_parse_madt() returns -EINVAL when no valid parent is found, it is also reasonable to return zero there.
Cc: stable@vger.kernel.org Signed-off-by: Jianmin Lv lvjianmin@loongson.cn Signed-off-by: Marc Zyngier maz@kernel.org Link: https://lore.kernel.org/r/20230407083453.6305-2-lvjianmin@loongson.cn Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/irqchip/irq-loongson-eiointc.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/irqchip/irq-loongson-eiointc.c +++ b/drivers/irqchip/irq-loongson-eiointc.c @@ -343,7 +343,7 @@ static int __init pch_pic_parse_madt(uni if (parent) return pch_pic_acpi_init(parent, pchpic_entry);
- return -EINVAL; + return 0; }
static int __init pch_msi_parse_madt(union acpi_subtable_headers *header, @@ -355,7 +355,7 @@ static int __init pch_msi_parse_madt(uni if (parent) return pch_msi_acpi_init(parent, pchmsi_entry);
- return -EINVAL; + return 0; }
static int __init acpi_cascade_irqdomain_init(void)
From: Jianmin Lv lvjianmin@loongson.cn
commit 64cc451e45e146b2140211b4f45f278b93b24ac0 upstream.
In eiointc_acpi_init(), an *eiointc* node is passed into acpi_get_vec_parent() instead of the required *NUMA* node (on some chips, such as the 3C5000L, a *NUMA* node is the same as an *eiointc* node, but on others, such as the 3C5000, a *NUMA* node contains 4 *eiointc* nodes). Since the node in struct acpi_vector_group is essentially a *NUMA* node, no parent will be matched for the passed *eiointc* node. So the patch adjusts the code to use the *NUMA* node for the node parameter of acpi_set_vec_parent()/acpi_get_vec_parent().
Cc: stable@vger.kernel.org Signed-off-by: Jianmin Lv lvjianmin@loongson.cn Signed-off-by: Marc Zyngier maz@kernel.org Link: https://lore.kernel.org/r/20230407083453.6305-3-lvjianmin@loongson.cn Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/irqchip/irq-loongson-eiointc.c | 22 ++++++++++++++++------ 1 file changed, 16 insertions(+), 6 deletions(-)
--- a/drivers/irqchip/irq-loongson-eiointc.c +++ b/drivers/irqchip/irq-loongson-eiointc.c @@ -280,9 +280,6 @@ static void acpi_set_vec_parent(int node { int i;
- if (cpu_has_flatmode) - node = cpu_to_node(node * CORES_PER_EIO_NODE); - for (i = 0; i < MAX_IO_PICS; i++) { if (node == vec_group[i].node) { vec_group[i].parent = parent; @@ -349,8 +346,16 @@ static int __init pch_pic_parse_madt(uni static int __init pch_msi_parse_madt(union acpi_subtable_headers *header, const unsigned long end) { + struct irq_domain *parent; struct acpi_madt_msi_pic *pchmsi_entry = (struct acpi_madt_msi_pic *)header; - struct irq_domain *parent = acpi_get_vec_parent(eiointc_priv[nr_pics - 1]->node, msi_group); + int node; + + if (cpu_has_flatmode) + node = cpu_to_node(eiointc_priv[nr_pics - 1]->node * CORES_PER_EIO_NODE); + else + node = eiointc_priv[nr_pics - 1]->node; + + parent = acpi_get_vec_parent(node, msi_group);
if (parent) return pch_msi_acpi_init(parent, pchmsi_entry); @@ -379,6 +384,7 @@ int __init eiointc_acpi_init(struct irq_ int i, ret, parent_irq; unsigned long node_map; struct eiointc_priv *priv; + int node;
priv = kzalloc(sizeof(*priv), GFP_KERNEL); if (!priv) @@ -421,8 +427,12 @@ int __init eiointc_acpi_init(struct irq_ "irqchip/loongarch/intc:starting", eiointc_router_init, NULL);
- acpi_set_vec_parent(acpi_eiointc->node, priv->eiointc_domain, pch_group); - acpi_set_vec_parent(acpi_eiointc->node, priv->eiointc_domain, msi_group); + if (cpu_has_flatmode) + node = cpu_to_node(acpi_eiointc->node * CORES_PER_EIO_NODE); + else + node = acpi_eiointc->node; + acpi_set_vec_parent(node, priv->eiointc_domain, pch_group); + acpi_set_vec_parent(node, priv->eiointc_domain, msi_group); ret = acpi_cascade_irqdomain_init();
return ret;
From: Jianmin Lv lvjianmin@loongson.cn
commit bdd60211eebb43ba1c4c14704965f4d4b628b931 upstream.
With suspend/resume support for loongson-eiointc, the syscore_ops is registered twice on dual-bridge machines where there are two eiointc IRQ domains. Repeated registration of the same syscore_ops breaks the syscore_ops_list. Also, cpuhp_setup_state_nocalls() only needs to be called once. So the patch corrects both.
Fixes: a90335c2dfb4 ("irqchip/loongson-eiointc: Add suspend/resume support") Cc: stable@vger.kernel.org Signed-off-by: Jianmin Lv lvjianmin@loongson.cn Signed-off-by: Marc Zyngier maz@kernel.org Link: https://lore.kernel.org/r/20230407083453.6305-4-lvjianmin@loongson.cn Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/irqchip/irq-loongson-eiointc.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
--- a/drivers/irqchip/irq-loongson-eiointc.c +++ b/drivers/irqchip/irq-loongson-eiointc.c @@ -422,10 +422,12 @@ int __init eiointc_acpi_init(struct irq_ parent_irq = irq_create_mapping(parent, acpi_eiointc->cascade); irq_set_chained_handler_and_data(parent_irq, eiointc_irq_dispatch, priv);
- register_syscore_ops(&eiointc_syscore_ops); - cpuhp_setup_state_nocalls(CPUHP_AP_IRQ_LOONGARCH_STARTING, + if (nr_pics == 1) { + register_syscore_ops(&eiointc_syscore_ops); + cpuhp_setup_state_nocalls(CPUHP_AP_IRQ_LOONGARCH_STARTING, "irqchip/loongarch/intc:starting", eiointc_router_init, NULL); + }
if (cpu_has_flatmode) node = cpu_to_node(acpi_eiointc->node * CORES_PER_EIO_NODE);
From: James Cowgill james.cowgill@blaize.com
commit ab4f869fba6119997f7630d600049762a2b014fa upstream.
This is the logical place to put the backlight device, and it also fixes a kernel crash if the MIPI host is removed. Previously the backlight device would be unregistered twice when this happened - once as a child of the MIPI host through `mipi_dsi_host_unregister`, and once when the panel device is destroyed.
Fixes: 12a6cbd4f3f1 ("drm/panel: otm8009a: Use new backlight API") Signed-off-by: James Cowgill james.cowgill@blaize.com Cc: stable@vger.kernel.org Reviewed-by: Neil Armstrong neil.armstrong@linaro.org Signed-off-by: Neil Armstrong neil.armstrong@linaro.org Link: https://patchwork.freedesktop.org/patch/msgid/20230412173450.199592-1-james.... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/panel/panel-orisetech-otm8009a.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/gpu/drm/panel/panel-orisetech-otm8009a.c +++ b/drivers/gpu/drm/panel/panel-orisetech-otm8009a.c @@ -471,7 +471,7 @@ static int otm8009a_probe(struct mipi_ds DRM_MODE_CONNECTOR_DSI);
ctx->bl_dev = devm_backlight_device_register(dev, dev_name(dev), - dsi->host->dev, ctx, + dev, ctx, &otm8009a_backlight_ops, NULL); if (IS_ERR(ctx->bl_dev)) {
From: Nicholas Kazlauskas nicholas.kazlauskas@amd.com
commit d29fb7baab09b6a1dc484c9c67933253883e770a upstream.
[Why] While scanning the top_pipe connections we can run into a case where the bottom pipe is still connected to a top_pipe but with a NULL plane_state.
[How] Treat a NULL plane_state the same as the plane being invisible for pipe cursor disable logic.
Cc: stable@vger.kernel.org Cc: Mario Limonciello mario.limonciello@amd.com Reviewed-by: Charlene Liu Charlene.Liu@amd.com Acked-by: Qingqing Zhuo qingqing.zhuo@amd.com Signed-off-by: Nicholas Kazlauskas nicholas.kazlauskas@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c +++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c @@ -3383,7 +3383,9 @@ static bool dcn10_can_pipe_disable_curso for (test_pipe = pipe_ctx->top_pipe; test_pipe; test_pipe = test_pipe->top_pipe) { // Skip invisible layer and pipe-split plane on same layer - if (!test_pipe->plane_state->visible || test_pipe->plane_state->layer_index == cur_layer) + if (!test_pipe->plane_state || + !test_pipe->plane_state->visible || + test_pipe->plane_state->layer_index == cur_layer) continue;
r2 = test_pipe->plane_res.scl_data.recout;
From: Nicholas Kazlauskas nicholas.kazlauskas@amd.com
commit bf224e00a9f54e2bf14b4d720a09c3d2f4aa4aa8 upstream.
[Why] DPP Root clock optimization, when combined with 4to1 MPC combine, results in the screen turning black.
This is because the DPPCLK is stopped during the middle of an optimize_bandwidth sequence during commit_minimal_transition without going through plane power down/power up.
[How] The intent of a 0Hz DPP clock through update_clocks is to disable the DTO. This differs from the behavior of stopping the DPPCLK entirely (utilizing a 0Hz clock on some ASIC) so it's better to move this logic to reside next to plane power up/power down where we gate the HUBP/DPP DOMAIN.
The new sequence should be:
  Power down: PG enabled -> RCO on
  Power up:   RCO off -> PG disabled
Rename power_on_plane to power_on_plane_resources to reflect the actual operation that's occurring.
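As a rough, stand-alone illustration of that ordering (stubbed user-space code, not the actual DC sequence; the helper names mirror the ones touched by this patch but the bodies are print-only stand-ins):

#include <stdbool.h>
#include <stdio.h>

/* stand-in for the DCCG root clock gate (RCO) control */
static void dpp_root_clock_control(int inst, bool clock_on)
{
	printf("DPP%d root clock %s\n", inst,
	       clock_on ? "running (RCO off)" : "gated (RCO on)");
}

/* stand-in for HUBP/DPP DOMAIN power gating control */
static void plane_pg_control(int inst, bool power_on)
{
	printf("DPP%d power gating %s\n", inst,
	       power_on ? "disabled (powered up)" : "enabled (powered down)");
}

/* power up: ungate the root clock first, then release power gating */
static void power_on_plane_resources(int inst)
{
	dpp_root_clock_control(inst, true);
	plane_pg_control(inst, true);
}

/* power down: assert power gating first, then gate the root clock */
static void plane_atomic_power_down(int inst)
{
	plane_pg_control(inst, false);
	dpp_root_clock_control(inst, false);
}

int main(void)
{
	power_on_plane_resources(0);
	plane_atomic_power_down(0);
	return 0;
}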
Cc: stable@vger.kernel.org Cc: Mario Limonciello mario.limonciello@amd.com Reviewed-by: Jun Lei Jun.Lei@amd.com Acked-by: Qingqing Zhuo qingqing.zhuo@amd.com Signed-off-by: Nicholas Kazlauskas nicholas.kazlauskas@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c | 12 ++++++- drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c | 8 +++- drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.c | 13 +------ drivers/gpu/drm/amd/display/dc/dcn314/dcn314_dccg.c | 23 ++++++++++++++ drivers/gpu/drm/amd/display/dc/dcn314/dcn314_hwseq.c | 10 ++++++ drivers/gpu/drm/amd/display/dc/dcn314/dcn314_hwseq.h | 2 + drivers/gpu/drm/amd/display/dc/dcn314/dcn314_init.c | 1 drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h | 23 +++++++------- drivers/gpu/drm/amd/display/dc/inc/hw_sequencer_private.h | 4 ++ 9 files changed, 71 insertions(+), 25 deletions(-)
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c +++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c @@ -728,11 +728,15 @@ void dcn10_hubp_pg_control( } }
-static void power_on_plane( +static void power_on_plane_resources( struct dce_hwseq *hws, int plane_id) { DC_LOGGER_INIT(hws->ctx->logger); + + if (hws->funcs.dpp_root_clock_control) + hws->funcs.dpp_root_clock_control(hws, plane_id, true); + if (REG(DC_IP_REQUEST_CNTL)) { REG_SET(DC_IP_REQUEST_CNTL, 0, IP_REQUEST_EN, 1); @@ -1239,11 +1243,15 @@ void dcn10_plane_atomic_power_down(struc hws->funcs.hubp_pg_control(hws, hubp->inst, false);
dpp->funcs->dpp_reset(dpp); + REG_SET(DC_IP_REQUEST_CNTL, 0, IP_REQUEST_EN, 0); DC_LOG_DEBUG( "Power gated front end %d\n", hubp->inst); } + + if (hws->funcs.dpp_root_clock_control) + hws->funcs.dpp_root_clock_control(hws, dpp->inst, false); }
/* disable HW used by plane. @@ -2464,7 +2472,7 @@ static void dcn10_enable_plane(
undo_DEGVIDCN10_253_wa(dc);
- power_on_plane(dc->hwseq, + power_on_plane_resources(dc->hwseq, pipe_ctx->plane_res.hubp->inst);
/* enable DCFCLK current DCHUB */ --- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c +++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c @@ -1110,11 +1110,15 @@ void dcn20_blank_pixel_data( }
-static void dcn20_power_on_plane( +static void dcn20_power_on_plane_resources( struct dce_hwseq *hws, struct pipe_ctx *pipe_ctx) { DC_LOGGER_INIT(hws->ctx->logger); + + if (hws->funcs.dpp_root_clock_control) + hws->funcs.dpp_root_clock_control(hws, pipe_ctx->plane_res.dpp->inst, true); + if (REG(DC_IP_REQUEST_CNTL)) { REG_SET(DC_IP_REQUEST_CNTL, 0, IP_REQUEST_EN, 1); @@ -1138,7 +1142,7 @@ static void dcn20_enable_plane(struct dc //if (dc->debug.sanity_checks) { // dcn10_verify_allow_pstate_change_high(dc); //} - dcn20_power_on_plane(dc->hwseq, pipe_ctx); + dcn20_power_on_plane_resources(dc->hwseq, pipe_ctx);
/* enable DCFCLK current DCHUB */ pipe_ctx->plane_res.hubp->funcs->hubp_clk_cntl(pipe_ctx->plane_res.hubp, true); --- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.c +++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.c @@ -66,17 +66,8 @@ void dccg31_update_dpp_dto(struct dccg * REG_UPDATE(DPPCLK_DTO_CTRL, DPPCLK_DTO_ENABLE[dpp_inst], 1); } else { - //DTO must be enabled to generate a 0Hz clock output - if (dccg->ctx->dc->debug.root_clock_optimization.bits.dpp) { - REG_UPDATE(DPPCLK_DTO_CTRL, - DPPCLK_DTO_ENABLE[dpp_inst], 1); - REG_SET_2(DPPCLK_DTO_PARAM[dpp_inst], 0, - DPPCLK0_DTO_PHASE, 0, - DPPCLK0_DTO_MODULO, 1); - } else { - REG_UPDATE(DPPCLK_DTO_CTRL, - DPPCLK_DTO_ENABLE[dpp_inst], 0); - } + REG_UPDATE(DPPCLK_DTO_CTRL, + DPPCLK_DTO_ENABLE[dpp_inst], 0); } dccg->pipe_dppclk_khz[dpp_inst] = req_dppclk; } --- a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_dccg.c +++ b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_dccg.c @@ -289,8 +289,31 @@ static void dccg314_set_valid_pixel_rate dccg314_set_dtbclk_dto(dccg, &dto_params); }
+static void dccg314_dpp_root_clock_control( + struct dccg *dccg, + unsigned int dpp_inst, + bool clock_on) +{ + struct dcn_dccg *dccg_dcn = TO_DCN_DCCG(dccg); + + if (clock_on) { + /* turn off the DTO and leave phase/modulo at max */ + REG_UPDATE(DPPCLK_DTO_CTRL, DPPCLK_DTO_ENABLE[dpp_inst], 0); + REG_SET_2(DPPCLK_DTO_PARAM[dpp_inst], 0, + DPPCLK0_DTO_PHASE, 0xFF, + DPPCLK0_DTO_MODULO, 0xFF); + } else { + /* turn on the DTO to generate a 0hz clock */ + REG_UPDATE(DPPCLK_DTO_CTRL, DPPCLK_DTO_ENABLE[dpp_inst], 1); + REG_SET_2(DPPCLK_DTO_PARAM[dpp_inst], 0, + DPPCLK0_DTO_PHASE, 0, + DPPCLK0_DTO_MODULO, 1); + } +} + static const struct dccg_funcs dccg314_funcs = { .update_dpp_dto = dccg31_update_dpp_dto, + .dpp_root_clock_control = dccg314_dpp_root_clock_control, .get_dccg_ref_freq = dccg31_get_dccg_ref_freq, .dccg_init = dccg31_init, .set_dpstreamclk = dccg314_set_dpstreamclk, --- a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_hwseq.c +++ b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_hwseq.c @@ -392,6 +392,16 @@ void dcn314_set_pixels_per_cycle(struct pix_per_cycle); }
+void dcn314_dpp_root_clock_control(struct dce_hwseq *hws, unsigned int dpp_inst, bool clock_on) +{ + if (!hws->ctx->dc->debug.root_clock_optimization.bits.dpp) + return; + + if (hws->ctx->dc->res_pool->dccg->funcs->dpp_root_clock_control) + hws->ctx->dc->res_pool->dccg->funcs->dpp_root_clock_control( + hws->ctx->dc->res_pool->dccg, dpp_inst, clock_on); +} + void dcn314_hubp_pg_control(struct dce_hwseq *hws, unsigned int hubp_inst, bool power_on) { struct dc_context *ctx = hws->ctx; --- a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_hwseq.h +++ b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_hwseq.h @@ -43,4 +43,6 @@ void dcn314_set_pixels_per_cycle(struct
void dcn314_hubp_pg_control(struct dce_hwseq *hws, unsigned int hubp_inst, bool power_on);
+void dcn314_dpp_root_clock_control(struct dce_hwseq *hws, unsigned int dpp_inst, bool clock_on); + #endif /* __DC_HWSS_DCN314_H__ */ --- a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_init.c +++ b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_init.c @@ -137,6 +137,7 @@ static const struct hwseq_private_funcs .plane_atomic_disable = dcn20_plane_atomic_disable, .plane_atomic_power_down = dcn10_plane_atomic_power_down, .enable_power_gating_plane = dcn314_enable_power_gating_plane, + .dpp_root_clock_control = dcn314_dpp_root_clock_control, .hubp_pg_control = dcn314_hubp_pg_control, .program_all_writeback_pipes_in_tree = dcn30_program_all_writeback_pipes_in_tree, .update_odm = dcn314_update_odm, --- a/drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h +++ b/drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h @@ -148,18 +148,21 @@ struct dccg_funcs { struct dccg *dccg, int inst);
-void (*set_pixel_rate_div)( - struct dccg *dccg, - uint32_t otg_inst, - enum pixel_rate_div k1, - enum pixel_rate_div k2); + void (*set_pixel_rate_div)(struct dccg *dccg, + uint32_t otg_inst, + enum pixel_rate_div k1, + enum pixel_rate_div k2);
-void (*set_valid_pixel_rate)( - struct dccg *dccg, - int ref_dtbclk_khz, - int otg_inst, - int pixclk_khz); + void (*set_valid_pixel_rate)( + struct dccg *dccg, + int ref_dtbclk_khz, + int otg_inst, + int pixclk_khz);
+ void (*dpp_root_clock_control)( + struct dccg *dccg, + unsigned int dpp_inst, + bool clock_on); };
#endif //__DAL_DCCG_H__ --- a/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer_private.h +++ b/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer_private.h @@ -115,6 +115,10 @@ struct hwseq_private_funcs { void (*plane_atomic_disable)(struct dc *dc, struct pipe_ctx *pipe_ctx); void (*enable_power_gating_plane)(struct dce_hwseq *hws, bool enable); + void (*dpp_root_clock_control)( + struct dce_hwseq *hws, + unsigned int dpp_inst, + bool clock_on); void (*dpp_pg_control)(struct dce_hwseq *hws, unsigned int dpp_inst, bool power_on);
From: Samson Tam Samson.Tam@amd.com
commit 682439fffad9fa9a38d37dd1b1318e9374232213 upstream.
[Why] The pipe_fuses value read from the register may have invalid bits set, which may affect num_pipes erroneously.
[How] Add a read_pipe_fuses() call and filter bits based on the expected number of pipes.
Reviewed-by: Alvin Lee Alvin.Lee2@amd.com Acked-by: Alan Liu HaoPing.Liu@amd.com Signed-off-by: Samson Tam Samson.Tam@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org # 6.1.x Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c | 10 +++++++++- drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c | 10 +++++++++- 2 files changed, 18 insertions(+), 2 deletions(-)
--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c +++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c @@ -2077,6 +2077,14 @@ static struct resource_funcs dcn32_res_p .restore_mall_state = dcn32_restore_mall_state, };
+static uint32_t read_pipe_fuses(struct dc_context *ctx) +{ + uint32_t value = REG_READ(CC_DC_PIPE_DIS); + /* DCN32 support max 4 pipes */ + value = value & 0xf; + return value; +} +
static bool dcn32_resource_construct( uint8_t num_virtual_links, @@ -2119,7 +2127,7 @@ static bool dcn32_resource_construct( pool->base.res_cap = &res_cap_dcn32; /* max number of pipes for ASIC before checking for pipe fuses */ num_pipes = pool->base.res_cap->num_timing_generator; - pipe_fuses = REG_READ(CC_DC_PIPE_DIS); + pipe_fuses = read_pipe_fuses(ctx);
for (i = 0; i < pool->base.res_cap->num_timing_generator; i++) if (pipe_fuses & 1 << i) --- a/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c +++ b/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c @@ -1626,6 +1626,14 @@ static struct resource_funcs dcn321_res_ .restore_mall_state = dcn32_restore_mall_state, };
+static uint32_t read_pipe_fuses(struct dc_context *ctx) +{ + uint32_t value = REG_READ(CC_DC_PIPE_DIS); + /* DCN321 support max 4 pipes */ + value = value & 0xf; + return value; +} +
static bool dcn321_resource_construct( uint8_t num_virtual_links, @@ -1668,7 +1676,7 @@ static bool dcn321_resource_construct( pool->base.res_cap = &res_cap_dcn321; /* max number of pipes for ASIC before checking for pipe fuses */ num_pipes = pool->base.res_cap->num_timing_generator; - pipe_fuses = REG_READ(CC_DC_PIPE_DIS); + pipe_fuses = read_pipe_fuses(ctx);
for (i = 0; i < pool->base.res_cap->num_timing_generator; i++) if (pipe_fuses & 1 << i)
From: Hamza Mahfooz hamza.mahfooz@amd.com
commit 08da182175db4c7f80850354849d95f2670e8cd9 upstream.
Currently, on a handful of ASICs, we allow the framebuffer for a given plane to exist in either VRAM or GTT. However, if the plane's new framebuffer is in a different memory domain than its previous framebuffer, flipping between them can cause the screen to flicker. So, to fix this, don't perform an immediate flip in the aforementioned case.
Cc: stable@vger.kernel.org Link: https://gitlab.freedesktop.org/drm/amd/-/issues/2354 Reviewed-by: Roman Li Roman.Li@amd.com Fixes: 81d0bcf99009 ("drm/amdgpu: make display pinning more flexible (v2)") Signed-off-by: Hamza Mahfooz hamza.mahfooz@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 13 +++++++++++-- 1 file changed, 11 insertions(+), 2 deletions(-)
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c @@ -7703,6 +7703,13 @@ static void amdgpu_dm_commit_cursors(str handle_cursor_update(plane, old_plane_state); }
+static inline uint32_t get_mem_type(struct drm_framebuffer *fb) +{ + struct amdgpu_bo *abo = gem_to_amdgpu_bo(fb->obj[0]); + + return abo->tbo.resource ? abo->tbo.resource->mem_type : 0; +} + static void amdgpu_dm_commit_planes(struct drm_atomic_state *state, struct dc_state *dc_state, struct drm_device *dev, @@ -7820,11 +7827,13 @@ static void amdgpu_dm_commit_planes(stru
/* * Only allow immediate flips for fast updates that don't - * change FB pitch, DCC state, rotation or mirroing. + * change memory domain, FB pitch, DCC state, rotation or + * mirroring. */ bundle->flip_addrs[planes_count].flip_immediate = crtc->state->async_flip && - acrtc_state->update_type == UPDATE_TYPE_FAST; + acrtc_state->update_type == UPDATE_TYPE_FAST && + get_mem_type(old_plane_state->fb) == get_mem_type(fb);
timestamp_ns = ktime_get_ns(); bundle->flip_addrs[planes_count].flip_timestamp_in_us = div_u64(timestamp_ns, 1000);
From: Guchun Chen guchun.chen@amd.com
commit 1253685f0d3eb3eab0bfc4bf15ab341a5f3da0c8 upstream.
Once command submission fails due to userptr invalidation in amdgpu_cs_submit, the legacy code will perform cleanup of the scheduler job. However, this is not needed at all, as a former commit has integrated the job cleanup into amdgpu_job_free. Otherwise, because of the double free, a NULL pointer dereference will occur in such a scenario.
Bug: https://gitlab.freedesktop.org/drm/amd/-/issues/2457 Fixes: f7d66fb2ea43 ("drm/amdgpu: cleanup scheduler job initialization v2") Signed-off-by: Guchun Chen guchun.chen@amd.com Reviewed-by: Christian König christian.koenig@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 13 +++---------- 1 file changed, 3 insertions(+), 10 deletions(-)
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c @@ -1274,7 +1274,7 @@ static int amdgpu_cs_submit(struct amdgp r = drm_sched_job_add_dependency(&leader->base, fence); if (r) { dma_fence_put(fence); - goto error_cleanup; + return r; } }
@@ -1301,7 +1301,8 @@ static int amdgpu_cs_submit(struct amdgp } if (r) { r = -EAGAIN; - goto error_unlock; + mutex_unlock(&p->adev->notifier_lock); + return r; }
p->fence = dma_fence_get(&leader->base.s_fence->finished); @@ -1348,14 +1349,6 @@ static int amdgpu_cs_submit(struct amdgp mutex_unlock(&p->adev->notifier_lock); mutex_unlock(&p->bo_list->bo_list_mutex); return 0; - -error_unlock: - mutex_unlock(&p->adev->notifier_lock); - -error_cleanup: - for (i = 0; i < p->gang_size; ++i) - drm_sched_job_cleanup(&p->jobs[i]->base); - return r; }
/* Cleanup the parser structure */
From: Horatio Zhang Hongkun.Zhang@amd.com
commit 08c677cb0b436a96a836792bb35a8ec5de4999c2 upstream.
The gmc.ecc_irq is enabled by firmware per IFWI setting, and the host driver is not privileged to enable/disable the interrupt. So, it is meaningless to use the amdgpu_irq_put function in gmc_v10_0_hw_fini, which also leads to the call trace.
[ 82.340264] Call Trace: [ 82.340265] <TASK> [ 82.340269] gmc_v10_0_hw_fini+0x83/0xa0 [amdgpu] [ 82.340447] gmc_v10_0_suspend+0xe/0x20 [amdgpu] [ 82.340623] amdgpu_device_ip_suspend_phase2+0x127/0x1c0 [amdgpu] [ 82.340789] amdgpu_device_ip_suspend+0x3d/0x80 [amdgpu] [ 82.340955] amdgpu_device_pre_asic_reset+0xdd/0x2b0 [amdgpu] [ 82.341122] amdgpu_device_gpu_recover.cold+0x4dd/0xbb2 [amdgpu] [ 82.341359] amdgpu_debugfs_reset_work+0x4c/0x70 [amdgpu] [ 82.341529] process_one_work+0x21d/0x3f0 [ 82.341535] worker_thread+0x1fa/0x3c0 [ 82.341538] ? process_one_work+0x3f0/0x3f0 [ 82.341540] kthread+0xff/0x130 [ 82.341544] ? kthread_complete_and_exit+0x20/0x20 [ 82.341547] ret_from_fork+0x22/0x30
Signed-off-by: Horatio Zhang Hongkun.Zhang@amd.com Reviewed-by: Hawking Zhang Hawking.Zhang@amd.com Reviewed-by: Guchun Chen guchun.chen@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Link: https://gitlab.freedesktop.org/drm/amd/-/issues/2522 Fixes: c8b5a95b5709 ("drm/amdgpu: Fix desktop freezed after gpu-reset") Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c | 1 - 1 file changed, 1 deletion(-)
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c @@ -1145,7 +1145,6 @@ static int gmc_v10_0_hw_fini(void *handl return 0; }
- amdgpu_irq_put(adev, &adev->gmc.ecc_irq, 0); amdgpu_irq_put(adev, &adev->gmc.vm_fault, 0);
return 0;
From: Hamza Mahfooz hamza.mahfooz@amd.com
commit 922a76ba31adf84e72bc947267385be420c689ee upstream.
As mentioned in commit 08c677cb0b43 ("drm/amdgpu: fix amdgpu_irq_put call trace in gmc_v10_0_hw_fini") and commit 13af556104fa ("drm/amdgpu: fix amdgpu_irq_put call trace in gmc_v11_0_hw_fini"), it is meaningless to call amdgpu_irq_put() for gmc.ecc_irq. So, remove it from gmc_v9_0_hw_fini().
Link: https://gitlab.freedesktop.org/drm/amd/-/issues/2522 Fixes: 3029c855d79f ("drm/amdgpu: Fix desktop freezed after gpu-reset") Reviewed-by: Mario Limonciello mario.limonciello@amd.com Signed-off-by: Hamza Mahfooz hamza.mahfooz@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 1 - 1 file changed, 1 deletion(-)
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c @@ -1963,7 +1963,6 @@ static int gmc_v9_0_hw_fini(void *handle if (adev->mmhub.funcs->update_power_gating) adev->mmhub.funcs->update_power_gating(adev, false);
- amdgpu_irq_put(adev, &adev->gmc.ecc_irq, 0); amdgpu_irq_put(adev, &adev->gmc.vm_fault, 0);
return 0;
From: Horatio Zhang Hongkun.Zhang@amd.com
commit 13af556104fa93b1945c70bbf8a0a62cd2c92879 upstream.
The gmc.ecc_irq is enabled by firmware per IFWI setting, and the host driver is not privileged to enable/disable the interrupt. So, it is meaningless to use the amdgpu_irq_put function in gmc_v11_0_hw_fini, which also leads to the call trace.
[ 102.980303] Call Trace: [ 102.980303] <TASK> [ 102.980304] gmc_v11_0_hw_fini+0x54/0x90 [amdgpu] [ 102.980357] gmc_v11_0_suspend+0xe/0x20 [amdgpu] [ 102.980409] amdgpu_device_ip_suspend_phase2+0x240/0x460 [amdgpu] [ 102.980459] amdgpu_device_ip_suspend+0x3d/0x80 [amdgpu] [ 102.980520] amdgpu_device_pre_asic_reset+0xd9/0x490 [amdgpu] [ 102.980573] amdgpu_device_gpu_recover.cold+0x548/0xce6 [amdgpu] [ 102.980687] amdgpu_debugfs_reset_work+0x4c/0x70 [amdgpu] [ 102.980740] process_one_work+0x21f/0x3f0 [ 102.980741] worker_thread+0x200/0x3e0 [ 102.980742] ? process_one_work+0x3f0/0x3f0 [ 102.980743] kthread+0xfd/0x130 [ 102.980743] ? kthread_complete_and_exit+0x20/0x20 [ 102.980744] ret_from_fork+0x22/0x30
Signed-off-by: Horatio Zhang Hongkun.Zhang@amd.com Reviewed-by: Hawking Zhang Hawking.Zhang@amd.com Reviewed-by: Guchun Chen guchun.chen@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Link: https://gitlab.freedesktop.org/drm/amd/-/issues/2522 Fixes: c8b5a95b5709 ("drm/amdgpu: Fix desktop freezed after gpu-reset") Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c | 1 - 1 file changed, 1 deletion(-)
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c @@ -941,7 +941,6 @@ static int gmc_v11_0_hw_fini(void *handl return 0; }
- amdgpu_irq_put(adev, &adev->gmc.ecc_irq, 0); amdgpu_irq_put(adev, &adev->gmc.vm_fault, 0); gmc_v11_0_gart_disable(adev);
From: Guchun Chen guchun.chen@amd.com
commit 4a76680311330aefe5074bed8f06afa354b85c48 upstream.
gfx9 cp_ecc_error_irq is only enabled when legacy gfx ras is asserted. So in gfx_v9_0_hw_fini, interrupt disablement for cp_ecc_error_irq should only be executed under that condition; otherwise, an amdgpu_irq_put call trace will occur.
[ 7283.170322] RIP: 0010:amdgpu_irq_put+0x45/0x70 [amdgpu] [ 7283.170964] RSP: 0018:ffff9a5fc3967d00 EFLAGS: 00010246 [ 7283.170967] RAX: ffff98d88afd3040 RBX: ffff98d89da20000 RCX: 0000000000000000 [ 7283.170969] RDX: 0000000000000000 RSI: ffff98d89da2bef8 RDI: ffff98d89da20000 [ 7283.170971] RBP: ffff98d89da20000 R08: ffff98d89da2ca18 R09: 0000000000000006 [ 7283.170973] R10: ffffd5764243c008 R11: 0000000000000000 R12: 0000000000001050 [ 7283.170975] R13: ffff98d89da38978 R14: ffffffff999ae15a R15: ffff98d880130105 [ 7283.170978] FS: 0000000000000000(0000) GS:ffff98d996f00000(0000) knlGS:0000000000000000 [ 7283.170981] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 7283.170983] CR2: 00000000f7a9d178 CR3: 00000001c42ea000 CR4: 00000000003506e0 [ 7283.170986] Call Trace: [ 7283.170988] <TASK> [ 7283.170989] gfx_v9_0_hw_fini+0x1c/0x6d0 [amdgpu] [ 7283.171655] amdgpu_device_ip_suspend_phase2+0x101/0x1a0 [amdgpu] [ 7283.172245] amdgpu_device_suspend+0x103/0x180 [amdgpu] [ 7283.172823] amdgpu_pmops_freeze+0x21/0x60 [amdgpu] [ 7283.173412] pci_pm_freeze+0x54/0xc0 [ 7283.173419] ? __pfx_pci_pm_freeze+0x10/0x10 [ 7283.173425] dpm_run_callback+0x98/0x200 [ 7283.173430] __device_suspend+0x164/0x5f0
v2: drop gfx11 as it's fixed with a different solution, by retiring the cp_ecc_irq funcs (Hawking)
Link: https://gitlab.freedesktop.org/drm/amd/-/issues/2522 Signed-off-by: Guchun Chen guchun.chen@amd.com Reviewed-by: Tao Zhou tao.zhou1@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c @@ -3845,7 +3845,8 @@ static int gfx_v9_0_hw_fini(void *handle { struct amdgpu_device *adev = (struct amdgpu_device *)handle;
- amdgpu_irq_put(adev, &adev->gfx.cp_ecc_error_irq, 0); + if (amdgpu_ras_is_supported(adev, AMDGPU_RAS_BLOCK__GFX)) + amdgpu_irq_put(adev, &adev->gfx.cp_ecc_error_irq, 0); amdgpu_irq_put(adev, &adev->gfx.priv_reg_irq, 0); amdgpu_irq_put(adev, &adev->gfx.priv_inst_irq, 0);
From: Saleemkhan Jamadar saleemkhan.jamadar@amd.com
commit 5b94db73e45e2e6c2840f39c022fd71dfa47fc58 upstream.
Register CC_UVD_HARVESTING is obsolete for JPEG 3.1.2
Signed-off-by: Saleemkhan Jamadar saleemkhan.jamadar@amd.com Reviewed-by: Veerabadhran Gopalakrishnan Veerabadhran.Gopalakrishnan@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org # 6.1.x Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c | 1 + 1 file changed, 1 insertion(+)
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c +++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c @@ -54,6 +54,7 @@ static int jpeg_v3_0_early_init(void *ha
switch (adev->ip_versions[UVD_HWIP][0]) { case IP_VERSION(3, 1, 1): + case IP_VERSION(3, 1, 2): break; default: harvest = RREG32_SOC15(JPEG, 0, mmCC_UVD_HARVESTING);
From: Yifan Zhang yifan1.zhang@amd.com
commit 996e93a3fe74dcf9d467ae3020aea42cc3ff65e3 upstream.
gfx 11.0.4 range starts from 0x80.
Fixes: 311d52367d0a ("drm/amdgpu: add soc21 common ip block support for GC 11.0.4") Cc: stable@vger.kernel.org Signed-off-by: Yifan Zhang yifan1.zhang@amd.com Reported-by: Yogesh Mohan Marimuthu Yogesh.Mohanmarimuthu@amd.com Acked-by: Alex Deucher alexander.deucher@amd.com Reviewed-by: Tim Huang Tim.Huang@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/amdgpu/soc21.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/gpu/drm/amd/amdgpu/soc21.c +++ b/drivers/gpu/drm/amd/amdgpu/soc21.c @@ -777,7 +777,7 @@ static int soc21_common_early_init(void AMD_PG_SUPPORT_VCN_DPG | AMD_PG_SUPPORT_GFX_PG | AMD_PG_SUPPORT_JPEG; - adev->external_rev_id = adev->rev_id + 0x1; + adev->external_rev_id = adev->rev_id + 0x80; break;
default:
From: Lin.Cao lincao12@amd.com
commit 6c032c37ac3ef3b7df30937c785ecc4da428edc0 upstream.
v1: vmbo->shadow is used to back up the vram bo when vram is lost, so we should set shadow to vmbo->shadow in order to recover vmbo->bo. v2: Change if (vmbo->shadow) shadow = vmbo->shadow; to if (!vmbo->shadow) continue;
Fixes: e18aaea733da ("drm/amdgpu: move shadow_list to amdgpu_bo_vm") Reviewed-by: Christian König christian.koenig@amd.com Signed-off-by: Lin.Cao lincao12@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c @@ -4503,7 +4503,11 @@ static int amdgpu_device_recover_vram(st dev_info(adev->dev, "recover vram bo from shadow start\n"); mutex_lock(&adev->shadow_list_lock); list_for_each_entry(vmbo, &adev->shadow_list, shadow_list) { - shadow = &vmbo->bo; + /* If vm is compute context or adev is APU, shadow will be NULL */ + if (!vmbo->shadow) + continue; + shadow = vmbo->shadow; + /* No need to recover an evicted BO */ if (shadow->tbo.resource->mem_type != TTM_PL_TT || shadow->tbo.resource->start == AMDGPU_BO_INVALID_OFFSET ||
From: Alvin Lee Alvin.Lee2@amd.com
commit b504f99ccaa64da364443431e388ecf30b604e38 upstream.
[Description] - Due to bandwidth / arbitration issues at 200 MHz DCFCLK, we want to enforce a minimum of 60us of prefetch to avoid intermittent underflow issues - Since 60us prefetch is already enforced for UCLK DPM0, and many DCFCLKs > 200 MHz are mapped to UCLK DPM1, in theory there should not be any UCLK DPM regressions from enforcing greater prefetch
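As a quick reading aid for the DML diff below, here is a minimal sketch of the condition that now gates the minimum prefetch (an illustrative helper only, not part of the patch; the actual change open-codes this test in display_mode_vba_32.c using MEM_STROBE_FREQ_MHZ and the new MIN_DCFCLK_FREQ_MHZ define):

/*
 * Sketch: when this returns true, the 60us minimum prefetch
 * (mode_lib->vba.ip.min_prefetch_in_strobe_us) is applied.
 */
static bool enforce_min_prefetch(double dram_speed_mhz, double dcfclk_mhz)
{
	return dram_speed_mhz <= 1600.0 ||	/* MEM_STROBE_FREQ_MHZ */
	       dcfclk_mhz <= 200.0;		/* MIN_DCFCLK_FREQ_MHZ */
}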
Reviewed-by: Nevenko Stupar Nevenko.Stupar@amd.com Reviewed-by: Jun Lei Jun.Lei@amd.com Cc: Mario Limonciello mario.limonciello@amd.com Cc: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Acked-by: Alex Hung alex.hung@amd.com Signed-off-by: Alvin Lee Alvin.Lee2@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.c | 5 +++-- drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.h | 1 + 2 files changed, 4 insertions(+), 2 deletions(-)
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.c +++ b/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.c @@ -807,7 +807,8 @@ static void DISPCLKDPPCLKDCFCLKDeepSleep v->SwathHeightY[k], v->SwathHeightC[k], TWait, - v->DRAMSpeedPerState[mode_lib->vba.VoltageLevel] <= MEM_STROBE_FREQ_MHZ ? + (v->DRAMSpeedPerState[mode_lib->vba.VoltageLevel] <= MEM_STROBE_FREQ_MHZ || + v->DCFCLKPerState[mode_lib->vba.VoltageLevel] <= MIN_DCFCLK_FREQ_MHZ) ? mode_lib->vba.ip.min_prefetch_in_strobe_us : 0, /* Output */ &v->DSTXAfterScaler[k], @@ -3289,7 +3290,7 @@ void dml32_ModeSupportAndSystemConfigura v->swath_width_chroma_ub_this_state[k], v->SwathHeightYThisState[k], v->SwathHeightCThisState[k], v->TWait, - v->DRAMSpeedPerState[i] <= MEM_STROBE_FREQ_MHZ ? + (v->DRAMSpeedPerState[i] <= MEM_STROBE_FREQ_MHZ || v->DCFCLKState[i][j] <= MIN_DCFCLK_FREQ_MHZ) ? mode_lib->vba.ip.min_prefetch_in_strobe_us : 0,
/* Output */ --- a/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.h +++ b/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.h @@ -52,6 +52,7 @@ #define BPP_BLENDED_PIPE 0xffffffff
#define MEM_STROBE_FREQ_MHZ 1600 +#define MIN_DCFCLK_FREQ_MHZ 200 #define MEM_STROBE_MAX_DELIVERY_TIME_US 60.0
struct display_mode_lib;
From: Guchun Chen guchun.chen@amd.com
commit 58d9b9a14b47c2a3da6effcbb01607ad7edc0275 upstream.
amdgpu_dpm_is_overdrive_supported is a common API across all asics, so we should cast pp_handle into the correct structure depending on which power framework is in use.
v2: use return directly to simplify the code v3: SI asics do not carry the od_enabled member in pp_handle; also update the Fixes tag
Link: https://gitlab.freedesktop.org/drm/amd/-/issues/2541 Fixes: eb4900aa4c49 ("drm/amdgpu: Fix kernel NULL pointer dereference in dpm functions") Suggested-by: Mario Limonciello mario.limonciello@amd.com Signed-off-by: Guchun Chen guchun.chen@amd.com Reviewed-by: Mario Limonciello mario.limonciello@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/pm/amdgpu_dpm.c | 20 +++++++++++++------- 1 file changed, 13 insertions(+), 7 deletions(-)
--- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c +++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c @@ -1414,15 +1414,21 @@ int amdgpu_dpm_get_smu_prv_buf_details(s
int amdgpu_dpm_is_overdrive_supported(struct amdgpu_device *adev) { - struct pp_hwmgr *hwmgr = adev->powerplay.pp_handle; - struct smu_context *smu = adev->powerplay.pp_handle; + if (is_support_sw_smu(adev)) { + struct smu_context *smu = adev->powerplay.pp_handle;
- if ((is_support_sw_smu(adev) && smu->od_enabled) || - (is_support_sw_smu(adev) && smu->is_apu) || - (!is_support_sw_smu(adev) && hwmgr->od_enabled)) - return true; + return (smu->od_enabled || smu->is_apu); + } else { + struct pp_hwmgr *hwmgr;
- return false; + /* SI asic does not carry od_enabled */ + if (adev->family == AMDGPU_FAMILY_SI) + return false; + + hwmgr = (struct pp_hwmgr *)adev->powerplay.pp_handle; + + return hwmgr->od_enabled; + } }
int amdgpu_dpm_set_pp_table(struct amdgpu_device *adev,
From: Guchun Chen guchun.chen@amd.com
commit 8b229ada2669b74fdae06c83fbfda5a5a99fc253 upstream.
sdma_v4_0_ip is shared by a few asics, but in sdma_v4_0_hw_fini the driver unconditionally disables ecc_irq, which is only enabled on the asics that enable sdma ecc. This introduces a warning in the suspend cycle on chips that have sdma ip v4.0 but no sdma ecc. So this patch corrects that.
[ 7283.166354] RIP: 0010:amdgpu_irq_put+0x45/0x70 [amdgpu] [ 7283.167001] RSP: 0018:ffff9a5fc3967d08 EFLAGS: 00010246 [ 7283.167019] RAX: ffff98d88afd3770 RBX: 0000000000000001 RCX: 0000000000000000 [ 7283.167023] RDX: 0000000000000000 RSI: ffff98d89da30390 RDI: ffff98d89da20000 [ 7283.167025] RBP: ffff98d89da20000 R08: 0000000000036838 R09: 0000000000000006 [ 7283.167028] R10: ffffd5764243c008 R11: 0000000000000000 R12: ffff98d89da30390 [ 7283.167030] R13: ffff98d89da38978 R14: ffffffff999ae15a R15: ffff98d880130105 [ 7283.167032] FS: 0000000000000000(0000) GS:ffff98d996f00000(0000) knlGS:0000000000000000 [ 7283.167036] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 7283.167039] CR2: 00000000f7a9d178 CR3: 00000001c42ea000 CR4: 00000000003506e0 [ 7283.167041] Call Trace: [ 7283.167046] <TASK> [ 7283.167048] sdma_v4_0_hw_fini+0x38/0xa0 [amdgpu] [ 7283.167704] amdgpu_device_ip_suspend_phase2+0x101/0x1a0 [amdgpu] [ 7283.168296] amdgpu_device_suspend+0x103/0x180 [amdgpu] [ 7283.168875] amdgpu_pmops_freeze+0x21/0x60 [amdgpu] [ 7283.169464] pci_pm_freeze+0x54/0xc0
Link: https://gitlab.freedesktop.org/drm/amd/-/issues/2522 Signed-off-by: Guchun Chen guchun.chen@amd.com Reviewed-by: Tao Zhou tao.zhou1@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-)
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c @@ -1941,9 +1941,11 @@ static int sdma_v4_0_hw_fini(void *handl return 0; }
- for (i = 0; i < adev->sdma.num_instances; i++) { - amdgpu_irq_put(adev, &adev->sdma.ecc_irq, - AMDGPU_SDMA_IRQ_INSTANCE0 + i); + if (amdgpu_ras_is_supported(adev, AMDGPU_RAS_BLOCK__SDMA)) { + for (i = 0; i < adev->sdma.num_instances; i++) { + amdgpu_irq_put(adev, &adev->sdma.ecc_irq, + AMDGPU_SDMA_IRQ_INSTANCE0 + i); + } }
sdma_v4_0_ctx_switch_enable(adev, false);
From: Guchun Chen guchun.chen@amd.com
commit 5247f05eadf1081a74b2233f291cee2efed25e3a upstream.
Prevent further dpm casting on legacy asics without od_enabled in amdgpu_dpm_is_overdrive_supported. This avoids a UBSAN complaint in the init sequence.
v2: add a macro to check legacy dpm instead of checking asic family/type v3: refine macro name for naming consistency
Suggested-by: Evan Quan evan.quan@amd.com Signed-off-by: Guchun Chen guchun.chen@amd.com Reviewed-by: Lijo Lazar lijo.lazar@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/pm/amdgpu_dpm.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-)
--- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c +++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c @@ -36,6 +36,8 @@ #define amdgpu_dpm_enable_bapm(adev, e) \ ((adev)->powerplay.pp_funcs->enable_bapm((adev)->powerplay.pp_handle, (e)))
+#define amdgpu_dpm_is_legacy_dpm(adev) ((adev)->powerplay.pp_handle == (adev)) + int amdgpu_dpm_get_sclk(struct amdgpu_device *adev, bool low) { const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs; @@ -1421,8 +1423,11 @@ int amdgpu_dpm_is_overdrive_supported(st } else { struct pp_hwmgr *hwmgr;
- /* SI asic does not carry od_enabled */ - if (adev->family == AMDGPU_FAMILY_SI) + /* + * dpm on some legacy asics don't carry od_enabled member + * as its pp_handle is casted directly from adev. + */ + if (amdgpu_dpm_is_legacy_dpm(adev)) return false;
hwmgr = (struct pp_hwmgr *)adev->powerplay.pp_handle;
From: Mario Limonciello mario.limonciello@amd.com
commit cc42e76e7de5190a7da5dac9d7b2bbb458e050bf upstream.
Add an early_init phase to MES for fetching and validating microcode from the filesystem.
If MES microcode is required but not available during early init, the firmware framebuffer will have already been released and the screen will freeze.
Move the request for MES microcode into the early_init phase so that if it's not available, early_init will fail.
Reviewed-by: Alex Deucher alexander.deucher@amd.com Reviewed-by: Lijo Lazar lijo.lazar@amd.com Signed-off-by: Mario Limonciello mario.limonciello@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c | 65 +++++++++++++++++++++ drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h | 1 drivers/gpu/drm/amd/amdgpu/mes_v10_1.c | 97 +++++--------------------------- drivers/gpu/drm/amd/amdgpu/mes_v11_0.c | 88 +++++------------------------ 4 files changed, 100 insertions(+), 151 deletions(-)
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c @@ -21,6 +21,8 @@ * */
+#include <linux/firmware.h> + #include "amdgpu_mes.h" #include "amdgpu.h" #include "soc15_common.h" @@ -1423,3 +1425,66 @@ error_pasid: kfree(vm); return 0; } + +int amdgpu_mes_init_microcode(struct amdgpu_device *adev, int pipe) +{ + const struct mes_firmware_header_v1_0 *mes_hdr; + struct amdgpu_firmware_info *info; + char ucode_prefix[30]; + char fw_name[40]; + int r; + + amdgpu_ucode_ip_version_decode(adev, GC_HWIP, ucode_prefix, sizeof(ucode_prefix)); + snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_mes%s.bin", + ucode_prefix, + pipe == AMDGPU_MES_SCHED_PIPE ? "" : "1"); + r = request_firmware(&adev->mes.fw[pipe], fw_name, adev->dev); + if (r) + goto out; + + r = amdgpu_ucode_validate(adev->mes.fw[pipe]); + if (r) + goto out; + + mes_hdr = (const struct mes_firmware_header_v1_0 *) + adev->mes.fw[pipe]->data; + adev->mes.uc_start_addr[pipe] = + le32_to_cpu(mes_hdr->mes_uc_start_addr_lo) | + ((uint64_t)(le32_to_cpu(mes_hdr->mes_uc_start_addr_hi)) << 32); + adev->mes.data_start_addr[pipe] = + le32_to_cpu(mes_hdr->mes_data_start_addr_lo) | + ((uint64_t)(le32_to_cpu(mes_hdr->mes_data_start_addr_hi)) << 32); + + if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) { + int ucode, ucode_data; + + if (pipe == AMDGPU_MES_SCHED_PIPE) { + ucode = AMDGPU_UCODE_ID_CP_MES; + ucode_data = AMDGPU_UCODE_ID_CP_MES_DATA; + } else { + ucode = AMDGPU_UCODE_ID_CP_MES1; + ucode_data = AMDGPU_UCODE_ID_CP_MES1_DATA; + } + + info = &adev->firmware.ucode[ucode]; + info->ucode_id = ucode; + info->fw = adev->mes.fw[pipe]; + adev->firmware.fw_size += + ALIGN(le32_to_cpu(mes_hdr->mes_ucode_size_bytes), + PAGE_SIZE); + + info = &adev->firmware.ucode[ucode_data]; + info->ucode_id = ucode_data; + info->fw = adev->mes.fw[pipe]; + adev->firmware.fw_size += + ALIGN(le32_to_cpu(mes_hdr->mes_ucode_data_size_bytes), + PAGE_SIZE); + } + + return 0; + +out: + release_firmware(adev->mes.fw[pipe]); + adev->mes.fw[pipe] = NULL; + return r; +} --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h @@ -306,6 +306,7 @@ struct amdgpu_mes_funcs {
int amdgpu_mes_ctx_get_offs(struct amdgpu_ring *ring, unsigned int id_offs);
+int amdgpu_mes_init_microcode(struct amdgpu_device *adev, int pipe); int amdgpu_mes_init(struct amdgpu_device *adev); void amdgpu_mes_fini(struct amdgpu_device *adev);
--- a/drivers/gpu/drm/amd/amdgpu/mes_v10_1.c +++ b/drivers/gpu/drm/amd/amdgpu/mes_v10_1.c @@ -379,82 +379,6 @@ static const struct amdgpu_mes_funcs mes .resume_gang = mes_v10_1_resume_gang, };
-static int mes_v10_1_init_microcode(struct amdgpu_device *adev, - enum admgpu_mes_pipe pipe) -{ - const char *chip_name; - char fw_name[30]; - int err; - const struct mes_firmware_header_v1_0 *mes_hdr; - struct amdgpu_firmware_info *info; - - switch (adev->ip_versions[GC_HWIP][0]) { - case IP_VERSION(10, 1, 10): - chip_name = "navi10"; - break; - case IP_VERSION(10, 3, 0): - chip_name = "sienna_cichlid"; - break; - default: - BUG(); - } - - if (pipe == AMDGPU_MES_SCHED_PIPE) - snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_mes.bin", - chip_name); - else - snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_mes1.bin", - chip_name); - - err = request_firmware(&adev->mes.fw[pipe], fw_name, adev->dev); - if (err) - return err; - - err = amdgpu_ucode_validate(adev->mes.fw[pipe]); - if (err) { - release_firmware(adev->mes.fw[pipe]); - adev->mes.fw[pipe] = NULL; - return err; - } - - mes_hdr = (const struct mes_firmware_header_v1_0 *) - adev->mes.fw[pipe]->data; - adev->mes.uc_start_addr[pipe] = - le32_to_cpu(mes_hdr->mes_uc_start_addr_lo) | - ((uint64_t)(le32_to_cpu(mes_hdr->mes_uc_start_addr_hi)) << 32); - adev->mes.data_start_addr[pipe] = - le32_to_cpu(mes_hdr->mes_data_start_addr_lo) | - ((uint64_t)(le32_to_cpu(mes_hdr->mes_data_start_addr_hi)) << 32); - - if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) { - int ucode, ucode_data; - - if (pipe == AMDGPU_MES_SCHED_PIPE) { - ucode = AMDGPU_UCODE_ID_CP_MES; - ucode_data = AMDGPU_UCODE_ID_CP_MES_DATA; - } else { - ucode = AMDGPU_UCODE_ID_CP_MES1; - ucode_data = AMDGPU_UCODE_ID_CP_MES1_DATA; - } - - info = &adev->firmware.ucode[ucode]; - info->ucode_id = ucode; - info->fw = adev->mes.fw[pipe]; - adev->firmware.fw_size += - ALIGN(le32_to_cpu(mes_hdr->mes_ucode_size_bytes), - PAGE_SIZE); - - info = &adev->firmware.ucode[ucode_data]; - info->ucode_id = ucode_data; - info->fw = adev->mes.fw[pipe]; - adev->firmware.fw_size += - ALIGN(le32_to_cpu(mes_hdr->mes_ucode_data_size_bytes), - PAGE_SIZE); - } - - return 0; -} - static void mes_v10_1_free_microcode(struct amdgpu_device *adev, enum admgpu_mes_pipe pipe) { @@ -1019,10 +943,6 @@ static int mes_v10_1_sw_init(void *handl if (!adev->enable_mes_kiq && pipe == AMDGPU_MES_KIQ_PIPE) continue;
- r = mes_v10_1_init_microcode(adev, pipe); - if (r) - return r; - r = mes_v10_1_allocate_eop_buf(adev, pipe); if (r) return r; @@ -1229,6 +1149,22 @@ static int mes_v10_1_resume(void *handle return amdgpu_mes_resume(adev); }
+static int mes_v10_0_early_init(void *handle) +{ + struct amdgpu_device *adev = (struct amdgpu_device *)handle; + int pipe, r; + + for (pipe = 0; pipe < AMDGPU_MAX_MES_PIPES; pipe++) { + if (!adev->enable_mes_kiq && pipe == AMDGPU_MES_KIQ_PIPE) + continue; + r = amdgpu_mes_init_microcode(adev, pipe); + if (r) + return r; + } + + return 0; +} + static int mes_v10_0_late_init(void *handle) { struct amdgpu_device *adev = (struct amdgpu_device *)handle; @@ -1241,6 +1177,7 @@ static int mes_v10_0_late_init(void *han
static const struct amd_ip_funcs mes_v10_1_ip_funcs = { .name = "mes_v10_1", + .early_init = mes_v10_0_early_init, .late_init = mes_v10_0_late_init, .sw_init = mes_v10_1_sw_init, .sw_fini = mes_v10_1_sw_fini, --- a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c +++ b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c @@ -460,73 +460,6 @@ static const struct amdgpu_mes_funcs mes .misc_op = mes_v11_0_misc_op, };
-static int mes_v11_0_init_microcode(struct amdgpu_device *adev, - enum admgpu_mes_pipe pipe) -{ - char fw_name[30]; - char ucode_prefix[30]; - int err; - const struct mes_firmware_header_v1_0 *mes_hdr; - struct amdgpu_firmware_info *info; - - amdgpu_ucode_ip_version_decode(adev, GC_HWIP, ucode_prefix, sizeof(ucode_prefix)); - - if (pipe == AMDGPU_MES_SCHED_PIPE) - snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_mes.bin", - ucode_prefix); - else - snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_mes1.bin", - ucode_prefix); - - err = request_firmware(&adev->mes.fw[pipe], fw_name, adev->dev); - if (err) - return err; - - err = amdgpu_ucode_validate(adev->mes.fw[pipe]); - if (err) { - release_firmware(adev->mes.fw[pipe]); - adev->mes.fw[pipe] = NULL; - return err; - } - - mes_hdr = (const struct mes_firmware_header_v1_0 *) - adev->mes.fw[pipe]->data; - adev->mes.uc_start_addr[pipe] = - le32_to_cpu(mes_hdr->mes_uc_start_addr_lo) | - ((uint64_t)(le32_to_cpu(mes_hdr->mes_uc_start_addr_hi)) << 32); - adev->mes.data_start_addr[pipe] = - le32_to_cpu(mes_hdr->mes_data_start_addr_lo) | - ((uint64_t)(le32_to_cpu(mes_hdr->mes_data_start_addr_hi)) << 32); - - if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) { - int ucode, ucode_data; - - if (pipe == AMDGPU_MES_SCHED_PIPE) { - ucode = AMDGPU_UCODE_ID_CP_MES; - ucode_data = AMDGPU_UCODE_ID_CP_MES_DATA; - } else { - ucode = AMDGPU_UCODE_ID_CP_MES1; - ucode_data = AMDGPU_UCODE_ID_CP_MES1_DATA; - } - - info = &adev->firmware.ucode[ucode]; - info->ucode_id = ucode; - info->fw = adev->mes.fw[pipe]; - adev->firmware.fw_size += - ALIGN(le32_to_cpu(mes_hdr->mes_ucode_size_bytes), - PAGE_SIZE); - - info = &adev->firmware.ucode[ucode_data]; - info->ucode_id = ucode_data; - info->fw = adev->mes.fw[pipe]; - adev->firmware.fw_size += - ALIGN(le32_to_cpu(mes_hdr->mes_ucode_data_size_bytes), - PAGE_SIZE); - } - - return 0; -} - static void mes_v11_0_free_microcode(struct amdgpu_device *adev, enum admgpu_mes_pipe pipe) { @@ -1101,10 +1034,6 @@ static int mes_v11_0_sw_init(void *handl if (!adev->enable_mes_kiq && pipe == AMDGPU_MES_KIQ_PIPE) continue;
- r = mes_v11_0_init_microcode(adev, pipe); - if (r) - return r; - r = mes_v11_0_allocate_eop_buf(adev, pipe); if (r) return r; @@ -1339,6 +1268,22 @@ static int mes_v11_0_resume(void *handle return amdgpu_mes_resume(adev); }
+static int mes_v11_0_early_init(void *handle) +{ + struct amdgpu_device *adev = (struct amdgpu_device *)handle; + int pipe, r; + + for (pipe = 0; pipe < AMDGPU_MAX_MES_PIPES; pipe++) { + if (!adev->enable_mes_kiq && pipe == AMDGPU_MES_KIQ_PIPE) + continue; + r = amdgpu_mes_init_microcode(adev, pipe); + if (r) + return r; + } + + return 0; +} + static int mes_v11_0_late_init(void *handle) { struct amdgpu_device *adev = (struct amdgpu_device *)handle; @@ -1353,6 +1298,7 @@ static int mes_v11_0_late_init(void *han
static const struct amd_ip_funcs mes_v11_0_ip_funcs = { .name = "mes_v11_0", + .early_init = mes_v11_0_early_init, .late_init = mes_v11_0_late_init, .sw_init = mes_v11_0_sw_init, .sw_fini = mes_v11_0_sw_fini,
From: Mario Limonciello mario.limonciello@amd.com
commit 2210af50ae7f4104269dfde7bafbbfbacdbe1a2b upstream.
All microcode runs a basic validation after it has been loaded, and each IP block runs both steps as part of its init.
Introduce a wrapper for request_firmware and amdgpu_ucode_validate. This wrapper will also remap any error codes from request_firmware to -ENODEV. This is so that early_init will fail if firmware couldn't be loaded instead of the IP block being disabled.
Reviewed-by: Lijo Lazar lijo.lazar@amd.com Reviewed-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Mario Limonciello mario.limonciello@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c | 36 ++++++++++++++++++++++++++++++ drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h | 3 ++ 2 files changed, 39 insertions(+)
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c @@ -1091,3 +1091,39 @@ void amdgpu_ucode_ip_version_decode(stru
snprintf(ucode_prefix, len, "%s_%d_%d_%d", ip_name, maj, min, rev); } + +/* + * amdgpu_ucode_request - Fetch and validate amdgpu microcode + * + * @adev: amdgpu device + * @fw: pointer to load firmware to + * @fw_name: firmware to load + * + * This is a helper that will use request_firmware and amdgpu_ucode_validate + * to load and run basic validation on firmware. If the load fails, remap + * the error code to -ENODEV, so that early_init functions will fail to load. + */ +int amdgpu_ucode_request(struct amdgpu_device *adev, const struct firmware **fw, + const char *fw_name) +{ + int err = request_firmware(fw, fw_name, adev->dev); + + if (err) + return -ENODEV; + err = amdgpu_ucode_validate(*fw); + if (err) + dev_dbg(adev->dev, ""%s" failed to validate\n", fw_name); + + return err; +} + +/* + * amdgpu_ucode_release - Release firmware microcode + * + * @fw: pointer to firmware to release + */ +void amdgpu_ucode_release(const struct firmware **fw) +{ + release_firmware(*fw); + *fw = NULL; +} --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h @@ -544,6 +544,9 @@ void amdgpu_ucode_print_sdma_hdr(const s void amdgpu_ucode_print_psp_hdr(const struct common_firmware_header *hdr); void amdgpu_ucode_print_gpu_info_hdr(const struct common_firmware_header *hdr); int amdgpu_ucode_validate(const struct firmware *fw); +int amdgpu_ucode_request(struct amdgpu_device *adev, const struct firmware **fw, + const char *fw_name); +void amdgpu_ucode_release(const struct firmware **fw); bool amdgpu_ucode_hdr_version(union amdgpu_firmware_header *hdr, uint16_t hdr_major, uint16_t hdr_minor);
From: Mario Limonciello mario.limonciello@amd.com
commit 11e0b0067ec0707e8e598a5f9a547ab618ae7982 upstream.
The `amdgpu_ucode_request` helper will ensure that the return code for missing firmware is -ENODEV so that early_init can fail.
The `amdgpu_ucode_release` helper provides symmetry for releasing firmware.
Reviewed-by: Alex Deucher alexander.deucher@amd.com Reviewed-by: Lijo Lazar lijo.lazar@amd.com Signed-off-by: Mario Limonciello mario.limonciello@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c | 10 ++-------- drivers/gpu/drm/amd/amdgpu/mes_v10_1.c | 10 +--------- drivers/gpu/drm/amd/amdgpu/mes_v11_0.c | 10 +--------- 3 files changed, 4 insertions(+), 26 deletions(-)
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c @@ -1438,11 +1438,7 @@ int amdgpu_mes_init_microcode(struct amd snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_mes%s.bin", ucode_prefix, pipe == AMDGPU_MES_SCHED_PIPE ? "" : "1"); - r = request_firmware(&adev->mes.fw[pipe], fw_name, adev->dev); - if (r) - goto out; - - r = amdgpu_ucode_validate(adev->mes.fw[pipe]); + r = amdgpu_ucode_request(adev, &adev->mes.fw[pipe], fw_name); if (r) goto out;
@@ -1482,9 +1478,7 @@ int amdgpu_mes_init_microcode(struct amd }
return 0; - out: - release_firmware(adev->mes.fw[pipe]); - adev->mes.fw[pipe] = NULL; + amdgpu_ucode_release(&adev->mes.fw[pipe]); return r; } --- a/drivers/gpu/drm/amd/amdgpu/mes_v10_1.c +++ b/drivers/gpu/drm/amd/amdgpu/mes_v10_1.c @@ -379,13 +379,6 @@ static const struct amdgpu_mes_funcs mes .resume_gang = mes_v10_1_resume_gang, };
-static void mes_v10_1_free_microcode(struct amdgpu_device *adev, - enum admgpu_mes_pipe pipe) -{ - release_firmware(adev->mes.fw[pipe]); - adev->mes.fw[pipe] = NULL; -} - static int mes_v10_1_allocate_ucode_buffer(struct amdgpu_device *adev, enum admgpu_mes_pipe pipe) { @@ -979,8 +972,7 @@ static int mes_v10_1_sw_fini(void *handl amdgpu_bo_free_kernel(&adev->mes.eop_gpu_obj[pipe], &adev->mes.eop_gpu_addr[pipe], NULL); - - mes_v10_1_free_microcode(adev, pipe); + amdgpu_ucode_release(&adev->mes.fw[pipe]); }
amdgpu_bo_free_kernel(&adev->gfx.kiq.ring.mqd_obj, --- a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c +++ b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c @@ -460,13 +460,6 @@ static const struct amdgpu_mes_funcs mes .misc_op = mes_v11_0_misc_op, };
-static void mes_v11_0_free_microcode(struct amdgpu_device *adev, - enum admgpu_mes_pipe pipe) -{ - release_firmware(adev->mes.fw[pipe]); - adev->mes.fw[pipe] = NULL; -} - static int mes_v11_0_allocate_ucode_buffer(struct amdgpu_device *adev, enum admgpu_mes_pipe pipe) { @@ -1070,8 +1063,7 @@ static int mes_v11_0_sw_fini(void *handl amdgpu_bo_free_kernel(&adev->mes.eop_gpu_obj[pipe], &adev->mes.eop_gpu_addr[pipe], NULL); - - mes_v11_0_free_microcode(adev, pipe); + amdgpu_ucode_release(&adev->mes.fw[pipe]); }
amdgpu_bo_free_kernel(&adev->gfx.kiq.ring.mqd_obj,
From: Ping Cheng pinglinux@gmail.com
commit 08a46b4190d345544d04ce4fe2e1844b772b8535 upstream.
Some older tablets may not report physical maximum for X/Y coordinates. Set a default to prevent undefined resolution.
Signed-off-by: Ping Cheng ping.cheng@wacom.com Link: https://lore.kernel.org/r/20230409164229.29777-1-ping.cheng@wacom.com Signed-off-by: Benjamin Tissoires benjamin.tissoires@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/hid/wacom_wac.c | 12 ++++++++++-- 1 file changed, 10 insertions(+), 2 deletions(-)
--- a/drivers/hid/wacom_wac.c +++ b/drivers/hid/wacom_wac.c @@ -1895,6 +1895,7 @@ static void wacom_map_usage(struct input int fmax = field->logical_maximum; unsigned int equivalent_usage = wacom_equivalent_usage(usage->hid); int resolution_code = code; + int resolution = hidinput_calc_abs_res(field, resolution_code);
if (equivalent_usage == HID_DG_TWIST) { resolution_code = ABS_RZ; @@ -1915,8 +1916,15 @@ static void wacom_map_usage(struct input switch (type) { case EV_ABS: input_set_abs_params(input, code, fmin, fmax, fuzz, 0); - input_abs_set_res(input, code, - hidinput_calc_abs_res(field, resolution_code)); + + /* older tablet may miss physical usage */ + if ((code == ABS_X || code == ABS_Y) && !resolution) { + resolution = WACOM_INTUOS_RES; + hid_warn(input, + "Wacom usage (%d) missing resolution \n", + code); + } + input_abs_set_res(input, code, resolution); break; case EV_KEY: case EV_MSC:
From: Ping Cheng pinglinux@gmail.com
commit 17d793f3ed53080dab6bbeabfc82de890c901001 upstream.
To fully utilize the BT polling/refresh rate, a few input events are sent together to reduce event delay. This causes an issue for the timestamps generated by input_sync, since all the events in the same packet end up with pretty much the same timestamp. This patch inserts a time interval between the events by averaging the total time used for sending the packet.
This decision was mainly based on observing the actual time interval between each BT poll. The interval does not seem to be constant, due to the network and system environment, so solutions other than averaging do not end up with valid timestamps.
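To illustrate the averaging described above, here is a small standalone sketch (userspace C with made-up numbers, not the driver code) of how the per-frame timestamps are spread backwards from the packet arrival time, mirroring the logic in the diff below:

#include <stdio.h>

int main(void)
{
	long long prev_packet_ns = 0;		/* when the previous packet was handled */
	long long packet_ns = 60000000;		/* when this packet was received (+60 ms) */
	int valid_frames = 4;			/* valid pen frames in this packet */
	long long interval = (packet_ns - prev_packet_ns) / valid_frames;

	for (int i = 0; i < valid_frames; i++) {
		/* later frames in the packet are closer to the arrival time */
		long long ts = packet_ns - (long long)(valid_frames - i - 1) * interval;
		printf("frame %d -> timestamp %lld ns\n", i, ts);
	}
	return 0;
}

With these numbers each frame gets a 15 ms slot ending exactly at the packet arrival time, which is what the driver change below computes using ktime_get().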
Signed-off-by: Ping Cheng ping.cheng@wacom.com Reviewed-by: Jason Gerecke jason.gerecke@wacom.com Signed-off-by: Jiri Kosina jkosina@suse.cz Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/hid/wacom_wac.c | 26 ++++++++++++++++++++++++++ drivers/hid/wacom_wac.h | 1 + 2 files changed, 27 insertions(+)
--- a/drivers/hid/wacom_wac.c +++ b/drivers/hid/wacom_wac.c @@ -1308,6 +1308,9 @@ static void wacom_intuos_pro2_bt_pen(str
struct input_dev *pen_input = wacom->pen_input; unsigned char *data = wacom->data; + int number_of_valid_frames = 0; + int time_interval = 15000000; + ktime_t time_packet_received = ktime_get(); int i;
if (wacom->features.type == INTUOSP2_BT || @@ -1328,12 +1331,30 @@ static void wacom_intuos_pro2_bt_pen(str wacom->id[0] |= (wacom->serial[0] >> 32) & 0xFFFFF; }
+ /* number of valid frames */ for (i = 0; i < pen_frames; i++) { unsigned char *frame = &data[i*pen_frame_len + 1]; bool valid = frame[0] & 0x80; + + if (valid) + number_of_valid_frames++; + } + + if (number_of_valid_frames) { + if (wacom->hid_data.time_delayed) + time_interval = ktime_get() - wacom->hid_data.time_delayed; + time_interval /= number_of_valid_frames; + wacom->hid_data.time_delayed = time_packet_received; + } + + for (i = 0; i < number_of_valid_frames; i++) { + unsigned char *frame = &data[i*pen_frame_len + 1]; + bool valid = frame[0] & 0x80; bool prox = frame[0] & 0x40; bool range = frame[0] & 0x20; bool invert = frame[0] & 0x10; + int frames_number_reversed = number_of_valid_frames - i - 1; + int event_timestamp = time_packet_received - frames_number_reversed * time_interval;
if (!valid) continue; @@ -1346,6 +1367,7 @@ static void wacom_intuos_pro2_bt_pen(str wacom->tool[0] = 0; wacom->id[0] = 0; wacom->serial[0] = 0; + wacom->hid_data.time_delayed = 0; return; }
@@ -1382,6 +1404,7 @@ static void wacom_intuos_pro2_bt_pen(str get_unaligned_le16(&frame[11])); } } + if (wacom->tool[0]) { input_report_abs(pen_input, ABS_PRESSURE, get_unaligned_le16(&frame[5])); if (wacom->features.type == INTUOSP2_BT || @@ -1405,6 +1428,9 @@ static void wacom_intuos_pro2_bt_pen(str
wacom->shared->stylus_in_proximity = prox;
+ /* add timestamp to unpack the frames */ + input_set_timestamp(pen_input, event_timestamp); + input_sync(pen_input); } } --- a/drivers/hid/wacom_wac.h +++ b/drivers/hid/wacom_wac.h @@ -324,6 +324,7 @@ struct hid_data { int ps_connected; bool pad_input_event_flag; unsigned short sequence_number; + int time_delayed; };
struct wacom_remote_data {
From: Konstantin Komarov almaz.alexandrovich@paragon-software.com
commit 6827d50b2c430c329af442b64c9176d174f56521 upstream.
Removed unused macro. Changed null pointer checking. Fixed inconsistent indenting.
Signed-off-by: Konstantin Komarov almaz.alexandrovich@paragon-software.com Cc: Rudi Heitbaum rudi@heitbaum.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/ntfs3/bitmap.c | 3 ++- fs/ntfs3/frecord.c | 2 +- fs/ntfs3/fsntfs.c | 6 ++++-- fs/ntfs3/namei.c | 2 +- fs/ntfs3/ntfs.h | 3 --- 5 files changed, 8 insertions(+), 8 deletions(-)
--- a/fs/ntfs3/bitmap.c +++ b/fs/ntfs3/bitmap.c @@ -658,7 +658,8 @@ int wnd_init(struct wnd_bitmap *wnd, str if (!wnd->bits_last) wnd->bits_last = wbits;
- wnd->free_bits = kcalloc(wnd->nwnd, sizeof(u16), GFP_NOFS | __GFP_NOWARN); + wnd->free_bits = + kcalloc(wnd->nwnd, sizeof(u16), GFP_NOFS | __GFP_NOWARN); if (!wnd->free_bits) return -ENOMEM;
--- a/fs/ntfs3/frecord.c +++ b/fs/ntfs3/frecord.c @@ -1645,7 +1645,7 @@ struct ATTR_FILE_NAME *ni_fname_name(str { struct ATTRIB *attr = NULL; struct ATTR_FILE_NAME *fname; - struct le_str *fns; + struct le_str *fns;
if (le) *le = NULL; --- a/fs/ntfs3/fsntfs.c +++ b/fs/ntfs3/fsntfs.c @@ -2594,8 +2594,10 @@ static inline bool is_reserved_name(stru if (len == 4 || (len > 4 && le16_to_cpu(name[4]) == '.')) { port_digit = le16_to_cpu(name[3]); if (port_digit >= '1' && port_digit <= '9') - if (!ntfs_cmp_names(name, 3, COM_NAME, 3, upcase, false) || - !ntfs_cmp_names(name, 3, LPT_NAME, 3, upcase, false)) + if (!ntfs_cmp_names(name, 3, COM_NAME, 3, upcase, + false) || + !ntfs_cmp_names(name, 3, LPT_NAME, 3, upcase, + false)) return true; }
--- a/fs/ntfs3/namei.c +++ b/fs/ntfs3/namei.c @@ -93,7 +93,7 @@ static struct dentry *ntfs_lookup(struct * If the MFT record of ntfs inode is not a base record, inode->i_op can be NULL. * This causes null pointer dereference in d_splice_alias(). */ - if (!IS_ERR(inode) && inode->i_op == NULL) { + if (!IS_ERR_OR_NULL(inode) && !inode->i_op) { iput(inode); inode = ERR_PTR(-EINVAL); } --- a/fs/ntfs3/ntfs.h +++ b/fs/ntfs3/ntfs.h @@ -435,9 +435,6 @@ static inline u64 attr_svcn(const struct return attr->non_res ? le64_to_cpu(attr->nres.svcn) : 0; }
-/* The size of resident attribute by its resident size. */ -#define BYTES_PER_RESIDENT(b) (0x18 + (b)) - static_assert(sizeof(struct ATTRIB) == 0x48); static_assert(sizeof(((struct ATTRIB *)NULL)->res) == 0x08); static_assert(sizeof(((struct ATTRIB *)NULL)->nres) == 0x38);
From: Konrad Dybcio konrad.dybcio@linaro.org
commit 3eeca5e5f3100435b06a5b5d86daa3d135a8a4bd upstream.
The adreno_load_gpu() path is guarded by an error check on adreno_load_fw(). This function is responsible for loading Qualcomm-only-signed binaries (e.g. SQE and GMU FW for A6XX), but it does not take the vendor-signed ZAP blob into account.
By embedding the SQE (and GMU, if necessary) firmware into the initrd/kernel, we can trigger an unfortunate path that does not bail out early and proceeds with gpu->hw_init(). That will fail, as the ZAP loader path will not find the firmware and will return back to adreno_load_gpu().
This error path involves pm_runtime_put_sync(), which then calls idle() instead of suspend(). This is suboptimal, as it means that we're not going through the clean shutdown sequence. With at least A619_holi, this makes the GPU not wake up until it goes through at least one more start-fail-stop cycle. The pm_runtime_put_sync that appears in the error path does not actually guarantee a clean shutdown, because runtime autosuspend was enabled earlier.
Fix that by using pm_runtime_put_sync_suspend to force a clean shutdown.
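A minimal sketch of the difference (illustrative only, device setup assumed; this is not the actual msm error path): with runtime autosuspend enabled, pm_runtime_put_sync() only takes the idle path and arms the autosuspend timer, while pm_runtime_put_sync_suspend() drops the usage count and runs the ->runtime_suspend() callback synchronously:

#include <linux/device.h>
#include <linux/pm_runtime.h>

static void gpu_load_error_path(struct device *dev)
{
	/*
	 * pm_runtime_put_sync(dev) would only mark the device idle here,
	 * leaving the real suspend to the autosuspend timer.  Forcing the
	 * suspend callback to run immediately gives the hardware the clean
	 * shutdown sequence it expects:
	 */
	pm_runtime_put_sync_suspend(dev);
	pm_runtime_disable(dev);
}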
Test cases: 1. All firmware baked into kernel 2. error loading ZAP fw in initrd -> load from rootfs at DE start
Both succeed on A619_holi (SM6375) and A630 (SDM845).
Fixes: 0d997f95b70f ("drm/msm/adreno: fix runtime PM imbalance at gpu load") Signed-off-by: Konrad Dybcio konrad.dybcio@linaro.org Reviewed-by: Johan Hovold johan+linaro@kernel.org Patchwork: https://patchwork.freedesktop.org/patch/530001/ Link: https://lore.kernel.org/r/20230330231517.2747024-1-konrad.dybcio@linaro.org Signed-off-by: Rob Clark robdclark@chromium.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/msm/adreno/adreno_device.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/gpu/drm/msm/adreno/adreno_device.c +++ b/drivers/gpu/drm/msm/adreno/adreno_device.c @@ -465,7 +465,7 @@ struct msm_gpu *adreno_load_gpu(struct d return gpu;
err_put_rpm: - pm_runtime_put_sync(&pdev->dev); + pm_runtime_put_sync_suspend(&pdev->dev); err_disable_rpm: pm_runtime_disable(&pdev->dev);
From: Radhakrishna Sripada radhakrishna.sripada@intel.com
[ Upstream commit 5fba65efa7cfb8cef227a2c555deb10327a5e27b ]
Both workarounds require the same implementation and apply to MTL P and M from stepping A0 to B0 (exclusive).
v2: - Remove unrelated brace removal. (Matt)
Signed-off-by: Radhakrishna Sripada radhakrishna.sripada@intel.com Signed-off-by: Gustavo Sousa gustavo.sousa@intel.com Reviewed-by: Matt Roper matthew.d.roper@intel.com Signed-off-by: Matt Roper matthew.d.roper@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20230329212336.106161-2-gustav... Stable-dep-of: 81900e3a3775 ("drm/i915: disable sampler indirect state in bindless heap") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/i915/gt/intel_gt_regs.h | 1 + drivers/gpu/drm/i915/gt/intel_workarounds.c | 9 +++++++++ 2 files changed, 10 insertions(+)
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_regs.h b/drivers/gpu/drm/i915/gt/intel_gt_regs.h index 9758b0b635601..cd45a45066ccb 100644 --- a/drivers/gpu/drm/i915/gt/intel_gt_regs.h +++ b/drivers/gpu/drm/i915/gt/intel_gt_regs.h @@ -1135,6 +1135,7 @@ #define ENABLE_SMALLPL REG_BIT(15) #define SC_DISABLE_POWER_OPTIMIZATION_EBB REG_BIT(9) #define GEN11_SAMPLER_ENABLE_HEADLESS_MSG REG_BIT(5) +#define MTL_DISABLE_SAMPLER_SC_OOO REG_BIT(3)
#define GEN9_HALF_SLICE_CHICKEN7 MCR_REG(0xe194) #define DG2_DISABLE_ROUND_ENABLE_ALLOW_FOR_SSLA REG_BIT(15) diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c b/drivers/gpu/drm/i915/gt/intel_workarounds.c index e13052c5dae19..526fb9cc36b9b 100644 --- a/drivers/gpu/drm/i915/gt/intel_workarounds.c +++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c @@ -3035,6 +3035,15 @@ general_render_compute_wa_init(struct intel_engine_cs *engine, struct i915_wa_li
add_render_compute_tuning_settings(i915, wal);
+ if (IS_MTL_GRAPHICS_STEP(i915, M, STEP_A0, STEP_B0) || + IS_MTL_GRAPHICS_STEP(i915, P, STEP_A0, STEP_B0)) + /* + * Wa_14017066071 + * Wa_14017654203 + */ + wa_mcr_masked_en(wal, GEN10_SAMPLER_MODE, + MTL_DISABLE_SAMPLER_SC_OOO); + if (IS_MTL_GRAPHICS_STEP(i915, M, STEP_A0, STEP_B0) || IS_MTL_GRAPHICS_STEP(i915, P, STEP_A0, STEP_B0) || IS_PONTEVECCHIO(i915) ||
From: Haridhar Kalvala haridhar.kalvala@intel.com
[ Upstream commit 4b51210f98c2b89ce37aede5b8dc5105be0572c6 ]
Wa_14017856879 implementation for mtl.
Bspec: 46046
Signed-off-by: Haridhar Kalvala haridhar.kalvala@intel.com Reviewed-by: Gustavo Sousa gustavo.sousa@intel.com Signed-off-by: Matt Roper matthew.d.roper@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20230404173220.3175577-1-harid... Stable-dep-of: 81900e3a3775 ("drm/i915: disable sampler indirect state in bindless heap") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/i915/gt/intel_gt_regs.h | 2 ++ drivers/gpu/drm/i915/gt/intel_workarounds.c | 5 +++++ 2 files changed, 7 insertions(+)
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_regs.h b/drivers/gpu/drm/i915/gt/intel_gt_regs.h index cd45a45066ccb..0c30738087a79 100644 --- a/drivers/gpu/drm/i915/gt/intel_gt_regs.h +++ b/drivers/gpu/drm/i915/gt/intel_gt_regs.h @@ -1162,7 +1162,9 @@ #define THREAD_EX_ARB_MODE_RR_AFTER_DEP REG_FIELD_PREP(THREAD_EX_ARB_MODE, 0x2)
#define HSW_ROW_CHICKEN3 _MMIO(0xe49c) +#define GEN9_ROW_CHICKEN3 MCR_REG(0xe49c) #define HSW_ROW_CHICKEN3_L3_GLOBAL_ATOMICS_DISABLE (1 << 6) +#define MTL_DISABLE_FIX_FOR_EOT_FLUSH REG_BIT(9)
#define GEN8_ROW_CHICKEN MCR_REG(0xe4f0) #define FLOW_CONTROL_ENABLE REG_BIT(15) diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c b/drivers/gpu/drm/i915/gt/intel_workarounds.c index 526fb9cc36b9b..09455682967de 100644 --- a/drivers/gpu/drm/i915/gt/intel_workarounds.c +++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c @@ -3035,6 +3035,11 @@ general_render_compute_wa_init(struct intel_engine_cs *engine, struct i915_wa_li
add_render_compute_tuning_settings(i915, wal);
+ if (IS_MTL_GRAPHICS_STEP(i915, M, STEP_B0, STEP_FOREVER) || + IS_MTL_GRAPHICS_STEP(i915, P, STEP_B0, STEP_FOREVER)) + /* Wa_14017856879 */ + wa_mcr_masked_en(wal, GEN9_ROW_CHICKEN3, MTL_DISABLE_FIX_FOR_EOT_FLUSH); + if (IS_MTL_GRAPHICS_STEP(i915, M, STEP_A0, STEP_B0) || IS_MTL_GRAPHICS_STEP(i915, P, STEP_A0, STEP_B0)) /*
From: Lionel Landwerlin lionel.g.landwerlin@intel.com
[ Upstream commit 81900e3a37750d8c6ad705045310e002f6dd0356 ]
By default the indirect state sampler data (border colors) are stored in the same heap as the SAMPLER_STATE structure. For userspace drivers those can be 2 different heaps (the dynamic state heap and the bindless sampler state heap). This means that the border colors have to be copied into 2 different places so that the same SAMPLER_STATE structure finds the right data.
This change forces the indirect state sampler data to only be in the dynamic state pool (more convenient for userspace drivers, which then only need one copy of the border colors). This reproduces the behavior of the Windows drivers.
BSpec: 46052
Signed-off-by: Lionel Landwerlin lionel.g.landwerlin@intel.com Cc: stable@vger.kernel.org Reviewed-by: Haridhar Kalvala haridhar.kalvala@intel.com Signed-off-by: Matt Roper matthew.d.roper@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20230407093237.3296286-1-lione... (cherry picked from commit 16fc9c08f0ec7b1c95f1ea4a16097acdb3fc943d) Signed-off-by: Joonas Lahtinen joonas.lahtinen@linux.intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/i915/gt/intel_gt_regs.h | 1 + drivers/gpu/drm/i915/gt/intel_workarounds.c | 19 +++++++++++++++++++ 2 files changed, 20 insertions(+)
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_regs.h b/drivers/gpu/drm/i915/gt/intel_gt_regs.h index 0c30738087a79..1d96c36f9efc2 100644 --- a/drivers/gpu/drm/i915/gt/intel_gt_regs.h +++ b/drivers/gpu/drm/i915/gt/intel_gt_regs.h @@ -1136,6 +1136,7 @@ #define SC_DISABLE_POWER_OPTIMIZATION_EBB REG_BIT(9) #define GEN11_SAMPLER_ENABLE_HEADLESS_MSG REG_BIT(5) #define MTL_DISABLE_SAMPLER_SC_OOO REG_BIT(3) +#define GEN11_INDIRECT_STATE_BASE_ADDR_OVERRIDE REG_BIT(0)
#define GEN9_HALF_SLICE_CHICKEN7 MCR_REG(0xe194) #define DG2_DISABLE_ROUND_ENABLE_ALLOW_FOR_SSLA REG_BIT(15) diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c b/drivers/gpu/drm/i915/gt/intel_workarounds.c index 09455682967de..620071efb2fc1 100644 --- a/drivers/gpu/drm/i915/gt/intel_workarounds.c +++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c @@ -3035,6 +3035,25 @@ general_render_compute_wa_init(struct intel_engine_cs *engine, struct i915_wa_li
add_render_compute_tuning_settings(i915, wal);
+ if (GRAPHICS_VER(i915) >= 11) { + /* This is not a Wa (although referred to as + * WaSetInidrectStateOverride in places), this allows + * applications that reference sampler states through + * the BindlessSamplerStateBaseAddress to have their + * border color relative to DynamicStateBaseAddress + * rather than BindlessSamplerStateBaseAddress. + * + * Otherwise SAMPLER_STATE border colors have to be + * copied in multiple heaps (DynamicStateBaseAddress & + * BindlessSamplerStateBaseAddress) + * + * BSpec: 46052 + */ + wa_mcr_masked_en(wal, + GEN10_SAMPLER_MODE, + GEN11_INDIRECT_STATE_BASE_ADDR_OVERRIDE); + } + if (IS_MTL_GRAPHICS_STEP(i915, M, STEP_B0, STEP_FOREVER) || IS_MTL_GRAPHICS_STEP(i915, P, STEP_B0, STEP_FOREVER)) /* Wa_14017856879 */
From: Animesh Manna animesh.manna@intel.com
[ Upstream commit f840834a8b60ffd305f03a53007605ba4dfbbc4b ]
The max source and destination limits for scalers in MTL have changed. Use the new values accordingly.
Signed-off-by: José Roberto de Souza jose.souza@intel.com Signed-off-by: Animesh Manna animesh.manna@intel.com Signed-off-by: Luca Coelho luciano.coelho@intel.com Reviewed-by: Stanislav Lisovskiy stanislav.lisovskiy@intel.com Signed-off-by: Radhakrishna Sripada radhakrishna.sripada@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20221223130509.43245-3-luciano... Stable-dep-of: d944eafed618 ("drm/i915: Check pipe source size when using skl+ scalers") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/i915/display/skl_scaler.c | 40 ++++++++++++++++++----- 1 file changed, 32 insertions(+), 8 deletions(-)
diff --git a/drivers/gpu/drm/i915/display/skl_scaler.c b/drivers/gpu/drm/i915/display/skl_scaler.c index d7390067b7d4c..01e8812936126 100644 --- a/drivers/gpu/drm/i915/display/skl_scaler.c +++ b/drivers/gpu/drm/i915/display/skl_scaler.c @@ -87,6 +87,10 @@ static u16 skl_scaler_calc_phase(int sub, int scale, bool chroma_cosited) #define ICL_MAX_SRC_H 4096 #define ICL_MAX_DST_W 5120 #define ICL_MAX_DST_H 4096 +#define MTL_MAX_SRC_W 4096 +#define MTL_MAX_SRC_H 8192 +#define MTL_MAX_DST_W 8192 +#define MTL_MAX_DST_H 8192 #define SKL_MIN_YUV_420_SRC_W 16 #define SKL_MIN_YUV_420_SRC_H 16
@@ -103,6 +107,8 @@ skl_update_scaler(struct intel_crtc_state *crtc_state, bool force_detach, struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); const struct drm_display_mode *adjusted_mode = &crtc_state->hw.adjusted_mode; + int min_src_w, min_src_h, min_dst_w, min_dst_h; + int max_src_w, max_src_h, max_dst_w, max_dst_h;
/* * Src coordinates are already rotated by 270 degrees for @@ -157,15 +163,33 @@ skl_update_scaler(struct intel_crtc_state *crtc_state, bool force_detach, return -EINVAL; }
+ min_src_w = SKL_MIN_SRC_W; + min_src_h = SKL_MIN_SRC_H; + min_dst_w = SKL_MIN_DST_W; + min_dst_h = SKL_MIN_DST_H; + + if (DISPLAY_VER(dev_priv) < 11) { + max_src_w = SKL_MAX_SRC_W; + max_src_h = SKL_MAX_SRC_H; + max_dst_w = SKL_MAX_DST_W; + max_dst_h = SKL_MAX_DST_H; + } else if (DISPLAY_VER(dev_priv) < 14) { + max_src_w = ICL_MAX_SRC_W; + max_src_h = ICL_MAX_SRC_H; + max_dst_w = ICL_MAX_DST_W; + max_dst_h = ICL_MAX_DST_H; + } else { + max_src_w = MTL_MAX_SRC_W; + max_src_h = MTL_MAX_SRC_H; + max_dst_w = MTL_MAX_DST_W; + max_dst_h = MTL_MAX_DST_H; + } + /* range checks */ - if (src_w < SKL_MIN_SRC_W || src_h < SKL_MIN_SRC_H || - dst_w < SKL_MIN_DST_W || dst_h < SKL_MIN_DST_H || - (DISPLAY_VER(dev_priv) >= 11 && - (src_w > ICL_MAX_SRC_W || src_h > ICL_MAX_SRC_H || - dst_w > ICL_MAX_DST_W || dst_h > ICL_MAX_DST_H)) || - (DISPLAY_VER(dev_priv) < 11 && - (src_w > SKL_MAX_SRC_W || src_h > SKL_MAX_SRC_H || - dst_w > SKL_MAX_DST_W || dst_h > SKL_MAX_DST_H))) { + if (src_w < min_src_w || src_h < min_src_h || + dst_w < min_dst_w || dst_h < min_dst_h || + src_w > max_src_w || src_h > max_src_h || + dst_w > max_dst_w || dst_h > max_dst_h) { drm_dbg_kms(&dev_priv->drm, "scaler_user index %u.%u: src %ux%u dst %ux%u " "size is out of scaler range\n",
On Mon, 2023-05-15 at 18:28 +0200, Greg Kroah-Hartman wrote:
From: Animesh Manna animesh.manna@intel.com
[ Upstream commit f840834a8b60ffd305f03a53007605ba4dfbbc4b ]
The max source and destination limits for scalers in MTL have changed. Use the new values accordingly.
Signed-off-by: José Roberto de Souza jose.souza@intel.com Signed-off-by: Animesh Manna animesh.manna@intel.com Signed-off-by: Luca Coelho luciano.coelho@intel.com Reviewed-by: Stanislav Lisovskiy stanislav.lisovskiy@intel.com Signed-off-by: Radhakrishna Sripada radhakrishna.sripada@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20221223130509.43245-3-luciano... Stable-dep-of: d944eafed618 ("drm/i915: Check pipe source size when using skl+ scalers") Signed-off-by: Sasha Levin sashal@kernel.org
This patch is only relevant for MTL, which is not fully supported on v6.2 yet, so I don't think we need to cherry-pick it to v6.2.
-- Cheers, Luca.
On Tue, May 16, 2023 at 07:13:42AM +0000, Coelho, Luciano wrote:
On Mon, 2023-05-15 at 18:28 +0200, Greg Kroah-Hartman wrote:
From: Animesh Manna animesh.manna@intel.com
[ Upstream commit f840834a8b60ffd305f03a53007605ba4dfbbc4b ]
The max source and destination limits for scalers in MTL have changed. Use the new values accordingly.
Signed-off-by: José Roberto de Souza jose.souza@intel.com Signed-off-by: Animesh Manna animesh.manna@intel.com Signed-off-by: Luca Coelho luciano.coelho@intel.com Reviewed-by: Stanislav Lisovskiy stanislav.lisovskiy@intel.com Signed-off-by: Radhakrishna Sripada radhakrishna.sripada@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20221223130509.43245-3-luciano... Stable-dep-of: d944eafed618 ("drm/i915: Check pipe source size when using skl+ scalers") Signed-off-by: Sasha Levin sashal@kernel.org
This patch is only relevant for MTL, which is not fully supported on v6.2 yet, so I don't think we need to cherry-pick it to v6.2.
Does it break anything on 6.2? If not, I think you want it as it is a dependency for another fix, right?
thanks,
greg k-h
On Tue, 2023-05-16 at 09:19 +0200, gregkh@linuxfoundation.org wrote:
On Tue, May 16, 2023 at 07:13:42AM +0000, Coelho, Luciano wrote:
On Mon, 2023-05-15 at 18:28 +0200, Greg Kroah-Hartman wrote:
From: Animesh Manna animesh.manna@intel.com
[ Upstream commit f840834a8b60ffd305f03a53007605ba4dfbbc4b ]
The max source and destination limits for scalers in MTL have changed. Use the new values accordingly.
Signed-off-by: José Roberto de Souza jose.souza@intel.com Signed-off-by: Animesh Manna animesh.manna@intel.com Signed-off-by: Luca Coelho luciano.coelho@intel.com Reviewed-by: Stanislav Lisovskiy stanislav.lisovskiy@intel.com Signed-off-by: Radhakrishna Sripada radhakrishna.sripada@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20221223130509.43245-3-luciano... Stable-dep-of: d944eafed618 ("drm/i915: Check pipe source size when using skl+ scalers") Signed-off-by: Sasha Levin sashal@kernel.org
This patch is only relevant for MTL, which is not fully supported on v6.2 yet, so I don't think we need to cherry-pick it to v6.2.
Does it break anything on 6.2? If not, I think you want it as it is a dependancy for another fix, right?
Ah, I missed the dependency link there. I was about to write that it doesn't _hurt_ to take it, just thought it was irrelevant.
So, this patch _can_ be taken if something else depends on it.
-- Cheers, Luca.
From: Ville Syrjälä ville.syrjala@linux.intel.com
[ Upstream commit d944eafed618a8507270b324ad9d5405bb7f0b3e ]
The skl+ scalers only sample 12 bits of PIPESRC so we can't do any plane scaling at all when the pipe source size is >4k.
Make sure the pipe source size is also below the scaler's src size limits. Might not be 100% accurate, but should at least be safe. We can refine the limits later if we discover that recent hw is less restricted.
Cc: stable@vger.kernel.org Tested-by: Ross Zwisler zwisler@google.com Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/8357 Signed-off-by: Ville Syrjälä ville.syrjala@linux.intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20230418175528.13117-2-ville.s... Reviewed-by: Jani Nikula jani.nikula@intel.com (cherry picked from commit 691248d4135fe3fae64b4ee0676bc96a7fd6950c) Signed-off-by: Joonas Lahtinen joonas.lahtinen@linux.intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/i915/display/skl_scaler.c | 17 +++++++++++++++++ 1 file changed, 17 insertions(+)
diff --git a/drivers/gpu/drm/i915/display/skl_scaler.c b/drivers/gpu/drm/i915/display/skl_scaler.c index 01e8812936126..fe5c47672580b 100644 --- a/drivers/gpu/drm/i915/display/skl_scaler.c +++ b/drivers/gpu/drm/i915/display/skl_scaler.c @@ -107,6 +107,8 @@ skl_update_scaler(struct intel_crtc_state *crtc_state, bool force_detach, struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); const struct drm_display_mode *adjusted_mode = &crtc_state->hw.adjusted_mode; + int pipe_src_w = drm_rect_width(&crtc_state->pipe_src); + int pipe_src_h = drm_rect_height(&crtc_state->pipe_src); int min_src_w, min_src_h, min_dst_w, min_dst_h; int max_src_w, max_src_h, max_dst_w, max_dst_h;
@@ -198,6 +200,21 @@ skl_update_scaler(struct intel_crtc_state *crtc_state, bool force_detach, return -EINVAL; }
+ /* + * The pipe scaler does not use all the bits of PIPESRC, at least + * on the earlier platforms. So even when we're scaling a plane + * the *pipe* source size must not be too large. For simplicity + * we assume the limits match the scaler source size limits. Might + * not be 100% accurate on all platforms, but good enough for now. + */ + if (pipe_src_w > max_src_w || pipe_src_h > max_src_h) { + drm_dbg_kms(&dev_priv->drm, + "scaler_user index %u.%u: pipe src size %ux%u " + "is out of scaler range\n", + crtc->pipe, scaler_user, pipe_src_w, pipe_src_h); + return -EINVAL; + } + /* mark this plane as a scaler user in crtc_state */ scaler_state->scaler_users |= (1 << scaler_user); drm_dbg_kms(&dev_priv->drm, "scaler_user index %u.%u: "
From: Nicholas Kazlauskas nicholas.kazlauskas@amd.com
[ Upstream commit 73dd4ca4b5a01235607231839bd351bbef75a1d2 ]
[Why] It's not supported in multi-display configurations, but it is supported with a 2nd eDP screen only.
[How] Remove multi display support, restrict number of planes for all z-states support, but still allow Z8 if we're not using PWRSEQ0.
Reviewed-by: Charlene Liu Charlene.Liu@amd.com Acked-by: Alex Hung alex.hung@amd.com Signed-off-by: Nicholas Kazlauskas nicholas.kazlauskas@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Stable-dep-of: d893f39320e1 ("drm/amd/display: Lowering min Z8 residency time") Signed-off-by: Sasha Levin sashal@kernel.org --- .../gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c index c26da3bb2892b..859dc67a1fb6b 100644 --- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c +++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c @@ -949,7 +949,6 @@ static enum dcn_zstate_support_state decide_zstate_support(struct dc *dc, struc int plane_count; int i; unsigned int optimized_min_dst_y_next_start_us; - bool allow_z8 = context->bw_ctx.dml.vba.StutterPeriod > 1000.0;
plane_count = 0; optimized_min_dst_y_next_start_us = 0; @@ -974,6 +973,8 @@ static enum dcn_zstate_support_state decide_zstate_support(struct dc *dc, struc else if (context->stream_count == 1 && context->streams[0]->signal == SIGNAL_TYPE_EDP) { struct dc_link *link = context->streams[0]->sink->link; struct dc_stream_status *stream_status = &context->stream_status[0]; + bool allow_z8 = context->bw_ctx.dml.vba.StutterPeriod > 1000.0; + bool is_pwrseq0 = link->link_index == 0;
if (dc_extended_blank_supported(dc)) { for (i = 0; i < dc->res_pool->pipe_count; i++) { @@ -986,18 +987,17 @@ static enum dcn_zstate_support_state decide_zstate_support(struct dc *dc, struc } } } - /* zstate only supported on PWRSEQ0 and when there's <2 planes*/ - if (link->link_index != 0 || stream_status->plane_count > 1) + + /* Don't support multi-plane configurations */ + if (stream_status->plane_count > 1) return DCN_ZSTATE_SUPPORT_DISALLOW;
- if (context->bw_ctx.dml.vba.StutterPeriod > 5000.0 || optimized_min_dst_y_next_start_us > 5000) + if (is_pwrseq0 && (context->bw_ctx.dml.vba.StutterPeriod > 5000.0 || optimized_min_dst_y_next_start_us > 5000)) return DCN_ZSTATE_SUPPORT_ALLOW; - else if (link->psr_settings.psr_version == DC_PSR_VERSION_1 && !link->panel_config.psr.disable_psr) + else if (is_pwrseq0 && link->psr_settings.psr_version == DC_PSR_VERSION_1 && !link->panel_config.psr.disable_psr) return allow_z8 ? DCN_ZSTATE_SUPPORT_ALLOW_Z8_Z10_ONLY : DCN_ZSTATE_SUPPORT_ALLOW_Z10_ONLY; else return allow_z8 ? DCN_ZSTATE_SUPPORT_ALLOW_Z8_ONLY : DCN_ZSTATE_SUPPORT_DISALLOW; - } else if (allow_z8) { - return DCN_ZSTATE_SUPPORT_ALLOW_Z8_ONLY; } else { return DCN_ZSTATE_SUPPORT_DISALLOW; }
From: Nicholas Kazlauskas nicholas.kazlauskas@amd.com
[ Upstream commit 0db13eae41fcc67f408dbb3dfda59633c4fa03fb ]
[Why] Allows finer control and tuning for debug and profiling.
[How] Add the debug option into DC. The default remains the same as before for now.
Reviewed-by: Jun Lei Jun.Lei@amd.com Acked-by: Qingqing Zhuo qingqing.zhuo@amd.com Signed-off-by: Nicholas Kazlauskas nicholas.kazlauskas@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Stable-dep-of: d893f39320e1 ("drm/amd/display: Lowering min Z8 residency time") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/display/dc/dc.h | 1 + drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c | 1 + drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c | 3 ++- 3 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h index 37998dc0fc144..b519602c054b2 100644 --- a/drivers/gpu/drm/amd/display/dc/dc.h +++ b/drivers/gpu/drm/amd/display/dc/dc.h @@ -796,6 +796,7 @@ struct dc_debug_options { unsigned int force_odm_combine; //bit vector based on otg inst unsigned int seamless_boot_odm_combine; unsigned int force_odm_combine_4to1; //bit vector based on otg inst + int minimum_z8_residency_time; bool disable_z9_mpc; unsigned int force_fclk_khz; bool enable_tri_buf; diff --git a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c index 9ffba4c6fe550..5c23c934c9751 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c +++ b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c @@ -887,6 +887,7 @@ static const struct dc_plane_cap plane_cap = { static const struct dc_debug_options debug_defaults_drv = { .disable_z10 = false, .enable_z9_disable_interface = true, + .minimum_z8_residency_time = 1000, .psr_skip_crtc_disable = true, .disable_dmcu = true, .force_abm_enable = false, diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c index 859dc67a1fb6b..b6b8be74ee0ea 100644 --- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c +++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c @@ -973,7 +973,8 @@ static enum dcn_zstate_support_state decide_zstate_support(struct dc *dc, struc else if (context->stream_count == 1 && context->streams[0]->signal == SIGNAL_TYPE_EDP) { struct dc_link *link = context->streams[0]->sink->link; struct dc_stream_status *stream_status = &context->stream_status[0]; - bool allow_z8 = context->bw_ctx.dml.vba.StutterPeriod > 1000.0; + int minmum_z8_residency = dc->debug.minimum_z8_residency_time > 0 ? dc->debug.minimum_z8_residency_time : 1000; + bool allow_z8 = context->bw_ctx.dml.vba.StutterPeriod > (double)minmum_z8_residency; bool is_pwrseq0 = link->link_index == 0;
if (dc_extended_blank_supported(dc)) {
From: Nicholas Kazlauskas nicholas.kazlauskas@amd.com
[ Upstream commit 0215ce9057edf69aff9c1a32f4254e1ec297db31 ]
[Why] Block periods that are too short as they have the potential to currently cause hangs in other firmware components on the system.
[How] Update the threshold, mostly targeting a block of 4k and downscaling.
Reviewed-by: Jun Lei Jun.Lei@amd.com Acked-by: Qingqing Zhuo qingqing.zhuo@amd.com Signed-off-by: Nicholas Kazlauskas nicholas.kazlauskas@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Stable-dep-of: d893f39320e1 ("drm/amd/display: Lowering min Z8 residency time") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c index 5c23c934c9751..33d8188d076ab 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c +++ b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c @@ -887,7 +887,7 @@ static const struct dc_plane_cap plane_cap = { static const struct dc_debug_options debug_defaults_drv = { .disable_z10 = false, .enable_z9_disable_interface = true, - .minimum_z8_residency_time = 1000, + .minimum_z8_residency_time = 3080, .psr_skip_crtc_disable = true, .disable_dmcu = true, .force_abm_enable = false,
From: Leo Chen sancchen@amd.com
[ Upstream commit d893f39320e1248d1c97fde0d6e51e5ea008a76b ]
[Why & How] Per HW team request, we're lowering the minimum Z8 residency time to 2000us. This enables Z8 support for additional modes we were previously blocking like 2k>60hz
Cc: stable@vger.kernel.org Tested-by: Daniel Wheeler daniel.wheeler@amd.com Reviewed-by: Nicholas Kazlauskas Nicholas.Kazlauskas@amd.com Acked-by: Rodrigo Siqueira Rodrigo.Siqueira@amd.com Signed-off-by: Leo Chen sancchen@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c index 33d8188d076ab..30129fb9c27a9 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c +++ b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c @@ -887,7 +887,7 @@ static const struct dc_plane_cap plane_cap = { static const struct dc_debug_options debug_defaults_drv = { .disable_z10 = false, .enable_z9_disable_interface = true, - .minimum_z8_residency_time = 3080, + .minimum_z8_residency_time = 2000, .psr_skip_crtc_disable = true, .disable_dmcu = true, .force_abm_enable = false,
From: Nicholas Kazlauskas nicholas.kazlauskas@amd.com
[ Upstream commit 9b0f51e8449f6f76170fda6a8dd9c417a43ce270 ]
[Why] Request from HW team to update the latencies to the new measured values.
[How] Update the values in the bounding box.
Reviewed-by: Charlene Liu Charlene.Liu@amd.com Acked-by: Qingqing Zhuo qingqing.zhuo@amd.com Signed-off-by: Nicholas Kazlauskas nicholas.kazlauskas@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Stable-dep-of: 8f586cc16c1f ("drm/amd/display: Change default Z8 watermark values") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c index 6a1cf6adea77d..0c91d8a3de4c3 100644 --- a/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c +++ b/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c @@ -149,8 +149,8 @@ struct _vcs_dpi_soc_bounding_box_st dcn3_14_soc = { .num_states = 5, .sr_exit_time_us = 16.5, .sr_enter_plus_exit_time_us = 18.5, - .sr_exit_z8_time_us = 280.0, - .sr_enter_plus_exit_z8_time_us = 350.0, + .sr_exit_z8_time_us = 210.0, + .sr_enter_plus_exit_z8_time_us = 310.0, .writeback_latency_us = 12.0, .dram_channel_width_bytes = 4, .round_trip_ping_latency_dcfclk_cycles = 106,
From: Leo Chen sancchen@amd.com
[ Upstream commit 8f586cc16c1fc3c2202c9d54563db8c7ed365f82 ]
[Why & How] Previous Z8 watermark values were causing flickering and OTC underflow. Updating Z8 watermark values based on the measurement.
Reviewed-by: Nicholas Kazlauskas Nicholas.Kazlauskas@amd.com Cc: Mario Limonciello mario.limonciello@amd.com Cc: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Acked-by: Alan Liu HaoPing.Liu@amd.com Signed-off-by: Leo Chen sancchen@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c index 0c91d8a3de4c3..db06f3b9e637e 100644 --- a/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c +++ b/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c @@ -149,8 +149,8 @@ struct _vcs_dpi_soc_bounding_box_st dcn3_14_soc = { .num_states = 5, .sr_exit_time_us = 16.5, .sr_enter_plus_exit_time_us = 18.5, - .sr_exit_z8_time_us = 210.0, - .sr_enter_plus_exit_z8_time_us = 310.0, + .sr_exit_z8_time_us = 268.0, + .sr_enter_plus_exit_z8_time_us = 393.0, .writeback_latency_us = 12.0, .dram_channel_width_bytes = 4, .round_trip_ping_latency_dcfclk_cycles = 106,
From: Stanislav Lisovskiy stanislav.lisovskiy@intel.com
[ Upstream commit 1482ec00be4a3634aeffbcc799791a723df69339 ]
Add DP DSC register definitions that we might need for further DSC work supporting MST and DP branch pass-through mode.
v2: - Fixed checkpatch comment warning
v3: - Removed function which is not yet used (Jani Nikula)
Reviewed-by: Vinod Govindapillai vinod.govindapillai@intel.com Acked-by: Maarten Lankhorst maarten.lankhorst@linux.intel.com Signed-off-by: Stanislav Lisovskiy stanislav.lisovskiy@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20221101094222.22091-2-stanisl... Stable-dep-of: 13525645e224 ("drm/dsc: fix drm_edp_dsc_sink_output_bpp() DPCD high byte usage") Signed-off-by: Sasha Levin sashal@kernel.org --- include/drm/display/drm_dp.h | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/include/drm/display/drm_dp.h b/include/drm/display/drm_dp.h index e934aab357bea..9bc22a02874d9 100644 --- a/include/drm/display/drm_dp.h +++ b/include/drm/display/drm_dp.h @@ -240,6 +240,8 @@ #define DP_DSC_SUPPORT 0x060 /* DP 1.4 */ # define DP_DSC_DECOMPRESSION_IS_SUPPORTED (1 << 0) # define DP_DSC_PASSTHROUGH_IS_SUPPORTED (1 << 1) +# define DP_DSC_DYNAMIC_PPS_UPDATE_SUPPORT_COMP_TO_COMP (1 << 2) +# define DP_DSC_DYNAMIC_PPS_UPDATE_SUPPORT_UNCOMP_TO_COMP (1 << 3)
#define DP_DSC_REV 0x061 # define DP_DSC_MAJOR_MASK (0xf << 0) @@ -278,12 +280,15 @@
#define DP_DSC_BLK_PREDICTION_SUPPORT 0x066 # define DP_DSC_BLK_PREDICTION_IS_SUPPORTED (1 << 0) +# define DP_DSC_RGB_COLOR_CONV_BYPASS_SUPPORT (1 << 1)
#define DP_DSC_MAX_BITS_PER_PIXEL_LOW 0x067 /* eDP 1.4 */
#define DP_DSC_MAX_BITS_PER_PIXEL_HI 0x068 /* eDP 1.4 */ # define DP_DSC_MAX_BITS_PER_PIXEL_HI_MASK (0x3 << 0) # define DP_DSC_MAX_BITS_PER_PIXEL_HI_SHIFT 8 +# define DP_DSC_MAX_BPP_DELTA_VERSION_MASK 0x06 +# define DP_DSC_MAX_BPP_DELTA_AVAILABILITY 0x08
#define DP_DSC_DEC_COLOR_FORMAT_CAP 0x069 # define DP_DSC_RGB (1 << 0) @@ -345,11 +350,13 @@ # define DP_DSC_24_PER_DP_DSC_SINK (1 << 2)
#define DP_DSC_BITS_PER_PIXEL_INC 0x06F +# define DP_DSC_RGB_YCbCr444_MAX_BPP_DELTA_MASK 0x1f +# define DP_DSC_RGB_YCbCr420_MAX_BPP_DELTA_MASK 0xe0 # define DP_DSC_BITS_PER_PIXEL_1_16 0x0 # define DP_DSC_BITS_PER_PIXEL_1_8 0x1 # define DP_DSC_BITS_PER_PIXEL_1_4 0x2 # define DP_DSC_BITS_PER_PIXEL_1_2 0x3 -# define DP_DSC_BITS_PER_PIXEL_1 0x4 +# define DP_DSC_BITS_PER_PIXEL_1_1 0x4
#define DP_PSR_SUPPORT 0x070 /* XXX 1.2? */ # define DP_PSR_IS_SUPPORTED 1
From: Jani Nikula jani.nikula@intel.com
[ Upstream commit 13525645e2246ebc8a21bd656248d86022a6ee8f ]
The operator precedence between << and & is wrong, leading to the high byte being completely ignored. For example, with the 6.4 format, 32 becomes 0 and 24 becomes 8. Fix it, and remove the slightly confusing and unnecessary DP_DSC_MAX_BITS_PER_PIXEL_HI_SHIFT macro while at it.
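A minimal, self-contained sketch of the precedence problem (userspace C; the DPCD bytes correspond to 32 bpp in the 6.4 fixed-point format, matching the example above; HI_MASK is a stand-in for DP_DSC_MAX_BITS_PER_PIXEL_HI_MASK without the shift):

    #include <stdio.h>

    #define HI_MASK 0x3

    int main(void)
    {
        /* 32 bpp in 6.4 fixed point is 512 = 0x200: low byte 0x00, high byte 0x02 */
        unsigned int low = 0x00, hi = 0x02;

        unsigned int buggy = low | (hi & HI_MASK << 8);   /* << binds tighter: hi & 0x300 == 0 */
        unsigned int fixed = low | ((hi & HI_MASK) << 8); /* intended grouping               */

        printf("buggy = %u (%u bpp), fixed = %u (%u bpp)\n",
               buggy, buggy / 16, fixed, fixed / 16);
        return 0;
    }

With the buggy grouping the high byte contributes nothing, so the result is 0 instead of 512 (32 bpp), exactly the "32 becomes 0" case described above.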
Fixes: 0575650077ea ("drm/dp: DRM DP helper/macros to get DP sink DSC parameters") Cc: Stanislav Lisovskiy stanislav.lisovskiy@intel.com Cc: Manasi Navare navaremanasi@google.com Cc: Anusha Srivatsa anusha.srivatsa@intel.com Cc: stable@vger.kernel.org # v5.0+ Signed-off-by: Jani Nikula jani.nikula@intel.com Reviewed-by: Ankit Nautiyal ankit.k.nautiyal@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20230406134615.1422509-1-jani.... Signed-off-by: Sasha Levin sashal@kernel.org --- include/drm/display/drm_dp.h | 1 - include/drm/display/drm_dp_helper.h | 5 ++--- 2 files changed, 2 insertions(+), 4 deletions(-)
diff --git a/include/drm/display/drm_dp.h b/include/drm/display/drm_dp.h index 9bc22a02874d9..50428ba92ce8b 100644 --- a/include/drm/display/drm_dp.h +++ b/include/drm/display/drm_dp.h @@ -286,7 +286,6 @@
#define DP_DSC_MAX_BITS_PER_PIXEL_HI 0x068 /* eDP 1.4 */ # define DP_DSC_MAX_BITS_PER_PIXEL_HI_MASK (0x3 << 0) -# define DP_DSC_MAX_BITS_PER_PIXEL_HI_SHIFT 8 # define DP_DSC_MAX_BPP_DELTA_VERSION_MASK 0x06 # define DP_DSC_MAX_BPP_DELTA_AVAILABILITY 0x08
diff --git a/include/drm/display/drm_dp_helper.h b/include/drm/display/drm_dp_helper.h index ab55453f2d2cd..ade9df59e156a 100644 --- a/include/drm/display/drm_dp_helper.h +++ b/include/drm/display/drm_dp_helper.h @@ -181,9 +181,8 @@ static inline u16 drm_edp_dsc_sink_output_bpp(const u8 dsc_dpcd[DP_DSC_RECEIVER_CAP_SIZE]) { return dsc_dpcd[DP_DSC_MAX_BITS_PER_PIXEL_LOW - DP_DSC_SUPPORT] | - (dsc_dpcd[DP_DSC_MAX_BITS_PER_PIXEL_HI - DP_DSC_SUPPORT] & - DP_DSC_MAX_BITS_PER_PIXEL_HI_MASK << - DP_DSC_MAX_BITS_PER_PIXEL_HI_SHIFT); + ((dsc_dpcd[DP_DSC_MAX_BITS_PER_PIXEL_HI - DP_DSC_SUPPORT] & + DP_DSC_MAX_BITS_PER_PIXEL_HI_MASK) << 8); }
static inline u32
From: John Stultz jstultz@google.com
commit 92cc5d00a431e96e5a49c0b97e5ad4fa7536bd4b upstream.
Apparently, despite it being marked inline, the compiler may not inline __down_read_common(), which makes it difficult to identify the cause of lock contention, as the blocked function in traceevents will always be listed as __down_read_common().
So this patch adds the __always_inline annotation to the common function (as well as to the inlined helper callers) to force it to be inlined, so that the actual blocking function will be listed (via wchan) in traceevents.
Fixes: c995e638ccbb ("locking/rwsem: Fold __down_{read,write}*()") Reported-by: Tim Murray timmurray@google.com Signed-off-by: John Stultz jstultz@google.com Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Reviewed-by: Waiman Long longman@redhat.com Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20230503023351.2832796-1-jstultz@google.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- kernel/locking/rwsem.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-)
--- a/kernel/locking/rwsem.c +++ b/kernel/locking/rwsem.c @@ -1251,7 +1251,7 @@ static struct rw_semaphore *rwsem_downgr /* * lock for reading */ -static inline int __down_read_common(struct rw_semaphore *sem, int state) +static __always_inline int __down_read_common(struct rw_semaphore *sem, int state) { int ret = 0; long count; @@ -1269,17 +1269,17 @@ out: return ret; }
-static inline void __down_read(struct rw_semaphore *sem) +static __always_inline void __down_read(struct rw_semaphore *sem) { __down_read_common(sem, TASK_UNINTERRUPTIBLE); }
-static inline int __down_read_interruptible(struct rw_semaphore *sem) +static __always_inline int __down_read_interruptible(struct rw_semaphore *sem) { return __down_read_common(sem, TASK_INTERRUPTIBLE); }
-static inline int __down_read_killable(struct rw_semaphore *sem) +static __always_inline int __down_read_killable(struct rw_semaphore *sem) { return __down_read_common(sem, TASK_KILLABLE); }
From: Ye Bin yebin10@huawei.com
commit fa08a7b61dff8a4df11ff1e84abfc214b487caf7 upstream.
Syzbot found the following issue:
EXT4-fs: Warning: mounting with data=journal disables delayed allocation, dioread_nolock, O_DIRECT and fast_commit support! EXT4-fs (loop0): orphan cleanup on readonly fs ------------[ cut here ]------------ WARNING: CPU: 1 PID: 5067 at fs/ext4/mballoc.c:1869 mb_find_extent+0x8a1/0xe30 Modules linked in: CPU: 1 PID: 5067 Comm: syz-executor307 Not tainted 6.2.0-rc1-syzkaller #0 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/26/2022 RIP: 0010:mb_find_extent+0x8a1/0xe30 fs/ext4/mballoc.c:1869 RSP: 0018:ffffc90003c9e098 EFLAGS: 00010293 RAX: ffffffff82405731 RBX: 0000000000000041 RCX: ffff8880783457c0 RDX: 0000000000000000 RSI: 0000000000000041 RDI: 0000000000000040 RBP: 0000000000000040 R08: ffffffff82405723 R09: ffffed10053c9402 R10: ffffed10053c9402 R11: 1ffff110053c9401 R12: 0000000000000000 R13: ffffc90003c9e538 R14: dffffc0000000000 R15: ffffc90003c9e2cc FS: 0000555556665300(0000) GS:ffff8880b9900000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 000056312f6796f8 CR3: 0000000022437000 CR4: 00000000003506e0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: <TASK> ext4_mb_complex_scan_group+0x353/0x1100 fs/ext4/mballoc.c:2307 ext4_mb_regular_allocator+0x1533/0x3860 fs/ext4/mballoc.c:2735 ext4_mb_new_blocks+0xddf/0x3db0 fs/ext4/mballoc.c:5605 ext4_ext_map_blocks+0x1868/0x6880 fs/ext4/extents.c:4286 ext4_map_blocks+0xa49/0x1cc0 fs/ext4/inode.c:651 ext4_getblk+0x1b9/0x770 fs/ext4/inode.c:864 ext4_bread+0x2a/0x170 fs/ext4/inode.c:920 ext4_quota_write+0x225/0x570 fs/ext4/super.c:7105 write_blk fs/quota/quota_tree.c:64 [inline] get_free_dqblk+0x34a/0x6d0 fs/quota/quota_tree.c:130 do_insert_tree+0x26b/0x1aa0 fs/quota/quota_tree.c:340 do_insert_tree+0x722/0x1aa0 fs/quota/quota_tree.c:375 do_insert_tree+0x722/0x1aa0 fs/quota/quota_tree.c:375 do_insert_tree+0x722/0x1aa0 fs/quota/quota_tree.c:375 dq_insert_tree fs/quota/quota_tree.c:401 [inline] qtree_write_dquot+0x3b6/0x530 fs/quota/quota_tree.c:420 v2_write_dquot+0x11b/0x190 fs/quota/quota_v2.c:358 dquot_acquire+0x348/0x670 fs/quota/dquot.c:444 ext4_acquire_dquot+0x2dc/0x400 fs/ext4/super.c:6740 dqget+0x999/0xdc0 fs/quota/dquot.c:914 __dquot_initialize+0x3d0/0xcf0 fs/quota/dquot.c:1492 ext4_process_orphan+0x57/0x2d0 fs/ext4/orphan.c:329 ext4_orphan_cleanup+0xb60/0x1340 fs/ext4/orphan.c:474 __ext4_fill_super fs/ext4/super.c:5516 [inline] ext4_fill_super+0x81cd/0x8700 fs/ext4/super.c:5644 get_tree_bdev+0x400/0x620 fs/super.c:1282 vfs_get_tree+0x88/0x270 fs/super.c:1489 do_new_mount+0x289/0xad0 fs/namespace.c:3145 do_mount fs/namespace.c:3488 [inline] __do_sys_mount fs/namespace.c:3697 [inline] __se_sys_mount+0x2d3/0x3c0 fs/namespace.c:3674 do_syscall_x64 arch/x86/entry/common.c:50 [inline] do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80 entry_SYSCALL_64_after_hwframe+0x63/0xcd
Add some debug information: mb_find_extent: mb_find_extent block=41, order=0 needed=64 next=0 ex=0/41/1@3735929054 64 64 7 block_bitmap: ff 3f 0c 00 fc 01 00 00 d2 3d 00 00 00 00 00 00 ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
Actually, there are only 64 blocks per group, but the block bitmap indicates the group has at least 128 blocks. Currently, ext4_validate_block_bitmap() does not check the bitmap bits for blocks beyond the end of the group. To resolve the above issue, add a check equivalent to fsck's "Padding at end of block bitmap is not set".
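A fragment-level sketch of the added check with this report's numbers plugged in (64 clusters per group, 4k block size, so 32768 bitmap bits; the helper names are the ones used in the patch below, the constants are only for illustration):

    unsigned long bitmap_size = 4096 * 8;    /* sb->s_blocksize * 8 = 32768 bits        */
    unsigned int  offset      = 64;          /* num_clusters_in_group(sb, block_group)  */
    ext4_grpblk_t next_zero_bit;

    /* bits 64..32767 are padding past the last cluster and must all be set */
    next_zero_bit = ext4_find_next_zero_bit(bh->b_data, bitmap_size, offset);
    if (next_zero_bit < bitmap_size)
        blk = next_zero_bit;    /* a clear padding bit, as in the dump above -> corrupt */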
Cc: stable@kernel.org Reported-by: syzbot+68223fe9f6c95ad43bed@syzkaller.appspotmail.com Signed-off-by: Ye Bin yebin10@huawei.com Reviewed-by: Jan Kara jack@suse.cz Link: https://lore.kernel.org/r/20230116020015.1506120-1-yebin@huaweicloud.com Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/ext4/balloc.c | 25 +++++++++++++++++++++++++ 1 file changed, 25 insertions(+)
--- a/fs/ext4/balloc.c +++ b/fs/ext4/balloc.c @@ -303,6 +303,22 @@ struct ext4_group_desc * ext4_get_group_ return desc; }
+static ext4_fsblk_t ext4_valid_block_bitmap_padding(struct super_block *sb, + ext4_group_t block_group, + struct buffer_head *bh) +{ + ext4_grpblk_t next_zero_bit; + unsigned long bitmap_size = sb->s_blocksize * 8; + unsigned int offset = num_clusters_in_group(sb, block_group); + + if (bitmap_size <= offset) + return 0; + + next_zero_bit = ext4_find_next_zero_bit(bh->b_data, bitmap_size, offset); + + return (next_zero_bit < bitmap_size ? next_zero_bit : 0); +} + /* * Return the block number which was discovered to be invalid, or 0 if * the block bitmap is valid. @@ -401,6 +417,15 @@ static int ext4_validate_block_bitmap(st EXT4_GROUP_INFO_BBITMAP_CORRUPT); return -EFSCORRUPTED; } + blk = ext4_valid_block_bitmap_padding(sb, block_group, bh); + if (unlikely(blk != 0)) { + ext4_unlock_group(sb, block_group); + ext4_error(sb, "bg %u: block %llu: padding at end of block bitmap is not set", + block_group, blk); + ext4_mark_group_bitmap_corrupted(sb, block_group, + EXT4_GROUP_INFO_BBITMAP_CORRUPT); + return -EFSCORRUPTED; + } set_buffer_verified(bh); verified: ext4_unlock_group(sb, block_group);
From: Tudor Ambarus tudor.ambarus@linaro.org
commit 4f04351888a83e595571de672e0a4a8b74f4fb31 upstream.
When modifying the block device while it is mounted by the filesystem, syzbot reported the following:
BUG: KASAN: slab-out-of-bounds in crc16+0x206/0x280 lib/crc16.c:58 Read of size 1 at addr ffff888075f5c0a8 by task syz-executor.2/15586
CPU: 1 PID: 15586 Comm: syz-executor.2 Not tainted 6.2.0-rc5-syzkaller-00205-gc96618275234 #0 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/12/2023 Call Trace: <TASK> __dump_stack lib/dump_stack.c:88 [inline] dump_stack_lvl+0x1b1/0x290 lib/dump_stack.c:106 print_address_description+0x74/0x340 mm/kasan/report.c:306 print_report+0x107/0x1f0 mm/kasan/report.c:417 kasan_report+0xcd/0x100 mm/kasan/report.c:517 crc16+0x206/0x280 lib/crc16.c:58 ext4_group_desc_csum+0x81b/0xb20 fs/ext4/super.c:3187 ext4_group_desc_csum_set+0x195/0x230 fs/ext4/super.c:3210 ext4_mb_clear_bb fs/ext4/mballoc.c:6027 [inline] ext4_free_blocks+0x191a/0x2810 fs/ext4/mballoc.c:6173 ext4_remove_blocks fs/ext4/extents.c:2527 [inline] ext4_ext_rm_leaf fs/ext4/extents.c:2710 [inline] ext4_ext_remove_space+0x24ef/0x46a0 fs/ext4/extents.c:2958 ext4_ext_truncate+0x177/0x220 fs/ext4/extents.c:4416 ext4_truncate+0xa6a/0xea0 fs/ext4/inode.c:4342 ext4_setattr+0x10c8/0x1930 fs/ext4/inode.c:5622 notify_change+0xe50/0x1100 fs/attr.c:482 do_truncate+0x200/0x2f0 fs/open.c:65 handle_truncate fs/namei.c:3216 [inline] do_open fs/namei.c:3561 [inline] path_openat+0x272b/0x2dd0 fs/namei.c:3714 do_filp_open+0x264/0x4f0 fs/namei.c:3741 do_sys_openat2+0x124/0x4e0 fs/open.c:1310 do_sys_open fs/open.c:1326 [inline] __do_sys_creat fs/open.c:1402 [inline] __se_sys_creat fs/open.c:1396 [inline] __x64_sys_creat+0x11f/0x160 fs/open.c:1396 do_syscall_x64 arch/x86/entry/common.c:50 [inline] do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80 entry_SYSCALL_64_after_hwframe+0x63/0xcd RIP: 0033:0x7f72f8a8c0c9 Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 f1 19 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48 RSP: 002b:00007f72f97e3168 EFLAGS: 00000246 ORIG_RAX: 0000000000000055 RAX: ffffffffffffffda RBX: 00007f72f8bac050 RCX: 00007f72f8a8c0c9 RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000020000280 RBP: 00007f72f8ae7ae9 R08: 0000000000000000 R09: 0000000000000000 R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000 R13: 00007ffd165348bf R14: 00007f72f97e3300 R15: 0000000000022000
Replace le16_to_cpu(sbi->s_es->s_desc_size) with sbi->s_desc_size
It reduces ext4's compiled text size, and makes the code more efficient (we remove an extra indirect reference and a potential byte swap on big endian systems), and there is no downside. It also avoids the potential KASAN / syzkaller failure, as a bonus.
Reported-by: syzbot+fc51227e7100c9294894@syzkaller.appspotmail.com Reported-by: syzbot+8785e41224a3afd04321@syzkaller.appspotmail.com Link: https://syzkaller.appspot.com/bug?id=70d28d11ab14bd7938f3e088365252aa923cff4... Link: https://syzkaller.appspot.com/bug?id=b85721b38583ecc6b5e72ff524c67302abbc30f... Link: https://lore.kernel.org/all/000000000000ece18705f3b20934@google.com/ Fixes: 717d50e4971b ("Ext4: Uninitialized Block Groups") Cc: stable@vger.kernel.org Signed-off-by: Tudor Ambarus tudor.ambarus@linaro.org Link: https://lore.kernel.org/r/20230504121525.3275886-1-tudor.ambarus@linaro.org Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/ext4/super.c | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-)
--- a/fs/ext4/super.c +++ b/fs/ext4/super.c @@ -3195,11 +3195,9 @@ static __le16 ext4_group_desc_csum(struc crc = crc16(crc, (__u8 *)gdp, offset); offset += sizeof(gdp->bg_checksum); /* skip checksum */ /* for checksum of struct ext4_group_desc do the rest...*/ - if (ext4_has_feature_64bit(sb) && - offset < le16_to_cpu(sbi->s_es->s_desc_size)) + if (ext4_has_feature_64bit(sb) && offset < sbi->s_desc_size) crc = crc16(crc, (__u8 *)gdp + offset, - le16_to_cpu(sbi->s_es->s_desc_size) - - offset); + sbi->s_desc_size - offset);
out: return cpu_to_le16(crc);
From: Jan Kara jack@suse.cz
commit 492888df0c7b42fc0843631168b0021bc4caee84 upstream.
When using the cached extent stored in the extent status tree's tree->cache_es, another process holding ei->i_es_lock for reading can race with us setting a new value of tree->cache_es. If the compiler decided to refetch tree->cache_es at an unfortunate moment, it could result in a bogus in_range() check. Fix the possible race by using READ_ONCE() when using tree->cache_es only under ei->i_es_lock for reading.
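A minimal userspace sketch of the pattern (the READ_ONCE/WRITE_ONCE stand-ins are simplified, and the struct and field names are illustrative rather than the real extent status code):

    /* simplified stand-ins for the kernel helpers */
    #define READ_ONCE(x)     (*(const volatile typeof(x) *)&(x))
    #define WRITE_ONCE(x, v) (*(volatile typeof(x) *)&(x) = (v))

    struct extent { unsigned int lblk, len; };
    struct tree   { struct extent *cache_es; };

    /* reader (holds the lock for reading): snapshot the pointer exactly once, so the
     * NULL check and the range check cannot observe two different values if a writer
     * updates cache_es concurrently */
    static struct extent *lookup_cached(struct tree *t, unsigned int lblk)
    {
        struct extent *es1 = READ_ONCE(t->cache_es);

        if (es1 && lblk >= es1->lblk && lblk < es1->lblk + es1->len)
            return es1;
        return NULL;
    }

    /* writer (holds the lock for writing) publishes the new cached extent */
    static void update_cache(struct tree *t, struct extent *es)
    {
        WRITE_ONCE(t->cache_es, es);
    }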
Cc: stable@kernel.org Reported-by: syzbot+4a03518df1e31b537066@syzkaller.appspotmail.com Link: https://lore.kernel.org/all/000000000000d3b33905fa0fd4a6@google.com Suggested-by: Dmitry Vyukov dvyukov@google.com Signed-off-by: Jan Kara jack@suse.cz Link: https://lore.kernel.org/r/20230504125524.10802-1-jack@suse.cz Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/ext4/extents_status.c | 30 +++++++++++++----------------- 1 file changed, 13 insertions(+), 17 deletions(-)
--- a/fs/ext4/extents_status.c +++ b/fs/ext4/extents_status.c @@ -267,14 +267,12 @@ static void __es_find_extent_range(struc
/* see if the extent has been cached */ es->es_lblk = es->es_len = es->es_pblk = 0; - if (tree->cache_es) { - es1 = tree->cache_es; - if (in_range(lblk, es1->es_lblk, es1->es_len)) { - es_debug("%u cached by [%u/%u) %llu %x\n", - lblk, es1->es_lblk, es1->es_len, - ext4_es_pblock(es1), ext4_es_status(es1)); - goto out; - } + es1 = READ_ONCE(tree->cache_es); + if (es1 && in_range(lblk, es1->es_lblk, es1->es_len)) { + es_debug("%u cached by [%u/%u) %llu %x\n", + lblk, es1->es_lblk, es1->es_len, + ext4_es_pblock(es1), ext4_es_status(es1)); + goto out; }
es1 = __es_tree_search(&tree->root, lblk); @@ -293,7 +291,7 @@ out: }
if (es1 && matching_fn(es1)) { - tree->cache_es = es1; + WRITE_ONCE(tree->cache_es, es1); es->es_lblk = es1->es_lblk; es->es_len = es1->es_len; es->es_pblk = es1->es_pblk; @@ -931,14 +929,12 @@ int ext4_es_lookup_extent(struct inode *
/* find extent in cache firstly */ es->es_lblk = es->es_len = es->es_pblk = 0; - if (tree->cache_es) { - es1 = tree->cache_es; - if (in_range(lblk, es1->es_lblk, es1->es_len)) { - es_debug("%u cached by [%u/%u)\n", - lblk, es1->es_lblk, es1->es_len); - found = 1; - goto out; - } + es1 = READ_ONCE(tree->cache_es); + if (es1 && in_range(lblk, es1->es_lblk, es1->es_len)) { + es_debug("%u cached by [%u/%u)\n", + lblk, es1->es_lblk, es1->es_len); + found = 1; + goto out; }
node = tree->root.rb_node;
From: Jan Kara jack@suse.cz
commit 00d873c17e29cc32d90ca852b82685f1673acaa5 upstream.
Ext4 has a filesystem wide lock protecting ext4_writepages() calls to avoid races with switching of journalled data flag or inode format. This lock can however cause a deadlock like:
CPU0                                            CPU1

ext4_writepages()
  percpu_down_read(sbi->s_writepages_rwsem);
                                                ext4_change_inode_journal_flag()
                                                  percpu_down_write(sbi->s_writepages_rwsem);
                                                  - blocks, all readers block from now on
  ext4_do_writepages()
    ext4_init_io_end()
      kmem_cache_zalloc(io_end_cachep, GFP_KERNEL)
        fs_reclaim frees dentry...
          dentry_unlink_inode()
            iput() - last ref =>
              iput_final() - inode dirty =>
                write_inode_now()...
                  ext4_writepages()
                    tries to acquire sbi->s_writepages_rwsem and blocks forever
Make sure we cannot recurse into filesystem reclaim from writeback code to avoid the deadlock.
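For reference, a minimal sketch of the scoped-allocation pattern the fix is built on; memalloc_nofs_save()/memalloc_nofs_restore() are the existing kernel helpers, the allocation shown is the one from the trace above, and only the framing is illustrative:

    unsigned int flags;

    flags = memalloc_nofs_save();    /* __GFP_FS is masked off for this task        */
    io_end = kmem_cache_zalloc(io_end_cachep, GFP_KERNEL);
                                     /* behaves like GFP_NOFS: reclaim cannot
                                      * recurse into the filesystem while
                                      * s_writepages_rwsem is held               */
    memalloc_nofs_restore(flags);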
Reported-by: syzbot+6898da502aef574c5f8a@syzkaller.appspotmail.com Link: https://lore.kernel.org/all/0000000000004c66b405fa108e27@google.com Fixes: c8585c6fcaf2 ("ext4: fix races between changing inode journal mode and ext4_writepages") CC: stable@vger.kernel.org Signed-off-by: Jan Kara jack@suse.cz Link: https://lore.kernel.org/r/20230504124723.20205-1-jack@suse.cz Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/ext4/ext4.h | 24 ++++++++++++++++++++++++ fs/ext4/inode.c | 18 ++++++++++-------- fs/ext4/migrate.c | 11 ++++++----- 3 files changed, 40 insertions(+), 13 deletions(-)
--- a/fs/ext4/ext4.h +++ b/fs/ext4/ext4.h @@ -1774,6 +1774,30 @@ static inline struct ext4_inode_info *EX return container_of(inode, struct ext4_inode_info, vfs_inode); }
+static inline int ext4_writepages_down_read(struct super_block *sb) +{ + percpu_down_read(&EXT4_SB(sb)->s_writepages_rwsem); + return memalloc_nofs_save(); +} + +static inline void ext4_writepages_up_read(struct super_block *sb, int ctx) +{ + memalloc_nofs_restore(ctx); + percpu_up_read(&EXT4_SB(sb)->s_writepages_rwsem); +} + +static inline int ext4_writepages_down_write(struct super_block *sb) +{ + percpu_down_write(&EXT4_SB(sb)->s_writepages_rwsem); + return memalloc_nofs_save(); +} + +static inline void ext4_writepages_up_write(struct super_block *sb, int ctx) +{ + memalloc_nofs_restore(ctx); + percpu_up_write(&EXT4_SB(sb)->s_writepages_rwsem); +} + static inline int ext4_valid_inum(struct super_block *sb, unsigned long ino) { return ino == EXT4_ROOT_INO || --- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -2957,13 +2957,14 @@ static int ext4_writepages(struct addres .can_map = 1, }; int ret; + int alloc_ctx;
if (unlikely(ext4_forced_shutdown(EXT4_SB(sb)))) return -EIO;
- percpu_down_read(&EXT4_SB(sb)->s_writepages_rwsem); + alloc_ctx = ext4_writepages_down_read(sb); ret = ext4_do_writepages(&mpd); - percpu_up_read(&EXT4_SB(sb)->s_writepages_rwsem); + ext4_writepages_up_read(sb, alloc_ctx);
return ret; } @@ -2991,17 +2992,18 @@ static int ext4_dax_writepages(struct ad long nr_to_write = wbc->nr_to_write; struct inode *inode = mapping->host; struct ext4_sb_info *sbi = EXT4_SB(mapping->host->i_sb); + int alloc_ctx;
if (unlikely(ext4_forced_shutdown(EXT4_SB(inode->i_sb)))) return -EIO;
- percpu_down_read(&sbi->s_writepages_rwsem); + alloc_ctx = ext4_writepages_down_read(inode->i_sb); trace_ext4_writepages(inode, wbc);
ret = dax_writeback_mapping_range(mapping, sbi->s_daxdev, wbc); trace_ext4_writepages_result(inode, wbc, ret, nr_to_write - wbc->nr_to_write); - percpu_up_read(&sbi->s_writepages_rwsem); + ext4_writepages_up_read(inode->i_sb, alloc_ctx); return ret; }
@@ -6124,7 +6126,7 @@ int ext4_change_inode_journal_flag(struc journal_t *journal; handle_t *handle; int err; - struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb); + int alloc_ctx;
/* * We have to be very careful here: changing a data block's @@ -6162,7 +6164,7 @@ int ext4_change_inode_journal_flag(struc } }
- percpu_down_write(&sbi->s_writepages_rwsem); + alloc_ctx = ext4_writepages_down_write(inode->i_sb); jbd2_journal_lock_updates(journal);
/* @@ -6179,7 +6181,7 @@ int ext4_change_inode_journal_flag(struc err = jbd2_journal_flush(journal, 0); if (err < 0) { jbd2_journal_unlock_updates(journal); - percpu_up_write(&sbi->s_writepages_rwsem); + ext4_writepages_up_write(inode->i_sb, alloc_ctx); return err; } ext4_clear_inode_flag(inode, EXT4_INODE_JOURNAL_DATA); @@ -6187,7 +6189,7 @@ int ext4_change_inode_journal_flag(struc ext4_set_aops(inode);
jbd2_journal_unlock_updates(journal); - percpu_up_write(&sbi->s_writepages_rwsem); + ext4_writepages_up_write(inode->i_sb, alloc_ctx);
if (val) filemap_invalidate_unlock(inode->i_mapping); --- a/fs/ext4/migrate.c +++ b/fs/ext4/migrate.c @@ -408,7 +408,6 @@ static int free_ext_block(handle_t *hand
int ext4_ext_migrate(struct inode *inode) { - struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb); handle_t *handle; int retval = 0, i; __le32 *i_data; @@ -418,6 +417,7 @@ int ext4_ext_migrate(struct inode *inode unsigned long max_entries; __u32 goal, tmp_csum_seed; uid_t owner[2]; + int alloc_ctx;
/* * If the filesystem does not support extents, or the inode @@ -434,7 +434,7 @@ int ext4_ext_migrate(struct inode *inode */ return retval;
- percpu_down_write(&sbi->s_writepages_rwsem); + alloc_ctx = ext4_writepages_down_write(inode->i_sb);
/* * Worst case we can touch the allocation bitmaps and a block @@ -586,7 +586,7 @@ out_tmp_inode: unlock_new_inode(tmp_inode); iput(tmp_inode); out_unlock: - percpu_up_write(&sbi->s_writepages_rwsem); + ext4_writepages_up_write(inode->i_sb, alloc_ctx); return retval; }
@@ -605,6 +605,7 @@ int ext4_ind_migrate(struct inode *inode ext4_fsblk_t blk; handle_t *handle; int ret, ret2 = 0; + int alloc_ctx;
if (!ext4_has_feature_extents(inode->i_sb) || (!ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))) @@ -621,7 +622,7 @@ int ext4_ind_migrate(struct inode *inode if (test_opt(inode->i_sb, DELALLOC)) ext4_alloc_da_blocks(inode);
- percpu_down_write(&sbi->s_writepages_rwsem); + alloc_ctx = ext4_writepages_down_write(inode->i_sb);
handle = ext4_journal_start(inode, EXT4_HT_MIGRATE, 1); if (IS_ERR(handle)) { @@ -665,6 +666,6 @@ errout: ext4_journal_stop(handle); up_write(&EXT4_I(inode)->i_data_sem); out_unlock: - percpu_up_write(&sbi->s_writepages_rwsem); + ext4_writepages_up_write(inode->i_sb, alloc_ctx); return ret; }
From: Baokun Li libaokun1@huawei.com
commit fa83c34e3e56b3c672af38059e066242655271b1 upstream.
When ext4_iomap_overwrite_begin() calls ext4_iomap_begin(), mapping blocks may fail for some reason (e.g. memory allocation failure, bare disk write), and later "iomap->type != IOMAP_MAPPED" triggers a WARN_ON(). When ext4_iomap_begin() returns an error, it is normal for iomap->type not to match the expected value. Therefore, only check iomap->type when ext4_iomap_begin() has executed successfully.
Cc: stable@kernel.org Reported-by: syzbot+08106c4b7d60702dbc14@syzkaller.appspotmail.com Link: https://lore.kernel.org/all/00000000000015760b05f9b4eee9@google.com Signed-off-by: Baokun Li libaokun1@huawei.com Reviewed-by: Jan Kara jack@suse.cz Link: https://lore.kernel.org/r/20230505132429.714648-1-libaokun1@huawei.com Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/ext4/inode.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -3577,7 +3577,7 @@ static int ext4_iomap_overwrite_begin(st */ flags &= ~IOMAP_WRITE; ret = ext4_iomap_begin(inode, offset, length, flags, iomap, srcmap); - WARN_ON_ONCE(iomap->type != IOMAP_MAPPED); + WARN_ON_ONCE(!ret && iomap->type != IOMAP_MAPPED); return ret; }
From: Theodore Ts'o tytso@mit.edu
commit 4c0b4818b1f636bc96359f7817a2d8bab6370162 upstream.
If there are failures while changing the mount options in __ext4_remount(), we need to restore the old mount options.
This commit fixes two problems. The first is that there is a chance we will free the old quota file names before a potential failure, leading to a use-after-free. The second is that on a failed read/write to read-only transition, if quota has already been suspended, we need to re-enable quota handling.
Cc: stable@kernel.org Link: https://lore.kernel.org/r/20230506142419.984260-2-tytso@mit.edu Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/ext4/super.c | 13 ++++++++++--- 1 file changed, 10 insertions(+), 3 deletions(-)
--- a/fs/ext4/super.c +++ b/fs/ext4/super.c @@ -6566,9 +6566,6 @@ static int __ext4_remount(struct fs_cont }
#ifdef CONFIG_QUOTA - /* Release old quota file names */ - for (i = 0; i < EXT4_MAXQUOTAS; i++) - kfree(old_opts.s_qf_names[i]); if (enable_quota) { if (sb_any_quota_suspended(sb)) dquot_resume(sb, -1); @@ -6578,6 +6575,9 @@ static int __ext4_remount(struct fs_cont goto restore_opts; } } + /* Release old quota file names */ + for (i = 0; i < EXT4_MAXQUOTAS; i++) + kfree(old_opts.s_qf_names[i]); #endif if (!test_opt(sb, BLOCK_VALIDITY) && sbi->s_system_blks) ext4_release_system_zone(sb); @@ -6588,6 +6588,13 @@ static int __ext4_remount(struct fs_cont return 0;
restore_opts: + /* + * If there was a failing r/w to ro transition, we may need to + * re-enable quota + */ + if ((sb->s_flags & SB_RDONLY) && !(old_sb_flags & SB_RDONLY) && + sb_any_quota_suspended(sb)) + dquot_resume(sb, -1); sb->s_flags = old_sb_flags; sbi->s_mount_opt = old_opts.s_mount_opt; sbi->s_mount_opt2 = old_opts.s_mount_opt2;
From: Theodore Ts'o tytso@mit.edu
commit 4b3cb1d108bfc2aebb0d7c8a52261a53cf7f5786 upstream.
ext4_dirhash() will *almost* never fail; in particular, it essentially could not fail back when the hash tree feature was first introduced. However, with the addition of support for encrypted, casefolded file names, that function can most certainly fail today.
So make sure the callers of ext4_dirhash() properly check for failures, and reflect the errors back up to their callers.
Cc: stable@kernel.org Link: https://lore.kernel.org/r/20230506142419.984260-1-tytso@mit.edu Reported-by: syzbot+394aa8a792cb99dbc837@syzkaller.appspotmail.com Reported-by: syzbot+344aaa8697ebd232bfc8@syzkaller.appspotmail.com Link: https://syzkaller.appspot.com/bug?id=db56459ea4ac4a676ae4b4678f633e55da005a9... Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/ext4/hash.c | 6 +++++- fs/ext4/namei.c | 53 +++++++++++++++++++++++++++++++++++++---------------- 2 files changed, 42 insertions(+), 17 deletions(-)
--- a/fs/ext4/hash.c +++ b/fs/ext4/hash.c @@ -277,7 +277,11 @@ static int __ext4fs_dirhash(const struct } default: hinfo->hash = 0; - return -1; + hinfo->minor_hash = 0; + ext4_warning(dir->i_sb, + "invalid/unsupported hash tree version %u", + hinfo->hash_version); + return -EINVAL; } hash = hash & ~1; if (hash == (EXT4_HTREE_EOF_32BIT << 1)) --- a/fs/ext4/namei.c +++ b/fs/ext4/namei.c @@ -674,7 +674,7 @@ static struct stats dx_show_leaf(struct len = de->name_len; if (!IS_ENCRYPTED(dir)) { /* Directory is not encrypted */ - ext4fs_dirhash(dir, de->name, + (void) ext4fs_dirhash(dir, de->name, de->name_len, &h); printk("%*.s:(U)%x.%u ", len, name, h.hash, @@ -709,8 +709,9 @@ static struct stats dx_show_leaf(struct if (IS_CASEFOLDED(dir)) h.hash = EXT4_DIRENT_HASH(de); else - ext4fs_dirhash(dir, de->name, - de->name_len, &h); + (void) ext4fs_dirhash(dir, + de->name, + de->name_len, &h); printk("%*.s:(E)%x.%u ", len, name, h.hash, (unsigned) ((char *) de - base)); @@ -720,7 +721,8 @@ static struct stats dx_show_leaf(struct #else int len = de->name_len; char *name = de->name; - ext4fs_dirhash(dir, de->name, de->name_len, &h); + (void) ext4fs_dirhash(dir, de->name, + de->name_len, &h); printk("%*.s:%x.%u ", len, name, h.hash, (unsigned) ((char *) de - base)); #endif @@ -849,8 +851,14 @@ dx_probe(struct ext4_filename *fname, st hinfo->seed = EXT4_SB(dir->i_sb)->s_hash_seed; /* hash is already computed for encrypted casefolded directory */ if (fname && fname_name(fname) && - !(IS_ENCRYPTED(dir) && IS_CASEFOLDED(dir))) - ext4fs_dirhash(dir, fname_name(fname), fname_len(fname), hinfo); + !(IS_ENCRYPTED(dir) && IS_CASEFOLDED(dir))) { + int ret = ext4fs_dirhash(dir, fname_name(fname), + fname_len(fname), hinfo); + if (ret < 0) { + ret_err = ERR_PTR(ret); + goto fail; + } + } hash = hinfo->hash;
if (root->info.unused_flags & 1) { @@ -1111,7 +1119,12 @@ static int htree_dirblock_to_tree(struct hinfo->minor_hash = 0; } } else { - ext4fs_dirhash(dir, de->name, de->name_len, hinfo); + err = ext4fs_dirhash(dir, de->name, + de->name_len, hinfo); + if (err < 0) { + count = err; + goto errout; + } } if ((hinfo->hash < start_hash) || ((hinfo->hash == start_hash) && @@ -1313,8 +1326,12 @@ static int dx_make_map(struct inode *dir if (de->name_len && de->inode) { if (ext4_hash_in_dirent(dir)) h.hash = EXT4_DIRENT_HASH(de); - else - ext4fs_dirhash(dir, de->name, de->name_len, &h); + else { + int err = ext4fs_dirhash(dir, de->name, + de->name_len, &h); + if (err < 0) + return err; + } map_tail--; map_tail->hash = h.hash; map_tail->offs = ((char *) de - base)>>2; @@ -1452,10 +1469,9 @@ int ext4_fname_setup_ci_filename(struct hinfo->hash_version = DX_HASH_SIPHASH; hinfo->seed = NULL; if (cf_name->name) - ext4fs_dirhash(dir, cf_name->name, cf_name->len, hinfo); + return ext4fs_dirhash(dir, cf_name->name, cf_name->len, hinfo); else - ext4fs_dirhash(dir, iname->name, iname->len, hinfo); - return 0; + return ext4fs_dirhash(dir, iname->name, iname->len, hinfo); } #endif
@@ -2298,10 +2314,15 @@ static int make_indexed_dir(handle_t *ha fname->hinfo.seed = EXT4_SB(dir->i_sb)->s_hash_seed;
/* casefolded encrypted hashes are computed on fname setup */ - if (!ext4_hash_in_dirent(dir)) - ext4fs_dirhash(dir, fname_name(fname), - fname_len(fname), &fname->hinfo); - + if (!ext4_hash_in_dirent(dir)) { + int err = ext4fs_dirhash(dir, fname_name(fname), + fname_len(fname), &fname->hinfo); + if (err < 0) { + brelse(bh2); + brelse(bh); + return err; + } + } memset(frames, 0, sizeof(frames)); frame = frames; frame->entries = entries;
From: Theodore Ts'o tytso@mit.edu
commit f4ce24f54d9cca4f09a395f3eecce20d6bec4663 upstream.
In no-journal mode, ext4_finish_convert_inline_dir() can self-deadlock by calling ext4_handle_dirty_dirblock() when it has already taken the directory lock. There is a similar self-deadlock in ext4_convert_inline_data_nolock() for data files, which we'll fix at the same time.
A simple reproducer demonstrating the problem:
mke2fs -Fq -t ext2 -O inline_data -b 4k /dev/vdc 64
mount -t ext4 -o dirsync /dev/vdc /vdc
cd /vdc
mkdir file0
cd file0
touch file0
touch file1
attr -s BurnSpaceInEA -V abcde .
touch supercalifragilisticexpialidocious
Cc: stable@kernel.org Link: https://lore.kernel.org/r/20230507021608.1290720-1-tytso@mit.edu Reported-by: syzbot+91dccab7c64e2850a4e5@syzkaller.appspotmail.com Link: https://syzkaller.appspot.com/bug?id=ba84cc80a9491d65416bc7877e1650c87530fe8... Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/ext4/inline.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/fs/ext4/inline.c +++ b/fs/ext4/inline.c @@ -1177,6 +1177,7 @@ static int ext4_finish_convert_inline_di ext4_initialize_dirent_tail(dir_block, inode->i_sb->s_blocksize); set_buffer_uptodate(dir_block); + unlock_buffer(dir_block); err = ext4_handle_dirty_dirblock(handle, inode, dir_block); if (err) return err; @@ -1251,6 +1252,7 @@ static int ext4_convert_inline_data_nolo if (!S_ISDIR(inode->i_mode)) { memcpy(data_bh->b_data, buf, inline_size); set_buffer_uptodate(data_bh); + unlock_buffer(data_bh); error = ext4_handle_dirty_metadata(handle, inode, data_bh); } else { @@ -1258,7 +1260,6 @@ static int ext4_convert_inline_data_nolo buf, inline_size); }
- unlock_buffer(data_bh); out_restore: if (error) ext4_restore_inline_data(handle, inode, iloc, buf, inline_size);
From: Theodore Ts'o tytso@mit.edu
commit 2220eaf90992c11d888fe771055d4de330385f01 upstream.
Normally the extended attributes in the inode body would have been checked when the inode is first opened, but if someone is writing to the block device while the file system is mounted, it's possible for the inode table to get corrupted. Add bounds checking to avoid reading beyond the end of allocated memory if this happens.
Reported-by: syzbot+1966db24521e5f6e23f7@syzkaller.appspotmail.com Link: https://syzkaller.appspot.com/bug?extid=1966db24521e5f6e23f7 Cc: stable@kernel.org Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/ext4/inline.c | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-)
--- a/fs/ext4/inline.c +++ b/fs/ext4/inline.c @@ -34,6 +34,7 @@ static int get_max_inline_xattr_value_si struct ext4_xattr_ibody_header *header; struct ext4_xattr_entry *entry; struct ext4_inode *raw_inode; + void *end; int free, min_offs;
if (!EXT4_INODE_HAS_XATTR_SPACE(inode)) @@ -57,14 +58,23 @@ static int get_max_inline_xattr_value_si raw_inode = ext4_raw_inode(iloc); header = IHDR(inode, raw_inode); entry = IFIRST(header); + end = (void *)raw_inode + EXT4_SB(inode->i_sb)->s_inode_size;
/* Compute min_offs. */ - for (; !IS_LAST_ENTRY(entry); entry = EXT4_XATTR_NEXT(entry)) { + while (!IS_LAST_ENTRY(entry)) { + void *next = EXT4_XATTR_NEXT(entry); + + if (next >= end) { + EXT4_ERROR_INODE(inode, + "corrupt xattr in inline inode"); + return 0; + } if (!entry->e_value_inum && entry->e_value_size) { size_t offs = le16_to_cpu(entry->e_value_offs); if (offs < min_offs) min_offs = offs; } + entry = next; } free = min_offs - ((void *)entry - (void *)IFIRST(header)) - sizeof(__u32);
From: Theodore Ts'o tytso@mit.edu
commit 2a534e1d0d1591e951f9ece2fb460b2ff92edabd upstream.
In ext4_update_inline_data(), if ext4_xattr_ibody_get() fails for any reason, it's best if we just fail as opposed to stumbling on, especially if the failure is EFSCORRUPTED.
Cc: stable@kernel.org Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/ext4/inline.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/fs/ext4/inline.c +++ b/fs/ext4/inline.c @@ -360,7 +360,7 @@ static int ext4_update_inline_data(handl
error = ext4_xattr_ibody_get(inode, i.name_index, i.name, value, len); - if (error == -ENODATA) + if (error < 0) goto out;
BUFFER_TRACE(is.iloc.bh, "get_write_access");
From: Jan Kara jack@suse.cz
commit 949f95ff39bf188e594e7ecd8e29b82eb108f5bf upstream.
When we enable MMP in ext4_multi_mount_protect() during mount or remount, we end up calling sb_start_write() from write_mmp_block(). This triggers lockdep warning because freeze protection ranks above s_umount semaphore we are holding during mount / remount. The problem is harmless because we are guaranteed the filesystem is not frozen during mount / remount but still let's fix the warning by not grabbing freeze protection from ext4_multi_mount_protect().
Cc: stable@kernel.org Reported-by: syzbot+6b7df7d5506b32467149@syzkaller.appspotmail.com Link: https://syzkaller.appspot.com/bug?id=ab7e5b6f400b7778d46f01841422e5718fb8184... Signed-off-by: Jan Kara jack@suse.cz Reviewed-by: Christian Brauner brauner@kernel.org Link: https://lore.kernel.org/r/20230411121019.21940-1-jack@suse.cz Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/ext4/mmp.c | 30 +++++++++++++++++++++--------- 1 file changed, 21 insertions(+), 9 deletions(-)
--- a/fs/ext4/mmp.c +++ b/fs/ext4/mmp.c @@ -39,28 +39,36 @@ static void ext4_mmp_csum_set(struct sup * Write the MMP block using REQ_SYNC to try to get the block on-disk * faster. */ -static int write_mmp_block(struct super_block *sb, struct buffer_head *bh) +static int write_mmp_block_thawed(struct super_block *sb, + struct buffer_head *bh) { struct mmp_struct *mmp = (struct mmp_struct *)(bh->b_data);
- /* - * We protect against freezing so that we don't create dirty buffers - * on frozen filesystem. - */ - sb_start_write(sb); ext4_mmp_csum_set(sb, mmp); lock_buffer(bh); bh->b_end_io = end_buffer_write_sync; get_bh(bh); submit_bh(REQ_OP_WRITE | REQ_SYNC | REQ_META | REQ_PRIO, bh); wait_on_buffer(bh); - sb_end_write(sb); if (unlikely(!buffer_uptodate(bh))) return -EIO; - return 0; }
+static int write_mmp_block(struct super_block *sb, struct buffer_head *bh) +{ + int err; + + /* + * We protect against freezing so that we don't create dirty buffers + * on frozen filesystem. + */ + sb_start_write(sb); + err = write_mmp_block_thawed(sb, bh); + sb_end_write(sb); + return err; +} + /* * Read the MMP block. It _must_ be read from disk and hence we clear the * uptodate flag on the buffer. @@ -340,7 +348,11 @@ skip: seq = mmp_new_seq(); mmp->mmp_seq = cpu_to_le32(seq);
- retval = write_mmp_block(sb, bh); + /* + * On mount / remount we are protected against fs freezing (by s_umount + * semaphore) and grabbing freeze protection upsets lockdep + */ + retval = write_mmp_block_thawed(sb, bh); if (retval) goto failed;
From: Theodore Ts'o tytso@mit.edu
commit 463808f237cf73e98a1a45ff7460c2406a150a0b upstream.
If a malicious fuzzer overwrites the ext4 superblock while it is mounted such that the s_first_data_block is set to a very large number, the calculation of the block group can underflow, and trigger a BUG_ON check. Change this to be an ext4_warning so that we don't crash the kernel.
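A small worked example of the underflow (the constants are made up for illustration; only the unsigned arithmetic matters):

    /* ext4_get_group_no_and_offset(), simplified: all arithmetic is unsigned */
    unsigned long long first_data_block = 0x80000000ULL;    /* fuzzed superblock value   */
    unsigned long long pa_pstart        = 4096;              /* sane preallocation start  */
    unsigned int       blocks_per_group = 32768;             /* assumed for illustration  */

    unsigned long long group = (pa_pstart - first_data_block) / blocks_per_group;
    /* the subtraction wraps around, so group becomes a huge bogus value that can never
     * equal e4b->bd_group, which is what used to trip the BUG_ON() */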
Cc: stable@kernel.org Link: https://lore.kernel.org/r/20230430154311.579720-3-tytso@mit.edu Reported-by: syzbot+e2efa3efc15a1c9e95c3@syzkaller.appspotmail.com Link: https://syzkaller.appspot.com/bug?id=69b28112e098b070f639efb356393af3ffec422... Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/ext4/mballoc.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
--- a/fs/ext4/mballoc.c +++ b/fs/ext4/mballoc.c @@ -4820,7 +4820,11 @@ ext4_mb_release_group_pa(struct ext4_bud trace_ext4_mb_release_group_pa(sb, pa); BUG_ON(pa->pa_deleted == 0); ext4_get_group_no_and_offset(sb, pa->pa_pstart, &group, &bit); - BUG_ON(group != e4b->bd_group && pa->pa_len != 0); + if (unlikely(group != e4b->bd_group && pa->pa_len != 0)) { + ext4_warning(sb, "bad group: expected %u, group %u, pa_start %llu", + e4b->bd_group, group, pa->pa_pstart); + return 0; + } mb_free_blocks(pa->pa_inode, e4b, bit, pa->pa_len); atomic_add(pa->pa_len, &EXT4_SB(sb)->s_mb_discarded); trace_ext4_mballoc_discard(sb, NULL, group, bit, pa->pa_len);
From: Theodore Ts'o tytso@mit.edu
commit b87c7cdf2bed4928b899e1ce91ef0d147017ba45 upstream.
In ext4_xattr_move_to_block(), the value of the extended attribute which we need to move to an external block may be allocated by kvmalloc() if the value is stored in an external inode. So at the end of the function the code tried to check if this was the case by testing entry->e_value_inum.
However, at this point, the pointer to the xattr entry is no longer valid, because it was removed from the original location where it had been stored. So we could end up calling kvfree() on a pointer which was not allocated by kvmalloc(); or we could also potentially leak memory by not freeing the buffer when it should be freed. Fix this by storing whether it should be freed in a separate variable.
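A condensed sketch of the ownership-flag pattern the fix adopts (buffer handling only; the helpers and surrounding error paths of ext4_xattr_move_to_block() are omitted):

    void *buffer = NULL;
    int needs_kvfree = 0;

    if (entry->e_value_inum) {
        /* value lives in a separate xattr inode: we allocate, so we free */
        buffer = kvmalloc(value_size, GFP_NOFS);
        if (!buffer)
            goto out;
        needs_kvfree = 1;
    }
    /* ... the entry may be removed from its original location here ... */
out:
    if (needs_kvfree && buffer)
        kvfree(buffer);    /* decide based on what we allocated, not on
                            * entry->e_value_inum, which may now be stale */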
Cc: stable@kernel.org Link: https://lore.kernel.org/r/20230430160426.581366-1-tytso@mit.edu Link: https://syzkaller.appspot.com/bug?id=5c2aee8256e30b55ccf57312c16d88417adbd5e... Link: https://syzkaller.appspot.com/bug?id=41a6b5d4917c0412eb3b3c3c604965bed7d7420... Reported-by: syzbot+64b645917ce07d89bde5@syzkaller.appspotmail.com Reported-by: syzbot+0d042627c4f2ad332195@syzkaller.appspotmail.com Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/ext4/xattr.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)
--- a/fs/ext4/xattr.c +++ b/fs/ext4/xattr.c @@ -2581,6 +2581,7 @@ static int ext4_xattr_move_to_block(hand .in_inode = !!entry->e_value_inum, }; struct ext4_xattr_ibody_header *header = IHDR(inode, raw_inode); + int needs_kvfree = 0; int error;
is = kzalloc(sizeof(struct ext4_xattr_ibody_find), GFP_NOFS); @@ -2603,7 +2604,7 @@ static int ext4_xattr_move_to_block(hand error = -ENOMEM; goto out; } - + needs_kvfree = 1; error = ext4_xattr_inode_get(inode, entry, buffer, value_size); if (error) goto out; @@ -2642,7 +2643,7 @@ static int ext4_xattr_move_to_block(hand
out: kfree(b_entry_name); - if (entry->e_value_inum && buffer) + if (needs_kvfree && buffer) kvfree(buffer); if (is) brelse(is->iloc.bh);
From: Jani Nikula jani.nikula@intel.com
commit 0d68683838f2850dd8ff31f1121e05bfb7a2def0 upstream.
The macro values just don't match the specs. Fix them.
Fixes: 1482ec00be4a ("drm: Add missing DP DSC extended capability definitions.") Cc: Vinod Govindapillai vinod.govindapillai@intel.com Cc: Stanislav Lisovskiy stanislav.lisovskiy@intel.com Signed-off-by: Jani Nikula jani.nikula@intel.com Reviewed-by: Ankit Nautiyal ankit.k.nautiyal@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20230406134615.1422509-2-jani.... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/drm/display/drm_dp.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/include/drm/display/drm_dp.h +++ b/include/drm/display/drm_dp.h @@ -286,8 +286,8 @@
#define DP_DSC_MAX_BITS_PER_PIXEL_HI 0x068 /* eDP 1.4 */ # define DP_DSC_MAX_BITS_PER_PIXEL_HI_MASK (0x3 << 0) -# define DP_DSC_MAX_BPP_DELTA_VERSION_MASK 0x06 -# define DP_DSC_MAX_BPP_DELTA_AVAILABILITY 0x08 +# define DP_DSC_MAX_BPP_DELTA_VERSION_MASK (0x3 << 5) /* eDP 1.5 & DP 2.0 */ +# define DP_DSC_MAX_BPP_DELTA_AVAILABILITY (1 << 7) /* eDP 1.5 & DP 2.0 */
#define DP_DSC_DEC_COLOR_FORMAT_CAP 0x069 # define DP_DSC_RGB (1 << 0)
From: Mario Limonciello mario.limonciello@amd.com
commit 23a5b8bb022c1e071ca91b1a9c10f0ad6a0966e9 upstream.
Commit
310e782a99c7 ("platform/x86/amd: pmc: Utilize SMN index 0 for driver probe")
switched to using amd_smn_read() which relies upon the misc PCI ID used by DF function 3 being included in a table. The ID for model 78h is missing in that table, so amd_smn_read() doesn't work.
Add the missing ID into amd_nb, restoring s2idle on this system.
[ bp: Simplify commit message. ]
Fixes: 310e782a99c7 ("platform/x86/amd: pmc: Utilize SMN index 0 for driver probe") Signed-off-by: Mario Limonciello mario.limonciello@amd.com Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Acked-by: Bjorn Helgaas bhelgaas@google.com # pci_ids.h Acked-by: Guenter Roeck linux@roeck-us.net Link: https://lore.kernel.org/r/20230427053338.16653-2-mario.limonciello@amd.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/amd_nb.c | 2 ++ include/linux/pci_ids.h | 1 + 2 files changed, 3 insertions(+)
--- a/arch/x86/kernel/amd_nb.c +++ b/arch/x86/kernel/amd_nb.c @@ -36,6 +36,7 @@ #define PCI_DEVICE_ID_AMD_19H_M50H_DF_F4 0x166e #define PCI_DEVICE_ID_AMD_19H_M60H_DF_F4 0x14e4 #define PCI_DEVICE_ID_AMD_19H_M70H_DF_F4 0x14f4 +#define PCI_DEVICE_ID_AMD_19H_M78H_DF_F4 0x12fc
/* Protect the PCI config register pairs used for SMN. */ static DEFINE_MUTEX(smn_mutex); @@ -79,6 +80,7 @@ static const struct pci_device_id amd_nb { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_19H_M50H_DF_F3) }, { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_19H_M60H_DF_F3) }, { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_19H_M70H_DF_F3) }, + { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_19H_M78H_DF_F3) }, {} };
--- a/include/linux/pci_ids.h +++ b/include/linux/pci_ids.h @@ -567,6 +567,7 @@ #define PCI_DEVICE_ID_AMD_19H_M50H_DF_F3 0x166d #define PCI_DEVICE_ID_AMD_19H_M60H_DF_F3 0x14e3 #define PCI_DEVICE_ID_AMD_19H_M70H_DF_F3 0x14f3 +#define PCI_DEVICE_ID_AMD_19H_M78H_DF_F3 0x12fb #define PCI_DEVICE_ID_AMD_CNB17H_F3 0x1703 #define PCI_DEVICE_ID_AMD_LANCE 0x2000 #define PCI_DEVICE_ID_AMD_LANCE_HOME 0x2001
From: Linus Torvalds torvalds@linux-foundation.org
This code no longer exists in mainline, because it was removed in commit d2c95f9d6802 ("x86: don't use REP_GOOD or ERMS for user memory clearing") upstream.
However, rather than backport the full range of x86 memory clearing and copying cleanups, fix the exception table annotation placement for the final 'rep stosb' in clear_user_rep_good(): rather than pointing at the actual instruction that did the user space access, it pointed at the register move just before it.
That made sense from a code flow standpoint, but not from an actual usage standpoint: it means that if user access takes an exception, the exception handler won't actually find the instruction in the exception tables.
As a result, rather than fixing up the fault and returning -EFAULT, the kernel would turn it into an oops report instead, something like:
  BUG: unable to handle page fault for address: 0000000020081000
  #PF: supervisor write access in kernel mode
  #PF: error_code(0x0002) - not-present page
  ...
  RIP: 0010:clear_user_rep_good+0x1c/0x30 arch/x86/lib/clear_page_64.S:147
  ...
  Call Trace:
   __clear_user arch/x86/include/asm/uaccess_64.h:103 [inline]
   clear_user arch/x86/include/asm/uaccess_64.h:124 [inline]
   iov_iter_zero+0x709/0x1290 lib/iov_iter.c:800
   iomap_dio_hole_iter fs/iomap/direct-io.c:389 [inline]
   iomap_dio_iter fs/iomap/direct-io.c:440 [inline]
   __iomap_dio_rw+0xe3d/0x1cd0 fs/iomap/direct-io.c:601
   iomap_dio_rw+0x40/0xa0 fs/iomap/direct-io.c:689
   ext4_dio_read_iter fs/ext4/file.c:94 [inline]
   ext4_file_read_iter+0x4be/0x690 fs/ext4/file.c:145
   call_read_iter include/linux/fs.h:2183 [inline]
   do_iter_readv_writev+0x2e0/0x3b0 fs/read_write.c:733
   do_iter_read+0x2f2/0x750 fs/read_write.c:796
   vfs_readv+0xe5/0x150 fs/read_write.c:916
   do_preadv+0x1b6/0x270 fs/read_write.c:1008
   __do_sys_preadv2 fs/read_write.c:1070 [inline]
   __se_sys_preadv2 fs/read_write.c:1061 [inline]
   __x64_sys_preadv2+0xef/0x150 fs/read_write.c:1061
   do_syscall_x64 arch/x86/entry/common.c:50 [inline]
   do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
   entry_SYSCALL_64_after_hwframe+0x63/0xcd
which then looks like a filesystem bug rather than the incorrect exception annotation that it is.
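Put differently, the exception fixup machinery resolves a fault by looking up the address of the faulting instruction; a simplified sketch of that idea (not the kernel's actual sorted, offset-based exception table code, names here are made up):

/* Simplified sketch: one entry per instruction allowed to fault on a user access. */
struct extable_entry_sketch {
        unsigned long insn;     /* address of the instruction allowed to fault */
        unsigned long fixup;    /* where to resume so the caller sees -EFAULT */
};

static const struct extable_entry_sketch *
find_fixup_sketch(const struct extable_entry_sketch *tbl, int n,
                  unsigned long fault_ip)
{
        int i;

        for (i = 0; i < n; i++)
                if (tbl[i].insn == fault_ip)
                        return &tbl[i];

        return NULL;    /* no entry: the fault is reported as a kernel oops */
}

If the recorded address is the 'mov' in front of the 'rep stosb' rather than the 'rep stosb' itself, this lookup misses for the address that actually faults, and the oops above is the result.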
[ The alternative to this one-liner fix is to take the upstream series that cleans this all up:
   68674f94ffc9 ("x86: don't use REP_GOOD or ERMS for small memory copies")
   20f3337d350c ("x86: don't use REP_GOOD or ERMS for small memory clearing")
   adfcf4231b8c ("x86: don't use REP_GOOD or ERMS for user memory copies")
 * d2c95f9d6802 ("x86: don't use REP_GOOD or ERMS for user memory clearing")
   3639a535587d ("x86: move stac/clac from user copy routines into callers")
   577e6a7fd50d ("x86: inline the 'rep movs' in user copies for the FSRM case")
   8c9b6a88b7e2 ("x86: improve on the non-rep 'clear_user' function")
   427fda2c8a49 ("x86: improve on the non-rep 'copy_user' function")
 * e046fe5a36a9 ("x86: set FSRS automatically on AMD CPUs that have FSRM")
   e1f2750edc4a ("x86: remove 'zerorest' argument from __copy_user_nocache()")
   034ff37d3407 ("x86: rewrite '__copy_user_nocache' function")
with either the whole series or at a minimum the two marked commits being needed to fix this issue ]
Reported-by: syzbot syzbot+401145a9a237779feb26@syzkaller.appspotmail.com
Link: https://syzkaller.appspot.com/bug?extid=401145a9a237779feb26
Fixes: 0db7058e8e23 ("x86/clear_user: Make it faster")
Cc: Borislav Petkov bp@alien8.de
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds torvalds@linux-foundation.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 arch/x86/lib/clear_page_64.S | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/x86/lib/clear_page_64.S
+++ b/arch/x86/lib/clear_page_64.S
@@ -142,8 +142,8 @@ SYM_FUNC_START(clear_user_rep_good)
 	and $7, %edx
 	jz .Lrep_good_exit
 
-.Lrep_good_bytes:
 	mov %edx, %ecx
+.Lrep_good_bytes:
 	rep stosb
 
 .Lrep_good_exit:
From: Christophe Leroy christophe.leroy@csgroup.eu
commit 8a5299a1278eadf1e08a598a5345c376206f171e upstream.
The fsl-spi driver performs bits_per_word modifications for different reasons:
- in CPU mode, to minimise the number of interrupts
- in CPM/QE mode, to work around the controller's byte order
For CPU mode that's done in fsl_spi_prepare_message() while for CPM mode that's done in fsl_spi_setup_transfer().
Reunify all of it in fsl_spi_prepare_message(), and catch impossible cases early through the master's bits_per_word_mask instead of returning -EINVAL later.
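The effect of advertising the legal word sizes through bits_per_word_mask is roughly the following check, done by the SPI core before a transfer ever reaches the driver (a simplified sketch, not the exact __spi_validate() code; the helper name is made up):

#include <linux/bits.h>
#include <linux/types.h>

/* Simplified sketch: a transfer's word size must have its bit set in the
 * controller's bits_per_word_mask, otherwise the core rejects it up front. */
static bool bpw_allowed_sketch(u32 bits_per_word_mask, u8 bits_per_word)
{
        if (!bits_per_word || bits_per_word > 32)
                return false;

        return bits_per_word_mask & BIT(bits_per_word - 1);
}

With the CPM/QE mask limited to 4..8, 16 and 32 bits, a 9- to 15-bit transfer is refused by the core instead of the driver returning -EINVAL deep in the transfer path.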
Signed-off-by: Christophe Leroy christophe.leroy@csgroup.eu
Link: https://lore.kernel.org/r/0ce96fe96e8b07cba0613e4097cfd94d09b8919a.168037180...
Signed-off-by: Mark Brown broonie@kernel.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/spi/spi-fsl-spi.c | 46 +++++++++++++++++++++-------------------------
 1 file changed, 21 insertions(+), 25 deletions(-)
--- a/drivers/spi/spi-fsl-spi.c
+++ b/drivers/spi/spi-fsl-spi.c
@@ -177,26 +177,6 @@ static int mspi_apply_cpu_mode_quirks(st
 	return bits_per_word;
 }
 
-static int mspi_apply_qe_mode_quirks(struct spi_mpc8xxx_cs *cs,
-				     struct spi_device *spi,
-				     int bits_per_word)
-{
-	/* CPM/QE uses Little Endian for words > 8
-	 * so transform 16 and 32 bits words into 8 bits
-	 * Unfortnatly that doesn't work for LSB so
-	 * reject these for now */
-	/* Note: 32 bits word, LSB works iff
-	 * tfcr/rfcr is set to CPMFCR_GBL */
-	if (spi->mode & SPI_LSB_FIRST &&
-	    bits_per_word > 8)
-		return -EINVAL;
-	if (bits_per_word <= 8)
-		return bits_per_word;
-	if (bits_per_word == 16 || bits_per_word == 32)
-		return 8; /* pretend its 8 bits */
-	return -EINVAL;
-}
-
 static int fsl_spi_setup_transfer(struct spi_device *spi,
 				  struct spi_transfer *t)
 {
@@ -224,9 +204,6 @@ static int fsl_spi_setup_transfer(struct
 		bits_per_word = mspi_apply_cpu_mode_quirks(cs, spi,
 							   mpc8xxx_spi,
 							   bits_per_word);
-	else
-		bits_per_word = mspi_apply_qe_mode_quirks(cs, spi,
-							  bits_per_word);
 
 	if (bits_per_word < 0)
 		return bits_per_word;
@@ -361,6 +338,19 @@ static int fsl_spi_prepare_message(struc
 				t->bits_per_word = 32;
 			else if ((t->len & 1) == 0)
 				t->bits_per_word = 16;
+		} else {
+			/*
+			 * CPM/QE uses Little Endian for words > 8
+			 * so transform 16 and 32 bits words into 8 bits
+			 * Unfortnatly that doesn't work for LSB so
+			 * reject these for now
+			 * Note: 32 bits word, LSB works iff
+			 * tfcr/rfcr is set to CPMFCR_GBL
+			 */
+			if (m->spi->mode & SPI_LSB_FIRST && t->bits_per_word > 8)
+				return -EINVAL;
+			if (t->bits_per_word == 16 || t->bits_per_word == 32)
+				t->bits_per_word = 8; /* pretend its 8 bits */
 		}
 	}
 	return fsl_spi_setup_transfer(m->spi, first);
@@ -594,8 +584,14 @@ static struct spi_master *fsl_spi_probe(
 	if (mpc8xxx_spi->type == TYPE_GRLIB)
 		fsl_spi_grlib_probe(dev);
 
-	master->bits_per_word_mask =
-		(SPI_BPW_RANGE_MASK(4, 16) | SPI_BPW_MASK(32)) &
+	if (mpc8xxx_spi->flags & SPI_CPM_MODE)
+		master->bits_per_word_mask =
+			(SPI_BPW_RANGE_MASK(4, 8) | SPI_BPW_MASK(16) | SPI_BPW_MASK(32));
+	else
+		master->bits_per_word_mask =
+			(SPI_BPW_RANGE_MASK(4, 16) | SPI_BPW_MASK(32));
+
+	master->bits_per_word_mask &=
 		SPI_BPW_RANGE_MASK(1, mpc8xxx_spi->max_bits_per_word);
 
if (mpc8xxx_spi->flags & SPI_QE_CPU_MODE)
From: Christophe Leroy christophe.leroy@csgroup.eu
commit fc96ec826bced75cc6b9c07a4ac44bbf651337ab upstream.
On CPM, the RISC core is a lot more efficient when doing transfers in 16-bit chunks than in 8-bit chunks, but unfortunately the words need to be byte-swapped, as seen in a previous commit.
So, for large transfers with an even size, allocate a temporary tx buffer and byte-swap the data before and after the transfer.
This change allows setting a higher speed for the transfer. For instance, on an MPC 8xx (CPM1 comms RISC processor), the documentation states that a transfer in byte mode at 1 kbit/s uses 0.200% of the CPM load at 25 MHz, while a word transfer at the same speed uses 0.032%. This means the speed can be roughly six times higher (0.200 / 0.032 ≈ 6) in word mode for the same CPM load.
For the time being, only do it on CPM1, as there is a trade-off between the CPM load reduction and the CPU load required to byte-swap the data.
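The byte swap itself is cheap; a standalone sketch of the tx side (illustrative only, the helper name is made up, the real logic lives in the fsl_spi_cpm_bufs() hunk below):

#include <linux/slab.h>
#include <linux/types.h>
#include <asm/byteorder.h>

/* Illustrative only: copy a payload of 16-bit words into a temporary buffer,
 * converted to little endian, before handing it to the CPM. */
static u16 *cpm_swab_tx_sketch(const u16 *src, unsigned int len_bytes)
{
        u16 *dst = kmalloc(len_bytes, GFP_KERNEL);
        unsigned int i;

        if (!dst)
                return NULL;

        for (i = 0; i < len_bytes / 2; i++)
                dst[i] = cpu_to_le16p(src + i); /* swaps on big endian, no-op on LE */

        return dst;
}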
Signed-off-by: Christophe Leroy christophe.leroy@csgroup.eu
Link: https://lore.kernel.org/r/f2e981f20f92dd28983c3949702a09248c23845c.168037180...
Signed-off-by: Mark Brown broonie@kernel.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/spi/spi-fsl-cpm.c | 23 +++++++++++++++++++++++
 drivers/spi/spi-fsl-spi.c |  3 +++
 2 files changed, 26 insertions(+)
--- a/drivers/spi/spi-fsl-cpm.c
+++ b/drivers/spi/spi-fsl-cpm.c
@@ -21,6 +21,7 @@
 #include <linux/spi/spi.h>
 #include <linux/types.h>
 #include <linux/platform_device.h>
+#include <linux/byteorder/generic.h>
 
 #include "spi-fsl-cpm.h"
 #include "spi-fsl-lib.h"
@@ -120,6 +121,21 @@ int fsl_spi_cpm_bufs(struct mpc8xxx_spi
 		mspi->rx_dma = mspi->dma_dummy_rx;
 		mspi->map_rx_dma = 0;
 	}
+	if (t->bits_per_word == 16 && t->tx_buf) {
+		const u16 *src = t->tx_buf;
+		u16 *dst;
+		int i;
+
+		dst = kmalloc(t->len, GFP_KERNEL);
+		if (!dst)
+			return -ENOMEM;
+
+		for (i = 0; i < t->len >> 1; i++)
+			dst[i] = cpu_to_le16p(src + i);
+
+		mspi->tx = dst;
+		mspi->map_tx_dma = 1;
+	}
 
 	if (mspi->map_tx_dma) {
 		void *nonconst_tx = (void *)mspi->tx; /* shut up gcc */
@@ -173,6 +189,13 @@ void fsl_spi_cpm_bufs_complete(struct mp
 	if (mspi->map_rx_dma)
 		dma_unmap_single(dev, mspi->rx_dma, t->len, DMA_FROM_DEVICE);
 	mspi->xfer_in_progress = NULL;
+
+	if (t->bits_per_word == 16 && t->rx_buf) {
+		int i;
+
+		for (i = 0; i < t->len; i += 2)
+			le16_to_cpus(t->rx_buf + i);
+	}
 }
 EXPORT_SYMBOL_GPL(fsl_spi_cpm_bufs_complete);
 
--- a/drivers/spi/spi-fsl-spi.c
+++ b/drivers/spi/spi-fsl-spi.c
@@ -351,6 +351,9 @@ static int fsl_spi_prepare_message(struc
 				return -EINVAL;
 			if (t->bits_per_word == 16 || t->bits_per_word == 32)
 				t->bits_per_word = 8; /* pretend its 8 bits */
+			if (t->bits_per_word == 8 && t->len >= 256 &&
+			    (mpc8xxx_spi->flags & SPI_CPM1))
+				t->bits_per_word = 16;
 		}
 	}
 	return fsl_spi_setup_transfer(m->spi, first);
From: Aurabindo Pillai aurabindo.pillai@amd.com
commit da5e14909776edea4462672fb4a3007802d262e7 upstream.
[Why&How]
When skipping a full modeset because the only state change was a front porch change, the DC commit sequence requires extra checks to handle non-existent plane states being asked to be removed from the context.
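The defensive pattern amounts to tolerating a NULL plane state at the points that previously assumed one always exists (a trimmed-down sketch, not the actual DC code; the helper name is made up):

#include <linux/types.h>

struct dc_plane_state;  /* opaque for the purpose of this sketch */

/* Illustrative only: removing a plane that was never created is a no-op
 * success instead of a NULL pointer dereference. */
static bool remove_plane_sketch(struct dc_plane_state *plane_state)
{
        if (!plane_state)
                return true;    /* nothing to remove */

        /* ... the normal removal path would run here ... */
        return true;
}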
Reviewed-by: Alvin Lee Alvin.Lee2@amd.com
Acked-by: Qingqing Zhuo qingqing.zhuo@amd.com
Signed-off-by: Aurabindo Pillai aurabindo.pillai@amd.com
Tested-by: Daniel Wheeler daniel.wheeler@amd.com
Signed-off-by: Alex Deucher alexander.deucher@amd.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 5 ++++-
 drivers/gpu/drm/amd/display/dc/core/dc_resource.c | 3 +++
 2 files changed, 7 insertions(+), 1 deletion(-)
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -7783,6 +7783,8 @@ static void amdgpu_dm_commit_planes(stru
 			continue;
 
 		dc_plane = dm_new_plane_state->dc_state;
+		if (!dc_plane)
+			continue;
 
 		bundle->surface_updates[planes_count].surface = dc_plane;
 		if (new_pcrtc_state->color_mgmt_changed) {
@@ -9330,8 +9332,9 @@ static int dm_update_plane_state(struct
 			return -EINVAL;
 		}
 
+		if (dm_old_plane_state->dc_state)
+			dc_plane_state_release(dm_old_plane_state->dc_state);
 
-		dc_plane_state_release(dm_old_plane_state->dc_state);
 		dm_new_plane_state->dc_state = NULL;
 
 		*lock_and_validation_needed = true;
--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
@@ -1707,6 +1707,9 @@ bool dc_remove_plane_from_context(
 	struct dc_stream_status *stream_status = NULL;
 	struct resource_pool *pool = dc->res_pool;
 
+	if (!plane_state)
+		return true;
+
 	for (i = 0; i < context->stream_count; i++)
 		if (context->streams[i] == stream) {
 			stream_status = &context->stream_status[i];
Hello Greg,
From: Greg Kroah-Hartman gregkh@linuxfoundation.org
Sent: Monday, May 15, 2023 5:25 PM
This is the start of the stable review cycle for the 6.2.16 release. There are 242 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 17 May 2023 16:16:37 +0000. Anything received after that time might be too late.
CIP configurations built and booted with Linux 6.2.16-rc1 (704eace42a14): https://gitlab.com/cip-project/cip-testing/linux-stable-rc-ci/-/pipelines/86... https://gitlab.com/cip-project/cip-testing/linux-stable-rc-ci/-/commits/linu...
Tested-by: Chris Paterson (CIP) chris.paterson2@renesas.com
Kind regards, Chris
On 5/15/23 10:25, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 6.2.16 release. There are 242 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 17 May 2023 16:16:37 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v6.x/stable-review/patch-6.2.16-rc1.... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-6.2.y and the diffstat can be found below.
thanks,
greg k-h
Compiled and booted on my test system. No dmesg regressions.
Tested-by: Shuah Khan skhan@linuxfoundation.org
thanks, -- Shuah
On 5/15/23 9:25 AM, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 6.2.16 release. There are 242 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 17 May 2023 16:16:37 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v6.x/stable-review/patch-6.2.16-rc1.... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-6.2.y and the diffstat can be found below.
thanks,
greg k-h
Built and booted successfully on RISC-V RV64 (HiFive Unmatched).
Tested-by: Ron Economos re@w6rz.net
On Mon, May 15, 2023 at 06:25:26PM +0200, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 6.2.16 release. There are 242 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Successfully compiled and installed bindeb-pkgs on my computer (Acer Aspire E15, Intel Core i3 Haswell). No noticeable regressions.
Tested-by: Bagas Sanjaya bagasdotme@gmail.com
Hi Greg,
On Mon, May 15, 2023 at 06:25:26PM +0200, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 6.2.16 release. There are 242 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Build test (gcc version 12.2.1 20230511):
mips: 52 configs -> no failure
arm: 100 configs -> no failure
arm64: 3 configs -> no failure
x86_64: 4 configs -> no failure
alpha allmodconfig -> no failure
csky allmodconfig -> no failure
powerpc allmodconfig -> no failure
riscv allmodconfig -> no failure
s390 allmodconfig -> no failure
xtensa allmodconfig -> no failure
Boot test:
x86_64: Booted on my test laptop. No regression.
x86_64: Booted on qemu. No regression. [1]
arm64: Booted on rpi4b (4GB model). No regression. [2]
[1]. https://openqa.qa.codethink.co.uk/tests/3537 [2]. https://openqa.qa.codethink.co.uk/tests/3538
Tested-by: Sudip Mukherjee sudip.mukherjee@codethink.co.uk
On Mon, 15 May 2023 at 22:43, Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This is the start of the stable review cycle for the 6.2.16 release. There are 242 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 17 May 2023 16:16:37 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v6.x/stable-review/patch-6.2.16-rc1.... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-6.2.y and the diffstat can be found below.
thanks,
greg k-h
Results from Linaro’s test farm. Regressions on arm, x86_64, and i386.
Reported-by: Linux Kernel Functional Testing lkft@linaro.org
LTP commands test case mkfs01_ntfs_sh fails on the following stable-rc versions:
- 6.2.16-rc1
- 6.1.29-rc1
- 5.15.112-rc1
- 5.10.180-rc1
- 5.4.243-rc1
- 4.19.283-rc1
- 4.14.315-rc1
LTP commands,
- mkfs01_ntfs_sh
Test log:
=======
tst_device.c:93:[ 147.226163] loop0: detected capacity change from 0 to 614400
TINFO: Found free device 0 '/dev/loop0'
mkfs01 1 TINFO: timeout per run is 0h 50m 0s
mkfs01 1 TPASS: 'mkfs -t ntfs /dev/loop0 ' passed.
mkfs01 2 TINFO: Mounting device: mount -t ntfs /dev/loop0 /scratch/ltp-zKu0Zn6L6o/LTP_mkfs01.ULVi9nN6XF/mntpoint
modprobe: FATAL: Module fuse not found in directory /lib/modules/6.2.16-rc1
ntfs-3g-mount: fuse device is missing, try 'modprobe fuse' as root
mkfs01 2 TBROK: Failed to mount device ntfs type: mount exit = 21
[ 166.420602] I/O error, dev loop0, sector 614272 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 2
Test log: - https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-6.2.y/build/v6.2.15...
Test history: - https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-6.1.y/build/v6.1.28... - https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-6.2.y/build/v6.2.15...
## Build
* kernel: 6.2.16-rc1
* git: https://gitlab.com/Linaro/lkft/mirrors/stable/linux-stable-rc
* git branch: linux-6.2.y
* git commit: 704eace42a1426c1ef49afee9db673752a7b7c73
* git describe: v6.2.15-243-g704eace42a14
* test details: https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-6.2.y/build/v6.2.15...
## Test Regressions (compared to v6.2.15)
* x15, ltp-commands - mkfs01_ntfs_sh
* x86, ltp-commands - mkfs01_ntfs_sh
* x86-kasan, ltp-commands - mkfs01_ntfs_sh
* x86_64-clang, ltp-commands - mkfs01_ntfs_sh
## Metric Regressions (compared to v6.2.15)
## Test Fixes (compared to v6.2.15)
## Metric Fixes (compared to v6.2.15)
## Test result summary
total: 178929, pass: 153909, fail: 3053, skip: 21665, xfail: 302
## Build Summary
* arc: 5 total, 5 passed, 0 failed
* arm: 145 total, 142 passed, 3 failed
* arm64: 54 total, 53 passed, 1 failed
* i386: 41 total, 38 passed, 3 failed
* mips: 30 total, 28 passed, 2 failed
* parisc: 8 total, 8 passed, 0 failed
* powerpc: 38 total, 36 passed, 2 failed
* riscv: 26 total, 25 passed, 1 failed
* s390: 16 total, 16 passed, 0 failed
* sh: 14 total, 12 passed, 2 failed
* sparc: 8 total, 7 passed, 1 failed
* x86_64: 46 total, 46 passed, 0 failed
## Test suites summary
* boot * fwts * kselftest-android * kselftest-arm64 * kselftest-breakpoints * kselftest-capabilities * kselftest-cgroup * kselftest-clone3 * kselftest-core * kselftest-cpu-hotplug * kselftest-cpufreq * kselftest-drivers-dma-buf * kselftest-efivarfs * kselftest-exec * kselftest-filesystems * kselftest-filesystems-binderfs * kselftest-firmware * kselftest-fpu * kselftest-ftrace * kselftest-futex * kselftest-gpio * kselftest-intel_pstate * kselftest-ipc * kselftest-ir * kselftest-kcmp * kselftest-kexec * kselftest-kvm * kselftest-lib * kselftest-livepatch * kselftest-membarrier * kselftest-memfd * kselftest-memory-hotplug * kselftest-mincore * kselftest-mount * kselftest-mqueue * kselftest-net * kselftest-net-forwarding * kselftest-net-mptcp * kselftest-netfilter * kselftest-nsfs * kselftest-openat2 * kselftest-pid_namespace * kselftest-pidfd * kselftest-proc * kselftest-pstore * kselftest-ptrace * kselftest-rseq * kselftest-rtc * kselftest-seccomp * kselftest-sigaltstack * kselftest-size * kselftest-splice * kselftest-static_keys * kselftest-sync * kselftest-sysctl * kselftest-tc-testing * kselftest-timens * kselftest-timers * kselftest-tmpfs * kselftest-tpm2 * kselftest-user * kselftest-user_events * kselftest-vDSO * kselftest-vm * kselftest-watchdog * kselftest-x86 * kselftest-zram * kunit * kvm-unit-tests * libhugetlbfs * log-parser-boot * log-parser-test * ltp-cap_bounds * ltp-commands * ltp-containers * ltp-controllers * ltp-cpuhotplug * ltp-crypto * ltp-cve * ltp-dio * ltp-fcntl-locktests * ltp-filecaps * ltp-fs * ltp-fs_bind * ltp-fs_perms_simple * ltp-fsx * ltp-hugetlb * ltp-io * ltp-ipc * ltp-math * ltp-mm * ltp-nptl * ltp-pty * ltp-sched * ltp-securebits * ltp-smoke * ltp-syscalls * ltp-tracing * network-basic-tests * perf * rcutorture * v4l2-compliance * vdso
-- Linaro LKFT https://lkft.linaro.org
On Mon, May 15, 2023 at 06:25:26PM +0200, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 6.2.16 release. There are 242 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 17 May 2023 16:16:37 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v6.x/stable-review/patch-6.2.16-rc1.... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-6.2.y and the diffstat can be found below.
thanks,
greg k-h
Tested rc1 against the Fedora build system (aarch64, armv7, ppc64le, s390x, x86_64), and boot tested x86_64. No regressions noted.
Tested-by: Justin M. Forbes jforbes@fedoraproject.org
On Mon, May 15, 2023 at 06:25:26PM +0200, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 6.2.16 release. There are 242 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Nothing untoward here.

Tested-by: Conor Dooley conor.dooley@microchip.com
Thanks, Conor
* Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This is the start of the stable review cycle for the 6.2.16 release. There are 242 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 17 May 2023 16:16:37 +0000. Anything received after that time might be too late.
Hi Greg
6.2.16-rc1
compiles, boots and runs here on x86_64 (AMD Ryzen 5 PRO 4650G, Slackware64-15.0)
Tested-by: Markus Reichelt lkt+2023@mareichelt.com
On Mon, May 15, 2023 at 06:25:26PM +0200, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 6.2.16 release. There are 242 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 17 May 2023 16:16:37 +0000. Anything received after that time might be too late.
Build results:
	total: 155 pass: 155 fail: 0
Qemu test results:
	total: 520 pass: 520 fail: 0
Tested-by: Guenter Roeck linux@roeck-us.net
Guenter
linux-stable-mirror@lists.linaro.org