This is the start of the stable review cycle for the 6.12.19 release. There are 269 patches in this series, all of which will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 12 Mar 2025 17:04:00 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at:
	https://www.kernel.org/pub/linux/kernel/v6.x/stable-review/patch-6.12.19-rc1...
or in the git tree and branch at:
	git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-6.12.y
and the diffstat can be found below.
thanks,
greg k-h
-------------
Pseudo-Shortlog of commits:
Greg Kroah-Hartman gregkh@linuxfoundation.org Linux 6.12.19-rc1
Xi Ruoyao xry111@xry111.site x86/mm: Don't disable PCID when INVLPG has been fixed by microcode
Kumar Kartikeya Dwivedi memxor@gmail.com selftests/bpf: Clean up open-coded gettid syscall invocations
Jiri Olsa jolsa@kernel.org uprobes: Fix race in uprobe_free_utask
Paolo Bonzini pbonzini@redhat.com KVM: e500: always restore irqs
Greg Kroah-Hartman gregkh@linuxfoundation.org Revert "KVM: PPC: e500: Mark "struct page" dirty in kvmppc_e500_shadow_map()"
Greg Kroah-Hartman gregkh@linuxfoundation.org Revert "KVM: PPC: e500: Mark "struct page" pfn accessed before dropping mmu_lock"
Greg Kroah-Hartman gregkh@linuxfoundation.org Revert "KVM: PPC: e500: Use __kvm_faultin_pfn() to handle page faults"
Greg Kroah-Hartman gregkh@linuxfoundation.org Revert "KVM: e500: always restore irqs"
Miguel Ojeda ojeda@kernel.org docs: rust: remove spurious item in `expect` list
Maurizio Lombardi mlombard@redhat.com nvme-tcp: Fix a C2HTermReq error message
Arnd Bergmann arnd@arndb.de ALSA: hda: realtek: fix incorrect IS_REACHABLE() usage
Arnd Bergmann arnd@arndb.de kbuild: hdrcheck: fix cross build with clang
Max Kellermann max.kellermann@ionos.com fs/netfs/read_collect: fix crash due to uninitialized `prev` variable
Max Kellermann max.kellermann@ionos.com fs/netfs/read_pgpriv2: skip folio queues without `marks3`
Ryan Roberts ryan.roberts@arm.com arm64: hugetlb: Fix huge_ptep_get_and_clear() for non-present ptes
Ryan Roberts ryan.roberts@arm.com mm: hugetlb: Add huge page size param to huge_ptep_get_and_clear()
Nayab Sayed nayabbasha.sayed@microchip.com iio: adc: at91-sama5d2_adc: fix sama7g5 realbits value
Markus Burri markus.burri@mt.com iio: adc: ad7192: fix channel select
Angelo Dureghello adureghello@baylibre.com iio: dac: ad3552r: clear reset status flag
Javier Carrasco javier.carrasco.cruz@gmail.com iio: light: apds9306: fix max_scale_nano values
Sam Winchenbach swinchenbach@arka.org iio: filter: admv8818: Force initialization of SDO
Haoyu Li lihaoyu499@gmail.com drivers: virt: acrn: hsm: Use kzalloc to avoid info leak in pmcmd_ioctl
Andy Shevchenko andriy.shevchenko@linux.intel.com eeprom: digsy_mtc: Make GPIO lookup table match the device
Manivannan Sadhasivam manivannan.sadhasivam@linaro.org bus: mhi: host: pci_generic: Use pci_try_reset_function() to avoid deadlock
Visweswara Tanuku quic_vtanuku@quicinc.com slimbus: messaging: Free transaction ID in delayed interrupt scenario
Luca Ceresoli luca.ceresoli@bootlin.com drivers: core: fix device leak in __fw_devlink_relax_cycles()
Thadeu Lima de Souza Cascardo cascardo@igalia.com char: misc: deallocate static minor in error path
Alexander Shishkin alexander.shishkin@linux.intel.com intel_th: pci: Add Panther Lake-P/U support
Alexander Shishkin alexander.shishkin@linux.intel.com intel_th: pci: Add Panther Lake-H support
Pawel Chmielewski pawel.chmielewski@intel.com intel_th: pci: Add Arrow Lake support
Hans de Goede hdegoede@redhat.com mei: vsc: Use "wakeuphostint" when getting the host wakeup GPIO
Alexander Usyskin alexander.usyskin@intel.com mei: me: add panther lake P DID
Qiu-ji Chen chenqiuji666@gmail.com cdx: Fix possible UAF error in driver_override_show()
Xiaoyao Li xiaoyao.li@intel.com KVM: x86: Explicitly zero EAX and EBX when PERFMON_V2 isn't supported by KVM
Sean Christopherson seanjc@google.com KVM: x86: Snapshot the host's DEBUGCTL after disabling IRQs
Sean Christopherson seanjc@google.com KVM: SVM: Manually context switch DEBUGCTL if LBR virtualization is disabled
Sean Christopherson seanjc@google.com KVM: x86: Snapshot the host's DEBUGCTL in common x86
Sean Christopherson seanjc@google.com KVM: SVM: Suppress DEBUGCTL.BTF on AMD
Sean Christopherson seanjc@google.com KVM: SVM: Drop DEBUGCTL[5:2] from guest's effective value
Sean Christopherson seanjc@google.com KVM: SVM: Save host DR masks on CPUs with DebugSwap
Sean Christopherson seanjc@google.com KVM: SVM: Set RFLAGS.IF=1 in C code, to get VMRUN out of the STI shadow
Michal Pecio michal.pecio@gmail.com usb: xhci: Enable the TRB overfetch quirk on VIA VL805
Andy Shevchenko andriy.shevchenko@linux.intel.com xhci: pci: Fix indentation in the PCI device ID definitions
Gary Guo gary@garyguo.net rust: map `long` to `isize` and `char` to `u8`
Miguel Ojeda ojeda@kernel.org rust: finish using custom FFI integer types
Christian A. Ehrhardt lk@c--e.de acpi: typec: ucsi: Introduce a ->poll_cci method
Thomas Weißschuh thomas.weissschuh@linutronix.de kbuild: userprogs: use correct lld when linking through clang
Prashanth K prashanth.k@oss.qualcomm.com usb: gadget: Check bmAttributes only if configuration is valid
Marek Szyprowski m.szyprowski@samsung.com usb: gadget: Fix setting self-powered state on suspend
Prashanth K prashanth.k@oss.qualcomm.com usb: gadget: Set self-powered based on MaxPower and bmAttributes
AngeloGioacchino Del Regno angelogioacchino.delregno@collabora.com usb: typec: tcpci_rt1711h: Unmask alert interrupts to fix functionality
Fedor Pchelkin boddah8794@gmail.com usb: typec: ucsi: increase timeout for PPM reset operations
Badhri Jagan Sridharan badhri@google.com usb: dwc3: gadget: Prevent irq storm when TH re-executes
Thinh Nguyen Thinh.Nguyen@synopsys.com usb: dwc3: Set SUSPENDENABLE soon after phy init
Nikita Zhandarovich n.zhandarovich@fintech.ru usb: atm: cxacru: fix a flaw in existing endpoint checks
Prashanth K prashanth.k@oss.qualcomm.com usb: gadget: u_ether: Set is_suspend flag if remote wakeup fails
Claudiu Beznea claudiu.beznea.uj@bp.renesas.com usb: renesas_usbhs: Flush the notify_hotplug_work
Andrei Kuchynski akuchynski@chromium.org usb: typec: ucsi: Fix NULL pointer access
Miao Li limiao@kylinos.cn usb: quirks: Add DELAY_INIT and NO_LPM for Prolific Mass Storage Card Reader
Pawel Laszczak pawell@cadence.com usb: hub: lack of clearing xHC resources
Claudiu Beznea claudiu.beznea.uj@bp.renesas.com usb: renesas_usbhs: Use devm_usb_get_phy()
Marc Zyngier maz@kernel.org xhci: Restrict USB4 tunnel detection for USB3 devices to Intel hosts
Claudiu Beznea claudiu.beznea.uj@bp.renesas.com usb: renesas_usbhs: Call clk_put()
Christian Heusel christian@heusel.eu Revert "drivers/card_reader/rtsx_usb: Restore interrupt based detection"
Fabrizio Castro fabrizio.castro.jz@renesas.com gpio: rcar: Fix missing of_node_put() call
Justin Iurman justin.iurman@uliege.be net: ipv6: fix missing dst ref drop in ila lwtunnel
Justin Iurman justin.iurman@uliege.be net: ipv6: fix dst ref loop in ila lwtunnel
Matt Johnston matt@codeconstruct.com.au mctp i3c: handle NULL header address
Lorenzo Bianconi lorenzo@kernel.org net: dsa: mt7530: Fix traffic flooding for MMIO devices
Dan Carpenter dan.carpenter@linaro.org nvme-tcp: fix signedness bug in nvme_tcp_init_connection()
Zecheng Li zecheng@google.com sched/fair: Fix potential memory corruption in child_cfs_rq_on_list
Uday Shankar ushankar@purestorage.com ublk: set_params: properly check if parameters can be applied
Jason Xing kerneljasonxing@gmail.com net-timestamp: support TCP GSO case for a few missing flags
Eric Sandeen sandeen@redhat.com exfat: short-circuit zero-byte writes in exfat_file_write_iter
Namjae Jeon linkinjeon@kernel.org exfat: fix soft lockup in exfat_clear_bitmap
Yuezhang Mo Yuezhang.Mo@sony.com exfat: fix just enough dentries but allocate a new cluster to dir
Jarkko Sakkinen jarkko@kernel.org x86/sgx: Fix size overflows in sgx_encl_create()
Oscar Maes oscmaes92@gmail.com vlan: enforce underlying device type
Maxime Chevallier maxime.chevallier@bootlin.com net: ethtool: netlink: Allow NULL nlattrs when getting a phy_device
Jakub Kicinski kuba@kernel.org net: ethtool: plumb PHY stats to PHY drivers
Oleksij Rempel o.rempel@pengutronix.de ethtool: linkstate: migrate linkstate functions to support multi-PHY setups
Jiayuan Chen jiayuan.chen@linux.dev ppp: Fix KMSAN uninit-value warning with bpf
Luca Weiss luca.weiss@fairphone.com net: ipa: Enable checksum for IPA_ENDPOINT_AP_MODEM_{RX,TX} for v4.7
Luca Weiss luca.weiss@fairphone.com net: ipa: Fix QSB data for v4.7
Luca Weiss luca.weiss@fairphone.com net: ipa: Fix v4.7 resource group names
Vicki Pfau vi@endrift.com HID: hid-steam: Fix use-after-free when detaching device
Maarten Lankhorst dev@lankhorst.se drm/xe: Remove double pageflip
Ville Syrjälä ville.syrjala@linux.intel.com drm/i915: Plumb 'dsb' all way to the plane hooks
Ville Syrjälä ville.syrjala@linux.intel.com drm/i915/color: Extract intel_color_modeset()
Peiyang Wang wangpeiyang1@huawei.com net: hns3: make sure ptp clock is unregister and freed if hclge_ptp_get_cycle returns an error
Nikolay Aleksandrov razor@blackwall.org be2net: fix sleeping while atomic bugs in be_ndo_bridge_getlink
Philipp Stanner phasta@kernel.org drm/sched: Fix preprocessor guard
Xinghuo Chen xinghuo.chen@foxmail.com hwmon: fix a NULL vs IS_ERR_OR_NULL() check in xgene_hwmon_probe()
Eric Dumazet edumazet@google.com llc: do not use skb_get() before dev_queue_xmit()
Murad Masimov m.masimov@mt-integration.ru ALSA: usx2y: validate nrpacks module parameter on probe
Alessio Belle alessio.belle@imgtec.com drm/imagination: Fix timestamps in firmware traces
Masami Hiramatsu (Google) mhiramat@kernel.org tracing: probe-events: Remove unused MAX_ARG_BUF_LEN macro
Erik Schumacher erik.schumacher@iris-sensing.com hwmon: (ad7314) Validate leading zero bits and return error
Maud Spierings maudspierings@gocontroll.com hwmon: (ntc_thermistor) Fix the ncpXXxh103 sensor table
Titus Rwantare titusr@google.com hwmon: (pmbus) Initialise page count in pmbus_identify()
Peter Zijlstra peterz@infradead.org perf/core: Fix pmus_lock vs. pmus_srcu ordering
Vitaliy Shevtsov v.shevtsov@mt-integration.ru caif_virtio: fix wrong pointer check in cfv_probe()
Antoine Tenart atenart@kernel.org net: gso: fix ownership in __udp_gso_segment
Antheas Kapenekakis lkml@antheas.dev ALSA: hda/realtek: Remove (revert) duplicate Ally X config
Meir Elisha meir.elisha@volumez.com nvmet-tcp: Fix a possible sporadic response drops in weakly ordered arch
Maurizio Lombardi mlombard@redhat.com nvme-tcp: fix potential memory corruption in nvme_tcp_recv_pdu()
Maurizio Lombardi mlombard@redhat.com nvme-tcp: add basic support for the C2HTermReq PDU
Salah Triki salah.triki@gmail.com bluetooth: btusb: Initialize .owner field of force_poll_sync_fops
Dave Airlie airlied@redhat.com drm/nouveau: select FW caching
Thomas Zimmermann tzimmermann@suse.de drm/nouveau: Run DRM default client setup
Thomas Zimmermann tzimmermann@suse.de drm/fbdev-ttm: Support struct drm_driver.fbdev_probe
Thomas Zimmermann tzimmermann@suse.de drm: Add client-agnostic setup helper
Thomas Zimmermann tzimmermann@suse.de drm/fbdev: Add memory-agnostic fbdev client
Thomas Zimmermann tzimmermann@suse.de drm/fbdev-helper: Move color-mode lookup into 4CC format helper
Johannes Berg johannes.berg@intel.com wifi: mac80211: fix vendor-specific inheritance
Johannes Berg johannes.berg@intel.com wifi: mac80211: fix MLE non-inheritance parsing
Ilan Peer ilan.peer@intel.com wifi: mac80211: Support parsing EPCS ML element
Keith Busch kbusch@kernel.org nvme-ioctl: fix leaked requests on mapping error
Keith Busch kbusch@kernel.org nvme-pci: use sgls for all user requests if possible
Keith Busch kbusch@kernel.org nvme-pci: add support for sgl metadata
Kees Cook kees@kernel.org coredump: Only sort VMAs when core_sort_vma sysctl is set
Zhang Lixu lixu.zhang@intel.com HID: intel-ish-hid: Fix use-after-free issue in ishtp_hid_remove()
Zhang Lixu lixu.zhang@intel.com HID: intel-ish-hid: Fix use-after-free issue in hid_ishtp_cl_remove()
Yu-Chun Lin eleanor15x@gmail.com HID: google: fix unused variable warning under !CONFIG_ACPI
Ilan Peer ilan.peer@intel.com wifi: iwlwifi: Fix A-MSDU TSO preparation
Ilan Peer ilan.peer@intel.com wifi: iwlwifi: Free pages allocated when failing to build A-MSDU
Johannes Berg johannes.berg@intel.com wifi: iwlwifi: limit printed string from FW file
Emmanuel Grumbach emmanuel.grumbach@intel.com wifi: iwlwifi: mvm: don't try to talk to a dead firmware
Johannes Berg johannes.berg@intel.com wifi: iwlwifi: mvm: clean up ROC on failure
Ma Wupeng mawupeng1@huawei.com mm: memory-hotplug: check folio ref count first in do_migrate_range
Ma Wupeng mawupeng1@huawei.com hwpoison, memory_hotplug: lock folio before unmap hwpoisoned folio
Brian Geffon bgeffon@google.com mm: fix finish_fault() handling for large folios
Ryan Roberts ryan.roberts@arm.com mm: don't skip arch_sync_kernel_mappings() in error paths
Ma Wupeng mawupeng1@huawei.com mm: memory-failure: update ttu flag inside unmap_poisoned_folio
Lorenzo Stoakes lorenzo.stoakes@oracle.com mm: abort vma_modify() on merge out of memory failure
Hao Zhang zhanghao1@kylinos.cn mm/page_alloc: fix uninitialized variable
Olivier Gayot olivier.gayot@canonical.com block: fix conversion of GPT partition name to 7-bit
Suren Baghdasaryan surenb@google.com userfaultfd: do not block on locking a large folio with raised refcount
Mike Snitzer snitzer@kernel.org NFS: fix nfs_release_folio() to not deadlock via kcompactd writeback
Heiko Carstens hca@linux.ibm.com s390/traps: Fix test_monitor_call() inline assembly
Sebastian Andrzej Siewior bigeasy@linutronix.de dma: kmsan: export kmsan_handle_dma() for modules
Haoxiang Li haoxiang_li2024@163.com rapidio: fix an API misues when rio_add_net() fails
Haoxiang Li haoxiang_li2024@163.com rapidio: add check for rio_add_net() in rio_scan_alloc_net()
SeongJae Park sj@kernel.org selftests/damon/damon_nr_regions: sort collected regiosn before checking with min/max boundaries
SeongJae Park sj@kernel.org selftests/damon/damon_nr_regions: set ops update for merge results check to 100ms
SeongJae Park sj@kernel.org selftests/damon/damos_quota: make real expectation of quota exceeds
SeongJae Park sj@kernel.org selftests/damon/damos_quota_goal: handle minimum quota that cannot be further reduced
Vitaliy Shevtsov v.shevtsov@mt-integration.ru wifi: nl80211: reject cooked mode if it is set along with other flags
Nikita Zhandarovich n.zhandarovich@fintech.ru wifi: cfg80211: regulatory: improve invalid hints checking
Haoxiang Li haoxiang_li2024@163.com Bluetooth: Add check for mgmt_alloc_skb() in mgmt_device_connected()
Haoxiang Li haoxiang_li2024@163.com Bluetooth: Add check for mgmt_alloc_skb() in mgmt_remote_name()
Thomas Hellström thomas.hellstrom@linux.intel.com drm/xe/userptr: Unmap userptrs in the mmu notifier
Matthew Auld matthew.auld@intel.com drm/xe/userptr: properly setup pfn_flags_mask
Thomas Hellström thomas.hellstrom@linux.intel.com drm/xe: Fix fault mode invalidation with unbind
Tvrtko Ursulin tvrtko.ursulin@igalia.com drm/xe: Fix GT "for each engine" workarounds
Krister Johansen kjlx@templeofstupid.com mptcp: fix 'scheduling while atomic' in mptcp_pm_nl_append_new_local_addr
Thomas Hellström thomas.hellstrom@linux.intel.com drm/xe/vm: Validate userptr during gpu vma prefetching
Thomas Hellström thomas.hellstrom@linux.intel.com drm/xe/vm: Fix a misplaced #endif
Thomas Hellström thomas.hellstrom@linux.intel.com drm/xe/hmm: Don't dereference struct page pointers without notifier lock
Thomas Hellström thomas.hellstrom@linux.intel.com drm/xe/hmm: Style- and include fixes
Matthew Brost matthew.brost@intel.com drm/xe: Add staging tree for VM binds
Ahmed S. Darwish darwi@linutronix.de x86/cpu: Properly parse CPUID leaf 0x2 TLB descriptor 0x63
Ahmed S. Darwish darwi@linutronix.de x86/cpu: Validate CPUID leaf 0x2 EDX output
Ahmed S. Darwish darwi@linutronix.de x86/cacheinfo: Validate CPUID leaf 0x2 EDX output
Ard Biesheuvel ardb@kernel.org x86/boot: Sanitize boot params before parsing command line
Mingcong Bai jeffbai@aosc.io platform/x86: thinkpad_acpi: Add battery quirk for ThinkPad X131e
John Hubbard jhubbard@nvidia.com Revert "selftests/mm: remove local __NR_* definitions"
Gabriel Krisman Bertazi krisman@suse.de Revert "mm/page_alloc.c: don't show protection in zone's ->lowmem_reserve[] for empty zone"
Richard Thier u9vata@gmail.com drm/radeon: Fix rs400_gpu_init for ATI mobility radeon Xpress 200M
Brendan King Brendan.King@imgtec.com drm/imagination: only init job done fences once
Brendan King Brendan.King@imgtec.com drm/imagination: Hold drm_gem_gpuva lock for unmap
Brendan King Brendan.King@imgtec.com drm/imagination: avoid deadlock on fence release
Kenneth Feng kenneth.feng@amd.com drm/amd/pm: always allow ih interrupt from fw
Andrew Martin Andrew.Martin@amd.com drm/amdkfd: Fix NULL Pointer Dereference in KFD queue
Ma Ke make24@iscas.ac.cn drm/amd/display: Fix null check for pipe_ctx->plane_state in resource_build_scaling_params
Paul Fertser fercerpav@gmail.com hwmon: (peci/dimmtemp) Do not provide fake thresholds data
Haoxiang Li haoxiang_li2024@163.com btrfs: fix a leaked chunk map issue in read_one_chunk()
Kailang Yang kailang@realtek.com ALSA: hda/realtek: update ALC222 depop optimize
Kailang Yang kailang@realtek.com ALSA: hda/realtek - add supported Mic Mute LED for Lenovo platform
Hoku Ishibe me@hokuishi.be ALSA: hda: intel: Add Dell ALC3271 to power_save denylist
Takashi Iwai tiwai@suse.de ALSA: seq: Avoid module auto-load handling at event delivery
Koichiro Den koichiro.den@canonical.com gpio: aggregator: protect driver attr handlers against module unload
Niklas Söderlund niklas.soderlund+renesas@ragnatech.se gpio: rcar: Use raw_spinlock to protect register access
Namjae Jeon linkinjeon@kernel.org ksmbd: fix bug on trap in smb2_lock
Namjae Jeon linkinjeon@kernel.org ksmbd: fix use-after-free in smb2_lock
Namjae Jeon linkinjeon@kernel.org ksmbd: fix out-of-bounds in parse_sec_desc()
Namjae Jeon linkinjeon@kernel.org ksmbd: fix type confusion via race condition when using ipc_msg_send_request
Daniil Dulov d.dulov@aladdin.ru HID: appleir: Fix potential NULL dereference at raw event handle
Bibo Mao maobibo@loongson.cn LoongArch: KVM: Fix GPA size issue about VM
Bibo Mao maobibo@loongson.cn LoongArch: KVM: Reload guest CSR registers after sleep
Bibo Mao maobibo@loongson.cn LoongArch: KVM: Add interrupt checking for AVEC
Bibo Mao maobibo@loongson.cn LoongArch: Set max_pfn with the PFN of the last page
Huacai Chen chenhuacai@kernel.org LoongArch: Use polling play_dead() when resuming from hibernation
Tiezhu Yang yangtiezhu@loongson.cn LoongArch: Convert unreachable() to BUG()
Philipp Stanner phasta@kernel.org stmmac: loongson: Pass correct arg to PCI function
Masami Hiramatsu (Google) mhiramat@kernel.org tracing: tprobe-events: Reject invalid tracepoint name
Masami Hiramatsu (Google) mhiramat@kernel.org tracing: tprobe-events: Fix a memory leak when tprobe with $retval
Rob Herring (Arm) robh@kernel.org Revert "of: reserved-memory: Fix using wrong number of cells to get property 'alignment'"
Asahi Lina lina@asahilina.net rust: alloc: Fix `ArrayLayout` allocations
Gary Guo gary@garyguo.net rust: use custom FFI integer types
Gary Guo gary@garyguo.net rust: map `__kernel_size_t` and friends also to usize/isize
Gary Guo gary@garyguo.net rust: fix size_t in bindgen prototypes of C builtins
Ethan D. Twardy ethan.twardy@gmail.com rust: kbuild: expand rusttest target for macros
Thomas Böhler witcher@wiredspace.de drm/panic: allow verbose version check
Thomas Böhler witcher@wiredspace.de drm/panic: allow verbose boolean for clarity
Thomas Böhler witcher@wiredspace.de drm/panic: correctly indent continuation of line in list item
Thomas Böhler witcher@wiredspace.de drm/panic: remove redundant field when assigning value
Thomas Böhler witcher@wiredspace.de drm/panic: prefer eliding lifetimes
Thomas Böhler witcher@wiredspace.de drm/panic: remove unnecessary borrow in alignment_pattern
Thomas Böhler witcher@wiredspace.de drm/panic: avoid reimplementing Iterator::find
Danilo Krummrich dakr@kernel.org MAINTAINERS: add entry for the Rust `alloc` module
Danilo Krummrich dakr@kernel.org kbuild: rust: remove the `alloc` crate and `GlobalAlloc`
Danilo Krummrich dakr@kernel.org rust: alloc: update module comment of alloc.rs
Danilo Krummrich dakr@kernel.org rust: str: test: replace `alloc::format`
Danilo Krummrich dakr@kernel.org rust: alloc: implement `Cmalloc` in module allocator_test
Danilo Krummrich dakr@kernel.org rust: alloc: implement `contains` for `Flags`
Danilo Krummrich dakr@kernel.org rust: error: check for config `test` in `Error::name`
Danilo Krummrich dakr@kernel.org rust: error: use `core::alloc::LayoutError`
Danilo Krummrich dakr@kernel.org rust: alloc: add `Vec` to prelude
Danilo Krummrich dakr@kernel.org rust: alloc: remove `VecExt` extension
Danilo Krummrich dakr@kernel.org rust: treewide: switch to the kernel `Vec` type
Danilo Krummrich dakr@kernel.org rust: alloc: implement `collect` for `IntoIter`
Danilo Krummrich dakr@kernel.org rust: alloc: implement `IntoIterator` for `Vec`
Danilo Krummrich dakr@kernel.org rust: alloc: implement kernel `Vec` type
Benno Lossin benno.lossin@proton.me rust: alloc: introduce `ArrayLayout`
Danilo Krummrich dakr@kernel.org rust: alloc: add `Box` to prelude
Danilo Krummrich dakr@kernel.org rust: alloc: remove extension of std's `Box`
Danilo Krummrich dakr@kernel.org rust: treewide: switch to our kernel `Box` type
Danilo Krummrich dakr@kernel.org rust: alloc: implement kernel `Box`
Danilo Krummrich dakr@kernel.org rust: alloc: add __GFP_NOWARN to `Flags`
Danilo Krummrich dakr@kernel.org rust: alloc: implement `KVmalloc` allocator
Danilo Krummrich dakr@kernel.org rust: alloc: implement `Vmalloc` allocator
Danilo Krummrich dakr@kernel.org rust: alloc: add module `allocator_test`
Danilo Krummrich dakr@kernel.org rust: alloc: implement `Allocator` for `Kmalloc`
Danilo Krummrich dakr@kernel.org rust: alloc: make `allocator` module public
Danilo Krummrich dakr@kernel.org rust: alloc: implement `ReallocFunc`
Danilo Krummrich dakr@kernel.org rust: alloc: rename `KernelAllocator` to `Kmalloc`
Danilo Krummrich dakr@kernel.org rust: alloc: separate `aligned_size` from `krealloc_aligned`
Danilo Krummrich dakr@kernel.org rust: alloc: add `Allocator` trait
Filipe Xavier felipe_life@live.com rust: error: optimize error type to use nonzero
Filipe Xavier felipe_life@live.com rust: error: make conversion functions public
Miguel Ojeda ojeda@kernel.org Documentation: rust: discuss `#[expect(...)]` in the guidelines
Miguel Ojeda ojeda@kernel.org rust: start using the `#[expect(...)]` attribute
Miguel Ojeda ojeda@kernel.org Documentation: rust: add coding guidelines on lints
Miguel Ojeda ojeda@kernel.org rust: enable Clippy's `check-private-items`
Miguel Ojeda ojeda@kernel.org rust: provide proper code documentation titles
Miguel Ojeda ojeda@kernel.org rust: replace `clippy::dbg_macro` with `disallowed_macros`
Miguel Ojeda ojeda@kernel.org rust: introduce `.clippy.toml`
Miguel Ojeda ojeda@kernel.org rust: sync: remove unneeded `#[allow(clippy::non_send_fields_in_send_ty)]`
Miguel Ojeda ojeda@kernel.org rust: init: remove unneeded `#[allow(clippy::disallowed_names)]`
Miguel Ojeda ojeda@kernel.org rust: enable `rustdoc::unescaped_backticks` lint
Miguel Ojeda ojeda@kernel.org rust: enable `clippy::ignored_unit_patterns` lint
Miguel Ojeda ojeda@kernel.org rust: enable `clippy::unnecessary_safety_doc` lint
Miguel Ojeda ojeda@kernel.org rust: enable `clippy::unnecessary_safety_comment` lint
Miguel Ojeda ojeda@kernel.org rust: enable `clippy::undocumented_unsafe_blocks` lint
Miguel Ojeda ojeda@kernel.org rust: types: avoid repetition in `{As,From}Bytes` impls
Miguel Ojeda ojeda@kernel.org rust: sort global Rust flags
Miguel Ojeda ojeda@kernel.org rust: workqueue: remove unneeded ``#[allow(clippy::new_ret_no_self)]`
Peter Zijlstra peterz@infradead.org loongarch: Use ASM_REACHABLE
Borislav Petkov (AMD) bp@alien8.de x86/microcode/AMD: Add some forgotten models to the SHA check
Qu Wenruo wqu@suse.com btrfs: fix data overwriting bug during buffered write when block size < page size
Steve French stfrench@microsoft.com smb311: failure to open files of length 1040 when mounting with SMB3.1.1 POSIX extensions
Pali Rohár pali@kernel.org cifs: Remove symlink member from cifs_open_info_data union
Johan Korsnes johan.korsnes@remarkable.no gpio: vf610: add locking to gpio direction functions
Bartosz Golaszewski bartosz.golaszewski@linaro.org gpio: vf610: use generic device_get_match_data()
Imre Deak imre.deak@intel.com drm/i915/dsi: Use TRANS_DDI_FUNC_CTL's own port width macro
Jani Nikula jani.nikula@intel.com drm/i915/dsi: convert to struct intel_display
Yutaro Ohno yutaro.ono.418@gmail.com rust: block: fix formatting in GenDisk doc
Andrew Cooper andrew.cooper3@citrix.com x86/amd_nb: Use rdmsr_safe() in amd_get_mmconfig_range()
-------------
Diffstat:
.clippy.toml | 9 + .gitignore | 1 + Documentation/admin-guide/sysctl/kernel.rst | 11 + Documentation/rust/coding-guidelines.rst | 146 ++++ MAINTAINERS | 8 + Makefile | 24 +- arch/arm64/include/asm/hugetlb.h | 4 +- arch/arm64/mm/hugetlbpage.c | 59 +- arch/loongarch/include/asm/bug.h | 13 +- arch/loongarch/include/asm/hugetlb.h | 6 +- arch/loongarch/kernel/machine_kexec.c | 4 +- arch/loongarch/kernel/setup.c | 3 + arch/loongarch/kernel/smp.c | 47 +- arch/loongarch/kvm/exit.c | 6 + arch/loongarch/kvm/main.c | 7 + arch/loongarch/kvm/vcpu.c | 2 +- arch/loongarch/kvm/vm.c | 6 +- arch/mips/include/asm/hugetlb.h | 6 +- arch/parisc/include/asm/hugetlb.h | 2 +- arch/parisc/mm/hugetlbpage.c | 2 +- arch/powerpc/include/asm/hugetlb.h | 6 +- arch/powerpc/kvm/e500_mmu_host.c | 19 +- arch/riscv/include/asm/hugetlb.h | 3 +- arch/riscv/mm/hugetlbpage.c | 2 +- arch/s390/include/asm/hugetlb.h | 17 +- arch/s390/kernel/traps.c | 6 +- arch/s390/mm/hugetlbpage.c | 4 +- arch/sparc/include/asm/hugetlb.h | 2 +- arch/sparc/mm/hugetlbpage.c | 2 +- arch/x86/boot/compressed/pgtable_64.c | 2 + arch/x86/include/asm/kvm_host.h | 1 + arch/x86/kernel/amd_nb.c | 9 +- arch/x86/kernel/cpu/cacheinfo.c | 2 +- arch/x86/kernel/cpu/intel.c | 52 +- arch/x86/kernel/cpu/microcode/amd.c | 6 + arch/x86/kernel/cpu/sgx/ioctl.c | 7 + arch/x86/kvm/cpuid.c | 2 +- arch/x86/kvm/svm/sev.c | 13 +- arch/x86/kvm/svm/svm.c | 49 ++ arch/x86/kvm/svm/svm.h | 2 +- arch/x86/kvm/svm/vmenter.S | 10 +- arch/x86/kvm/vmx/vmx.c | 8 +- arch/x86/kvm/vmx/vmx.h | 2 - arch/x86/kvm/x86.c | 2 + arch/x86/mm/init.c | 23 +- block/partitions/efi.c | 2 +- drivers/base/core.c | 1 + drivers/block/rnull.rs | 4 +- drivers/block/ublk_drv.c | 7 +- drivers/bluetooth/btusb.c | 1 + drivers/bus/mhi/host/pci_generic.c | 5 +- drivers/cdx/cdx.c | 6 +- drivers/char/misc.c | 2 +- drivers/gpio/gpio-aggregator.c | 20 +- drivers/gpio/gpio-rcar.c | 31 +- drivers/gpio/gpio-vf610.c | 11 +- drivers/gpu/drm/Kconfig | 12 + drivers/gpu/drm/Makefile | 6 +- drivers/gpu/drm/amd/amdkfd/kfd_queue.c | 4 +- drivers/gpu/drm/amd/display/dc/core/dc_resource.c | 3 +- drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0.c | 12 +- drivers/gpu/drm/drm_client_setup.c | 66 ++ drivers/gpu/drm/drm_fb_helper.c | 85 +- drivers/gpu/drm/drm_fbdev_client.c | 141 ++++ drivers/gpu/drm/drm_fbdev_ttm.c | 142 ++-- drivers/gpu/drm/drm_fourcc.c | 30 +- drivers/gpu/drm/drm_panic_qr.rs | 25 +- drivers/gpu/drm/i915/display/i9xx_plane.c | 22 +- drivers/gpu/drm/i915/display/icl_dsi.c | 448 +++++----- drivers/gpu/drm/i915/display/icl_dsi.h | 4 +- drivers/gpu/drm/i915/display/intel_atomic_plane.c | 49 +- drivers/gpu/drm/i915/display/intel_atomic_plane.h | 19 +- drivers/gpu/drm/i915/display/intel_color.c | 17 + drivers/gpu/drm/i915/display/intel_color.h | 1 + drivers/gpu/drm/i915/display/intel_cursor.c | 101 +-- drivers/gpu/drm/i915/display/intel_ddi.c | 2 +- drivers/gpu/drm/i915/display/intel_de.h | 11 + drivers/gpu/drm/i915/display/intel_display.c | 58 +- drivers/gpu/drm/i915/display/intel_display_types.h | 16 +- drivers/gpu/drm/i915/display/intel_sprite.c | 27 +- drivers/gpu/drm/i915/display/skl_universal_plane.c | 305 +++---- drivers/gpu/drm/imagination/pvr_fw_meta.c | 6 +- drivers/gpu/drm/imagination/pvr_fw_trace.c | 4 +- drivers/gpu/drm/imagination/pvr_queue.c | 18 +- drivers/gpu/drm/imagination/pvr_queue.h | 4 + drivers/gpu/drm/imagination/pvr_vm.c | 134 ++- drivers/gpu/drm/imagination/pvr_vm.h | 3 + drivers/gpu/drm/nouveau/Kconfig | 2 + drivers/gpu/drm/nouveau/nouveau_drm.c | 10 +- drivers/gpu/drm/radeon/r300.c | 3 +- 
drivers/gpu/drm/radeon/radeon_asic.h | 1 + drivers/gpu/drm/radeon/rs400.c | 18 +- drivers/gpu/drm/scheduler/gpu_scheduler_trace.h | 4 +- drivers/gpu/drm/xe/display/xe_plane_initial.c | 10 - drivers/gpu/drm/xe/xe_gt.c | 4 +- drivers/gpu/drm/xe/xe_hmm.c | 194 +++-- drivers/gpu/drm/xe/xe_hmm.h | 7 + drivers/gpu/drm/xe/xe_pt.c | 96 +-- drivers/gpu/drm/xe/xe_pt_walk.c | 3 +- drivers/gpu/drm/xe/xe_pt_walk.h | 4 + drivers/gpu/drm/xe/xe_vm.c | 100 ++- drivers/gpu/drm/xe/xe_vm.h | 10 +- drivers/gpu/drm/xe/xe_vm_types.h | 8 +- drivers/hid/hid-appleir.c | 2 +- drivers/hid/hid-google-hammer.c | 2 + drivers/hid/hid-steam.c | 2 +- drivers/hid/intel-ish-hid/ishtp-hid-client.c | 2 +- drivers/hid/intel-ish-hid/ishtp-hid.c | 4 +- drivers/hwmon/ad7314.c | 10 + drivers/hwmon/ntc_thermistor.c | 66 +- drivers/hwmon/peci/dimmtemp.c | 10 +- drivers/hwmon/pmbus/pmbus.c | 2 + drivers/hwmon/xgene-hwmon.c | 2 +- drivers/hwtracing/intel_th/pci.c | 15 + drivers/iio/adc/ad7192.c | 2 +- drivers/iio/adc/at91-sama5d2_adc.c | 68 +- drivers/iio/dac/ad3552r.c | 6 + drivers/iio/filter/admv8818.c | 14 +- drivers/iio/light/apds9306.c | 4 +- drivers/misc/cardreader/rtsx_usb.c | 15 - drivers/misc/eeprom/digsy_mtc_eeprom.c | 2 +- drivers/misc/mei/hw-me-regs.h | 2 + drivers/misc/mei/pci-me.c | 2 + drivers/misc/mei/vsc-tp.c | 2 +- drivers/net/caif/caif_virtio.c | 2 +- drivers/net/dsa/mt7530.c | 8 +- drivers/net/ethernet/emulex/benet/be.h | 2 +- drivers/net/ethernet/emulex/benet/be_cmds.c | 197 +++-- drivers/net/ethernet/emulex/benet/be_main.c | 2 +- .../net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c | 2 +- .../net/ethernet/stmicro/stmmac/dwmac-loongson.c | 6 +- drivers/net/ipa/data/ipa_data-v4.7.c | 18 +- drivers/net/mctp/mctp-i3c.c | 3 + drivers/net/phy/phy.c | 43 + drivers/net/phy/phy_device.c | 2 + drivers/net/ppp/ppp_generic.c | 28 +- drivers/net/wireless/intel/iwlwifi/iwl-drv.c | 2 +- drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c | 7 + .../net/wireless/intel/iwlwifi/mvm/time-event.c | 2 + drivers/net/wireless/intel/iwlwifi/pcie/internal.h | 5 +- drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c | 6 +- drivers/net/wireless/intel/iwlwifi/pcie/tx.c | 20 +- drivers/nvme/host/ioctl.c | 20 +- drivers/nvme/host/nvme.h | 7 + drivers/nvme/host/pci.c | 147 +++- drivers/nvme/host/tcp.c | 81 +- drivers/nvme/target/tcp.c | 15 +- drivers/of/of_reserved_mem.c | 4 +- drivers/platform/x86/thinkpad_acpi.c | 1 + drivers/rapidio/devices/rio_mport_cdev.c | 3 +- drivers/rapidio/rio-scan.c | 5 +- drivers/slimbus/messaging.c | 5 +- drivers/usb/atm/cxacru.c | 13 +- drivers/usb/core/hub.c | 33 + drivers/usb/core/quirks.c | 4 + drivers/usb/dwc3/core.c | 85 +- drivers/usb/dwc3/core.h | 2 +- drivers/usb/dwc3/drd.c | 4 +- drivers/usb/dwc3/gadget.c | 10 +- drivers/usb/gadget/composite.c | 17 +- drivers/usb/gadget/function/u_ether.c | 4 +- drivers/usb/host/xhci-hub.c | 8 + drivers/usb/host/xhci-mem.c | 3 +- drivers/usb/host/xhci-pci.c | 18 +- drivers/usb/host/xhci.h | 2 +- drivers/usb/renesas_usbhs/common.c | 6 +- drivers/usb/renesas_usbhs/mod_gadget.c | 2 +- drivers/usb/typec/tcpm/tcpci_rt1711h.c | 11 + drivers/usb/typec/ucsi/ucsi.c | 25 +- drivers/usb/typec/ucsi/ucsi.h | 2 + drivers/usb/typec/ucsi/ucsi_acpi.c | 21 +- drivers/usb/typec/ucsi/ucsi_ccg.c | 1 + drivers/usb/typec/ucsi/ucsi_glink.c | 1 + drivers/usb/typec/ucsi/ucsi_stm32g0.c | 1 + drivers/usb/typec/ucsi/ucsi_yoga_c630.c | 1 + drivers/virt/acrn/hsm.c | 6 +- fs/btrfs/file.c | 9 +- fs/btrfs/volumes.c | 1 + fs/coredump.c | 15 +- fs/exfat/balloc.c | 10 +- fs/exfat/exfat_fs.h | 2 +- fs/exfat/fatent.c 
| 11 +- fs/exfat/file.c | 2 +- fs/exfat/namei.c | 2 +- fs/netfs/read_collect.c | 21 +- fs/netfs/read_pgpriv2.c | 5 +- fs/nfs/file.c | 3 +- fs/smb/client/cifsglob.h | 6 +- fs/smb/client/inode.c | 2 +- fs/smb/client/reparse.h | 28 +- fs/smb/client/smb1ops.c | 4 +- fs/smb/client/smb2inode.c | 4 + fs/smb/client/smb2ops.c | 3 +- fs/smb/server/smb2pdu.c | 8 +- fs/smb/server/smbacl.c | 16 + fs/smb/server/transport_ipc.c | 1 + include/asm-generic/hugetlb.h | 2 +- include/drm/drm_client_setup.h | 26 + include/drm/drm_drv.h | 18 + include/drm/drm_fbdev_client.h | 19 + include/drm/drm_fbdev_ttm.h | 13 + include/drm/drm_fourcc.h | 1 + include/linux/compaction.h | 5 + include/linux/ethtool.h | 23 + include/linux/hugetlb.h | 4 +- include/linux/nvme-tcp.h | 2 + include/linux/nvme.h | 1 + include/linux/phy.h | 36 + include/linux/phylib_stubs.h | 42 + include/linux/sched.h | 2 +- kernel/events/core.c | 4 +- kernel/events/uprobes.c | 2 +- kernel/sched/fair.c | 6 +- kernel/trace/trace_fprobe.c | 15 + kernel/trace/trace_probe.h | 2 +- mm/compaction.c | 3 + mm/hugetlb.c | 4 +- mm/internal.h | 5 +- mm/kasan/kasan_test_rust.rs | 3 +- mm/kmsan/hooks.c | 1 + mm/memory-failure.c | 63 +- mm/memory.c | 21 +- mm/memory_hotplug.c | 28 +- mm/page_alloc.c | 4 +- mm/userfaultfd.c | 17 +- mm/vma.c | 12 +- mm/vmalloc.c | 4 +- net/8021q/vlan.c | 3 +- net/bluetooth/mgmt.c | 5 + net/ethtool/cabletest.c | 8 +- net/ethtool/linkstate.c | 26 +- net/ethtool/netlink.c | 6 +- net/ethtool/netlink.h | 5 +- net/ethtool/phy.c | 2 +- net/ethtool/plca.c | 6 +- net/ethtool/pse-pd.c | 4 +- net/ethtool/stats.c | 18 + net/ethtool/strset.c | 2 +- net/ipv4/tcp_offload.c | 11 +- net/ipv4/udp_offload.c | 8 +- net/ipv6/ila/ila_lwt.c | 4 +- net/llc/llc_s_ac.c | 49 +- net/mac80211/ieee80211_i.h | 2 + net/mac80211/mlme.c | 1 + net/mac80211/parse.c | 164 +++- net/mptcp/pm_netlink.c | 18 +- net/wireless/nl80211.c | 5 + net/wireless/reg.c | 3 +- rust/Makefile | 94 ++- rust/bindgen_parameters | 5 + rust/bindings/bindings_helper.h | 1 + rust/bindings/lib.rs | 6 + rust/exports.c | 1 - rust/ffi.rs | 48 ++ rust/helpers/helpers.c | 1 + rust/helpers/slab.c | 6 + rust/helpers/vmalloc.c | 9 + rust/kernel/alloc.rs | 150 +++- rust/kernel/alloc/allocator.rs | 214 +++-- rust/kernel/alloc/allocator_test.rs | 95 +++ rust/kernel/alloc/box_ext.rs | 89 -- rust/kernel/alloc/kbox.rs | 456 ++++++++++ rust/kernel/alloc/kvec.rs | 913 +++++++++++++++++++++ rust/kernel/alloc/layout.rs | 91 ++ rust/kernel/alloc/vec_ext.rs | 185 ----- rust/kernel/block/mq/gen_disk.rs | 6 +- rust/kernel/block/mq/operations.rs | 18 +- rust/kernel/block/mq/raw_writer.rs | 2 +- rust/kernel/block/mq/tag_set.rs | 2 +- rust/kernel/error.rs | 82 +- rust/kernel/firmware.rs | 2 +- rust/kernel/init.rs | 127 +-- rust/kernel/init/__internal.rs | 13 +- rust/kernel/init/macros.rs | 18 +- rust/kernel/ioctl.rs | 2 +- rust/kernel/lib.rs | 5 +- rust/kernel/list.rs | 1 + rust/kernel/list/arc_field.rs | 2 +- rust/kernel/net/phy.rs | 16 +- rust/kernel/prelude.rs | 5 +- rust/kernel/print.rs | 5 +- rust/kernel/rbtree.rs | 49 +- rust/kernel/std_vendor.rs | 12 +- rust/kernel/str.rs | 46 +- rust/kernel/sync/arc.rs | 25 +- rust/kernel/sync/arc/std_vendor.rs | 2 + rust/kernel/sync/condvar.rs | 7 +- rust/kernel/sync/lock.rs | 8 +- rust/kernel/sync/lock/mutex.rs | 4 +- rust/kernel/sync/lock/spinlock.rs | 4 +- rust/kernel/sync/locked_by.rs | 2 +- rust/kernel/task.rs | 8 +- rust/kernel/time.rs | 4 +- rust/kernel/types.rs | 140 ++-- rust/kernel/uaccess.rs | 48 +- rust/kernel/workqueue.rs | 29 +- rust/macros/lib.rs | 14 +- 
rust/macros/module.rs | 8 +- rust/uapi/lib.rs | 6 + samples/rust/rust_minimal.rs | 4 +- samples/rust/rust_print.rs | 1 + scripts/Makefile.build | 4 +- scripts/generate_rust_analyzer.py | 11 +- sound/core/seq/seq_clientmgr.c | 46 +- sound/pci/hda/Kconfig | 1 + sound/pci/hda/hda_intel.c | 2 + sound/pci/hda/patch_realtek.c | 107 ++- sound/usb/usx2y/usbusx2y.c | 11 + sound/usb/usx2y/usbusx2y.h | 26 + sound/usb/usx2y/usbusx2yaudio.c | 27 - tools/testing/selftests/bpf/benchs/bench_trigger.c | 3 +- tools/testing/selftests/bpf/bpf_util.h | 9 + .../selftests/bpf/map_tests/task_storage_map.c | 3 +- .../testing/selftests/bpf/prog_tests/bpf_cookie.c | 2 +- tools/testing/selftests/bpf/prog_tests/bpf_iter.c | 6 +- .../selftests/bpf/prog_tests/cgrp_local_storage.c | 10 +- .../testing/selftests/bpf/prog_tests/core_reloc.c | 2 +- .../selftests/bpf/prog_tests/linked_funcs.c | 2 +- .../selftests/bpf/prog_tests/ns_current_pid_tgid.c | 2 +- .../selftests/bpf/prog_tests/rcu_read_lock.c | 4 +- .../selftests/bpf/prog_tests/task_local_storage.c | 8 +- .../selftests/bpf/prog_tests/uprobe_multi_test.c | 2 +- tools/testing/selftests/damon/damon_nr_regions.py | 2 + tools/testing/selftests/damon/damos_quota.py | 9 +- tools/testing/selftests/damon/damos_quota_goal.py | 3 + tools/testing/selftests/mm/hugepage-mremap.c | 2 +- tools/testing/selftests/mm/ksm_functional_tests.c | 8 +- tools/testing/selftests/mm/memfd_secret.c | 14 +- tools/testing/selftests/mm/mkdirty.c | 8 +- tools/testing/selftests/mm/mlock2.h | 1 - tools/testing/selftests/mm/protection_keys.c | 2 +- tools/testing/selftests/mm/uffd-common.c | 4 + tools/testing/selftests/mm/uffd-stress.c | 15 +- tools/testing/selftests/mm/uffd-unit-tests.c | 14 +- usr/include/Makefile | 2 +- 335 files changed, 6013 insertions(+), 2454 deletions(-)
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andrew Cooper andrew.cooper3@citrix.com
commit 14cb5d83068ecf15d2da6f7d0e9ea9edbcbc0457 upstream.
Xen doesn't offer MSR_FAM10H_MMIO_CONF_BASE to all guests. This results in the following warning:
  unchecked MSR access error: RDMSR from 0xc0010058 at rIP: 0xffffffff8101d19f (xen_do_read_msr+0x7f/0xa0)
  Call Trace:
   xen_read_msr+0x1e/0x30
   amd_get_mmconfig_range+0x2b/0x80
   quirk_amd_mmconfig_area+0x28/0x100
   pnp_fixup_device+0x39/0x50
   __pnp_add_device+0xf/0x150
   pnp_add_device+0x3d/0x100
   pnpacpi_add_device_handler+0x1f9/0x280
   acpi_ns_get_device_callback+0x104/0x1c0
   acpi_ns_walk_namespace+0x1d0/0x260
   acpi_get_devices+0x8a/0xb0
   pnpacpi_init+0x50/0x80
   do_one_initcall+0x46/0x2e0
   kernel_init_freeable+0x1da/0x2f0
   kernel_init+0x16/0x1b0
   ret_from_fork+0x30/0x50
   ret_from_fork_asm+0x1b/0x30
based on quirks for a "PNP0c01" device. Treating MMCFG as disabled is the right course of action, so no change is needed there.
This was most likely exposed by fixing the Xen MSR accessors to not be silently-safe.
Fixes: 3fac3734c43a ("xen/pv: support selecting safe/unsafe msr accesses")
Signed-off-by: Andrew Cooper andrew.cooper3@citrix.com
Signed-off-by: Ingo Molnar mingo@kernel.org
Link: https://lore.kernel.org/r/20250307002846.3026685-1-andrew.cooper3@citrix.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 arch/x86/kernel/amd_nb.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)
--- a/arch/x86/kernel/amd_nb.c
+++ b/arch/x86/kernel/amd_nb.c
@@ -405,7 +405,6 @@ bool __init early_is_amd_nb(u32 device)
 
 struct resource *amd_get_mmconfig_range(struct resource *res)
 {
-	u32 address;
 	u64 base, msr;
 	unsigned int segn_busn_bits;
 
@@ -413,13 +412,11 @@ struct resource *amd_get_mmconfig_range(
 	    boot_cpu_data.x86_vendor != X86_VENDOR_HYGON)
 		return NULL;
 
-	/* assume all cpus from fam10h have mmconfig */
-	if (boot_cpu_data.x86 < 0x10)
+	/* Assume CPUs from Fam10h have mmconfig, although not all VMs do */
+	if (boot_cpu_data.x86 < 0x10 ||
+	    rdmsrl_safe(MSR_FAM10H_MMIO_CONF_BASE, &msr))
 		return NULL;
 
-	address = MSR_FAM10H_MMIO_CONF_BASE;
-	rdmsrl(address, msr);
-
 	/* mmconfig is not enabled */
 	if (!(msr & FAM10H_MMIO_CONF_ENABLE))
 		return NULL;
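
For readers less familiar with the MSR helpers, the probe-then-use idiom this patch switches to looks roughly like the sketch below. It uses only the standard <asm/msr.h> / <asm/msr-index.h> definitions already referenced by the patch and is purely illustrative, not an additional change:

	u64 msr;

	/*
	 * rdmsrl_safe() returns non-zero if the RDMSR faults (for example
	 * because the hypervisor does not offer the MSR), instead of
	 * triggering the "unchecked MSR access error" warning that a plain
	 * rdmsrl() would produce.
	 */
	if (rdmsrl_safe(MSR_FAM10H_MMIO_CONF_BASE, &msr))
		return NULL;	/* no MSR: treat MMCFG as disabled */

	/* mmconfig is not enabled */
	if (!(msr & FAM10H_MMIO_CONF_ENABLE))
		return NULL;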
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Yutaro Ohno yutaro.ono.418@gmail.com
commit 0c5928deada15a8d075516e6e0d9ee19011bb000 upstream.
Align bullet points and improve indentation in the `Invariants` section of the `GenDisk` struct documentation for better readability.
[ Yutaro is also working on implementing the lint we suggested to catch this sort of issue in upstream Rust:
    https://github.com/rust-lang/rust-clippy/issues/13601
    https://github.com/rust-lang/rust-clippy/pull/13711
Thanks a lot! - Miguel ]
Fixes: 3253aba3408a ("rust: block: introduce `kernel::block::mq` module")
Signed-off-by: Yutaro Ohno yutaro.ono.418@gmail.com
Reviewed-by: Boqun Feng boqun.feng@gmail.com
Acked-by: Andreas Hindborg a.hindborg@kernel.org
Link: https://lore.kernel.org/r/ZxkcU5yTFCagg_lX@ohnotp
Signed-off-by: Miguel Ojeda ojeda@kernel.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 rust/kernel/block/mq/gen_disk.rs | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--- a/rust/kernel/block/mq/gen_disk.rs
+++ b/rust/kernel/block/mq/gen_disk.rs
@@ -174,9 +174,9 @@ impl GenDiskBuilder {
 ///
 /// # Invariants
 ///
-///  - `gendisk` must always point to an initialized and valid `struct gendisk`.
-///  - `gendisk` was added to the VFS through a call to
-///    `bindings::device_add_disk`.
+/// - `gendisk` must always point to an initialized and valid `struct gendisk`.
+/// - `gendisk` was added to the VFS through a call to
+///   `bindings::device_add_disk`.
 pub struct GenDisk<T: Operations> {
     _tagset: Arc<TagSet<T>>,
     gendisk: *mut bindings::gendisk,
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jani Nikula jani.nikula@intel.com
[ Upstream commit 7c05c58c15d49b75eefaa24154cce771f1db955b ]
struct intel_display will replace struct drm_i915_private as the main device pointer for display code. Switch ICL DSI code over to it.
Reviewed-by: Rodrigo Vivi rodrigo.vivi@intel.com
Signed-off-by: Jani Nikula jani.nikula@intel.com
Link: https://patchwork.freedesktop.org/patch/msgid/f62a3616ef15e02cf19c5d041656fc...
Stable-dep-of: 879f70382ff3 ("drm/i915/dsi: Use TRANS_DDI_FUNC_CTL's own port width macro")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/gpu/drm/i915/display/icl_dsi.c   | 444 ++++++++++++-----------
 drivers/gpu/drm/i915/display/icl_dsi.h   |   4 +-
 drivers/gpu/drm/i915/display/intel_ddi.c |   2 +-
 3 files changed, 227 insertions(+), 223 deletions(-)
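
The mechanical shape of the conversion, lifted from the hunks below for quick reference (illustrative only; the actual changes are in the diff that follows):

	/* Before: display code keyed off struct drm_i915_private. */
	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
	drm_err(&dev_priv->drm, "DSI header credits not released\n");
	tmp = intel_de_read(dev_priv, DSI_CMD_TXCTL(dsi_trans));

	/* After: struct intel_display becomes the main device pointer. */
	struct intel_display *display = to_intel_display(encoder);
	drm_err(display->drm, "DSI header credits not released\n");
	tmp = intel_de_read(display, DSI_CMD_TXCTL(dsi_trans));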
diff --git a/drivers/gpu/drm/i915/display/icl_dsi.c b/drivers/gpu/drm/i915/display/icl_dsi.c index 293efc1f841df..e2a88e5a97479 100644 --- a/drivers/gpu/drm/i915/display/icl_dsi.c +++ b/drivers/gpu/drm/i915/display/icl_dsi.c @@ -50,38 +50,38 @@ #include "skl_scaler.h" #include "skl_universal_plane.h"
-static int header_credits_available(struct drm_i915_private *dev_priv, +static int header_credits_available(struct intel_display *display, enum transcoder dsi_trans) { - return (intel_de_read(dev_priv, DSI_CMD_TXCTL(dsi_trans)) & FREE_HEADER_CREDIT_MASK) + return (intel_de_read(display, DSI_CMD_TXCTL(dsi_trans)) & FREE_HEADER_CREDIT_MASK) >> FREE_HEADER_CREDIT_SHIFT; }
-static int payload_credits_available(struct drm_i915_private *dev_priv, +static int payload_credits_available(struct intel_display *display, enum transcoder dsi_trans) { - return (intel_de_read(dev_priv, DSI_CMD_TXCTL(dsi_trans)) & FREE_PLOAD_CREDIT_MASK) + return (intel_de_read(display, DSI_CMD_TXCTL(dsi_trans)) & FREE_PLOAD_CREDIT_MASK) >> FREE_PLOAD_CREDIT_SHIFT; }
-static bool wait_for_header_credits(struct drm_i915_private *dev_priv, +static bool wait_for_header_credits(struct intel_display *display, enum transcoder dsi_trans, int hdr_credit) { - if (wait_for_us(header_credits_available(dev_priv, dsi_trans) >= + if (wait_for_us(header_credits_available(display, dsi_trans) >= hdr_credit, 100)) { - drm_err(&dev_priv->drm, "DSI header credits not released\n"); + drm_err(display->drm, "DSI header credits not released\n"); return false; }
return true; }
-static bool wait_for_payload_credits(struct drm_i915_private *dev_priv, +static bool wait_for_payload_credits(struct intel_display *display, enum transcoder dsi_trans, int payld_credit) { - if (wait_for_us(payload_credits_available(dev_priv, dsi_trans) >= + if (wait_for_us(payload_credits_available(display, dsi_trans) >= payld_credit, 100)) { - drm_err(&dev_priv->drm, "DSI payload credits not released\n"); + drm_err(display->drm, "DSI payload credits not released\n"); return false; }
@@ -98,7 +98,7 @@ static enum transcoder dsi_port_to_transcoder(enum port port)
static void wait_for_cmds_dispatched_to_panel(struct intel_encoder *encoder) { - struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); + struct intel_display *display = to_intel_display(encoder); struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder); struct mipi_dsi_device *dsi; enum port port; @@ -108,8 +108,8 @@ static void wait_for_cmds_dispatched_to_panel(struct intel_encoder *encoder) /* wait for header/payload credits to be released */ for_each_dsi_port(port, intel_dsi->ports) { dsi_trans = dsi_port_to_transcoder(port); - wait_for_header_credits(dev_priv, dsi_trans, MAX_HEADER_CREDIT); - wait_for_payload_credits(dev_priv, dsi_trans, MAX_PLOAD_CREDIT); + wait_for_header_credits(display, dsi_trans, MAX_HEADER_CREDIT); + wait_for_payload_credits(display, dsi_trans, MAX_PLOAD_CREDIT); }
/* send nop DCS command */ @@ -119,22 +119,22 @@ static void wait_for_cmds_dispatched_to_panel(struct intel_encoder *encoder) dsi->channel = 0; ret = mipi_dsi_dcs_nop(dsi); if (ret < 0) - drm_err(&dev_priv->drm, + drm_err(display->drm, "error sending DCS NOP command\n"); }
/* wait for header credits to be released */ for_each_dsi_port(port, intel_dsi->ports) { dsi_trans = dsi_port_to_transcoder(port); - wait_for_header_credits(dev_priv, dsi_trans, MAX_HEADER_CREDIT); + wait_for_header_credits(display, dsi_trans, MAX_HEADER_CREDIT); }
/* wait for LP TX in progress bit to be cleared */ for_each_dsi_port(port, intel_dsi->ports) { dsi_trans = dsi_port_to_transcoder(port); - if (wait_for_us(!(intel_de_read(dev_priv, DSI_LP_MSG(dsi_trans)) & + if (wait_for_us(!(intel_de_read(display, DSI_LP_MSG(dsi_trans)) & LPTX_IN_PROGRESS), 20)) - drm_err(&dev_priv->drm, "LPTX bit not cleared\n"); + drm_err(display->drm, "LPTX bit not cleared\n"); } }
@@ -142,7 +142,7 @@ static int dsi_send_pkt_payld(struct intel_dsi_host *host, const struct mipi_dsi_packet *packet) { struct intel_dsi *intel_dsi = host->intel_dsi; - struct drm_i915_private *i915 = to_i915(intel_dsi->base.base.dev); + struct intel_display *display = to_intel_display(&intel_dsi->base); enum transcoder dsi_trans = dsi_port_to_transcoder(host->port); const u8 *data = packet->payload; u32 len = packet->payload_length; @@ -150,20 +150,20 @@ static int dsi_send_pkt_payld(struct intel_dsi_host *host,
/* payload queue can accept *256 bytes*, check limit */ if (len > MAX_PLOAD_CREDIT * 4) { - drm_err(&i915->drm, "payload size exceeds max queue limit\n"); + drm_err(display->drm, "payload size exceeds max queue limit\n"); return -EINVAL; }
for (i = 0; i < len; i += 4) { u32 tmp = 0;
- if (!wait_for_payload_credits(i915, dsi_trans, 1)) + if (!wait_for_payload_credits(display, dsi_trans, 1)) return -EBUSY;
for (j = 0; j < min_t(u32, len - i, 4); j++) tmp |= *data++ << 8 * j;
- intel_de_write(i915, DSI_CMD_TXPYLD(dsi_trans), tmp); + intel_de_write(display, DSI_CMD_TXPYLD(dsi_trans), tmp); }
return 0; @@ -174,14 +174,14 @@ static int dsi_send_pkt_hdr(struct intel_dsi_host *host, bool enable_lpdt) { struct intel_dsi *intel_dsi = host->intel_dsi; - struct drm_i915_private *dev_priv = to_i915(intel_dsi->base.base.dev); + struct intel_display *display = to_intel_display(&intel_dsi->base); enum transcoder dsi_trans = dsi_port_to_transcoder(host->port); u32 tmp;
- if (!wait_for_header_credits(dev_priv, dsi_trans, 1)) + if (!wait_for_header_credits(display, dsi_trans, 1)) return -EBUSY;
- tmp = intel_de_read(dev_priv, DSI_CMD_TXHDR(dsi_trans)); + tmp = intel_de_read(display, DSI_CMD_TXHDR(dsi_trans));
if (packet->payload) tmp |= PAYLOAD_PRESENT; @@ -200,15 +200,14 @@ static int dsi_send_pkt_hdr(struct intel_dsi_host *host, tmp |= ((packet->header[0] & DT_MASK) << DT_SHIFT); tmp |= (packet->header[1] << PARAM_WC_LOWER_SHIFT); tmp |= (packet->header[2] << PARAM_WC_UPPER_SHIFT); - intel_de_write(dev_priv, DSI_CMD_TXHDR(dsi_trans), tmp); + intel_de_write(display, DSI_CMD_TXHDR(dsi_trans), tmp);
return 0; }
void icl_dsi_frame_update(struct intel_crtc_state *crtc_state) { - struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); - struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); + struct intel_display *display = to_intel_display(crtc_state); u32 mode_flags; enum port port;
@@ -226,12 +225,13 @@ void icl_dsi_frame_update(struct intel_crtc_state *crtc_state) else return;
- intel_de_rmw(dev_priv, DSI_CMD_FRMCTL(port), 0, DSI_FRAME_UPDATE_REQUEST); + intel_de_rmw(display, DSI_CMD_FRMCTL(port), 0, + DSI_FRAME_UPDATE_REQUEST); }
static void dsi_program_swing_and_deemphasis(struct intel_encoder *encoder) { - struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); + struct intel_display *display = to_intel_display(encoder); struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder); enum phy phy; u32 tmp, mask, val; @@ -245,31 +245,31 @@ static void dsi_program_swing_and_deemphasis(struct intel_encoder *encoder) mask = SCALING_MODE_SEL_MASK | RTERM_SELECT_MASK; val = SCALING_MODE_SEL(0x2) | TAP2_DISABLE | TAP3_DISABLE | RTERM_SELECT(0x6); - tmp = intel_de_read(dev_priv, ICL_PORT_TX_DW5_LN(0, phy)); + tmp = intel_de_read(display, ICL_PORT_TX_DW5_LN(0, phy)); tmp &= ~mask; tmp |= val; - intel_de_write(dev_priv, ICL_PORT_TX_DW5_GRP(phy), tmp); - intel_de_rmw(dev_priv, ICL_PORT_TX_DW5_AUX(phy), mask, val); + intel_de_write(display, ICL_PORT_TX_DW5_GRP(phy), tmp); + intel_de_rmw(display, ICL_PORT_TX_DW5_AUX(phy), mask, val);
mask = SWING_SEL_LOWER_MASK | SWING_SEL_UPPER_MASK | RCOMP_SCALAR_MASK; val = SWING_SEL_UPPER(0x2) | SWING_SEL_LOWER(0x2) | RCOMP_SCALAR(0x98); - tmp = intel_de_read(dev_priv, ICL_PORT_TX_DW2_LN(0, phy)); + tmp = intel_de_read(display, ICL_PORT_TX_DW2_LN(0, phy)); tmp &= ~mask; tmp |= val; - intel_de_write(dev_priv, ICL_PORT_TX_DW2_GRP(phy), tmp); - intel_de_rmw(dev_priv, ICL_PORT_TX_DW2_AUX(phy), mask, val); + intel_de_write(display, ICL_PORT_TX_DW2_GRP(phy), tmp); + intel_de_rmw(display, ICL_PORT_TX_DW2_AUX(phy), mask, val);
mask = POST_CURSOR_1_MASK | POST_CURSOR_2_MASK | CURSOR_COEFF_MASK; val = POST_CURSOR_1(0x0) | POST_CURSOR_2(0x0) | CURSOR_COEFF(0x3f); - intel_de_rmw(dev_priv, ICL_PORT_TX_DW4_AUX(phy), mask, val); + intel_de_rmw(display, ICL_PORT_TX_DW4_AUX(phy), mask, val);
/* Bspec: must not use GRP register for write */ for (lane = 0; lane <= 3; lane++) - intel_de_rmw(dev_priv, ICL_PORT_TX_DW4_LN(lane, phy), + intel_de_rmw(display, ICL_PORT_TX_DW4_LN(lane, phy), mask, val); } } @@ -277,13 +277,13 @@ static void dsi_program_swing_and_deemphasis(struct intel_encoder *encoder) static void configure_dual_link_mode(struct intel_encoder *encoder, const struct intel_crtc_state *pipe_config) { - struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); + struct intel_display *display = to_intel_display(encoder); struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder); i915_reg_t dss_ctl1_reg, dss_ctl2_reg; u32 dss_ctl1;
/* FIXME: Move all DSS handling to intel_vdsc.c */ - if (DISPLAY_VER(dev_priv) >= 12) { + if (DISPLAY_VER(display) >= 12) { struct intel_crtc *crtc = to_intel_crtc(pipe_config->uapi.crtc);
dss_ctl1_reg = ICL_PIPE_DSS_CTL1(crtc->pipe); @@ -293,7 +293,7 @@ static void configure_dual_link_mode(struct intel_encoder *encoder, dss_ctl2_reg = DSS_CTL2; }
- dss_ctl1 = intel_de_read(dev_priv, dss_ctl1_reg); + dss_ctl1 = intel_de_read(display, dss_ctl1_reg); dss_ctl1 |= SPLITTER_ENABLE; dss_ctl1 &= ~OVERLAP_PIXELS_MASK; dss_ctl1 |= OVERLAP_PIXELS(intel_dsi->pixel_overlap); @@ -308,19 +308,19 @@ static void configure_dual_link_mode(struct intel_encoder *encoder, dl_buffer_depth = hactive / 2 + intel_dsi->pixel_overlap;
if (dl_buffer_depth > MAX_DL_BUFFER_TARGET_DEPTH) - drm_err(&dev_priv->drm, + drm_err(display->drm, "DL buffer depth exceed max value\n");
dss_ctl1 &= ~LEFT_DL_BUF_TARGET_DEPTH_MASK; dss_ctl1 |= LEFT_DL_BUF_TARGET_DEPTH(dl_buffer_depth); - intel_de_rmw(dev_priv, dss_ctl2_reg, RIGHT_DL_BUF_TARGET_DEPTH_MASK, + intel_de_rmw(display, dss_ctl2_reg, RIGHT_DL_BUF_TARGET_DEPTH_MASK, RIGHT_DL_BUF_TARGET_DEPTH(dl_buffer_depth)); } else { /* Interleave */ dss_ctl1 |= DUAL_LINK_MODE_INTERLEAVE; }
- intel_de_write(dev_priv, dss_ctl1_reg, dss_ctl1); + intel_de_write(display, dss_ctl1_reg, dss_ctl1); }
/* aka DSI 8X clock */ @@ -341,6 +341,7 @@ static int afe_clk(struct intel_encoder *encoder, static void gen11_dsi_program_esc_clk_div(struct intel_encoder *encoder, const struct intel_crtc_state *crtc_state) { + struct intel_display *display = to_intel_display(encoder); struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder); enum port port; @@ -360,33 +361,34 @@ static void gen11_dsi_program_esc_clk_div(struct intel_encoder *encoder, }
for_each_dsi_port(port, intel_dsi->ports) { - intel_de_write(dev_priv, ICL_DSI_ESC_CLK_DIV(port), + intel_de_write(display, ICL_DSI_ESC_CLK_DIV(port), esc_clk_div_m & ICL_ESC_CLK_DIV_MASK); - intel_de_posting_read(dev_priv, ICL_DSI_ESC_CLK_DIV(port)); + intel_de_posting_read(display, ICL_DSI_ESC_CLK_DIV(port)); }
for_each_dsi_port(port, intel_dsi->ports) { - intel_de_write(dev_priv, ICL_DPHY_ESC_CLK_DIV(port), + intel_de_write(display, ICL_DPHY_ESC_CLK_DIV(port), esc_clk_div_m & ICL_ESC_CLK_DIV_MASK); - intel_de_posting_read(dev_priv, ICL_DPHY_ESC_CLK_DIV(port)); + intel_de_posting_read(display, ICL_DPHY_ESC_CLK_DIV(port)); }
if (IS_ALDERLAKE_S(dev_priv) || IS_ALDERLAKE_P(dev_priv)) { for_each_dsi_port(port, intel_dsi->ports) { - intel_de_write(dev_priv, ADL_MIPIO_DW(port, 8), + intel_de_write(display, ADL_MIPIO_DW(port, 8), esc_clk_div_m_phy & TX_ESC_CLK_DIV_PHY); - intel_de_posting_read(dev_priv, ADL_MIPIO_DW(port, 8)); + intel_de_posting_read(display, ADL_MIPIO_DW(port, 8)); } } }
-static void get_dsi_io_power_domains(struct drm_i915_private *dev_priv, - struct intel_dsi *intel_dsi) +static void get_dsi_io_power_domains(struct intel_dsi *intel_dsi) { + struct intel_display *display = to_intel_display(&intel_dsi->base); + struct drm_i915_private *dev_priv = to_i915(display->drm); enum port port;
for_each_dsi_port(port, intel_dsi->ports) { - drm_WARN_ON(&dev_priv->drm, intel_dsi->io_wakeref[port]); + drm_WARN_ON(display->drm, intel_dsi->io_wakeref[port]); intel_dsi->io_wakeref[port] = intel_display_power_get(dev_priv, port == PORT_A ? @@ -397,15 +399,15 @@ static void get_dsi_io_power_domains(struct drm_i915_private *dev_priv,
static void gen11_dsi_enable_io_power(struct intel_encoder *encoder) { - struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); + struct intel_display *display = to_intel_display(encoder); struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder); enum port port;
for_each_dsi_port(port, intel_dsi->ports) - intel_de_rmw(dev_priv, ICL_DSI_IO_MODECTL(port), + intel_de_rmw(display, ICL_DSI_IO_MODECTL(port), 0, COMBO_PHY_MODE_DSI);
- get_dsi_io_power_domains(dev_priv, intel_dsi); + get_dsi_io_power_domains(intel_dsi); }
static void gen11_dsi_power_up_lanes(struct intel_encoder *encoder) @@ -421,6 +423,7 @@ static void gen11_dsi_power_up_lanes(struct intel_encoder *encoder)
static void gen11_dsi_config_phy_lanes_sequence(struct intel_encoder *encoder) { + struct intel_display *display = to_intel_display(encoder); struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder); enum phy phy; @@ -429,32 +432,33 @@ static void gen11_dsi_config_phy_lanes_sequence(struct intel_encoder *encoder)
/* Step 4b(i) set loadgen select for transmit and aux lanes */ for_each_dsi_phy(phy, intel_dsi->phys) { - intel_de_rmw(dev_priv, ICL_PORT_TX_DW4_AUX(phy), LOADGEN_SELECT, 0); + intel_de_rmw(display, ICL_PORT_TX_DW4_AUX(phy), + LOADGEN_SELECT, 0); for (lane = 0; lane <= 3; lane++) - intel_de_rmw(dev_priv, ICL_PORT_TX_DW4_LN(lane, phy), + intel_de_rmw(display, ICL_PORT_TX_DW4_LN(lane, phy), LOADGEN_SELECT, lane != 2 ? LOADGEN_SELECT : 0); }
/* Step 4b(ii) set latency optimization for transmit and aux lanes */ for_each_dsi_phy(phy, intel_dsi->phys) { - intel_de_rmw(dev_priv, ICL_PORT_TX_DW2_AUX(phy), + intel_de_rmw(display, ICL_PORT_TX_DW2_AUX(phy), FRC_LATENCY_OPTIM_MASK, FRC_LATENCY_OPTIM_VAL(0x5)); - tmp = intel_de_read(dev_priv, ICL_PORT_TX_DW2_LN(0, phy)); + tmp = intel_de_read(display, ICL_PORT_TX_DW2_LN(0, phy)); tmp &= ~FRC_LATENCY_OPTIM_MASK; tmp |= FRC_LATENCY_OPTIM_VAL(0x5); - intel_de_write(dev_priv, ICL_PORT_TX_DW2_GRP(phy), tmp); + intel_de_write(display, ICL_PORT_TX_DW2_GRP(phy), tmp);
/* For EHL, TGL, set latency optimization for PCS_DW1 lanes */ if (IS_JASPERLAKE(dev_priv) || IS_ELKHARTLAKE(dev_priv) || - (DISPLAY_VER(dev_priv) >= 12)) { - intel_de_rmw(dev_priv, ICL_PORT_PCS_DW1_AUX(phy), + (DISPLAY_VER(display) >= 12)) { + intel_de_rmw(display, ICL_PORT_PCS_DW1_AUX(phy), LATENCY_OPTIM_MASK, LATENCY_OPTIM_VAL(0));
- tmp = intel_de_read(dev_priv, + tmp = intel_de_read(display, ICL_PORT_PCS_DW1_LN(0, phy)); tmp &= ~LATENCY_OPTIM_MASK; tmp |= LATENCY_OPTIM_VAL(0x1); - intel_de_write(dev_priv, ICL_PORT_PCS_DW1_GRP(phy), + intel_de_write(display, ICL_PORT_PCS_DW1_GRP(phy), tmp); } } @@ -463,17 +467,17 @@ static void gen11_dsi_config_phy_lanes_sequence(struct intel_encoder *encoder)
static void gen11_dsi_voltage_swing_program_seq(struct intel_encoder *encoder) { - struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); + struct intel_display *display = to_intel_display(encoder); struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder); u32 tmp; enum phy phy;
/* clear common keeper enable bit */ for_each_dsi_phy(phy, intel_dsi->phys) { - tmp = intel_de_read(dev_priv, ICL_PORT_PCS_DW1_LN(0, phy)); + tmp = intel_de_read(display, ICL_PORT_PCS_DW1_LN(0, phy)); tmp &= ~COMMON_KEEPER_EN; - intel_de_write(dev_priv, ICL_PORT_PCS_DW1_GRP(phy), tmp); - intel_de_rmw(dev_priv, ICL_PORT_PCS_DW1_AUX(phy), COMMON_KEEPER_EN, 0); + intel_de_write(display, ICL_PORT_PCS_DW1_GRP(phy), tmp); + intel_de_rmw(display, ICL_PORT_PCS_DW1_AUX(phy), COMMON_KEEPER_EN, 0); }
/* @@ -482,14 +486,15 @@ static void gen11_dsi_voltage_swing_program_seq(struct intel_encoder *encoder) * as part of lane phy sequence configuration */ for_each_dsi_phy(phy, intel_dsi->phys) - intel_de_rmw(dev_priv, ICL_PORT_CL_DW5(phy), 0, SUS_CLOCK_CONFIG); + intel_de_rmw(display, ICL_PORT_CL_DW5(phy), 0, + SUS_CLOCK_CONFIG);
/* Clear training enable to change swing values */ for_each_dsi_phy(phy, intel_dsi->phys) { - tmp = intel_de_read(dev_priv, ICL_PORT_TX_DW5_LN(0, phy)); + tmp = intel_de_read(display, ICL_PORT_TX_DW5_LN(0, phy)); tmp &= ~TX_TRAINING_EN; - intel_de_write(dev_priv, ICL_PORT_TX_DW5_GRP(phy), tmp); - intel_de_rmw(dev_priv, ICL_PORT_TX_DW5_AUX(phy), TX_TRAINING_EN, 0); + intel_de_write(display, ICL_PORT_TX_DW5_GRP(phy), tmp); + intel_de_rmw(display, ICL_PORT_TX_DW5_AUX(phy), TX_TRAINING_EN, 0); }
/* Program swing and de-emphasis */ @@ -497,26 +502,26 @@ static void gen11_dsi_voltage_swing_program_seq(struct intel_encoder *encoder)
/* Set training enable to trigger update */ for_each_dsi_phy(phy, intel_dsi->phys) { - tmp = intel_de_read(dev_priv, ICL_PORT_TX_DW5_LN(0, phy)); + tmp = intel_de_read(display, ICL_PORT_TX_DW5_LN(0, phy)); tmp |= TX_TRAINING_EN; - intel_de_write(dev_priv, ICL_PORT_TX_DW5_GRP(phy), tmp); - intel_de_rmw(dev_priv, ICL_PORT_TX_DW5_AUX(phy), 0, TX_TRAINING_EN); + intel_de_write(display, ICL_PORT_TX_DW5_GRP(phy), tmp); + intel_de_rmw(display, ICL_PORT_TX_DW5_AUX(phy), 0, TX_TRAINING_EN); } }
static void gen11_dsi_enable_ddi_buffer(struct intel_encoder *encoder) { - struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); + struct intel_display *display = to_intel_display(encoder); struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder); enum port port;
for_each_dsi_port(port, intel_dsi->ports) { - intel_de_rmw(dev_priv, DDI_BUF_CTL(port), 0, DDI_BUF_CTL_ENABLE); + intel_de_rmw(display, DDI_BUF_CTL(port), 0, DDI_BUF_CTL_ENABLE);
- if (wait_for_us(!(intel_de_read(dev_priv, DDI_BUF_CTL(port)) & + if (wait_for_us(!(intel_de_read(display, DDI_BUF_CTL(port)) & DDI_BUF_IS_IDLE), 500)) - drm_err(&dev_priv->drm, "DDI port:%c buffer idle\n", + drm_err(display->drm, "DDI port:%c buffer idle\n", port_name(port)); } } @@ -525,6 +530,7 @@ static void gen11_dsi_setup_dphy_timings(struct intel_encoder *encoder, const struct intel_crtc_state *crtc_state) { + struct intel_display *display = to_intel_display(encoder); struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder); enum port port; @@ -532,12 +538,12 @@ gen11_dsi_setup_dphy_timings(struct intel_encoder *encoder,
/* Program DPHY clock lanes timings */ for_each_dsi_port(port, intel_dsi->ports) - intel_de_write(dev_priv, DPHY_CLK_TIMING_PARAM(port), + intel_de_write(display, DPHY_CLK_TIMING_PARAM(port), intel_dsi->dphy_reg);
/* Program DPHY data lanes timings */ for_each_dsi_port(port, intel_dsi->ports) - intel_de_write(dev_priv, DPHY_DATA_TIMING_PARAM(port), + intel_de_write(display, DPHY_DATA_TIMING_PARAM(port), intel_dsi->dphy_data_lane_reg);
/* @@ -546,10 +552,10 @@ gen11_dsi_setup_dphy_timings(struct intel_encoder *encoder, * a value '0' inside TA_PARAM_REGISTERS otherwise * leave all fields at HW default values. */ - if (DISPLAY_VER(dev_priv) == 11) { + if (DISPLAY_VER(display) == 11) { if (afe_clk(encoder, crtc_state) <= 800000) { for_each_dsi_port(port, intel_dsi->ports) - intel_de_rmw(dev_priv, DPHY_TA_TIMING_PARAM(port), + intel_de_rmw(display, DPHY_TA_TIMING_PARAM(port), TA_SURE_MASK, TA_SURE_OVERRIDE | TA_SURE(0)); } @@ -557,7 +563,7 @@ gen11_dsi_setup_dphy_timings(struct intel_encoder *encoder,
if (IS_JASPERLAKE(dev_priv) || IS_ELKHARTLAKE(dev_priv)) { for_each_dsi_phy(phy, intel_dsi->phys) - intel_de_rmw(dev_priv, ICL_DPHY_CHKN(phy), + intel_de_rmw(display, ICL_DPHY_CHKN(phy), 0, ICL_DPHY_CHKN_AFE_OVER_PPI_STRAP); } } @@ -566,30 +572,30 @@ static void gen11_dsi_setup_timings(struct intel_encoder *encoder, const struct intel_crtc_state *crtc_state) { - struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); + struct intel_display *display = to_intel_display(encoder); struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder); enum port port;
/* Program T-INIT master registers */ for_each_dsi_port(port, intel_dsi->ports) - intel_de_rmw(dev_priv, ICL_DSI_T_INIT_MASTER(port), + intel_de_rmw(display, ICL_DSI_T_INIT_MASTER(port), DSI_T_INIT_MASTER_MASK, intel_dsi->init_count);
/* shadow register inside display core */ for_each_dsi_port(port, intel_dsi->ports) - intel_de_write(dev_priv, DSI_CLK_TIMING_PARAM(port), + intel_de_write(display, DSI_CLK_TIMING_PARAM(port), intel_dsi->dphy_reg);
/* shadow register inside display core */ for_each_dsi_port(port, intel_dsi->ports) - intel_de_write(dev_priv, DSI_DATA_TIMING_PARAM(port), + intel_de_write(display, DSI_DATA_TIMING_PARAM(port), intel_dsi->dphy_data_lane_reg);
/* shadow register inside display core */ - if (DISPLAY_VER(dev_priv) == 11) { + if (DISPLAY_VER(display) == 11) { if (afe_clk(encoder, crtc_state) <= 800000) { for_each_dsi_port(port, intel_dsi->ports) { - intel_de_rmw(dev_priv, DSI_TA_TIMING_PARAM(port), + intel_de_rmw(display, DSI_TA_TIMING_PARAM(port), TA_SURE_MASK, TA_SURE_OVERRIDE | TA_SURE(0)); } @@ -599,45 +605,45 @@ gen11_dsi_setup_timings(struct intel_encoder *encoder,
static void gen11_dsi_gate_clocks(struct intel_encoder *encoder) { - struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); + struct intel_display *display = to_intel_display(encoder); struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder); u32 tmp; enum phy phy;
- mutex_lock(&dev_priv->display.dpll.lock); - tmp = intel_de_read(dev_priv, ICL_DPCLKA_CFGCR0); + mutex_lock(&display->dpll.lock); + tmp = intel_de_read(display, ICL_DPCLKA_CFGCR0); for_each_dsi_phy(phy, intel_dsi->phys) tmp |= ICL_DPCLKA_CFGCR0_DDI_CLK_OFF(phy);
- intel_de_write(dev_priv, ICL_DPCLKA_CFGCR0, tmp); - mutex_unlock(&dev_priv->display.dpll.lock); + intel_de_write(display, ICL_DPCLKA_CFGCR0, tmp); + mutex_unlock(&display->dpll.lock); }
static void gen11_dsi_ungate_clocks(struct intel_encoder *encoder) { - struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); + struct intel_display *display = to_intel_display(encoder); struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder); u32 tmp; enum phy phy;
- mutex_lock(&dev_priv->display.dpll.lock); - tmp = intel_de_read(dev_priv, ICL_DPCLKA_CFGCR0); + mutex_lock(&display->dpll.lock); + tmp = intel_de_read(display, ICL_DPCLKA_CFGCR0); for_each_dsi_phy(phy, intel_dsi->phys) tmp &= ~ICL_DPCLKA_CFGCR0_DDI_CLK_OFF(phy);
- intel_de_write(dev_priv, ICL_DPCLKA_CFGCR0, tmp); - mutex_unlock(&dev_priv->display.dpll.lock); + intel_de_write(display, ICL_DPCLKA_CFGCR0, tmp); + mutex_unlock(&display->dpll.lock); }
static bool gen11_dsi_is_clock_enabled(struct intel_encoder *encoder) { - struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); + struct intel_display *display = to_intel_display(encoder); struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder); bool clock_enabled = false; enum phy phy; u32 tmp;
- tmp = intel_de_read(dev_priv, ICL_DPCLKA_CFGCR0); + tmp = intel_de_read(display, ICL_DPCLKA_CFGCR0);
for_each_dsi_phy(phy, intel_dsi->phys) { if (!(tmp & ICL_DPCLKA_CFGCR0_DDI_CLK_OFF(phy))) @@ -650,36 +656,36 @@ static bool gen11_dsi_is_clock_enabled(struct intel_encoder *encoder) static void gen11_dsi_map_pll(struct intel_encoder *encoder, const struct intel_crtc_state *crtc_state) { - struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); + struct intel_display *display = to_intel_display(encoder); struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder); struct intel_shared_dpll *pll = crtc_state->shared_dpll; enum phy phy; u32 val;
- mutex_lock(&dev_priv->display.dpll.lock); + mutex_lock(&display->dpll.lock);
- val = intel_de_read(dev_priv, ICL_DPCLKA_CFGCR0); + val = intel_de_read(display, ICL_DPCLKA_CFGCR0); for_each_dsi_phy(phy, intel_dsi->phys) { val &= ~ICL_DPCLKA_CFGCR0_DDI_CLK_SEL_MASK(phy); val |= ICL_DPCLKA_CFGCR0_DDI_CLK_SEL(pll->info->id, phy); } - intel_de_write(dev_priv, ICL_DPCLKA_CFGCR0, val); + intel_de_write(display, ICL_DPCLKA_CFGCR0, val);
for_each_dsi_phy(phy, intel_dsi->phys) { val &= ~ICL_DPCLKA_CFGCR0_DDI_CLK_OFF(phy); } - intel_de_write(dev_priv, ICL_DPCLKA_CFGCR0, val); + intel_de_write(display, ICL_DPCLKA_CFGCR0, val);
- intel_de_posting_read(dev_priv, ICL_DPCLKA_CFGCR0); + intel_de_posting_read(display, ICL_DPCLKA_CFGCR0);
- mutex_unlock(&dev_priv->display.dpll.lock); + mutex_unlock(&display->dpll.lock); }
static void gen11_dsi_configure_transcoder(struct intel_encoder *encoder, const struct intel_crtc_state *pipe_config) { - struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); + struct intel_display *display = to_intel_display(encoder); struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder); struct intel_crtc *crtc = to_intel_crtc(pipe_config->uapi.crtc); enum pipe pipe = crtc->pipe; @@ -689,7 +695,7 @@ gen11_dsi_configure_transcoder(struct intel_encoder *encoder,
for_each_dsi_port(port, intel_dsi->ports) { dsi_trans = dsi_port_to_transcoder(port); - tmp = intel_de_read(dev_priv, DSI_TRANS_FUNC_CONF(dsi_trans)); + tmp = intel_de_read(display, DSI_TRANS_FUNC_CONF(dsi_trans));
if (intel_dsi->eotp_pkt) tmp &= ~EOTP_DISABLED; @@ -745,7 +751,7 @@ gen11_dsi_configure_transcoder(struct intel_encoder *encoder, } }
- if (DISPLAY_VER(dev_priv) >= 12) { + if (DISPLAY_VER(display) >= 12) { if (is_vid_mode(intel_dsi)) tmp |= BLANKING_PACKET_ENABLE; } @@ -778,15 +784,15 @@ gen11_dsi_configure_transcoder(struct intel_encoder *encoder, tmp |= TE_SOURCE_GPIO; }
- intel_de_write(dev_priv, DSI_TRANS_FUNC_CONF(dsi_trans), tmp); + intel_de_write(display, DSI_TRANS_FUNC_CONF(dsi_trans), tmp); }
/* enable port sync mode if dual link */ if (intel_dsi->dual_link) { for_each_dsi_port(port, intel_dsi->ports) { dsi_trans = dsi_port_to_transcoder(port); - intel_de_rmw(dev_priv, - TRANS_DDI_FUNC_CTL2(dev_priv, dsi_trans), + intel_de_rmw(display, + TRANS_DDI_FUNC_CTL2(display, dsi_trans), 0, PORT_SYNC_MODE_ENABLE); }
@@ -798,8 +804,8 @@ gen11_dsi_configure_transcoder(struct intel_encoder *encoder, dsi_trans = dsi_port_to_transcoder(port);
/* select data lane width */ - tmp = intel_de_read(dev_priv, - TRANS_DDI_FUNC_CTL(dev_priv, dsi_trans)); + tmp = intel_de_read(display, + TRANS_DDI_FUNC_CTL(display, dsi_trans)); tmp &= ~DDI_PORT_WIDTH_MASK; tmp |= DDI_PORT_WIDTH(intel_dsi->lane_count);
@@ -825,16 +831,16 @@ gen11_dsi_configure_transcoder(struct intel_encoder *encoder,
/* enable DDI buffer */ tmp |= TRANS_DDI_FUNC_ENABLE; - intel_de_write(dev_priv, - TRANS_DDI_FUNC_CTL(dev_priv, dsi_trans), tmp); + intel_de_write(display, + TRANS_DDI_FUNC_CTL(display, dsi_trans), tmp); }
/* wait for link ready */ for_each_dsi_port(port, intel_dsi->ports) { dsi_trans = dsi_port_to_transcoder(port); - if (wait_for_us((intel_de_read(dev_priv, DSI_TRANS_FUNC_CONF(dsi_trans)) & + if (wait_for_us((intel_de_read(display, DSI_TRANS_FUNC_CONF(dsi_trans)) & LINK_READY), 2500)) - drm_err(&dev_priv->drm, "DSI link not ready\n"); + drm_err(display->drm, "DSI link not ready\n"); } }
@@ -842,7 +848,7 @@ static void gen11_dsi_set_transcoder_timings(struct intel_encoder *encoder, const struct intel_crtc_state *crtc_state) { - struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); + struct intel_display *display = to_intel_display(encoder); struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder); const struct drm_display_mode *adjusted_mode = &crtc_state->hw.adjusted_mode; @@ -909,17 +915,17 @@ gen11_dsi_set_transcoder_timings(struct intel_encoder *encoder,
/* minimum hactive as per bspec: 256 pixels */ if (adjusted_mode->crtc_hdisplay < 256) - drm_err(&dev_priv->drm, "hactive is less then 256 pixels\n"); + drm_err(display->drm, "hactive is less then 256 pixels\n");
/* if RGB666 format, then hactive must be multiple of 4 pixels */ if (intel_dsi->pixel_format == MIPI_DSI_FMT_RGB666 && hactive % 4 != 0) - drm_err(&dev_priv->drm, + drm_err(display->drm, "hactive pixels are not multiple of 4\n");
/* program TRANS_HTOTAL register */ for_each_dsi_port(port, intel_dsi->ports) { dsi_trans = dsi_port_to_transcoder(port); - intel_de_write(dev_priv, TRANS_HTOTAL(dev_priv, dsi_trans), + intel_de_write(display, TRANS_HTOTAL(display, dsi_trans), HACTIVE(hactive - 1) | HTOTAL(htotal - 1)); }
@@ -928,12 +934,12 @@ gen11_dsi_set_transcoder_timings(struct intel_encoder *encoder, if (intel_dsi->video_mode == NON_BURST_SYNC_PULSE) { /* BSPEC: hsync size should be atleast 16 pixels */ if (hsync_size < 16) - drm_err(&dev_priv->drm, + drm_err(display->drm, "hsync size < 16 pixels\n"); }
if (hback_porch < 16) - drm_err(&dev_priv->drm, "hback porch < 16 pixels\n"); + drm_err(display->drm, "hback porch < 16 pixels\n");
if (intel_dsi->dual_link) { hsync_start /= 2; @@ -942,8 +948,8 @@ gen11_dsi_set_transcoder_timings(struct intel_encoder *encoder,
for_each_dsi_port(port, intel_dsi->ports) { dsi_trans = dsi_port_to_transcoder(port); - intel_de_write(dev_priv, - TRANS_HSYNC(dev_priv, dsi_trans), + intel_de_write(display, + TRANS_HSYNC(display, dsi_trans), HSYNC_START(hsync_start - 1) | HSYNC_END(hsync_end - 1)); } } @@ -957,22 +963,22 @@ gen11_dsi_set_transcoder_timings(struct intel_encoder *encoder, * struct drm_display_mode. * For interlace mode: program required pixel minus 2 */ - intel_de_write(dev_priv, TRANS_VTOTAL(dev_priv, dsi_trans), + intel_de_write(display, TRANS_VTOTAL(display, dsi_trans), VACTIVE(vactive - 1) | VTOTAL(vtotal - 1)); }
if (vsync_end < vsync_start || vsync_end > vtotal) - drm_err(&dev_priv->drm, "Invalid vsync_end value\n"); + drm_err(display->drm, "Invalid vsync_end value\n");
if (vsync_start < vactive) - drm_err(&dev_priv->drm, "vsync_start less than vactive\n"); + drm_err(display->drm, "vsync_start less than vactive\n");
/* program TRANS_VSYNC register for video mode only */ if (is_vid_mode(intel_dsi)) { for_each_dsi_port(port, intel_dsi->ports) { dsi_trans = dsi_port_to_transcoder(port); - intel_de_write(dev_priv, - TRANS_VSYNC(dev_priv, dsi_trans), + intel_de_write(display, + TRANS_VSYNC(display, dsi_trans), VSYNC_START(vsync_start - 1) | VSYNC_END(vsync_end - 1)); } } @@ -986,8 +992,8 @@ gen11_dsi_set_transcoder_timings(struct intel_encoder *encoder, if (is_vid_mode(intel_dsi)) { for_each_dsi_port(port, intel_dsi->ports) { dsi_trans = dsi_port_to_transcoder(port); - intel_de_write(dev_priv, - TRANS_VSYNCSHIFT(dev_priv, dsi_trans), + intel_de_write(display, + TRANS_VSYNCSHIFT(display, dsi_trans), vsync_shift); } } @@ -998,11 +1004,11 @@ gen11_dsi_set_transcoder_timings(struct intel_encoder *encoder, * FIXME get rid of these local hacks and do it right, * this will not handle eg. delayed vblank correctly. */ - if (DISPLAY_VER(dev_priv) >= 12) { + if (DISPLAY_VER(display) >= 12) { for_each_dsi_port(port, intel_dsi->ports) { dsi_trans = dsi_port_to_transcoder(port); - intel_de_write(dev_priv, - TRANS_VBLANK(dev_priv, dsi_trans), + intel_de_write(display, + TRANS_VBLANK(display, dsi_trans), VBLANK_START(vactive - 1) | VBLANK_END(vtotal - 1)); } } @@ -1010,20 +1016,20 @@ gen11_dsi_set_transcoder_timings(struct intel_encoder *encoder,
static void gen11_dsi_enable_transcoder(struct intel_encoder *encoder) { - struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); + struct intel_display *display = to_intel_display(encoder); struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder); enum port port; enum transcoder dsi_trans;
for_each_dsi_port(port, intel_dsi->ports) { dsi_trans = dsi_port_to_transcoder(port); - intel_de_rmw(dev_priv, TRANSCONF(dev_priv, dsi_trans), 0, + intel_de_rmw(display, TRANSCONF(display, dsi_trans), 0, TRANSCONF_ENABLE);
/* wait for transcoder to be enabled */ - if (intel_de_wait_for_set(dev_priv, TRANSCONF(dev_priv, dsi_trans), + if (intel_de_wait_for_set(display, TRANSCONF(display, dsi_trans), TRANSCONF_STATE_ENABLE, 10)) - drm_err(&dev_priv->drm, + drm_err(display->drm, "DSI transcoder not enabled\n"); } } @@ -1031,7 +1037,7 @@ static void gen11_dsi_enable_transcoder(struct intel_encoder *encoder) static void gen11_dsi_setup_timeouts(struct intel_encoder *encoder, const struct intel_crtc_state *crtc_state) { - struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); + struct intel_display *display = to_intel_display(encoder); struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder); enum port port; enum transcoder dsi_trans; @@ -1055,21 +1061,21 @@ static void gen11_dsi_setup_timeouts(struct intel_encoder *encoder, dsi_trans = dsi_port_to_transcoder(port);
/* program hst_tx_timeout */ - intel_de_rmw(dev_priv, DSI_HSTX_TO(dsi_trans), + intel_de_rmw(display, DSI_HSTX_TO(dsi_trans), HSTX_TIMEOUT_VALUE_MASK, HSTX_TIMEOUT_VALUE(hs_tx_timeout));
/* FIXME: DSI_CALIB_TO */
/* program lp_rx_host timeout */ - intel_de_rmw(dev_priv, DSI_LPRX_HOST_TO(dsi_trans), + intel_de_rmw(display, DSI_LPRX_HOST_TO(dsi_trans), LPRX_TIMEOUT_VALUE_MASK, LPRX_TIMEOUT_VALUE(lp_rx_timeout));
/* FIXME: DSI_PWAIT_TO */
/* program turn around timeout */ - intel_de_rmw(dev_priv, DSI_TA_TO(dsi_trans), + intel_de_rmw(display, DSI_TA_TO(dsi_trans), TA_TIMEOUT_VALUE_MASK, TA_TIMEOUT_VALUE(ta_timeout)); } @@ -1078,7 +1084,7 @@ static void gen11_dsi_setup_timeouts(struct intel_encoder *encoder, static void gen11_dsi_config_util_pin(struct intel_encoder *encoder, bool enable) { - struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); + struct intel_display *display = to_intel_display(encoder); struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder); u32 tmp;
@@ -1090,7 +1096,7 @@ static void gen11_dsi_config_util_pin(struct intel_encoder *encoder, if (is_vid_mode(intel_dsi) || (intel_dsi->ports & BIT(PORT_B))) return;
- tmp = intel_de_read(dev_priv, UTIL_PIN_CTL); + tmp = intel_de_read(display, UTIL_PIN_CTL);
if (enable) { tmp |= UTIL_PIN_DIRECTION_INPUT; @@ -1098,7 +1104,7 @@ static void gen11_dsi_config_util_pin(struct intel_encoder *encoder, } else { tmp &= ~UTIL_PIN_ENABLE; } - intel_de_write(dev_priv, UTIL_PIN_CTL, tmp); + intel_de_write(display, UTIL_PIN_CTL, tmp); }
static void @@ -1136,7 +1142,7 @@ gen11_dsi_enable_port_and_phy(struct intel_encoder *encoder,
static void gen11_dsi_powerup_panel(struct intel_encoder *encoder) { - struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); + struct intel_display *display = to_intel_display(encoder); struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder); struct mipi_dsi_device *dsi; enum port port; @@ -1152,14 +1158,14 @@ static void gen11_dsi_powerup_panel(struct intel_encoder *encoder) * FIXME: This uses the number of DW's currently in the payload * receive queue. This is probably not what we want here. */ - tmp = intel_de_read(dev_priv, DSI_CMD_RXCTL(dsi_trans)); + tmp = intel_de_read(display, DSI_CMD_RXCTL(dsi_trans)); tmp &= NUMBER_RX_PLOAD_DW_MASK; /* multiply "Number Rx Payload DW" by 4 to get max value */ tmp = tmp * 4; dsi = intel_dsi->dsi_hosts[port]->device; ret = mipi_dsi_set_maximum_return_packet_size(dsi, tmp); if (ret < 0) - drm_err(&dev_priv->drm, + drm_err(display->drm, "error setting max return pkt size%d\n", tmp); }
@@ -1219,10 +1225,10 @@ static void gen11_dsi_pre_enable(struct intel_atomic_state *state, static void icl_apply_kvmr_pipe_a_wa(struct intel_encoder *encoder, enum pipe pipe, bool enable) { - struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); + struct intel_display *display = to_intel_display(encoder);
- if (DISPLAY_VER(dev_priv) == 11 && pipe == PIPE_B) - intel_de_rmw(dev_priv, CHICKEN_PAR1_1, + if (DISPLAY_VER(display) == 11 && pipe == PIPE_B) + intel_de_rmw(display, CHICKEN_PAR1_1, IGNORE_KVMR_PIPE_A, enable ? IGNORE_KVMR_PIPE_A : 0); } @@ -1235,13 +1241,13 @@ static void icl_apply_kvmr_pipe_a_wa(struct intel_encoder *encoder, */ static void adlp_set_lp_hs_wakeup_gb(struct intel_encoder *encoder) { - struct drm_i915_private *i915 = to_i915(encoder->base.dev); + struct intel_display *display = to_intel_display(encoder); struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder); enum port port;
- if (DISPLAY_VER(i915) == 13) { + if (DISPLAY_VER(display) == 13) { for_each_dsi_port(port, intel_dsi->ports) - intel_de_rmw(i915, TGL_DSI_CHKN_REG(port), + intel_de_rmw(display, TGL_DSI_CHKN_REG(port), TGL_DSI_CHKN_LSHS_GB_MASK, TGL_DSI_CHKN_LSHS_GB(4)); } @@ -1275,7 +1281,7 @@ static void gen11_dsi_enable(struct intel_atomic_state *state,
static void gen11_dsi_disable_transcoder(struct intel_encoder *encoder) { - struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); + struct intel_display *display = to_intel_display(encoder); struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder); enum port port; enum transcoder dsi_trans; @@ -1284,13 +1290,13 @@ static void gen11_dsi_disable_transcoder(struct intel_encoder *encoder) dsi_trans = dsi_port_to_transcoder(port);
/* disable transcoder */ - intel_de_rmw(dev_priv, TRANSCONF(dev_priv, dsi_trans), + intel_de_rmw(display, TRANSCONF(display, dsi_trans), TRANSCONF_ENABLE, 0);
/* wait for transcoder to be disabled */ - if (intel_de_wait_for_clear(dev_priv, TRANSCONF(dev_priv, dsi_trans), + if (intel_de_wait_for_clear(display, TRANSCONF(display, dsi_trans), TRANSCONF_STATE_ENABLE, 50)) - drm_err(&dev_priv->drm, + drm_err(display->drm, "DSI trancoder not disabled\n"); } } @@ -1307,7 +1313,7 @@ static void gen11_dsi_powerdown_panel(struct intel_encoder *encoder)
static void gen11_dsi_deconfigure_trancoder(struct intel_encoder *encoder) { - struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); + struct intel_display *display = to_intel_display(encoder); struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder); enum port port; enum transcoder dsi_trans; @@ -1316,29 +1322,29 @@ static void gen11_dsi_deconfigure_trancoder(struct intel_encoder *encoder) /* disable periodic update mode */ if (is_cmd_mode(intel_dsi)) { for_each_dsi_port(port, intel_dsi->ports) - intel_de_rmw(dev_priv, DSI_CMD_FRMCTL(port), + intel_de_rmw(display, DSI_CMD_FRMCTL(port), DSI_PERIODIC_FRAME_UPDATE_ENABLE, 0); }
/* put dsi link in ULPS */ for_each_dsi_port(port, intel_dsi->ports) { dsi_trans = dsi_port_to_transcoder(port); - tmp = intel_de_read(dev_priv, DSI_LP_MSG(dsi_trans)); + tmp = intel_de_read(display, DSI_LP_MSG(dsi_trans)); tmp |= LINK_ENTER_ULPS; tmp &= ~LINK_ULPS_TYPE_LP11; - intel_de_write(dev_priv, DSI_LP_MSG(dsi_trans), tmp); + intel_de_write(display, DSI_LP_MSG(dsi_trans), tmp);
- if (wait_for_us((intel_de_read(dev_priv, DSI_LP_MSG(dsi_trans)) & + if (wait_for_us((intel_de_read(display, DSI_LP_MSG(dsi_trans)) & LINK_IN_ULPS), 10)) - drm_err(&dev_priv->drm, "DSI link not in ULPS\n"); + drm_err(display->drm, "DSI link not in ULPS\n"); }
/* disable ddi function */ for_each_dsi_port(port, intel_dsi->ports) { dsi_trans = dsi_port_to_transcoder(port); - intel_de_rmw(dev_priv, - TRANS_DDI_FUNC_CTL(dev_priv, dsi_trans), + intel_de_rmw(display, + TRANS_DDI_FUNC_CTL(display, dsi_trans), TRANS_DDI_FUNC_ENABLE, 0); }
@@ -1346,8 +1352,8 @@ static void gen11_dsi_deconfigure_trancoder(struct intel_encoder *encoder) if (intel_dsi->dual_link) { for_each_dsi_port(port, intel_dsi->ports) { dsi_trans = dsi_port_to_transcoder(port); - intel_de_rmw(dev_priv, - TRANS_DDI_FUNC_CTL2(dev_priv, dsi_trans), + intel_de_rmw(display, + TRANS_DDI_FUNC_CTL2(display, dsi_trans), PORT_SYNC_MODE_ENABLE, 0); } } @@ -1355,18 +1361,18 @@ static void gen11_dsi_deconfigure_trancoder(struct intel_encoder *encoder)
static void gen11_dsi_disable_port(struct intel_encoder *encoder) { - struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); + struct intel_display *display = to_intel_display(encoder); struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder); enum port port;
gen11_dsi_ungate_clocks(encoder); for_each_dsi_port(port, intel_dsi->ports) { - intel_de_rmw(dev_priv, DDI_BUF_CTL(port), DDI_BUF_CTL_ENABLE, 0); + intel_de_rmw(display, DDI_BUF_CTL(port), DDI_BUF_CTL_ENABLE, 0);
- if (wait_for_us((intel_de_read(dev_priv, DDI_BUF_CTL(port)) & + if (wait_for_us((intel_de_read(display, DDI_BUF_CTL(port)) & DDI_BUF_IS_IDLE), 8)) - drm_err(&dev_priv->drm, + drm_err(display->drm, "DDI port:%c buffer not idle\n", port_name(port)); } @@ -1375,6 +1381,7 @@ static void gen11_dsi_disable_port(struct intel_encoder *encoder)
static void gen11_dsi_disable_io_power(struct intel_encoder *encoder) { + struct intel_display *display = to_intel_display(encoder); struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder); enum port port; @@ -1392,7 +1399,7 @@ static void gen11_dsi_disable_io_power(struct intel_encoder *encoder)
/* set mode to DDI */ for_each_dsi_port(port, intel_dsi->ports) - intel_de_rmw(dev_priv, ICL_DSI_IO_MODECTL(port), + intel_de_rmw(display, ICL_DSI_IO_MODECTL(port), COMBO_PHY_MODE_DSI, 0); }
@@ -1504,8 +1511,7 @@ static void gen11_dsi_get_timings(struct intel_encoder *encoder,
static bool gen11_dsi_is_periodic_cmd_mode(struct intel_dsi *intel_dsi) { - struct drm_device *dev = intel_dsi->base.base.dev; - struct drm_i915_private *dev_priv = to_i915(dev); + struct intel_display *display = to_intel_display(&intel_dsi->base); enum transcoder dsi_trans; u32 val;
@@ -1514,7 +1520,7 @@ static bool gen11_dsi_is_periodic_cmd_mode(struct intel_dsi *intel_dsi) else dsi_trans = TRANSCODER_DSI_0;
- val = intel_de_read(dev_priv, DSI_TRANS_FUNC_CONF(dsi_trans)); + val = intel_de_read(display, DSI_TRANS_FUNC_CONF(dsi_trans)); return (val & DSI_PERIODIC_FRAME_UPDATE_ENABLE); }
@@ -1557,7 +1563,7 @@ static void gen11_dsi_get_config(struct intel_encoder *encoder, static void gen11_dsi_sync_state(struct intel_encoder *encoder, const struct intel_crtc_state *crtc_state) { - struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); + struct intel_display *display = to_intel_display(encoder); struct intel_crtc *intel_crtc; enum pipe pipe;
@@ -1568,9 +1574,9 @@ static void gen11_dsi_sync_state(struct intel_encoder *encoder, pipe = intel_crtc->pipe;
/* wa verify 1409054076:icl,jsl,ehl */ - if (DISPLAY_VER(dev_priv) == 11 && pipe == PIPE_B && - !(intel_de_read(dev_priv, CHICKEN_PAR1_1) & IGNORE_KVMR_PIPE_A)) - drm_dbg_kms(&dev_priv->drm, + if (DISPLAY_VER(display) == 11 && pipe == PIPE_B && + !(intel_de_read(display, CHICKEN_PAR1_1) & IGNORE_KVMR_PIPE_A)) + drm_dbg_kms(display->drm, "[ENCODER:%d:%s] BIOS left IGNORE_KVMR_PIPE_A cleared with pipe B enabled\n", encoder->base.base.id, encoder->base.name); @@ -1579,9 +1585,9 @@ static void gen11_dsi_sync_state(struct intel_encoder *encoder, static int gen11_dsi_dsc_compute_config(struct intel_encoder *encoder, struct intel_crtc_state *crtc_state) { - struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); + struct intel_display *display = to_intel_display(encoder); struct drm_dsc_config *vdsc_cfg = &crtc_state->dsc.config; - int dsc_max_bpc = DISPLAY_VER(dev_priv) >= 12 ? 12 : 10; + int dsc_max_bpc = DISPLAY_VER(display) >= 12 ? 12 : 10; bool use_dsc; int ret;
@@ -1606,12 +1612,12 @@ static int gen11_dsi_dsc_compute_config(struct intel_encoder *encoder, return ret;
/* DSI specific sanity checks on the common code */ - drm_WARN_ON(&dev_priv->drm, vdsc_cfg->vbr_enable); - drm_WARN_ON(&dev_priv->drm, vdsc_cfg->simple_422); - drm_WARN_ON(&dev_priv->drm, + drm_WARN_ON(display->drm, vdsc_cfg->vbr_enable); + drm_WARN_ON(display->drm, vdsc_cfg->simple_422); + drm_WARN_ON(display->drm, vdsc_cfg->pic_width % vdsc_cfg->slice_width); - drm_WARN_ON(&dev_priv->drm, vdsc_cfg->slice_height < 8); - drm_WARN_ON(&dev_priv->drm, + drm_WARN_ON(display->drm, vdsc_cfg->slice_height < 8); + drm_WARN_ON(display->drm, vdsc_cfg->pic_height % vdsc_cfg->slice_height);
ret = drm_dsc_compute_rc_parameters(vdsc_cfg); @@ -1627,7 +1633,7 @@ static int gen11_dsi_compute_config(struct intel_encoder *encoder, struct intel_crtc_state *pipe_config, struct drm_connector_state *conn_state) { - struct drm_i915_private *i915 = to_i915(encoder->base.dev); + struct intel_display *display = to_intel_display(encoder); struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder); struct intel_connector *intel_connector = intel_dsi->attached_connector; struct drm_display_mode *adjusted_mode = @@ -1661,7 +1667,7 @@ static int gen11_dsi_compute_config(struct intel_encoder *encoder, pipe_config->clock_set = true;
if (gen11_dsi_dsc_compute_config(encoder, pipe_config)) - drm_dbg_kms(&i915->drm, "Attempting to use DSC failed\n"); + drm_dbg_kms(display->drm, "Attempting to use DSC failed\n");
pipe_config->port_clock = afe_clk(encoder, pipe_config) / 5;
@@ -1679,15 +1685,13 @@ static int gen11_dsi_compute_config(struct intel_encoder *encoder, static void gen11_dsi_get_power_domains(struct intel_encoder *encoder, struct intel_crtc_state *crtc_state) { - struct drm_i915_private *i915 = to_i915(encoder->base.dev); - - get_dsi_io_power_domains(i915, - enc_to_intel_dsi(encoder)); + get_dsi_io_power_domains(enc_to_intel_dsi(encoder)); }
static bool gen11_dsi_get_hw_state(struct intel_encoder *encoder, enum pipe *pipe) { + struct intel_display *display = to_intel_display(encoder); struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder); enum transcoder dsi_trans; @@ -1703,8 +1707,8 @@ static bool gen11_dsi_get_hw_state(struct intel_encoder *encoder,
for_each_dsi_port(port, intel_dsi->ports) { dsi_trans = dsi_port_to_transcoder(port); - tmp = intel_de_read(dev_priv, - TRANS_DDI_FUNC_CTL(dev_priv, dsi_trans)); + tmp = intel_de_read(display, + TRANS_DDI_FUNC_CTL(display, dsi_trans)); switch (tmp & TRANS_DDI_EDP_INPUT_MASK) { case TRANS_DDI_EDP_INPUT_A_ON: *pipe = PIPE_A; @@ -1719,11 +1723,11 @@ static bool gen11_dsi_get_hw_state(struct intel_encoder *encoder, *pipe = PIPE_D; break; default: - drm_err(&dev_priv->drm, "Invalid PIPE input\n"); + drm_err(display->drm, "Invalid PIPE input\n"); goto out; }
- tmp = intel_de_read(dev_priv, TRANSCONF(dev_priv, dsi_trans)); + tmp = intel_de_read(display, TRANSCONF(display, dsi_trans)); ret = tmp & TRANSCONF_ENABLE; } out: @@ -1833,8 +1837,7 @@ static const struct mipi_dsi_host_ops gen11_dsi_host_ops = {
static void icl_dphy_param_init(struct intel_dsi *intel_dsi) { - struct drm_device *dev = intel_dsi->base.base.dev; - struct drm_i915_private *dev_priv = to_i915(dev); + struct intel_display *display = to_intel_display(&intel_dsi->base); struct intel_connector *connector = intel_dsi->attached_connector; struct mipi_config *mipi_config = connector->panel.vbt.dsi.config; u32 tlpx_ns; @@ -1858,7 +1861,7 @@ static void icl_dphy_param_init(struct intel_dsi *intel_dsi) */ prepare_cnt = DIV_ROUND_UP(ths_prepare_ns * 4, tlpx_ns); if (prepare_cnt > ICL_PREPARE_CNT_MAX) { - drm_dbg_kms(&dev_priv->drm, "prepare_cnt out of range (%d)\n", + drm_dbg_kms(display->drm, "prepare_cnt out of range (%d)\n", prepare_cnt); prepare_cnt = ICL_PREPARE_CNT_MAX; } @@ -1867,7 +1870,7 @@ static void icl_dphy_param_init(struct intel_dsi *intel_dsi) clk_zero_cnt = DIV_ROUND_UP(mipi_config->tclk_prepare_clkzero - ths_prepare_ns, tlpx_ns); if (clk_zero_cnt > ICL_CLK_ZERO_CNT_MAX) { - drm_dbg_kms(&dev_priv->drm, + drm_dbg_kms(display->drm, "clk_zero_cnt out of range (%d)\n", clk_zero_cnt); clk_zero_cnt = ICL_CLK_ZERO_CNT_MAX; } @@ -1875,7 +1878,7 @@ static void icl_dphy_param_init(struct intel_dsi *intel_dsi) /* trail cnt in escape clocks*/ trail_cnt = DIV_ROUND_UP(tclk_trail_ns, tlpx_ns); if (trail_cnt > ICL_TRAIL_CNT_MAX) { - drm_dbg_kms(&dev_priv->drm, "trail_cnt out of range (%d)\n", + drm_dbg_kms(display->drm, "trail_cnt out of range (%d)\n", trail_cnt); trail_cnt = ICL_TRAIL_CNT_MAX; } @@ -1883,7 +1886,7 @@ static void icl_dphy_param_init(struct intel_dsi *intel_dsi) /* tclk pre count in escape clocks */ tclk_pre_cnt = DIV_ROUND_UP(mipi_config->tclk_pre, tlpx_ns); if (tclk_pre_cnt > ICL_TCLK_PRE_CNT_MAX) { - drm_dbg_kms(&dev_priv->drm, + drm_dbg_kms(display->drm, "tclk_pre_cnt out of range (%d)\n", tclk_pre_cnt); tclk_pre_cnt = ICL_TCLK_PRE_CNT_MAX; } @@ -1892,7 +1895,7 @@ static void icl_dphy_param_init(struct intel_dsi *intel_dsi) hs_zero_cnt = DIV_ROUND_UP(mipi_config->ths_prepare_hszero - ths_prepare_ns, tlpx_ns); if (hs_zero_cnt > ICL_HS_ZERO_CNT_MAX) { - drm_dbg_kms(&dev_priv->drm, "hs_zero_cnt out of range (%d)\n", + drm_dbg_kms(display->drm, "hs_zero_cnt out of range (%d)\n", hs_zero_cnt); hs_zero_cnt = ICL_HS_ZERO_CNT_MAX; } @@ -1900,7 +1903,7 @@ static void icl_dphy_param_init(struct intel_dsi *intel_dsi) /* hs exit zero cnt in escape clocks */ exit_zero_cnt = DIV_ROUND_UP(mipi_config->ths_exit, tlpx_ns); if (exit_zero_cnt > ICL_EXIT_ZERO_CNT_MAX) { - drm_dbg_kms(&dev_priv->drm, + drm_dbg_kms(display->drm, "exit_zero_cnt out of range (%d)\n", exit_zero_cnt); exit_zero_cnt = ICL_EXIT_ZERO_CNT_MAX; @@ -1942,10 +1945,9 @@ static void icl_dsi_add_properties(struct intel_connector *connector) fixed_mode->vdisplay); }
-void icl_dsi_init(struct drm_i915_private *dev_priv, +void icl_dsi_init(struct intel_display *display, const struct intel_bios_encoder_data *devdata) { - struct intel_display *display = &dev_priv->display; struct intel_dsi *intel_dsi; struct intel_encoder *encoder; struct intel_connector *intel_connector; @@ -1973,7 +1975,8 @@ void icl_dsi_init(struct drm_i915_private *dev_priv, encoder->devdata = devdata;
/* register DSI encoder with DRM subsystem */ - drm_encoder_init(&dev_priv->drm, &encoder->base, &gen11_dsi_encoder_funcs, + drm_encoder_init(display->drm, &encoder->base, + &gen11_dsi_encoder_funcs, DRM_MODE_ENCODER_DSI, "DSI %c", port_name(port));
encoder->pre_pll_enable = gen11_dsi_pre_pll_enable; @@ -1998,7 +2001,8 @@ void icl_dsi_init(struct drm_i915_private *dev_priv, encoder->shutdown = intel_dsi_shutdown;
/* register DSI connector with DRM subsystem */ - drm_connector_init(&dev_priv->drm, connector, &gen11_dsi_connector_funcs, + drm_connector_init(display->drm, connector, + &gen11_dsi_connector_funcs, DRM_MODE_CONNECTOR_DSI); drm_connector_helper_add(connector, &gen11_dsi_connector_helper_funcs); connector->display_info.subpixel_order = SubPixelHorizontalRGB; @@ -2011,12 +2015,12 @@ void icl_dsi_init(struct drm_i915_private *dev_priv,
intel_bios_init_panel_late(display, &intel_connector->panel, encoder->devdata, NULL);
- mutex_lock(&dev_priv->drm.mode_config.mutex); + mutex_lock(&display->drm->mode_config.mutex); intel_panel_add_vbt_lfp_fixed_mode(intel_connector); - mutex_unlock(&dev_priv->drm.mode_config.mutex); + mutex_unlock(&display->drm->mode_config.mutex);
if (!intel_panel_preferred_fixed_mode(intel_connector)) { - drm_err(&dev_priv->drm, "DSI fixed mode info missing\n"); + drm_err(display->drm, "DSI fixed mode info missing\n"); goto err; }
@@ -2029,10 +2033,10 @@ void icl_dsi_init(struct drm_i915_private *dev_priv, else intel_dsi->ports = BIT(port);
- if (drm_WARN_ON(&dev_priv->drm, intel_connector->panel.vbt.dsi.bl_ports & ~intel_dsi->ports)) + if (drm_WARN_ON(display->drm, intel_connector->panel.vbt.dsi.bl_ports & ~intel_dsi->ports)) intel_connector->panel.vbt.dsi.bl_ports &= intel_dsi->ports;
- if (drm_WARN_ON(&dev_priv->drm, intel_connector->panel.vbt.dsi.cabc_ports & ~intel_dsi->ports)) + if (drm_WARN_ON(display->drm, intel_connector->panel.vbt.dsi.cabc_ports & ~intel_dsi->ports)) intel_connector->panel.vbt.dsi.cabc_ports &= intel_dsi->ports;
for_each_dsi_port(port, intel_dsi->ports) { @@ -2046,7 +2050,7 @@ void icl_dsi_init(struct drm_i915_private *dev_priv, }
if (!intel_dsi_vbt_init(intel_dsi, MIPI_DSI_GENERIC_PANEL_ID)) { - drm_dbg_kms(&dev_priv->drm, "no device found\n"); + drm_dbg_kms(display->drm, "no device found\n"); goto err; }
diff --git a/drivers/gpu/drm/i915/display/icl_dsi.h b/drivers/gpu/drm/i915/display/icl_dsi.h index 43fa7d72eeb18..099fc50e35b41 100644 --- a/drivers/gpu/drm/i915/display/icl_dsi.h +++ b/drivers/gpu/drm/i915/display/icl_dsi.h @@ -6,11 +6,11 @@ #ifndef __ICL_DSI_H__ #define __ICL_DSI_H__
-struct drm_i915_private; struct intel_bios_encoder_data; struct intel_crtc_state; +struct intel_display;
-void icl_dsi_init(struct drm_i915_private *dev_priv, +void icl_dsi_init(struct intel_display *display, const struct intel_bios_encoder_data *devdata); void icl_dsi_frame_update(struct intel_crtc_state *crtc_state);
diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c index 2f1d9ce87ceb0..34dee523f0b61 100644 --- a/drivers/gpu/drm/i915/display/intel_ddi.c +++ b/drivers/gpu/drm/i915/display/intel_ddi.c @@ -4888,7 +4888,7 @@ void intel_ddi_init(struct intel_display *display, if (!assert_has_icl_dsi(dev_priv)) return;
- icl_dsi_init(dev_priv, devdata); + icl_dsi_init(display, devdata); return; }
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Imre Deak imre.deak@intel.com
[ Upstream commit 879f70382ff3e92fc854589ada3453e3f5f5b601 ]
The format of the port width field in the DDI_BUF_CTL and TRANS_DDI_FUNC_CTL registers differs starting with MTL, where the x3 lane mode for HDMI FRL has a different encoding in the two registers. To account for this, use TRANS_DDI_FUNC_CTL's own port width macro.
Cc: stable@vger.kernel.org # v6.5+ Fixes: b66a8abaa48a ("drm/i915/display/mtl: Fill port width in DDI_BUF_/TRANS_DDI_FUNC_/PORT_BUF_CTL for HDMI") Reviewed-by: Jani Nikula jani.nikula@intel.com Signed-off-by: Imre Deak imre.deak@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20250214142001.552916-2-imre.d... (cherry picked from commit 76120b3a304aec28fef4910204b81a12db8974da) Signed-off-by: Rodrigo Vivi rodrigo.vivi@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/i915/display/icl_dsi.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/i915/display/icl_dsi.c b/drivers/gpu/drm/i915/display/icl_dsi.c index e2a88e5a97479..4e95b8eda23f7 100644 --- a/drivers/gpu/drm/i915/display/icl_dsi.c +++ b/drivers/gpu/drm/i915/display/icl_dsi.c @@ -806,8 +806,8 @@ gen11_dsi_configure_transcoder(struct intel_encoder *encoder, /* select data lane width */ tmp = intel_de_read(display, TRANS_DDI_FUNC_CTL(display, dsi_trans)); - tmp &= ~DDI_PORT_WIDTH_MASK; - tmp |= DDI_PORT_WIDTH(intel_dsi->lane_count); + tmp &= ~TRANS_DDI_PORT_WIDTH_MASK; + tmp |= TRANS_DDI_PORT_WIDTH(intel_dsi->lane_count);
/* select input pipe */ tmp &= ~TRANS_DDI_EDP_INPUT_MASK;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bartosz Golaszewski bartosz.golaszewski@linaro.org
[ Upstream commit 1b35c124f961b355dafb1906c591191bd0b37417 ]
There's no need to use the OF-specific variant to get the match data. Switch to device_get_match_data() and, with that, remove the of.h include. Also remove of_irq.h, as none of its interfaces is used here, and sort the includes alphabetically.
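As a rough sketch of the resulting pattern (hypothetical foo_* names, not the gpio-vf610 code itself): device_get_match_data() from <linux/property.h> is firmware-agnostic, so it works for both DT and ACPI enumerated devices and removes the need for <linux/of.h>:

  #include <linux/platform_device.h>
  #include <linux/property.h>

  struct foo_devtype {                  /* hypothetical match data */
          bool have_dual_base;
  };

  static int foo_probe(struct platform_device *pdev)
  {
          const struct foo_devtype *data;

          /* was: of_device_get_match_data(&pdev->dev) */
          data = device_get_match_data(&pdev->dev);
          if (!data)
                  return -ENODEV;

          return 0;
  }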
Reviewed-by: Linus Walleij linus.walleij@linaro.org Link: https://lore.kernel.org/r/20241007102549.34926-1-brgl@bgdev.pl Signed-off-by: Bartosz Golaszewski bartosz.golaszewski@linaro.org Stable-dep-of: 4e667a196809 ("gpio: vf610: add locking to gpio direction functions") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpio/gpio-vf610.c | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/drivers/gpio/gpio-vf610.c b/drivers/gpio/gpio-vf610.c index 27eff741fe9a2..c4f34a347cb6e 100644 --- a/drivers/gpio/gpio-vf610.c +++ b/drivers/gpio/gpio-vf610.c @@ -15,10 +15,9 @@ #include <linux/io.h> #include <linux/ioport.h> #include <linux/irq.h> -#include <linux/platform_device.h> -#include <linux/of.h> -#include <linux/of_irq.h> #include <linux/pinctrl/consumer.h> +#include <linux/platform_device.h> +#include <linux/property.h>
#define VF610_GPIO_PER_PORT 32
@@ -297,7 +296,7 @@ static int vf610_gpio_probe(struct platform_device *pdev) if (!port) return -ENOMEM;
- port->sdata = of_device_get_match_data(dev); + port->sdata = device_get_match_data(dev);
dual_base = port->sdata->have_dual_base;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Johan Korsnes johan.korsnes@remarkable.no
[ Upstream commit 4e667a1968099c6deadee2313ecd648f8f0a8956 ]
Add locking to the `vf610_gpio_direction_input|output()` functions. Without this locking, a race condition exists between concurrent calls to these functions, potentially leading to incorrect GPIO direction settings.

To verify the fix, a `trylock` patch was applied; after a couple of reboots the race was confirmed, i.e. one caller had to wait before acquiring the lock. With this patch the race has not been encountered. It's worth mentioning that any type of debugging (printing, tracing, etc.) would "resolve"/hide the issue.
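To make the race concrete, here is a minimal sketch with a hypothetical port structure and register offset (struct fake_port, FAKE_PDDR and fake_set_output are made up, not the vf610 driver code); it uses the same guard(spinlock_irqsave) pattern the patch adds to serialize the read-modify-write of the direction register:

  #include <linux/bits.h>
  #include <linux/cleanup.h>
  #include <linux/io.h>
  #include <linux/spinlock.h>

  /* Hypothetical port, for illustration only. */
  struct fake_port {
          void __iomem *base;
          spinlock_t lock;
  };

  #define FAKE_PDDR 0x14  /* placeholder offset of a direction register */

  static void fake_set_output(struct fake_port *port, unsigned int pin)
  {
          u32 val;

          /*
           * Without this lock, a concurrent caller clearing another bit
           * can read the same stale value, and whichever writel() runs
           * last silently discards the other update.
           */
          guard(spinlock_irqsave)(&port->lock);

          val = readl(port->base + FAKE_PDDR);    /* read   */
          val |= BIT(pin);                        /* modify */
          writel(val, port->base + FAKE_PDDR);    /* write  */
  }

That lost read-modify-write update is exactly the incorrect direction setting the commit message describes.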
Fixes: 659d8a62311f ("gpio: vf610: add imx7ulp support") Signed-off-by: Johan Korsnes johan.korsnes@remarkable.no Reviewed-by: Linus Walleij linus.walleij@linaro.org Reviewed-by: Haibo Chen haibo.chen@nxp.com Cc: Bartosz Golaszewski brgl@bgdev.pl Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20250217091643.679644-1-johan.korsnes@remarkable.n... Signed-off-by: Bartosz Golaszewski bartosz.golaszewski@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpio/gpio-vf610.c | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/drivers/gpio/gpio-vf610.c b/drivers/gpio/gpio-vf610.c index c4f34a347cb6e..c36a9dbccd4dd 100644 --- a/drivers/gpio/gpio-vf610.c +++ b/drivers/gpio/gpio-vf610.c @@ -36,6 +36,7 @@ struct vf610_gpio_port { struct clk *clk_port; struct clk *clk_gpio; int irq; + spinlock_t lock; /* protect gpio direction registers */ };
#define GPIO_PDOR 0x00 @@ -124,6 +125,7 @@ static int vf610_gpio_direction_input(struct gpio_chip *chip, unsigned int gpio) u32 val;
if (port->sdata->have_paddr) { + guard(spinlock_irqsave)(&port->lock); val = vf610_gpio_readl(port->gpio_base + GPIO_PDDR); val &= ~mask; vf610_gpio_writel(val, port->gpio_base + GPIO_PDDR); @@ -142,6 +144,7 @@ static int vf610_gpio_direction_output(struct gpio_chip *chip, unsigned int gpio vf610_gpio_set(chip, gpio, value);
if (port->sdata->have_paddr) { + guard(spinlock_irqsave)(&port->lock); val = vf610_gpio_readl(port->gpio_base + GPIO_PDDR); val |= mask; vf610_gpio_writel(val, port->gpio_base + GPIO_PDDR); @@ -297,6 +300,7 @@ static int vf610_gpio_probe(struct platform_device *pdev) return -ENOMEM;
port->sdata = device_get_match_data(dev); + spin_lock_init(&port->lock);
dual_base = port->sdata->have_dual_base;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Pali Rohár pali@kernel.org
[ Upstream commit 65c49767dd4fc058673f9259fda1772fd398eaa7 ]
Member 'symlink' is part of the union in struct cifs_open_info_data. Its value is assigned in a few places, but it is always read through the other union member, 'reparse_point'. So, to make the code more readable, always use only the 'reparse_point' member and drop the union entirely. No functional change.
Signed-off-by: Pali Rohár pali@kernel.org Acked-by: Paulo Alcantara (Red Hat) pc@manguebit.com Signed-off-by: Steve French stfrench@microsoft.com Stable-dep-of: 9df23801c83d ("smb311: failure to open files of length 1040 when mounting with SMB3.1.1 POSIX extensions") Signed-off-by: Sasha Levin sashal@kernel.org --- fs/smb/client/cifsglob.h | 5 +---- fs/smb/client/inode.c | 2 +- fs/smb/client/smb1ops.c | 4 ++-- 3 files changed, 4 insertions(+), 7 deletions(-)
diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h index 05274121e46f0..0979feb30bedb 100644 --- a/fs/smb/client/cifsglob.h +++ b/fs/smb/client/cifsglob.h @@ -209,10 +209,7 @@ struct cifs_cred {
struct cifs_open_info_data { bool adjust_tz; - union { - bool reparse_point; - bool symlink; - }; + bool reparse_point; struct { /* ioctl response buffer */ struct { diff --git a/fs/smb/client/inode.c b/fs/smb/client/inode.c index e11e67af760f4..a3f0835e12be3 100644 --- a/fs/smb/client/inode.c +++ b/fs/smb/client/inode.c @@ -968,7 +968,7 @@ cifs_get_file_info(struct file *filp) /* TODO: add support to query reparse tag */ data.adjust_tz = false; if (data.symlink_target) { - data.symlink = true; + data.reparse_point = true; data.reparse.tag = IO_REPARSE_TAG_SYMLINK; } path = build_path_from_dentry(dentry, page); diff --git a/fs/smb/client/smb1ops.c b/fs/smb/client/smb1ops.c index c70f4961c4eb7..bd791aa54681f 100644 --- a/fs/smb/client/smb1ops.c +++ b/fs/smb/client/smb1ops.c @@ -551,7 +551,7 @@ static int cifs_query_path_info(const unsigned int xid, int rc; FILE_ALL_INFO fi = {};
- data->symlink = false; + data->reparse_point = false; data->adjust_tz = false;
/* could do find first instead but this returns more info */ @@ -592,7 +592,7 @@ static int cifs_query_path_info(const unsigned int xid, /* Need to check if this is a symbolic link or not */ tmprc = CIFS_open(xid, &oparms, &oplock, NULL); if (tmprc == -EOPNOTSUPP) - data->symlink = true; + data->reparse_point = true; else if (tmprc == 0) CIFSSMBClose(xid, tcon, fid.netfid); }
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Steve French stfrench@microsoft.com
[ Upstream commit 9df23801c83d3e12b4c09be39d37d2be385e52f9 ]
If a file's size has bits 0x410 (ATTR_DIRECTORY | ATTR_REPARSE) set, then during queryinfo (stat) the file is regarded as a directory and subsequent opens can fail. A simple test case is trying to open any file 1040 bytes long when mounting with "posix" (SMB3.1.1 POSIX/Linux Extensions).
The cause of this bug is that the Attributes field in the smb2_file_all_info struct occupies the same place as the EndOfFile field in smb311_posix_qinfo, and sometimes the latter struct is incorrectly processed as if it were the former.
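A rough userspace illustration of that aliasing (grossly simplified, not the real fs/smb/client structures; the attribute values assume the usual FILE_ATTRIBUTE_* bits): on a little-endian host, the low 32 bits of a 1040-byte EndOfFile, misread as an Attributes field, decode as ATTR_DIRECTORY | ATTR_REPARSE:

  #include <stdio.h>
  #include <stdint.h>
  #include <string.h>

  #define ATTR_DIRECTORY 0x10     /* FILE_ATTRIBUTE_DIRECTORY */
  #define ATTR_REPARSE   0x400    /* FILE_ATTRIBUTE_REPARSE_POINT */

  int main(void)
  {
          uint64_t end_of_file = 1040;    /* file length from the POSIX reply */
          uint32_t attrs;

          /* Reinterpret the same bytes as a 32-bit Attributes field
           * (little-endian, as on the SMB wire). */
          memcpy(&attrs, &end_of_file, sizeof(attrs));

          printf("attrs=0x%x dir=%d reparse=%d\n", attrs,
                 !!(attrs & ATTR_DIRECTORY), !!(attrs & ATTR_REPARSE));
          return 0;
  }

This prints attrs=0x410 dir=1 reparse=1, which is why a 1040-byte file ends up treated as a reparse-point directory.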
Reported-by: Oleh Nykyforchyn oleh.nyk@gmail.com Tested-by: Oleh Nykyforchyn oleh.nyk@gmail.com Acked-by: Paulo Alcantara (Red Hat) pc@manguebit.com Cc: stable@vger.kernel.org Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/smb/client/cifsglob.h | 1 + fs/smb/client/reparse.h | 28 ++++++++++++++++++++++------ fs/smb/client/smb2inode.c | 4 ++++ fs/smb/client/smb2ops.c | 3 ++- 4 files changed, 29 insertions(+), 7 deletions(-)
diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h index 0979feb30bedb..b630beb757a44 100644 --- a/fs/smb/client/cifsglob.h +++ b/fs/smb/client/cifsglob.h @@ -210,6 +210,7 @@ struct cifs_cred { struct cifs_open_info_data { bool adjust_tz; bool reparse_point; + bool contains_posix_file_info; struct { /* ioctl response buffer */ struct { diff --git a/fs/smb/client/reparse.h b/fs/smb/client/reparse.h index ff05b0e75c928..f080f92cb1e74 100644 --- a/fs/smb/client/reparse.h +++ b/fs/smb/client/reparse.h @@ -97,14 +97,30 @@ static inline bool reparse_inode_match(struct inode *inode,
static inline bool cifs_open_data_reparse(struct cifs_open_info_data *data) { - struct smb2_file_all_info *fi = &data->fi; - u32 attrs = le32_to_cpu(fi->Attributes); + u32 attrs; bool ret;
- ret = data->reparse_point || (attrs & ATTR_REPARSE); - if (ret) - attrs |= ATTR_REPARSE; - fi->Attributes = cpu_to_le32(attrs); + if (data->contains_posix_file_info) { + struct smb311_posix_qinfo *fi = &data->posix_fi; + + attrs = le32_to_cpu(fi->DosAttributes); + if (data->reparse_point) { + attrs |= ATTR_REPARSE; + fi->DosAttributes = cpu_to_le32(attrs); + } + + } else { + struct smb2_file_all_info *fi = &data->fi; + + attrs = le32_to_cpu(fi->Attributes); + if (data->reparse_point) { + attrs |= ATTR_REPARSE; + fi->Attributes = cpu_to_le32(attrs); + } + } + + ret = attrs & ATTR_REPARSE; + return ret; }
diff --git a/fs/smb/client/smb2inode.c b/fs/smb/client/smb2inode.c index 7dfd3eb3847b3..6048b3fed3e78 100644 --- a/fs/smb/client/smb2inode.c +++ b/fs/smb/client/smb2inode.c @@ -648,6 +648,7 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon, switch (cmds[i]) { case SMB2_OP_QUERY_INFO: idata = in_iov[i].iov_base; + idata->contains_posix_file_info = false; if (rc == 0 && cfile && cfile->symlink_target) { idata->symlink_target = kstrdup(cfile->symlink_target, GFP_KERNEL); if (!idata->symlink_target) @@ -671,6 +672,7 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon, break; case SMB2_OP_POSIX_QUERY_INFO: idata = in_iov[i].iov_base; + idata->contains_posix_file_info = true; if (rc == 0 && cfile && cfile->symlink_target) { idata->symlink_target = kstrdup(cfile->symlink_target, GFP_KERNEL); if (!idata->symlink_target) @@ -768,6 +770,7 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon, idata = in_iov[i].iov_base; idata->reparse.io.iov = *iov; idata->reparse.io.buftype = resp_buftype[i + 1]; + idata->contains_posix_file_info = false; /* BB VERIFY */ rbuf = reparse_buf_ptr(iov); if (IS_ERR(rbuf)) { rc = PTR_ERR(rbuf); @@ -789,6 +792,7 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon, case SMB2_OP_QUERY_WSL_EA: if (!rc) { idata = in_iov[i].iov_base; + idata->contains_posix_file_info = false; qi_rsp = rsp_iov[i + 1].iov_base; data[0] = (u8 *)qi_rsp + le16_to_cpu(qi_rsp->OutputBufferOffset); size[0] = le32_to_cpu(qi_rsp->OutputBufferLength); diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c index e8da63d29a28f..516be8c0b2a9b 100644 --- a/fs/smb/client/smb2ops.c +++ b/fs/smb/client/smb2ops.c @@ -1001,6 +1001,7 @@ static int smb2_query_file_info(const unsigned int xid, struct cifs_tcon *tcon, if (!data->symlink_target) return -ENOMEM; } + data->contains_posix_file_info = false; return SMB2_query_info(xid, tcon, fid->persistent_fid, fid->volatile_fid, &data->fi); }
@@ -5177,7 +5178,7 @@ int __cifs_sfu_make_node(unsigned int xid, struct inode *inode, FILE_CREATE, CREATE_NOT_DIR | CREATE_OPTION_SPECIAL, ACL_NO_MODE); oparms.fid = &fid; - + idata.contains_posix_file_info = false; rc = server->ops->open(xid, &oparms, &oplock, &idata); if (rc) goto out;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Qu Wenruo wqu@suse.com
[ Upstream commit efa11fd269c139e29b71ec21bc9c9c0063fde40d ]
[BUG] When running generic/418 with a btrfs whose block size < page size (subpage cases), it always fails.
And the following minimal reproducer is more than enough to trigger it reliably:
workload() {
	mkfs.btrfs -s 4k -f $dev > /dev/null
	dmesg -C
	mount $dev $mnt
	$fsstree_dir/src/dio-invalidate-cache -r -b 4096 -n 3 -i 1 -f $mnt/diotest
	ret=$?
	umount $mnt
	stop_trace
	if [ $ret -ne 0 ]; then
		fail
	fi
}
for (( i = 0; i < 1024; i++)); do
	echo "=== $i/$runtime ==="
	workload
done
[CAUSE] With extra trace printk added to the following functions:

- btrfs_buffered_write()
  * Which folio is touched
  * The file offset (start) where the buffered write is at
  * How many bytes are copied
  * The content of the write (the first 2 bytes)

- submit_one_sector()
  * Which folio is touched
  * The position inside the folio
  * The content of the page cache (the first 2 bytes)

- pagecache_isize_extended()
  * The parameters of the function itself
  * The parameters of the folio_zero_range()
Which are enough to show the problem:
22.158114: btrfs_buffered_write: folio pos=0 start=0 copied=4096 content=0x0101
22.158161: submit_one_sector: r/i=5/257 folio=0 pos=0 content=0x0101
22.158609: btrfs_buffered_write: folio pos=0 start=4096 copied=4096 content=0x0101
22.158634: btrfs_buffered_write: folio pos=0 start=8192 copied=4096 content=0x0101
22.158650: pagecache_isize_extended: folio=0 from=4096 to=8192 bsize=4096 zero off=4096 len=8192
22.158682: submit_one_sector: r/i=5/257 folio=0 pos=4096 content=0x0000
22.158686: submit_one_sector: r/i=5/257 folio=0 pos=8192 content=0x0101
The tool dio-invalidate-cache starts 3 threads; each does a buffered write of 0x01 at offset 0, 4096 or 8192 respectively, an fsync, then a direct read, and compares the read buffer with the write buffer.
Note that all 3 btrfs_buffered_write() calls write the correct 0x01 into the page cache.

But by the time submit_one_sector() runs for file offset 4096, the content has been zeroed out by pagecache_isize_extended().
The race happens like this:

Thread A is writing into range [4K, 8K).
Thread B is writing into range [8K, 12K).

 Thread A                             | Thread B
--------------------------------------+--------------------------------------
 btrfs_buffered_write()               | btrfs_buffered_write()
 |- old_isize = 4K;                   | |- old_isize = 4096;
 |- btrfs_inode_lock()                | |
 |- write into folio range [4K, 8K)   | |
 |- pagecache_isize_extended()        | |
 |  extend isize from 4096 to 8192    | |
 |  no folio_zero_range() called      | |
 |- btrfs_inode_unlock()              | |
                                      | |- btrfs_inode_lock()
                                      | |- write into folio range [8K, 12K)
                                      | |- pagecache_isize_extended()
                                      | |  calling folio_zero_range(4K, 8K)
                                      | |  This is caused by the old_isize
                                      | |  being grabbed too early, without
                                      | |  any inode lock.
                                      | |- btrfs_inode_unlock()
The @old_isize is grabbed without the inode lock, causing a race between the two buffered write threads and making pagecache_isize_extended() zero a range that still contains cached data.
And this only affects subpage btrfs: in the regular block size == page size case, pagecache_isize_extended() does nothing, since it is a no-op whenever the block size >= page size.
[FIX] Grab the old i_size while holding the inode lock. This means each buffered write thread will have a stable view of the old inode size, thus avoiding the above race.
CC: stable@vger.kernel.org # 5.15+ Fixes: 5e8b9ef30392 ("btrfs: move pos increment and pagecache extension to btrfs_buffered_write") Reviewed-by: Filipe Manana fdmanana@suse.com Signed-off-by: Qu Wenruo wqu@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/btrfs/file.c | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c index 848cb2c3d9dde..78c4a3765002e 100644 --- a/fs/btrfs/file.c +++ b/fs/btrfs/file.c @@ -1200,7 +1200,7 @@ ssize_t btrfs_buffered_write(struct kiocb *iocb, struct iov_iter *i) ssize_t ret; bool only_release_metadata = false; bool force_page_uptodate = false; - loff_t old_isize = i_size_read(inode); + loff_t old_isize; unsigned int ilock_flags = 0; const bool nowait = (iocb->ki_flags & IOCB_NOWAIT); unsigned int bdp_flags = (nowait ? BDP_ASYNC : 0); @@ -1212,6 +1212,13 @@ ssize_t btrfs_buffered_write(struct kiocb *iocb, struct iov_iter *i) if (ret < 0) return ret;
+ /* + * We can only trust the isize with inode lock held, or it can race with + * other buffered writes and cause incorrect call of + * pagecache_isize_extended() to overwrite existing data. + */ + old_isize = i_size_read(inode); + ret = generic_write_checks(iocb, i); if (ret <= 0) goto out;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Borislav Petkov (AMD) bp@alien8.de
commit 058a6bec37c6c3b826158f6d26b75de43816a880 upstream.
Add some more forgotten models to the SHA check.
Fixes: 50cef76d5cb0 ("x86/microcode/AMD: Load only SHA256-checksummed patches") Reported-by: Toralf Förster toralf.foerster@gmx.de Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Signed-off-by: Ingo Molnar mingo@kernel.org Tested-by: Toralf Förster toralf.foerster@gmx.de Link: https://lore.kernel.org/r/20250307220256.11816-1-bp@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/microcode/amd.c | 6 ++++++ 1 file changed, 6 insertions(+)
--- a/arch/x86/kernel/cpu/microcode/amd.c +++ b/arch/x86/kernel/cpu/microcode/amd.c @@ -175,23 +175,29 @@ static bool need_sha_check(u32 cur_rev) { switch (cur_rev >> 8) { case 0x80012: return cur_rev <= 0x800126f; break; + case 0x80082: return cur_rev <= 0x800820f; break; case 0x83010: return cur_rev <= 0x830107c; break; case 0x86001: return cur_rev <= 0x860010e; break; case 0x86081: return cur_rev <= 0x8608108; break; case 0x87010: return cur_rev <= 0x8701034; break; case 0x8a000: return cur_rev <= 0x8a0000a; break; + case 0xa0010: return cur_rev <= 0xa00107a; break; case 0xa0011: return cur_rev <= 0xa0011da; break; case 0xa0012: return cur_rev <= 0xa001243; break; + case 0xa0082: return cur_rev <= 0xa00820e; break; case 0xa1011: return cur_rev <= 0xa101153; break; case 0xa1012: return cur_rev <= 0xa10124e; break; case 0xa1081: return cur_rev <= 0xa108109; break; case 0xa2010: return cur_rev <= 0xa20102f; break; case 0xa2012: return cur_rev <= 0xa201212; break; + case 0xa4041: return cur_rev <= 0xa404109; break; + case 0xa5000: return cur_rev <= 0xa500013; break; case 0xa6012: return cur_rev <= 0xa60120a; break; case 0xa7041: return cur_rev <= 0xa704109; break; case 0xa7052: return cur_rev <= 0xa705208; break; case 0xa7080: return cur_rev <= 0xa708009; break; case 0xa70c0: return cur_rev <= 0xa70C009; break; + case 0xaa001: return cur_rev <= 0xaa00116; break; case 0xaa002: return cur_rev <= 0xaa00218; break; default: break; }
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Peter Zijlstra peterz@infradead.org
commit 624bde3465f660e54a7cd4c1efc3e536349fead5 upstream.
annotate_reachable() is unreliable since the compiler is free to place random code in between two consecutive asm() statements.
This removes the last and only annotate_reachable() user.
Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Acked-by: Josh Poimboeuf jpoimboe@kernel.org Link: https://lore.kernel.org/r/20241128094312.133437051@infradead.org Closes: https://lore.kernel.org/loongarch/20250307214943.372210-1-ojeda@kernel.org/ Signed-off-by: Huacai Chen chenhuacai@loongson.cn Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/loongarch/include/asm/bug.h | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-)
--- a/arch/loongarch/include/asm/bug.h +++ b/arch/loongarch/include/asm/bug.h @@ -4,6 +4,7 @@
#include <asm/break.h> #include <linux/stringify.h> +#include <linux/objtool.h>
#ifndef CONFIG_DEBUG_BUGVERBOSE #define _BUGVERBOSE_LOCATION(file, line) @@ -33,25 +34,25 @@
#define ASM_BUG_FLAGS(flags) \ __BUG_ENTRY(flags) \ - break BRK_BUG + break BRK_BUG;
#define ASM_BUG() ASM_BUG_FLAGS(0)
-#define __BUG_FLAGS(flags) \ - asm_inline volatile (__stringify(ASM_BUG_FLAGS(flags))); +#define __BUG_FLAGS(flags, extra) \ + asm_inline volatile (__stringify(ASM_BUG_FLAGS(flags)) \ + extra);
#define __WARN_FLAGS(flags) \ do { \ instrumentation_begin(); \ - __BUG_FLAGS(BUGFLAG_WARNING|(flags)); \ - annotate_reachable(); \ + __BUG_FLAGS(BUGFLAG_WARNING|(flags), ASM_REACHABLE); \ instrumentation_end(); \ } while (0)
#define BUG() \ do { \ instrumentation_begin(); \ - __BUG_FLAGS(0); \ + __BUG_FLAGS(0, ""); \ unreachable(); \ } while (0)
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Miguel Ojeda ojeda@kernel.org
commit 024f9676a6d236132119832a90fb9a1a9115b41a upstream.
Perform the same cleanup that commit b2516f7af9d2 ("rust: kernel: remove `#[allow(clippy::new_ret_no_self)]`") did, for a case that appeared in parallel in workqueue in commit 7324b88975c5 ("rust: workqueue: add helper for defining work_struct fields"):
Clippy triggered a false positive on its `new_ret_no_self` lint when using the `pin_init!` macro. Since Rust 1.67.0, that no longer happens, because Clippy learned not to warn about `-> impl Trait<Self>` [1][2].
The kernel nowadays uses Rust 1.72.1, thus remove the `#[allow]`.
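For reference, the shape that used to trip the lint is roughly the following stand-alone sketch; the `Init` trait here is a simplified stand-in for the kernel's `PinInit`, not the real API:

    trait Init<T> {
        fn init(self) -> T;
    }

    struct Work {
        name: &'static str,
    }

    impl Work {
        // Older Clippy flagged `new()` here because it does not literally
        // return `Self`; since Rust 1.67.0 it understands that
        // `impl Init<Self>` ultimately produces one.
        fn new(name: &'static str) -> impl Init<Self> {
            struct WorkInit(&'static str);
            impl Init<Work> for WorkInit {
                fn init(self) -> Work {
                    Work { name: self.0 }
                }
            }
            WorkInit(name)
        }
    }

    fn main() {
        let w = Work::new("example").init();
        assert_eq!(w.name, "example");
    }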
Link: https://github.com/rust-lang/rust-clippy/issues/7344 [1] Link: https://github.com/rust-lang/rust-clippy/pull/9733 [2]
Reviewed-by: Alice Ryhl aliceryhl@google.com Reviewed-by: Trevor Gross tmgross@umich.edu Tested-by: Gary Guo gary@garyguo.net Reviewed-by: Gary Guo gary@garyguo.net Link: https://lore.kernel.org/r/20240904204347.168520-2-ojeda@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/kernel/workqueue.rs | 1 - 1 file changed, 1 deletion(-)
--- a/rust/kernel/workqueue.rs +++ b/rust/kernel/workqueue.rs @@ -366,7 +366,6 @@ unsafe impl<T: ?Sized, const ID: u64> Sy impl<T: ?Sized, const ID: u64> Work<T, ID> { /// Creates a new instance of [`Work`]. #[inline] - #[allow(clippy::new_ret_no_self)] pub fn new(name: &'static CStr, key: &'static LockClassKey) -> impl PinInit<Self> where T: WorkItem<ID>,
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Miguel Ojeda ojeda@kernel.org
commit a135aa3d30d28f26eb28a0ff5d48b387b0e0755f upstream.
Sort the global Rust flags so that it is easier to follow along when we have more, like this patch series does.
Reviewed-by: Trevor Gross tmgross@umich.edu Reviewed-by: Alice Ryhl aliceryhl@google.com Tested-by: Gary Guo gary@garyguo.net Reviewed-by: Gary Guo gary@garyguo.net Link: https://lore.kernel.org/r/20240904204347.168520-3-ojeda@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- Makefile | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-)
--- a/Makefile +++ b/Makefile @@ -446,19 +446,19 @@ KBUILD_USERLDFLAGS := $(USERLDFLAGS) export rust_common_flags := --edition=2021 \ -Zbinary_dep_depinfo=y \ -Astable_features \ - -Dunsafe_op_in_unsafe_fn \ -Dnon_ascii_idents \ + -Dunsafe_op_in_unsafe_fn \ + -Wmissing_docs \ -Wrust_2018_idioms \ -Wunreachable_pub \ - -Wmissing_docs \ - -Wrustdoc::missing_crate_level_docs \ -Wclippy::all \ + -Wclippy::dbg_macro \ -Wclippy::mut_mut \ -Wclippy::needless_bitwise_bool \ -Wclippy::needless_continue \ -Aclippy::needless_lifetimes \ -Wclippy::no_mangle_with_rust_abi \ - -Wclippy::dbg_macro + -Wrustdoc::missing_crate_level_docs
KBUILD_HOSTCFLAGS := $(KBUILD_USERHOSTCFLAGS) $(HOST_LFS_CFLAGS) \ $(HOSTCFLAGS) -I $(srctree)/scripts/include
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Miguel Ojeda ojeda@kernel.org
commit 567cdff53e71de56ae67eaf4309db38778b7bcd3 upstream.
In order to provide `// SAFETY` comments for every `unsafe impl`, we would need to repeat them, which is not very useful and would be harder to read.
We could perhaps allow the lint (ideally within a small module), but we can take the chance to avoid the repetition of the `impl`s themselves too by using a small local macro, like in other places where we have had to do this sort of thing.
Thus add the straightforward `impl_{from,as}bytes!` macros and use them to implement `FromBytes`.
This, in turn, will allow us in the next patch to place a `// SAFETY` comment that defers to the actual invocation of the macro.
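As a rough illustration, the macro turns one invocation into the individual `unsafe impl`s; this is a stand-alone sketch using a local stand-in trait, not the kernel crate:

    unsafe trait FromBytes {}

    macro_rules! impl_frombytes {
        ($($({$($generics:tt)*})? $t:ty, )*) => {
            // Expands to one `unsafe impl ... FromBytes for ...` per entry.
            $(unsafe impl$($($generics)*)? FromBytes for $t {})*
        };
    }

    impl_frombytes! {
        u8, u16, u32, u64, usize,
        {<T: FromBytes>} [T],
        {<T: FromBytes, const N: usize>} [T; N],
    }

    fn main() {}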
Reviewed-by: Alice Ryhl aliceryhl@google.com Reviewed-by: Trevor Gross tmgross@umich.edu Tested-by: Gary Guo gary@garyguo.net Reviewed-by: Gary Guo gary@garyguo.net Link: https://lore.kernel.org/r/20240904204347.168520-4-ojeda@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/kernel/types.rs | 68 ++++++++++++++++++++++++++------------------------- 1 file changed, 35 insertions(+), 33 deletions(-)
--- a/rust/kernel/types.rs +++ b/rust/kernel/types.rs @@ -481,21 +481,22 @@ pub enum Either<L, R> { /// All bit-patterns must be valid for this type. This type must not have interior mutability. pub unsafe trait FromBytes {}
-// SAFETY: All bit patterns are acceptable values of the types below. -unsafe impl FromBytes for u8 {} -unsafe impl FromBytes for u16 {} -unsafe impl FromBytes for u32 {} -unsafe impl FromBytes for u64 {} -unsafe impl FromBytes for usize {} -unsafe impl FromBytes for i8 {} -unsafe impl FromBytes for i16 {} -unsafe impl FromBytes for i32 {} -unsafe impl FromBytes for i64 {} -unsafe impl FromBytes for isize {} -// SAFETY: If all bit patterns are acceptable for individual values in an array, then all bit -// patterns are also acceptable for arrays of that type. -unsafe impl<T: FromBytes> FromBytes for [T] {} -unsafe impl<T: FromBytes, const N: usize> FromBytes for [T; N] {} +macro_rules! impl_frombytes { + ($($({$($generics:tt)*})? $t:ty, )*) => { + $(unsafe impl$($($generics)*)? FromBytes for $t {})* + }; +} + +impl_frombytes! { + // SAFETY: All bit patterns are acceptable values of the types below. + u8, u16, u32, u64, usize, + i8, i16, i32, i64, isize, + + // SAFETY: If all bit patterns are acceptable for individual values in an array, then all bit + // patterns are also acceptable for arrays of that type. + {<T: FromBytes>} [T], + {<T: FromBytes, const N: usize>} [T; N], +}
/// Types that can be viewed as an immutable slice of initialized bytes. /// @@ -514,21 +515,22 @@ unsafe impl<T: FromBytes, const N: usize /// mutability. pub unsafe trait AsBytes {}
-// SAFETY: Instances of the following types have no uninitialized portions. -unsafe impl AsBytes for u8 {} -unsafe impl AsBytes for u16 {} -unsafe impl AsBytes for u32 {} -unsafe impl AsBytes for u64 {} -unsafe impl AsBytes for usize {} -unsafe impl AsBytes for i8 {} -unsafe impl AsBytes for i16 {} -unsafe impl AsBytes for i32 {} -unsafe impl AsBytes for i64 {} -unsafe impl AsBytes for isize {} -unsafe impl AsBytes for bool {} -unsafe impl AsBytes for char {} -unsafe impl AsBytes for str {} -// SAFETY: If individual values in an array have no uninitialized portions, then the array itself -// does not have any uninitialized portions either. -unsafe impl<T: AsBytes> AsBytes for [T] {} -unsafe impl<T: AsBytes, const N: usize> AsBytes for [T; N] {} +macro_rules! impl_asbytes { + ($($({$($generics:tt)*})? $t:ty, )*) => { + $(unsafe impl$($($generics)*)? AsBytes for $t {})* + }; +} + +impl_asbytes! { + // SAFETY: Instances of the following types have no uninitialized portions. + u8, u16, u32, u64, usize, + i8, i16, i32, i64, isize, + bool, + char, + str, + + // SAFETY: If individual values in an array have no uninitialized portions, then the array + // itself does not have any uninitialized portions either. + {<T: AsBytes>} [T], + {<T: AsBytes, const N: usize>} [T; N], +}
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Miguel Ojeda ojeda@kernel.org
commit db4f72c904cb116e2bf56afdd67fc5167a607a7b upstream.
Checking that we are not missing any `// SAFETY` comments in our `unsafe` blocks is something we have wanted to do for a long time, as well as cleaning up the remaining cases that were not documented [1].
Back when Rust for Linux started, this was something that could have been done via a script, like Rust's `tidy`. Soon after, in Rust 1.58.0, Clippy implemented the `undocumented_unsafe_blocks` lint [2].
Even though the lint has a few false positives, e.g. in some cases where attributes appear between the comment and the `unsafe` block [3], there are workarounds and the lint seems quite usable already.
Thus enable the lint now.
We still have a few cases to clean up, so just allow those for the moment by writing a `TODO` comment -- some of those may be good candidates for new contributors.
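For illustration, this is a stand-alone sketch (not kernel code) of the shape the lint enforces: every `unsafe` block gets a `// SAFETY` comment, or a `// SAFETY: TODO.` placeholder for the cases deferred to later:

    fn first_byte(v: &[u8]) -> u8 {
        assert!(!v.is_empty());
        // SAFETY: `v` is non-empty (checked by the assert above), so index 0
        // is in bounds.
        unsafe { *v.get_unchecked(0) }
    }

    fn main() {
        assert_eq!(first_byte(b"kernel"), b'k');
    }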
Link: https://github.com/Rust-for-Linux/linux/issues/351 [1] Link: https://rust-lang.github.io/rust-clippy/master/#/undocumented_unsafe_blocks [2] Link: https://github.com/rust-lang/rust-clippy/issues/13189 [3] Reviewed-by: Alice Ryhl aliceryhl@google.com Reviewed-by: Trevor Gross tmgross@umich.edu Tested-by: Gary Guo gary@garyguo.net Reviewed-by: Gary Guo gary@garyguo.net Link: https://lore.kernel.org/r/20240904204347.168520-5-ojeda@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- Makefile | 1 + mm/kasan/kasan_test_rust.rs | 1 + rust/bindings/lib.rs | 1 + rust/kernel/alloc/allocator.rs | 2 ++ rust/kernel/error.rs | 9 ++++++--- rust/kernel/init.rs | 5 +++++ rust/kernel/init/__internal.rs | 2 ++ rust/kernel/init/macros.rs | 9 +++++++++ rust/kernel/list.rs | 1 + rust/kernel/print.rs | 2 ++ rust/kernel/str.rs | 7 ++++--- rust/kernel/sync/condvar.rs | 2 +- rust/kernel/sync/lock.rs | 6 +++--- rust/kernel/types.rs | 4 ++++ rust/kernel/workqueue.rs | 4 ++++ rust/uapi/lib.rs | 1 + 16 files changed, 47 insertions(+), 10 deletions(-)
--- a/Makefile +++ b/Makefile @@ -458,6 +458,7 @@ export rust_common_flags := --edition=20 -Wclippy::needless_continue \ -Aclippy::needless_lifetimes \ -Wclippy::no_mangle_with_rust_abi \ + -Wclippy::undocumented_unsafe_blocks \ -Wrustdoc::missing_crate_level_docs
KBUILD_HOSTCFLAGS := $(KBUILD_USERHOSTCFLAGS) $(HOST_LFS_CFLAGS) \ --- a/mm/kasan/kasan_test_rust.rs +++ b/mm/kasan/kasan_test_rust.rs @@ -17,5 +17,6 @@ pub extern "C" fn kasan_test_rust_uaf() } let ptr: *mut u8 = addr_of_mut!(v[2048]); drop(v); + // SAFETY: Incorrect, on purpose. unsafe { *ptr } } --- a/rust/bindings/lib.rs +++ b/rust/bindings/lib.rs @@ -25,6 +25,7 @@ )]
#[allow(dead_code)] +#[allow(clippy::undocumented_unsafe_blocks)] mod bindings_raw { // Use glob import here to expose all helpers. // Symbols defined within the module will take precedence to the glob import. --- a/rust/kernel/alloc/allocator.rs +++ b/rust/kernel/alloc/allocator.rs @@ -31,6 +31,7 @@ pub(crate) unsafe fn krealloc_aligned(pt unsafe { bindings::krealloc(ptr as *const core::ffi::c_void, size, flags.0) as *mut u8 } }
+// SAFETY: TODO. unsafe impl GlobalAlloc for KernelAllocator { unsafe fn alloc(&self, layout: Layout) -> *mut u8 { // SAFETY: `ptr::null_mut()` is null and `layout` has a non-zero size by the function safety @@ -39,6 +40,7 @@ unsafe impl GlobalAlloc for KernelAlloca }
unsafe fn dealloc(&self, ptr: *mut u8, _layout: Layout) { + // SAFETY: TODO. unsafe { bindings::kfree(ptr as *const core::ffi::c_void); } --- a/rust/kernel/error.rs +++ b/rust/kernel/error.rs @@ -171,9 +171,11 @@ impl fmt::Debug for Error { match self.name() { // Print out number if no name can be found. None => f.debug_tuple("Error").field(&-self.0).finish(), - // SAFETY: These strings are ASCII-only. Some(name) => f - .debug_tuple(unsafe { core::str::from_utf8_unchecked(name) }) + .debug_tuple( + // SAFETY: These strings are ASCII-only. + unsafe { core::str::from_utf8_unchecked(name) }, + ) .finish(), } } @@ -277,6 +279,8 @@ pub(crate) fn from_err_ptr<T>(ptr: *mut if unsafe { bindings::IS_ERR(const_ptr) } { // SAFETY: The FFI function does not deref the pointer. let err = unsafe { bindings::PTR_ERR(const_ptr) }; + + #[allow(clippy::unnecessary_cast)] // CAST: If `IS_ERR()` returns `true`, // then `PTR_ERR()` is guaranteed to return a // negative value greater-or-equal to `-bindings::MAX_ERRNO`, @@ -286,7 +290,6 @@ pub(crate) fn from_err_ptr<T>(ptr: *mut // // SAFETY: `IS_ERR()` ensures `err` is a // negative value greater-or-equal to `-bindings::MAX_ERRNO`. - #[allow(clippy::unnecessary_cast)] return Err(unsafe { Error::from_errno_unchecked(err as core::ffi::c_int) }); } Ok(ptr) --- a/rust/kernel/init.rs +++ b/rust/kernel/init.rs @@ -541,6 +541,7 @@ macro_rules! stack_try_pin_init { /// } /// pin_init!(&this in Buf { /// buf: [0; 64], +/// // SAFETY: TODO. /// ptr: unsafe { addr_of_mut!((*this.as_ptr()).buf).cast() }, /// pin: PhantomPinned, /// }); @@ -875,6 +876,7 @@ pub unsafe trait PinInit<T: ?Sized, E = /// } /// /// let foo = pin_init!(Foo { + /// // SAFETY: TODO. /// raw <- unsafe { /// Opaque::ffi_init(|s| { /// init_foo(s); @@ -1162,6 +1164,7 @@ where // SAFETY: Every type can be initialized by-value. unsafe impl<T, E> Init<T, E> for T { unsafe fn __init(self, slot: *mut T) -> Result<(), E> { + // SAFETY: TODO. unsafe { slot.write(self) }; Ok(()) } @@ -1170,6 +1173,7 @@ unsafe impl<T, E> Init<T, E> for T { // SAFETY: Every type can be initialized by-value. `__pinned_init` calls `__init`. unsafe impl<T, E> PinInit<T, E> for T { unsafe fn __pinned_init(self, slot: *mut T) -> Result<(), E> { + // SAFETY: TODO. unsafe { self.__init(slot) } } } @@ -1411,6 +1415,7 @@ pub fn zeroed<T: Zeroable>() -> impl Ini
macro_rules! impl_zeroable { ($($({$($generics:tt)*})? $t:ty, )*) => { + // SAFETY: Safety comments written in the macro invocation. $(unsafe impl$($($generics)*)? Zeroable for $t {})* }; } --- a/rust/kernel/init/__internal.rs +++ b/rust/kernel/init/__internal.rs @@ -112,10 +112,12 @@ impl<T: ?Sized> Clone for AllData<T> {
impl<T: ?Sized> Copy for AllData<T> {}
+// SAFETY: TODO. unsafe impl<T: ?Sized> InitData for AllData<T> { type Datee = T; }
+// SAFETY: TODO. unsafe impl<T: ?Sized> HasInitData for T { type InitData = AllData<T>;
--- a/rust/kernel/init/macros.rs +++ b/rust/kernel/init/macros.rs @@ -513,6 +513,7 @@ macro_rules! __pinned_drop { } ), ) => { + // SAFETY: TODO. unsafe $($impl_sig)* { // Inherit all attributes and the type/ident tokens for the signature. $(#[$($attr)*])* @@ -872,6 +873,7 @@ macro_rules! __pin_data { } }
+ // SAFETY: TODO. unsafe impl<$($impl_generics)*> $crate::init::__internal::PinData for __ThePinData<$($ty_generics)*> where $($whr)* @@ -997,6 +999,7 @@ macro_rules! __pin_data { slot: *mut $p_type, init: impl $crate::init::PinInit<$p_type, E>, ) -> ::core::result::Result<(), E> { + // SAFETY: TODO. unsafe { $crate::init::PinInit::__pinned_init(init, slot) } } )* @@ -1007,6 +1010,7 @@ macro_rules! __pin_data { slot: *mut $type, init: impl $crate::init::Init<$type, E>, ) -> ::core::result::Result<(), E> { + // SAFETY: TODO. unsafe { $crate::init::Init::__init(init, slot) } } )* @@ -1121,6 +1125,8 @@ macro_rules! __init_internal { // no possibility of returning without `unsafe`. struct __InitOk; // Get the data about fields from the supplied type. + // + // SAFETY: TODO. let data = unsafe { use $crate::init::__internal::$has_data; // Here we abuse `paste!` to retokenize `$t`. Declarative macros have some internal @@ -1176,6 +1182,7 @@ macro_rules! __init_internal { let init = move |slot| -> ::core::result::Result<(), $err> { init(slot).map(|__InitOk| ()) }; + // SAFETY: TODO. let init = unsafe { $crate::init::$construct_closure::<_, $err>(init) }; init }}; @@ -1324,6 +1331,8 @@ macro_rules! __init_internal { // Endpoint, nothing more to munch, create the initializer. // Since we are in the closure that is never called, this will never get executed. // We abuse `slot` to get the correct type inference here: + // + // SAFETY: TODO. unsafe { // Here we abuse `paste!` to retokenize `$t`. Declarative macros have some internal // information that is associated to already parsed fragments, so a path fragment --- a/rust/kernel/list.rs +++ b/rust/kernel/list.rs @@ -354,6 +354,7 @@ impl<T: ?Sized + ListItem<ID>, const ID: /// /// `item` must not be in a different linked list (with the same id). pub unsafe fn remove(&mut self, item: &T) -> Option<ListArc<T, ID>> { + // SAFETY: TODO. let mut item = unsafe { ListLinks::fields(T::view_links(item)) }; // SAFETY: The user provided a reference, and reference are never dangling. // --- a/rust/kernel/print.rs +++ b/rust/kernel/print.rs @@ -23,6 +23,7 @@ unsafe extern "C" fn rust_fmt_argument( use fmt::Write; // SAFETY: The C contract guarantees that `buf` is valid if it's less than `end`. let mut w = unsafe { RawFormatter::from_ptrs(buf.cast(), end.cast()) }; + // SAFETY: TODO. let _ = w.write_fmt(unsafe { *(ptr as *const fmt::Arguments<'_>) }); w.pos().cast() } @@ -102,6 +103,7 @@ pub unsafe fn call_printk( ) { // `_printk` does not seem to fail in any path. #[cfg(CONFIG_PRINTK)] + // SAFETY: TODO. unsafe { bindings::_printk( format_string.as_ptr() as _, --- a/rust/kernel/str.rs +++ b/rust/kernel/str.rs @@ -162,10 +162,10 @@ impl CStr { /// Returns the length of this string with `NUL`. #[inline] pub const fn len_with_nul(&self) -> usize { - // SAFETY: This is one of the invariant of `CStr`. - // We add a `unreachable_unchecked` here to hint the optimizer that - // the value returned from this function is non-zero. if self.0.is_empty() { + // SAFETY: This is one of the invariant of `CStr`. + // We add a `unreachable_unchecked` here to hint the optimizer that + // the value returned from this function is non-zero. unsafe { core::hint::unreachable_unchecked() }; } self.0.len() @@ -301,6 +301,7 @@ impl CStr { /// ``` #[inline] pub unsafe fn as_str_unchecked(&self) -> &str { + // SAFETY: TODO. unsafe { core::str::from_utf8_unchecked(self.as_bytes()) } }
--- a/rust/kernel/sync/condvar.rs +++ b/rust/kernel/sync/condvar.rs @@ -92,8 +92,8 @@ pub struct CondVar { _pin: PhantomPinned, }
-// SAFETY: `CondVar` only uses a `struct wait_queue_head`, which is safe to use on any thread. #[allow(clippy::non_send_fields_in_send_ty)] +// SAFETY: `CondVar` only uses a `struct wait_queue_head`, which is safe to use on any thread. unsafe impl Send for CondVar {}
// SAFETY: `CondVar` only uses a `struct wait_queue_head`, which is safe to use on multiple threads --- a/rust/kernel/sync/lock.rs +++ b/rust/kernel/sync/lock.rs @@ -150,9 +150,9 @@ impl<T: ?Sized, B: Backend> Guard<'_, T, // SAFETY: The caller owns the lock, so it is safe to unlock it. unsafe { B::unlock(self.lock.state.get(), &self.state) };
- // SAFETY: The lock was just unlocked above and is being relocked now. - let _relock = - ScopeGuard::new(|| unsafe { B::relock(self.lock.state.get(), &mut self.state) }); + let _relock = ScopeGuard::new(|| + // SAFETY: The lock was just unlocked above and is being relocked now. + unsafe { B::relock(self.lock.state.get(), &mut self.state) });
cb() } --- a/rust/kernel/types.rs +++ b/rust/kernel/types.rs @@ -410,6 +410,7 @@ impl<T: AlwaysRefCounted> ARef<T> { /// /// struct Empty {} /// + /// # // SAFETY: TODO. /// unsafe impl AlwaysRefCounted for Empty { /// fn inc_ref(&self) {} /// unsafe fn dec_ref(_obj: NonNull<Self>) {} @@ -417,6 +418,7 @@ impl<T: AlwaysRefCounted> ARef<T> { /// /// let mut data = Empty {}; /// let ptr = NonNull::<Empty>::new(&mut data as *mut _).unwrap(); + /// # // SAFETY: TODO. /// let data_ref: ARef<Empty> = unsafe { ARef::from_raw(ptr) }; /// let raw_ptr: NonNull<Empty> = ARef::into_raw(data_ref); /// @@ -483,6 +485,7 @@ pub unsafe trait FromBytes {}
macro_rules! impl_frombytes { ($($({$($generics:tt)*})? $t:ty, )*) => { + // SAFETY: Safety comments written in the macro invocation. $(unsafe impl$($($generics)*)? FromBytes for $t {})* }; } @@ -517,6 +520,7 @@ pub unsafe trait AsBytes {}
macro_rules! impl_asbytes { ($($({$($generics:tt)*})? $t:ty, )*) => { + // SAFETY: Safety comments written in the macro invocation. $(unsafe impl$($($generics)*)? AsBytes for $t {})* }; } --- a/rust/kernel/workqueue.rs +++ b/rust/kernel/workqueue.rs @@ -519,6 +519,7 @@ impl_has_work! { impl{T} HasWork<Self> for ClosureWork<T> { self.work } }
+// SAFETY: TODO. unsafe impl<T, const ID: u64> WorkItemPointer<ID> for Arc<T> where T: WorkItem<ID, Pointer = Self>, @@ -536,6 +537,7 @@ where } }
+// SAFETY: TODO. unsafe impl<T, const ID: u64> RawWorkItem<ID> for Arc<T> where T: WorkItem<ID, Pointer = Self>, @@ -564,6 +566,7 @@ where } }
+// SAFETY: TODO. unsafe impl<T, const ID: u64> WorkItemPointer<ID> for Pin<Box<T>> where T: WorkItem<ID, Pointer = Self>, @@ -583,6 +586,7 @@ where } }
+// SAFETY: TODO. unsafe impl<T, const ID: u64> RawWorkItem<ID> for Pin<Box<T>> where T: WorkItem<ID, Pointer = Self>, --- a/rust/uapi/lib.rs +++ b/rust/uapi/lib.rs @@ -14,6 +14,7 @@ #![cfg_attr(test, allow(unsafe_op_in_unsafe_fn))] #![allow( clippy::all, + clippy::undocumented_unsafe_blocks, dead_code, missing_docs, non_camel_case_types,
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Miguel Ojeda ojeda@kernel.org
commit c28bfe76e4ba707775a205b0274710de7aa1e31c upstream.
In Rust 1.67.0, Clippy added the `unnecessary_safety_comment` lint [1], which is the "inverse" of `undocumented_unsafe_blocks`: it finds places where safe code has a `// SAFETY` comment attached.
The lint currently finds 3 places where we had such mistakes, so it already seems quite useful.
Thus clean those and enable it.
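For illustration, this is roughly the kind of thing the lint flags -- a `// SAFETY` comment attached to code that is not `unsafe` (a stand-alone sketch, not one of the kernel cases):

    fn main() {
        // SAFETY: not actually needed -- this is safe code, so with the lint
        // enabled Clippy flags a comment like this one as unnecessary.
        let answer = 40 + 2;
        assert_eq!(answer, 42);
    }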
Link: https://rust-lang.github.io/rust-clippy/master/index.html#/unnecessary_safet... [1] Reviewed-by: Alice Ryhl aliceryhl@google.com Reviewed-by: Trevor Gross tmgross@umich.edu Reviewed-by: Vincenzo Palazzo vincenzopalazzodev@gmail.com Tested-by: Gary Guo gary@garyguo.net Reviewed-by: Gary Guo gary@garyguo.net Link: https://lore.kernel.org/r/20240904204347.168520-6-ojeda@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- Makefile | 1 + rust/kernel/sync/arc.rs | 2 +- rust/kernel/workqueue.rs | 4 ++-- 3 files changed, 4 insertions(+), 3 deletions(-)
--- a/Makefile +++ b/Makefile @@ -459,6 +459,7 @@ export rust_common_flags := --edition=20 -Aclippy::needless_lifetimes \ -Wclippy::no_mangle_with_rust_abi \ -Wclippy::undocumented_unsafe_blocks \ + -Wclippy::unnecessary_safety_comment \ -Wrustdoc::missing_crate_level_docs
KBUILD_HOSTCFLAGS := $(KBUILD_USERHOSTCFLAGS) $(HOST_LFS_CFLAGS) \ --- a/rust/kernel/sync/arc.rs +++ b/rust/kernel/sync/arc.rs @@ -338,7 +338,7 @@ impl<T: 'static> ForeignOwnable for Arc< }
unsafe fn borrow<'a>(ptr: *const core::ffi::c_void) -> ArcBorrow<'a, T> { - // SAFETY: By the safety requirement of this function, we know that `ptr` came from + // By the safety requirement of this function, we know that `ptr` came from // a previous call to `Arc::into_foreign`. let inner = NonNull::new(ptr as *mut ArcInner<T>).unwrap();
--- a/rust/kernel/workqueue.rs +++ b/rust/kernel/workqueue.rs @@ -526,7 +526,7 @@ where T: HasWork<T, ID>, { unsafe extern "C" fn run(ptr: *mut bindings::work_struct) { - // SAFETY: The `__enqueue` method always uses a `work_struct` stored in a `Work<T, ID>`. + // The `__enqueue` method always uses a `work_struct` stored in a `Work<T, ID>`. let ptr = ptr as *mut Work<T, ID>; // SAFETY: This computes the pointer that `__enqueue` got from `Arc::into_raw`. let ptr = unsafe { T::work_container_of(ptr) }; @@ -573,7 +573,7 @@ where T: HasWork<T, ID>, { unsafe extern "C" fn run(ptr: *mut bindings::work_struct) { - // SAFETY: The `__enqueue` method always uses a `work_struct` stored in a `Work<T, ID>`. + // The `__enqueue` method always uses a `work_struct` stored in a `Work<T, ID>`. let ptr = ptr as *mut Work<T, ID>; // SAFETY: This computes the pointer that `__enqueue` got from `Arc::into_raw`. let ptr = unsafe { T::work_container_of(ptr) };
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Miguel Ojeda ojeda@kernel.org
commit 23f42dc054b3c550373eae0c9ae97f1ce1501e0a upstream.
In Rust 1.67.0, Clippy added the `unnecessary_safety_doc` lint [1], which is similar to `unnecessary_safety_comment`, but for `# Safety` sections, i.e. safety preconditions in the documentation.
This is something that should not happen with our coding guidelines in mind. Thus enable the lint to have it machine-checked.
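For illustration, a minimal sketch of what the lint catches -- a `# Safety` section in the documentation of a function that is not `unsafe` (not an actual kernel item):

    /// Doubles the given value.
    ///
    /// # Safety
    ///
    /// A section like this on a safe function is what
    /// `clippy::unnecessary_safety_doc` warns about.
    pub fn double(x: u32) -> u32 {
        x * 2
    }

    fn main() {
        assert_eq!(double(21), 42);
    }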
Link: https://rust-lang.github.io/rust-clippy/master/index.html#/unnecessary_safet... [1] Reviewed-by: Trevor Gross tmgross@umich.edu Reviewed-by: Alice Ryhl aliceryhl@google.com Tested-by: Gary Guo gary@garyguo.net Reviewed-by: Gary Guo gary@garyguo.net Link: https://lore.kernel.org/r/20240904204347.168520-7-ojeda@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- Makefile | 1 + 1 file changed, 1 insertion(+)
--- a/Makefile +++ b/Makefile @@ -460,6 +460,7 @@ export rust_common_flags := --edition=20 -Wclippy::no_mangle_with_rust_abi \ -Wclippy::undocumented_unsafe_blocks \ -Wclippy::unnecessary_safety_comment \ + -Wclippy::unnecessary_safety_doc \ -Wrustdoc::missing_crate_level_docs
KBUILD_HOSTCFLAGS := $(KBUILD_USERHOSTCFLAGS) $(HOST_LFS_CFLAGS) \
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Miguel Ojeda ojeda@kernel.org
commit 3fcc23397628c2357dbe66df59644e09f72ac725 upstream.
In Rust 1.73.0, Clippy introduced the `ignored_unit_patterns` lint [1]:
Matching with `()` explicitly instead of `_` outlines the fact that the pattern contains no data. Also it would detect a type change that `_` would ignore.
There is only a single case that requires a change:
    error: matching over `()` is more explicit
       --> rust/kernel/types.rs:176:45
        |
    176 |         ScopeGuard::new_with_data((), move |_| cleanup())
        |                                             ^ help: use `()` instead of `_`: `()`
        |
        = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#ignored_unit_patte...
        = note: requested on the command line with `-D clippy::ignored-unit-patterns`
Thus clean it up and enable the lint -- no functional change intended.
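For illustration, a minimal stand-alone sketch of the pattern (mirroring the shape of the `ScopeGuard` call below, but not the kernel code):

    fn with_cleanup<T, F: FnOnce(T)>(data: T, cleanup: F) -> (T, F) {
        (data, cleanup)
    }

    fn main() {
        // Writing `|()|` instead of `|_|` makes it explicit that the closure
        // parameter is the unit type and carries no data.
        let (data, cleanup) = with_cleanup((), move |()| println!("cleaning up"));
        cleanup(data);
    }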
Link: https://rust-lang.github.io/rust-clippy/master/index.html#/ignored_unit_patt... [1] Reviewed-by: Alice Ryhl aliceryhl@google.com Reviewed-by: Trevor Gross tmgross@umich.edu Tested-by: Gary Guo gary@garyguo.net Reviewed-by: Gary Guo gary@garyguo.net Link: https://lore.kernel.org/r/20240904204347.168520-8-ojeda@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- Makefile | 1 + rust/kernel/types.rs | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-)
--- a/Makefile +++ b/Makefile @@ -453,6 +453,7 @@ export rust_common_flags := --edition=20 -Wunreachable_pub \ -Wclippy::all \ -Wclippy::dbg_macro \ + -Wclippy::ignored_unit_patterns \ -Wclippy::mut_mut \ -Wclippy::needless_bitwise_bool \ -Wclippy::needless_continue \ --- a/rust/kernel/types.rs +++ b/rust/kernel/types.rs @@ -225,7 +225,7 @@ impl<T, F: FnOnce(T)> ScopeGuard<T, F> { impl ScopeGuard<(), fn(())> { /// Creates a new guarded object with the given cleanup function. pub fn new(cleanup: impl FnOnce()) -> ScopeGuard<(), impl FnOnce(())> { - ScopeGuard::new_with_data((), move |_| cleanup()) + ScopeGuard::new_with_data((), move |()| cleanup()) } }
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Miguel Ojeda ojeda@kernel.org
commit bef83245f5ed434932aaf07f890142b576dc5d85 upstream.
In Rust 1.71.0, `rustdoc` added the `unescaped_backticks` lint, which detects what are typically typos in Markdown formatting regarding inline code [1], e.g. from the Rust standard library:
/// ... to `deref`/`deref_mut`` must ...
/// ... use [`from_mut`]`. Specifically, ...
It seems to have almost no false positives, judging from the experience of enabling it in the Rust standard library [2], which will be checked starting with Rust 1.82.0. The maintainers also confirmed it is ready to be used.
Thus enable it.
Link: https://doc.rust-lang.org/rustdoc/lints.html#unescaped_backticks [1] Link: https://github.com/rust-lang/rust/pull/128307 [2] Reviewed-by: Trevor Gross tmgross@umich.edu Reviewed-by: Alice Ryhl aliceryhl@google.com Tested-by: Gary Guo gary@garyguo.net Reviewed-by: Gary Guo gary@garyguo.net Link: https://lore.kernel.org/r/20240904204347.168520-9-ojeda@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- Makefile | 3 ++- rust/Makefile | 5 ++++- 2 files changed, 6 insertions(+), 2 deletions(-)
--- a/Makefile +++ b/Makefile @@ -462,7 +462,8 @@ export rust_common_flags := --edition=20 -Wclippy::undocumented_unsafe_blocks \ -Wclippy::unnecessary_safety_comment \ -Wclippy::unnecessary_safety_doc \ - -Wrustdoc::missing_crate_level_docs + -Wrustdoc::missing_crate_level_docs \ + -Wrustdoc::unescaped_backticks
KBUILD_HOSTCFLAGS := $(KBUILD_USERHOSTCFLAGS) $(HOST_LFS_CFLAGS) \ $(HOSTCFLAGS) -I $(srctree)/scripts/include --- a/rust/Makefile +++ b/rust/Makefile @@ -61,7 +61,7 @@ alloc-cfgs = \ quiet_cmd_rustdoc = RUSTDOC $(if $(rustdoc_host),H, ) $< cmd_rustdoc = \ OBJTREE=$(abspath $(objtree)) \ - $(RUSTDOC) $(if $(rustdoc_host),$(rust_common_flags),$(rust_flags)) \ + $(RUSTDOC) $(filter-out $(skip_flags),$(if $(rustdoc_host),$(rust_common_flags),$(rust_flags))) \ $(rustc_target_flags) -L$(objtree)/$(obj) \ -Zunstable-options --generate-link-to-definition \ --output $(rustdoc_output) \ @@ -98,6 +98,9 @@ rustdoc-macros: private rustc_target_fla rustdoc-macros: $(src)/macros/lib.rs FORCE +$(call if_changed,rustdoc)
+# Starting with Rust 1.82.0, skipping `-Wrustdoc::unescaped_backticks` should +# not be needed -- see https://github.com/rust-lang/rust/pull/128307. +rustdoc-core: private skip_flags = -Wrustdoc::unescaped_backticks rustdoc-core: private rustc_target_flags = $(core-cfgs) rustdoc-core: $(RUST_LIB_SRC)/core/src/lib.rs FORCE +$(call if_changed,rustdoc)
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Miguel Ojeda ojeda@kernel.org
commit d5cc7ab0a0a99496de1bd933dac242699a417809 upstream.
These few cases, unlike others in the same file, did not need the `allow`.
Thus clean them up.
Reviewed-by: Alice Ryhl aliceryhl@google.com Reviewed-by: Trevor Gross tmgross@umich.edu Tested-by: Gary Guo gary@garyguo.net Reviewed-by: Gary Guo gary@garyguo.net Link: https://lore.kernel.org/r/20240904204347.168520-10-ojeda@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/kernel/init.rs | 4 ---- 1 file changed, 4 deletions(-)
--- a/rust/kernel/init.rs +++ b/rust/kernel/init.rs @@ -87,7 +87,6 @@ //! To declare an init macro/function you just return an [`impl PinInit<T, E>`]: //! //! ```rust -//! # #![allow(clippy::disallowed_names)] //! # use kernel::{sync::Mutex, new_mutex, init::PinInit, try_pin_init}; //! #[pin_data] //! struct DriverData { @@ -368,7 +367,6 @@ macro_rules! stack_try_pin_init { /// The syntax is almost identical to that of a normal `struct` initializer: /// /// ```rust -/// # #![allow(clippy::disallowed_names)] /// # use kernel::{init, pin_init, macros::pin_data, init::*}; /// # use core::pin::Pin; /// #[pin_data] @@ -413,7 +411,6 @@ macro_rules! stack_try_pin_init { /// To create an initializer function, simply declare it like this: /// /// ```rust -/// # #![allow(clippy::disallowed_names)] /// # use kernel::{init, pin_init, init::*}; /// # use core::pin::Pin; /// # #[pin_data] @@ -468,7 +465,6 @@ macro_rules! stack_try_pin_init { /// They can also easily embed it into their own `struct`s: /// /// ```rust -/// # #![allow(clippy::disallowed_names)] /// # use kernel::{init, pin_init, macros::pin_data, init::*}; /// # use core::pin::Pin; /// # #[pin_data]
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Miguel Ojeda ojeda@kernel.org
commit 5e7c9b84ad08cc7a41b2ddbbbaccb60057da3860 upstream.
Rust 1.58.0 (before Rust was merged into the kernel) made Clippy's `non_send_fields_in_send_ty` lint part of the `suspicious` lint group for a brief window of time [1], until minor version 1.58.1 was released a week later and the lint was moved back to `nursery`.
By that time, we had already upgraded to that Rust version, and thus we had `allow`ed the lint here for `CondVar`.
Nowadays, Clippy's `non_send_fields_in_send_ty` would still trigger here if it were enabled.
Moreover, if enabled, `Lock<T, B>` and `Task` would also require an `allow`. Therefore, it does not seem like anyone is actually enabling it (in, e.g., a custom flags build).
Finally, the lint does not appear to have had major improvements since then [2].
Thus remove the `allow` since it is unneeded.
Link: https://github.com/rust-lang/rust/blob/master/RELEASES.md#version-1581-2022-... [1] Link: https://github.com/rust-lang/rust-clippy/issues/8045 [2] Reviewed-by: Alice Ryhl aliceryhl@google.com Reviewed-by: Trevor Gross tmgross@umich.edu Tested-by: Gary Guo gary@garyguo.net Reviewed-by: Gary Guo gary@garyguo.net Link: https://lore.kernel.org/r/20240904204347.168520-11-ojeda@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/kernel/sync/condvar.rs | 1 - 1 file changed, 1 deletion(-)
--- a/rust/kernel/sync/condvar.rs +++ b/rust/kernel/sync/condvar.rs @@ -92,7 +92,6 @@ pub struct CondVar { _pin: PhantomPinned, }
-#[allow(clippy::non_send_fields_in_send_ty)] // SAFETY: `CondVar` only uses a `struct wait_queue_head`, which is safe to use on any thread. unsafe impl Send for CondVar {}
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Miguel Ojeda ojeda@kernel.org
commit 7d56786edcbdf58b6367fd7f01d5861214ad1c95 upstream.
Some Clippy lints can be configured/tweaked. We will use these knobs to our advantage in later commits.
This is done via a configuration file, `.clippy.toml` [1]. The file is currently unstable. This may be a problem in the future, but we can adapt as needed. In addition, we proposed adding Clippy to the Rust CI's RFL job [2], so we should be able to catch issues pre-merge.
Thus introduce the file.
Link: https://doc.rust-lang.org/clippy/configuration.html [1] Link: https://github.com/rust-lang/rust/pull/128928 [2] Reviewed-by: Alice Ryhl aliceryhl@google.com Reviewed-by: Trevor Gross tmgross@umich.edu Tested-by: Gary Guo gary@garyguo.net Reviewed-by: Gary Guo gary@garyguo.net Link: https://lore.kernel.org/r/20240904204347.168520-12-ojeda@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- .clippy.toml | 1 + .gitignore | 1 + MAINTAINERS | 1 + Makefile | 3 +++ 4 files changed, 6 insertions(+) create mode 100644 .clippy.toml
--- /dev/null +++ b/.clippy.toml @@ -0,0 +1 @@ +# SPDX-License-Identifier: GPL-2.0 --- a/.gitignore +++ b/.gitignore @@ -103,6 +103,7 @@ modules.order # We don't want to ignore the following even if they are dot-files # !.clang-format +!.clippy.toml !.cocciconfig !.editorconfig !.get_maintainer.ignore --- a/MAINTAINERS +++ b/MAINTAINERS @@ -20175,6 +20175,7 @@ B: https://github.com/Rust-for-Linux/lin C: zulip://rust-for-linux.zulipchat.com P: https://rust-for-linux.com/contributing T: git https://github.com/Rust-for-Linux/linux.git rust-next +F: .clippy.toml F: Documentation/rust/ F: rust/ F: samples/rust/ --- a/Makefile +++ b/Makefile @@ -588,6 +588,9 @@ endif # Allows the usage of unstable features in stable compilers. export RUSTC_BOOTSTRAP := 1
+# Allows finding `.clippy.toml` in out-of-srctree builds. +export CLIPPY_CONF_DIR := $(srctree) + export ARCH SRCARCH CONFIG_SHELL BASH HOSTCC KBUILD_HOSTCFLAGS CROSS_COMPILE LD CC HOSTPKG_CONFIG export RUSTC RUSTDOC RUSTFMT RUSTC_OR_CLIPPY_QUIET RUSTC_OR_CLIPPY BINDGEN export HOSTRUSTC KBUILD_HOSTRUSTFLAGS
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Miguel Ojeda ojeda@kernel.org
commit 8577c9dca799bd74377f7c30015d8cdc53a53ca2 upstream.
Back when we used Rust 1.60.0 (before Rust was merged in the kernel), we added `-Wclippy::dbg_macro` to the compilation flags. This worked great with our custom `dbg!` macro (vendored from `std`, but slightly modified to use the kernel printing facilities).
However, in the very next version, 1.61.0, it stopped working [1], since the lint started to use a Rust diagnostic item rather than a path to find the `dbg!` macro [1]. This behavior remains as of the current nightly (1.83.0).
Therefore, currently, the `dbg_macro` lint is not doing anything, which explains why we can invoke `dbg!` in `samples/rust/rust_print.rs`, as well as why changing the `#[allow()]`s to `#[expect()]`s in the `std_vendor.rs` doctests does not work, since they are not fulfilled.
One possible workaround is using `rustc_attrs` like the standard library does. However, this is intended to be internal, and we just started supporting several Rust compiler versions, so it is best to avoid it.
Therefore, instead, use `disallowed_macros`. It is a stable lint and is more flexible (in that we can provide different macros), although its diagnostic messages are not as nice as the specialized one's (yet), and it does not allow setting different lint levels per macro/path [2].
In turn, this requires allowing the (intentional) `dbg!` use in the sample, as one would have expected.
Finally, in a single case, the `allow` is fixed to be an inner attribute, since otherwise it was not being applied.
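The inner-attribute detail is roughly the following (a sketch, not the actual doctest): an outer `#[allow(...)]` applies only to the item it annotates, while an inner `#![allow(...)]` applies to the whole enclosing block, which is what was needed there:

    #[allow(clippy::disallowed_macros)] // outer: covers only this item
    fn covered() {}

    fn main() {
        #![allow(clippy::disallowed_macros)] // inner: covers the whole block
        covered();
    }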
Link: https://github.com/rust-lang/rust-clippy/issues/11303 [1] Link: https://github.com/rust-lang/rust-clippy/issues/11307 [2] Tested-by: Gary Guo gary@garyguo.net Reviewed-by: Gary Guo gary@garyguo.net Link: https://lore.kernel.org/r/20240904204347.168520-13-ojeda@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- .clippy.toml | 6 ++++++ Makefile | 1 - rust/kernel/std_vendor.rs | 10 +++++----- samples/rust/rust_print.rs | 1 + 4 files changed, 12 insertions(+), 6 deletions(-)
--- a/.clippy.toml +++ b/.clippy.toml @@ -1 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 + +disallowed-macros = [ + # The `clippy::dbg_macro` lint only works with `std::dbg!`, thus we simulate + # it here, see: https://github.com/rust-lang/rust-clippy/issues/11303. + { path = "kernel::dbg", reason = "the `dbg!` macro is intended as a debugging tool" }, +] --- a/Makefile +++ b/Makefile @@ -452,7 +452,6 @@ export rust_common_flags := --edition=20 -Wrust_2018_idioms \ -Wunreachable_pub \ -Wclippy::all \ - -Wclippy::dbg_macro \ -Wclippy::ignored_unit_patterns \ -Wclippy::mut_mut \ -Wclippy::needless_bitwise_bool \ --- a/rust/kernel/std_vendor.rs +++ b/rust/kernel/std_vendor.rs @@ -14,7 +14,7 @@ /// /// ```rust /// let a = 2; -/// # #[allow(clippy::dbg_macro)] +/// # #[allow(clippy::disallowed_macros)] /// let b = dbg!(a * 2) + 1; /// // ^-- prints: [src/main.rs:2] a * 2 = 4 /// assert_eq!(b, 5); @@ -52,7 +52,7 @@ /// With a method call: /// /// ```rust -/// # #[allow(clippy::dbg_macro)] +/// # #[allow(clippy::disallowed_macros)] /// fn foo(n: usize) { /// if dbg!(n.checked_sub(4)).is_some() { /// // ... @@ -71,7 +71,7 @@ /// Naive factorial implementation: /// /// ```rust -/// # #[allow(clippy::dbg_macro)] +/// # #[allow(clippy::disallowed_macros)] /// # { /// fn factorial(n: u32) -> u32 { /// if dbg!(n <= 1) { @@ -118,7 +118,7 @@ /// a tuple (and return it, too): /// /// ``` -/// # #[allow(clippy::dbg_macro)] +/// # #![allow(clippy::disallowed_macros)] /// assert_eq!(dbg!(1usize, 2u32), (1, 2)); /// ``` /// @@ -127,7 +127,7 @@ /// invocations. You can use a 1-tuple directly if you need one: /// /// ``` -/// # #[allow(clippy::dbg_macro)] +/// # #[allow(clippy::disallowed_macros)] /// # { /// assert_eq!(1, dbg!(1u32,)); // trailing comma ignored /// assert_eq!((1,), dbg!((1u32,))); // 1-tuple --- a/samples/rust/rust_print.rs +++ b/samples/rust/rust_print.rs @@ -15,6 +15,7 @@ module! {
struct RustPrint;
+#[allow(clippy::disallowed_macros)] fn arc_print() -> Result { use kernel::sync::*;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Miguel Ojeda ojeda@kernel.org
commit 2f390cc589433dfcfedc307a141e103929a6fd4d upstream.
Rust 1.82.0's Clippy is introducing [1][2] a new warn-by-default lint, `too_long_first_doc_paragraph` [3], which is intended to catch titles of code documentation items that are too long (likely because no title was provided and the item documentation starts with a paragraph).
This lint does not currently trigger anywhere, but it does detect a couple of cases if checking for private items gets enabled (which we will do in the next commit):
    error: first doc comment paragraph is too long
      --> rust/kernel/init/__internal.rs:18:1
       |
    18 | / /// This is the module-internal type implementing `PinInit` and `Init`. It is unsafe to create this
    19 | | /// type, since the closure needs to fulfill the same safety requirement as the
    20 | | /// `__pinned_init`/`__init` functions.
       | |_
       |
       = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#too_long_first_doc...
       = note: `-D clippy::too-long-first-doc-paragraph` implied by `-D warnings`
       = help: to override `-D warnings` add `#[allow(clippy::too_long_first_doc_paragraph)]`
    error: first doc comment paragraph is too long
     --> rust/kernel/sync/arc/std_vendor.rs:3:1
      |
    3 | / //! The contents of this file come from the Rust standard library, hosted in
    4 | | //! the https://github.com/rust-lang/rust repository, licensed under
    5 | | //! "Apache-2.0 OR MIT" and adapted for kernel use. For copyright details,
    6 | | //! see https://github.com/rust-lang/rust/blob/master/COPYRIGHT.
      | |_
      |
      = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#too_long_first_doc...
Thus clean those two instances.
In addition, since we have a second `std_vendor.rs` file with a similar header, do the same there too (even if that one does not trigger the lint, because it is `doc(hidden)`).
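The fixed shape is roughly the following sketch (not the actual kernel items): keep the first doc paragraph a short title and move the details into a follow-up paragraph:

    /// Short one-line title for the item.
    ///
    /// The longer explanation lives in a follow-up paragraph, so the first
    /// paragraph stays short and the lint is satisfied.
    pub struct Helper;

    fn main() {}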
Link: https://github.com/rust-lang/rust/pull/129531 [1] Link: https://github.com/rust-lang/rust-clippy/pull/12993 [2] Link: https://rust-lang.github.io/rust-clippy/master/index.html#/too_long_first_do... [3] Reviewed-by: Trevor Gross tmgross@umich.edu Reviewed-by: Alice Ryhl aliceryhl@google.com Tested-by: Gary Guo gary@garyguo.net Reviewed-by: Gary Guo gary@garyguo.net Link: https://lore.kernel.org/r/20240904204347.168520-15-ojeda@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/kernel/init/__internal.rs | 7 ++++--- rust/kernel/std_vendor.rs | 2 ++ rust/kernel/sync/arc/std_vendor.rs | 2 ++ 3 files changed, 8 insertions(+), 3 deletions(-)
--- a/rust/kernel/init/__internal.rs +++ b/rust/kernel/init/__internal.rs @@ -15,9 +15,10 @@ use super::*; /// [this table]: https://doc.rust-lang.org/nomicon/phantom-data.html#table-of-phantomdata-pat... pub(super) type Invariant<T> = PhantomData<fn(*mut T) -> *mut T>;
-/// This is the module-internal type implementing `PinInit` and `Init`. It is unsafe to create this -/// type, since the closure needs to fulfill the same safety requirement as the -/// `__pinned_init`/`__init` functions. +/// Module-internal type implementing `PinInit` and `Init`. +/// +/// It is unsafe to create this type, since the closure needs to fulfill the same safety +/// requirement as the `__pinned_init`/`__init` functions. pub(crate) struct InitClosure<F, T: ?Sized, E>(pub(crate) F, pub(crate) Invariant<(E, T)>);
// SAFETY: While constructing the `InitClosure`, the user promised that it upholds the --- a/rust/kernel/std_vendor.rs +++ b/rust/kernel/std_vendor.rs @@ -1,5 +1,7 @@ // SPDX-License-Identifier: Apache-2.0 OR MIT
+//! Rust standard library vendored code. +//! //! The contents of this file come from the Rust standard library, hosted in //! the https://github.com/rust-lang/rust repository, licensed under //! "Apache-2.0 OR MIT" and adapted for kernel use. For copyright details, --- a/rust/kernel/sync/arc/std_vendor.rs +++ b/rust/kernel/sync/arc/std_vendor.rs @@ -1,5 +1,7 @@ // SPDX-License-Identifier: Apache-2.0 OR MIT
+//! Rust standard library vendored code. +//! //! The contents of this file come from the Rust standard library, hosted in //! the https://github.com/rust-lang/rust repository, licensed under //! "Apache-2.0 OR MIT" and adapted for kernel use. For copyright details,
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Miguel Ojeda ojeda@kernel.org
commit 624063b9ac97f40cadca32a896aafeb28b1220fd upstream.
In Rust 1.76.0, Clippy added the `check-private-items` lint configuration option. When turned on (the default is off), it makes several lints check private items as well.
In our case, it affects two lints we have enabled [1]: `missing_safety_doc` and `unnecessary_safety_doc`.
It also seems to affect the new `too_long_first_doc_paragraph` lint [2], even though the documentation does not mention it.
Thus allow the few remaining instances we currently hit and enable the lint.
Link: https://doc.rust-lang.org/nightly/clippy/lint_configuration.html#check-priva... [1] Link: https://rust-lang.github.io/rust-clippy/master/index.html#/too_long_first_do... [2] Reviewed-by: Trevor Gross tmgross@umich.edu Reviewed-by: Alice Ryhl aliceryhl@google.com Tested-by: Gary Guo gary@garyguo.net Reviewed-by: Gary Guo gary@garyguo.net Link: https://lore.kernel.org/r/20240904204347.168520-16-ojeda@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- .clippy.toml | 2 ++ rust/kernel/init.rs | 1 + rust/kernel/init/__internal.rs | 2 ++ rust/kernel/init/macros.rs | 1 + rust/kernel/print.rs | 1 + 5 files changed, 7 insertions(+)
--- a/.clippy.toml +++ b/.clippy.toml @@ -1,5 +1,7 @@ # SPDX-License-Identifier: GPL-2.0
+check-private-items = true + disallowed-macros = [ # The `clippy::dbg_macro` lint only works with `std::dbg!`, thus we simulate # it here, see: https://github.com/rust-lang/rust-clippy/issues/11303. --- a/rust/kernel/init.rs +++ b/rust/kernel/init.rs @@ -125,6 +125,7 @@ //! use core::{ptr::addr_of_mut, marker::PhantomPinned, pin::Pin}; //! # mod bindings { //! # #![allow(non_camel_case_types)] +//! # #![allow(clippy::missing_safety_doc)] //! # pub struct foo; //! # pub unsafe fn init_foo(_ptr: *mut foo) {} //! # pub unsafe fn destroy_foo(_ptr: *mut foo) {} --- a/rust/kernel/init/__internal.rs +++ b/rust/kernel/init/__internal.rs @@ -54,6 +54,7 @@ where pub unsafe trait HasPinData { type PinData: PinData;
+ #[allow(clippy::missing_safety_doc)] unsafe fn __pin_data() -> Self::PinData; }
@@ -83,6 +84,7 @@ pub unsafe trait PinData: Copy { pub unsafe trait HasInitData { type InitData: InitData;
+ #[allow(clippy::missing_safety_doc)] unsafe fn __init_data() -> Self::InitData; }
--- a/rust/kernel/init/macros.rs +++ b/rust/kernel/init/macros.rs @@ -989,6 +989,7 @@ macro_rules! __pin_data { // // The functions are `unsafe` to prevent accidentally calling them. #[allow(dead_code)] + #[allow(clippy::missing_safety_doc)] impl<$($impl_generics)*> $pin_data<$($ty_generics)*> where $($whr)* { --- a/rust/kernel/print.rs +++ b/rust/kernel/print.rs @@ -14,6 +14,7 @@ use core::{ use crate::str::RawFormatter;
// Called from `vsprintf` with format specifier `%pA`. +#[allow(clippy::missing_safety_doc)] #[no_mangle] unsafe extern "C" fn rust_fmt_argument( buf: *mut c_char,
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Miguel Ojeda ojeda@kernel.org
commit 139d396572ec4ba6e8cc5c02f5c8d5d1139be4b7 upstream.
On the C side, disabling diagnostics locally, i.e. within the source code, is rare (at least in the kernel). Sometimes warnings are manipulated via the flags at the translation unit level, but that is about it.
In Rust, it is easier to locally change the "level" of lints (e.g. allowing them locally). In turn, this means it is easier to globally enable more lints that may trigger a few false positives here and there, which need to be allowed locally, but that generally can spot issues or bugs.
Thus document this.
Reviewed-by: Trevor Gross tmgross@umich.edu Reviewed-by: Alice Ryhl aliceryhl@google.com Tested-by: Gary Guo gary@garyguo.net Reviewed-by: Gary Guo gary@garyguo.net Link: https://lore.kernel.org/r/20240904204347.168520-17-ojeda@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- Documentation/rust/coding-guidelines.rst | 38 +++++++++++++++++++++++++++++++ 1 file changed, 38 insertions(+)
--- a/Documentation/rust/coding-guidelines.rst +++ b/Documentation/rust/coding-guidelines.rst @@ -227,3 +227,41 @@ The equivalent in Rust may look like (ig That is, the equivalent of ``GPIO_LINE_DIRECTION_IN`` would be referred to as ``gpio::LineDirection::In``. In particular, it should not be named ``gpio::gpio_line_direction::GPIO_LINE_DIRECTION_IN``. + + +Lints +----- + +In Rust, it is possible to ``allow`` particular warnings (diagnostics, lints) +locally, making the compiler ignore instances of a given warning within a given +function, module, block, etc. + +It is similar to ``#pragma GCC diagnostic push`` + ``ignored`` + ``pop`` in C +[#]_: + +.. code-block:: c + + #pragma GCC diagnostic push + #pragma GCC diagnostic ignored "-Wunused-function" + static void f(void) {} + #pragma GCC diagnostic pop + +.. [#] In this particular case, the kernel's ``__{always,maybe}_unused`` + attributes (C23's ``[[maybe_unused]]``) may be used; however, the example + is meant to reflect the equivalent lint in Rust discussed afterwards. + +But way less verbose: + +.. code-block:: rust + + #[allow(dead_code)] + fn f() {} + +By that virtue, it makes it possible to comfortably enable more diagnostics by +default (i.e. outside ``W=`` levels). In particular, those that may have some +false positives but that are otherwise quite useful to keep enabled to catch +potential mistakes. + +For more information about diagnostics in Rust, please see: + + https://doc.rust-lang.org/stable/reference/attributes/diagnostics.html
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Miguel Ojeda ojeda@kernel.org
commit 1f9ed172545687e5c04c77490a45896be6d2e459 upstream.
In Rust, it is possible to `allow` particular warnings (diagnostics, lints) locally, making the compiler ignore instances of a given warning within a given function, module, block, etc.
It is similar to `#pragma GCC diagnostic push` + `ignored` + `pop` in C:
    #pragma GCC diagnostic push
    #pragma GCC diagnostic ignored "-Wunused-function"
    static void f(void) {}
    #pragma GCC diagnostic pop
But way less verbose:
    #[allow(dead_code)]
    fn f() {}
By that virtue, it makes it possible to comfortably enable more diagnostics by default (i.e. outside `W=` levels) that may have some false positives but that are otherwise quite useful to keep enabled to catch potential mistakes.
The `#[expect(...)]` attribute [1] takes this further, and makes the compiler warn if the diagnostic was _not_ produced. For instance, the following will ensure that, when `f()` is called somewhere, we will have to remove the attribute:
    #[expect(dead_code)]
    fn f() {}
If we do not, we get a warning from the compiler:
    warning: this lint expectation is unfulfilled
     --> x.rs:3:10
      |
    3 | #[expect(dead_code)]
      |          ^^^^^^^^^
      |
      = note: `#[warn(unfulfilled_lint_expectations)]` on by default
This means that `expect`s do not get forgotten when they are not needed.
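Putting the two side by side, a stand-alone sketch (this needs Rust 1.81.0, or the unstable `lint_reasons` feature on earlier compilers, which is what gets enabled here):

    // `allow` silences the lint unconditionally; `expect` additionally emits
    // `unfulfilled_lint_expectations` if the lint would not have fired, so a
    // stale annotation gets noticed once the code is used again.
    #[allow(dead_code)]
    fn silenced_forever() {}

    #[expect(dead_code)]
    fn silenced_until_used() {}

    fn main() {}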
See the next commit for more details, nuances on its usage and documentation on the feature.
The attribute requires the `lint_reasons` [2] unstable feature, but it is becoming stable in 1.81.0 (to be released on 2024-09-05) and it has already been useful to clean things up in this patch series, finding cases where the `allow`s should not have been there.
Thus, enable `lint_reasons` and convert some of our `allow`s to `expect`s where possible.
This feature was also an example of the ongoing collaboration between Rust and the kernel -- we tested it in the kernel early on and found an issue that was quickly resolved [3].
Cc: Fridtjof Stoldt xfrednet@gmail.com Cc: Urgau urgau@numericable.fr Link: https://rust-lang.github.io/rfcs/2383-lint-reasons.html#expect-lint-attribut... [1] Link: https://github.com/rust-lang/rust/issues/54503 [2] Link: https://github.com/rust-lang/rust/issues/114557 [3] Reviewed-by: Alice Ryhl aliceryhl@google.com Reviewed-by: Trevor Gross tmgross@umich.edu Tested-by: Gary Guo gary@garyguo.net Reviewed-by: Gary Guo gary@garyguo.net Link: https://lore.kernel.org/r/20240904204347.168520-18-ojeda@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/kernel/error.rs | 2 +- rust/kernel/init.rs | 22 +++++++++++----------- rust/kernel/init/__internal.rs | 4 ++-- rust/kernel/init/macros.rs | 10 +++++----- rust/kernel/ioctl.rs | 2 +- rust/kernel/lib.rs | 1 + rust/kernel/list/arc_field.rs | 2 +- rust/kernel/print.rs | 4 ++-- rust/kernel/std_vendor.rs | 10 +++++----- samples/rust/rust_print.rs | 2 +- scripts/Makefile.build | 2 +- 11 files changed, 31 insertions(+), 30 deletions(-)
--- a/rust/kernel/error.rs +++ b/rust/kernel/error.rs @@ -133,7 +133,7 @@ impl Error { }
/// Returns the error encoded as a pointer. - #[allow(dead_code)] + #[expect(dead_code)] pub(crate) fn to_ptr<T>(self) -> *mut T { #[cfg_attr(target_pointer_width = "32", allow(clippy::useless_conversion))] // SAFETY: `self.0` is a valid error due to its invariant. --- a/rust/kernel/init.rs +++ b/rust/kernel/init.rs @@ -35,7 +35,7 @@ //! that you need to write `<-` instead of `:` for fields that you want to initialize in-place. //! //! ```rust -//! # #![allow(clippy::disallowed_names)] +//! # #![expect(clippy::disallowed_names)] //! use kernel::sync::{new_mutex, Mutex}; //! # use core::pin::Pin; //! #[pin_data] @@ -55,7 +55,7 @@ //! (or just the stack) to actually initialize a `Foo`: //! //! ```rust -//! # #![allow(clippy::disallowed_names)] +//! # #![expect(clippy::disallowed_names)] //! # use kernel::sync::{new_mutex, Mutex}; //! # use core::pin::Pin; //! # #[pin_data] @@ -120,12 +120,12 @@ //! `slot` gets called. //! //! ```rust -//! # #![allow(unreachable_pub, clippy::disallowed_names)] +//! # #![expect(unreachable_pub, clippy::disallowed_names)] //! use kernel::{init, types::Opaque}; //! use core::{ptr::addr_of_mut, marker::PhantomPinned, pin::Pin}; //! # mod bindings { -//! # #![allow(non_camel_case_types)] -//! # #![allow(clippy::missing_safety_doc)] +//! # #![expect(non_camel_case_types)] +//! # #![expect(clippy::missing_safety_doc)] //! # pub struct foo; //! # pub unsafe fn init_foo(_ptr: *mut foo) {} //! # pub unsafe fn destroy_foo(_ptr: *mut foo) {} @@ -238,7 +238,7 @@ pub mod macros; /// # Examples /// /// ```rust -/// # #![allow(clippy::disallowed_names)] +/// # #![expect(clippy::disallowed_names)] /// # use kernel::{init, macros::pin_data, pin_init, stack_pin_init, init::*, sync::Mutex, new_mutex}; /// # use core::pin::Pin; /// #[pin_data] @@ -290,7 +290,7 @@ macro_rules! stack_pin_init { /// # Examples /// /// ```rust,ignore -/// # #![allow(clippy::disallowed_names)] +/// # #![expect(clippy::disallowed_names)] /// # use kernel::{init, pin_init, stack_try_pin_init, init::*, sync::Mutex, new_mutex}; /// # use macros::pin_data; /// # use core::{alloc::AllocError, pin::Pin}; @@ -316,7 +316,7 @@ macro_rules! stack_pin_init { /// ``` /// /// ```rust,ignore -/// # #![allow(clippy::disallowed_names)] +/// # #![expect(clippy::disallowed_names)] /// # use kernel::{init, pin_init, stack_try_pin_init, init::*, sync::Mutex, new_mutex}; /// # use macros::pin_data; /// # use core::{alloc::AllocError, pin::Pin}; @@ -438,7 +438,7 @@ macro_rules! stack_try_pin_init { /// Users of `Foo` can now create it like this: /// /// ```rust -/// # #![allow(clippy::disallowed_names)] +/// # #![expect(clippy::disallowed_names)] /// # use kernel::{init, pin_init, macros::pin_data, init::*}; /// # use core::pin::Pin; /// # #[pin_data] @@ -852,7 +852,7 @@ pub unsafe trait PinInit<T: ?Sized, E = /// # Examples /// /// ```rust - /// # #![allow(clippy::disallowed_names)] + /// # #![expect(clippy::disallowed_names)] /// use kernel::{types::Opaque, init::pin_init_from_closure}; /// #[repr(C)] /// struct RawFoo([u8; 16]); @@ -964,7 +964,7 @@ pub unsafe trait Init<T: ?Sized, E = Inf /// # Examples /// /// ```rust - /// # #![allow(clippy::disallowed_names)] + /// # #![expect(clippy::disallowed_names)] /// use kernel::{types::Opaque, init::{self, init_from_closure}}; /// struct Foo { /// buf: [u8; 1_000_000], --- a/rust/kernel/init/__internal.rs +++ b/rust/kernel/init/__internal.rs @@ -54,7 +54,7 @@ where pub unsafe trait HasPinData { type PinData: PinData;
- #[allow(clippy::missing_safety_doc)] + #[expect(clippy::missing_safety_doc)] unsafe fn __pin_data() -> Self::PinData; }
@@ -84,7 +84,7 @@ pub unsafe trait PinData: Copy { pub unsafe trait HasInitData { type InitData: InitData;
- #[allow(clippy::missing_safety_doc)] + #[expect(clippy::missing_safety_doc)] unsafe fn __init_data() -> Self::InitData; }
--- a/rust/kernel/init/macros.rs +++ b/rust/kernel/init/macros.rs @@ -182,13 +182,13 @@ //! // Normally `Drop` bounds do not have the correct semantics, but for this purpose they do //! // (normally people want to know if a type has any kind of drop glue at all, here we want //! // to know if it has any kind of custom drop glue, which is exactly what this bound does). -//! #[allow(drop_bounds)] +//! #[expect(drop_bounds)] //! impl<T: ::core::ops::Drop> MustNotImplDrop for T {} //! impl<T> MustNotImplDrop for Bar<T> {} //! // Here comes a convenience check, if one implemented `PinnedDrop`, but forgot to add it to //! // `#[pin_data]`, then this will error with the same mechanic as above, this is not needed //! // for safety, but a good sanity check, since no normal code calls `PinnedDrop::drop`. -//! #[allow(non_camel_case_types)] +//! #[expect(non_camel_case_types)] //! trait UselessPinnedDropImpl_you_need_to_specify_PinnedDrop {} //! impl< //! T: ::kernel::init::PinnedDrop, @@ -925,14 +925,14 @@ macro_rules! __pin_data { // `Drop`. Additionally we will implement this trait for the struct leading to a conflict, // if it also implements `Drop` trait MustNotImplDrop {} - #[allow(drop_bounds)] + #[expect(drop_bounds)] impl<T: ::core::ops::Drop> MustNotImplDrop for T {} impl<$($impl_generics)*> MustNotImplDrop for $name<$($ty_generics)*> where $($whr)* {} // We also take care to prevent users from writing a useless `PinnedDrop` implementation. // They might implement `PinnedDrop` correctly for the struct, but forget to give // `PinnedDrop` as the parameter to `#[pin_data]`. - #[allow(non_camel_case_types)] + #[expect(non_camel_case_types)] trait UselessPinnedDropImpl_you_need_to_specify_PinnedDrop {} impl<T: $crate::init::PinnedDrop> UselessPinnedDropImpl_you_need_to_specify_PinnedDrop for T {} @@ -989,7 +989,7 @@ macro_rules! __pin_data { // // The functions are `unsafe` to prevent accidentally calling them. #[allow(dead_code)] - #[allow(clippy::missing_safety_doc)] + #[expect(clippy::missing_safety_doc)] impl<$($impl_generics)*> $pin_data<$($ty_generics)*> where $($whr)* { --- a/rust/kernel/ioctl.rs +++ b/rust/kernel/ioctl.rs @@ -4,7 +4,7 @@ //! //! C header: [`include/asm-generic/ioctl.h`](srctree/include/asm-generic/ioctl.h)
-#![allow(non_snake_case)] +#![expect(non_snake_case)]
use crate::build_assert;
--- a/rust/kernel/lib.rs +++ b/rust/kernel/lib.rs @@ -15,6 +15,7 @@ #![feature(arbitrary_self_types)] #![feature(coerce_unsized)] #![feature(dispatch_from_dyn)] +#![feature(lint_reasons)] #![feature(new_uninit)] #![feature(unsize)]
--- a/rust/kernel/list/arc_field.rs +++ b/rust/kernel/list/arc_field.rs @@ -56,7 +56,7 @@ impl<T, const ID: u64> ListArcField<T, I /// /// The caller must have mutable access to the `ListArc<ID>` containing the struct with this /// field for the duration of the returned reference. - #[allow(clippy::mut_from_ref)] + #[expect(clippy::mut_from_ref)] pub unsafe fn assert_mut(&self) -> &mut T { // SAFETY: The caller has exclusive access to the `ListArc`, so they also have exclusive // access to this field. --- a/rust/kernel/print.rs +++ b/rust/kernel/print.rs @@ -14,7 +14,7 @@ use core::{ use crate::str::RawFormatter;
// Called from `vsprintf` with format specifier `%pA`. -#[allow(clippy::missing_safety_doc)] +#[expect(clippy::missing_safety_doc)] #[no_mangle] unsafe extern "C" fn rust_fmt_argument( buf: *mut c_char, @@ -140,7 +140,7 @@ pub fn call_printk_cont(args: fmt::Argum #[doc(hidden)] #[cfg(not(testlib))] #[macro_export] -#[allow(clippy::crate_in_macro_def)] +#[expect(clippy::crate_in_macro_def)] macro_rules! print_macro ( // The non-continuation cases (most of them, e.g. `INFO`). ($format_string:path, false, $($arg:tt)+) => ( --- a/rust/kernel/std_vendor.rs +++ b/rust/kernel/std_vendor.rs @@ -16,7 +16,7 @@ /// /// ```rust /// let a = 2; -/// # #[allow(clippy::disallowed_macros)] +/// # #[expect(clippy::disallowed_macros)] /// let b = dbg!(a * 2) + 1; /// // ^-- prints: [src/main.rs:2] a * 2 = 4 /// assert_eq!(b, 5); @@ -54,7 +54,7 @@ /// With a method call: /// /// ```rust -/// # #[allow(clippy::disallowed_macros)] +/// # #[expect(clippy::disallowed_macros)] /// fn foo(n: usize) { /// if dbg!(n.checked_sub(4)).is_some() { /// // ... @@ -73,7 +73,7 @@ /// Naive factorial implementation: /// /// ```rust -/// # #[allow(clippy::disallowed_macros)] +/// # #[expect(clippy::disallowed_macros)] /// # { /// fn factorial(n: u32) -> u32 { /// if dbg!(n <= 1) { @@ -120,7 +120,7 @@ /// a tuple (and return it, too): /// /// ``` -/// # #![allow(clippy::disallowed_macros)] +/// # #![expect(clippy::disallowed_macros)] /// assert_eq!(dbg!(1usize, 2u32), (1, 2)); /// ``` /// @@ -129,7 +129,7 @@ /// invocations. You can use a 1-tuple directly if you need one: /// /// ``` -/// # #[allow(clippy::disallowed_macros)] +/// # #[expect(clippy::disallowed_macros)] /// # { /// assert_eq!(1, dbg!(1u32,)); // trailing comma ignored /// assert_eq!((1,), dbg!((1u32,))); // 1-tuple --- a/samples/rust/rust_print.rs +++ b/samples/rust/rust_print.rs @@ -15,7 +15,7 @@ module! {
struct RustPrint;
-#[allow(clippy::disallowed_macros)] +#[expect(clippy::disallowed_macros)] fn arc_print() -> Result { use kernel::sync::*;
--- a/scripts/Makefile.build +++ b/scripts/Makefile.build @@ -248,7 +248,7 @@ $(obj)/%.lst: $(obj)/%.c FORCE # Compile Rust sources (.rs) # ---------------------------------------------------------------------------
-rust_allowed_features := arbitrary_self_types,new_uninit +rust_allowed_features := arbitrary_self_types,lint_reasons,new_uninit
# `--out-dir` is required to avoid temporaries being created by `rustc` in the # current working directory, which may be not accessible in the out-of-tree
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Miguel Ojeda ojeda@kernel.org
commit 04866494e936d041fd196d3a36aecd979e4ef078 upstream.
Discuss the `#[expect(...)]` attribute, an upcoming feature in Rust 1.81.0, in the Lints section of the coding guidelines document, and explain that it is generally preferred over `allow` unless there is a reason not to use it (e.g. when conditional compilation is involved).
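For reference, a minimal sketch of the pattern being documented (the new text below spells it out in full); the function names here are placeholders, not part of the patch:

```rust
// `expect` starts warning once the expectation no longer holds,
// whereas `allow` silently stays behind.
#[expect(dead_code)]
fn not_called_yet() {}

#[allow(dead_code)] // stays silent even after the function gains a caller
fn legacy_helper() {}
```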
Tested-by: Gary Guo gary@garyguo.net Reviewed-by: Gary Guo gary@garyguo.net Link: https://lore.kernel.org/r/20240904204347.168520-19-ojeda@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- Documentation/rust/coding-guidelines.rst | 110 +++++++++++++++++++++++++++++++ 1 file changed, 110 insertions(+)
--- a/Documentation/rust/coding-guidelines.rst +++ b/Documentation/rust/coding-guidelines.rst @@ -262,6 +262,116 @@ default (i.e. outside ``W=`` levels). In false positives but that are otherwise quite useful to keep enabled to catch potential mistakes.
+On top of that, Rust provides the ``expect`` attribute which takes this further. +It makes the compiler warn if the warning was not produced. For instance, the +following will ensure that, when ``f()`` is called somewhere, we will have to +remove the attribute: + +.. code-block:: rust + + #[expect(dead_code)] + fn f() {} + +If we do not, we get a warning from the compiler:: + + warning: this lint expectation is unfulfilled + --> x.rs:3:10 + | + 3 | #[expect(dead_code)] + | ^^^^^^^^^ + | + = note: `#[warn(unfulfilled_lint_expectations)]` on by default + +This means that ``expect``\ s do not get forgotten when they are not needed, which +may happen in several situations, e.g.: + +- Temporary attributes added while developing. + +- Improvements in lints in the compiler, Clippy or custom tools which may + remove a false positive. + +- When the lint is not needed anymore because it was expected that it would be + removed at some point, such as the ``dead_code`` example above. + +It also increases the visibility of the remaining ``allow``\ s and reduces the +chance of misapplying one. + +Thus prefer ``expect`` over ``allow`` unless: + +- The lint attribute is intended to be temporary, e.g. while developing. + +- Conditional compilation triggers the warning in some cases but not others. + + If there are only a few cases where the warning triggers (or does not + trigger) compared to the total number of cases, then one may consider using + a conditional ``expect`` (i.e. ``cfg_attr(..., expect(...))``). Otherwise, + it is likely simpler to just use ``allow``. + +- Inside macros, when the different invocations may create expanded code that + triggers the warning in some cases but not in others. + +- When code may trigger a warning for some architectures but not others, such + as an ``as`` cast to a C FFI type. + +As a more developed example, consider for instance this program: + +.. code-block:: rust + + fn g() {} + + fn main() { + #[cfg(CONFIG_X)] + g(); + } + +Here, function ``g()`` is dead code if ``CONFIG_X`` is not set. Can we use +``expect`` here? + +.. code-block:: rust + + #[expect(dead_code)] + fn g() {} + + fn main() { + #[cfg(CONFIG_X)] + g(); + } + +This would emit a lint if ``CONFIG_X`` is set, since it is not dead code in that +configuration. Therefore, in cases like this, we cannot use ``expect`` as-is. + +A simple possibility is using ``allow``: + +.. code-block:: rust + + #[allow(dead_code)] + fn g() {} + + fn main() { + #[cfg(CONFIG_X)] + g(); + } + +An alternative would be using a conditional ``expect``: + +.. code-block:: rust + + #[cfg_attr(not(CONFIG_X), expect(dead_code))] + fn g() {} + + fn main() { + #[cfg(CONFIG_X)] + g(); + } + +This would ensure that, if someone introduces another call to ``g()`` somewhere +(e.g. unconditionally), then it would be spotted that it is not dead code +anymore. However, the ``cfg_attr`` is more complex than a simple ``allow``. + +Therefore, it is likely that it is not worth using conditional ``expect``\ s when +more than one or two configurations are involved or when the lint may be +triggered due to non-local changes (such as ``dead_code``). + For more information about diagnostics in Rust, please see:
https://doc.rust-lang.org/stable/reference/attributes/diagnostics.html
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Filipe Xavier felipe_life@live.com
commit 5ed147473458f8c20f908a03227d8f5bb3cb8f7d upstream.
Make the following functions in error.rs public: from_err_ptr, from_errno, from_result and to_ptr. Additionally, remove the now-unnecessary dead_code annotations.
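As a hedged illustration of what the visibility change enables, a sketch of an out-of-crate caller; the helper `from_c_retval` is hypothetical and not part of the patch:

```rust
use kernel::error::{Error, Result};

// Hypothetical helper in a driver module outside `rust/kernel/`.
fn from_c_retval(ret: core::ffi::c_int) -> Result<()> {
    if ret < 0 {
        // `Error::from_errno` is now `pub`, so it can be called from here.
        Err(Error::from_errno(ret))
    } else {
        Ok(())
    }
}
```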
Link: https://github.com/Rust-for-Linux/linux/issues/1105 Reviewed-by: Alice Ryhl aliceryhl@google.com Signed-off-by: Filipe Xavier felipe_life@live.com Reviewed-by: Benno Lossin benno.lossin@proton.me Reviewed-by: Gary Guo gary@garyguo.net Link: https://lore.kernel.org/r/DM4PR14MB7276E6948E67B3B23D8EA847E9652@DM4PR14MB72... Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/kernel/error.rs | 13 ++++--------- 1 file changed, 4 insertions(+), 9 deletions(-)
--- a/rust/kernel/error.rs +++ b/rust/kernel/error.rs @@ -95,7 +95,7 @@ impl Error { /// /// It is a bug to pass an out-of-range `errno`. `EINVAL` would /// be returned in such a case. - pub(crate) fn from_errno(errno: core::ffi::c_int) -> Error { + pub fn from_errno(errno: core::ffi::c_int) -> Error { if errno < -(bindings::MAX_ERRNO as i32) || errno >= 0 { // TODO: Make it a `WARN_ONCE` once available. crate::pr_warn!( @@ -133,8 +133,7 @@ impl Error { }
/// Returns the error encoded as a pointer. - #[expect(dead_code)] - pub(crate) fn to_ptr<T>(self) -> *mut T { + pub fn to_ptr<T>(self) -> *mut T { #[cfg_attr(target_pointer_width = "32", allow(clippy::useless_conversion))] // SAFETY: `self.0` is a valid error due to its invariant. unsafe { @@ -270,9 +269,7 @@ pub fn to_result(err: core::ffi::c_int) /// from_err_ptr(unsafe { bindings::devm_platform_ioremap_resource(pdev.to_ptr(), index) }) /// } /// ``` -// TODO: Remove `dead_code` marker once an in-kernel client is available. -#[allow(dead_code)] -pub(crate) fn from_err_ptr<T>(ptr: *mut T) -> Result<*mut T> { +pub fn from_err_ptr<T>(ptr: *mut T) -> Result<*mut T> { // CAST: Casting a pointer to `*const core::ffi::c_void` is always valid. let const_ptr: *const core::ffi::c_void = ptr.cast(); // SAFETY: The FFI function does not deref the pointer. @@ -318,9 +315,7 @@ pub(crate) fn from_err_ptr<T>(ptr: *mut /// }) /// } /// ``` -// TODO: Remove `dead_code` marker once an in-kernel client is available. -#[allow(dead_code)] -pub(crate) fn from_result<T, F>(f: F) -> T +pub fn from_result<T, F>(f: F) -> T where T: From<i16>, F: FnOnce() -> Result<T>,
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Filipe Xavier felipe_life@live.com
commit e9759c5b9ea555d09f426c70c880e9522e9b0576 upstream.
Optimize `Result<(), Error>` size by changing `Error` type to `NonZero*` for niche optimization.
This reduces the space used by the `Result` type, as the `NonZero*` type enables the compiler to apply a more efficient memory layout. For example, `Result<(), Error>` shrinks from 8 to 4 bytes.
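A standalone userspace sketch of the niche optimization being relied on; `NicheError` and `PlainError` are illustrative stand-ins for the new and old `Error` layouts, not kernel types:

```rust
use core::mem::size_of;
use core::num::NonZeroI32;

#[allow(dead_code)]
struct NicheError(NonZeroI32); // like the new `Error(NonZeroI32)`
#[allow(dead_code)]
struct PlainError(i32); // like the old `Error(c_int)`

fn main() {
    // Zero can never be a valid error value, so the compiler reuses it to
    // encode `Ok(())` and no separate discriminant is needed.
    assert_eq!(size_of::<Result<(), NicheError>>(), 4);
    assert_eq!(size_of::<Result<(), PlainError>>(), 8);
}
```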
Link: https://github.com/Rust-for-Linux/linux/issues/1120 Signed-off-by: Filipe Xavier felipe_life@live.com Reviewed-by: Gary Guo gary@garyguo.net Reviewed-by: Alice Ryhl aliceryhl@google.com Reviewed-by: Fiona Behrens me@kloenk.dev Link: https://lore.kernel.org/r/BL0PR02MB4914B9B088865CF237731207E9732@BL0PR02MB49... [ Removed unneeded block around `match`, added backticks in panic message and added intra-doc link. - Miguel ] Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/kernel/error.rs | 37 ++++++++++++++++++++++++++++--------- 1 file changed, 28 insertions(+), 9 deletions(-)
--- a/rust/kernel/error.rs +++ b/rust/kernel/error.rs @@ -9,6 +9,7 @@ use crate::{alloc::AllocError, str::CStr use alloc::alloc::LayoutError;
use core::fmt; +use core::num::NonZeroI32; use core::num::TryFromIntError; use core::str::Utf8Error;
@@ -20,7 +21,11 @@ pub mod code { $( #[doc = $doc] )* - pub const $err: super::Error = super::Error(-(crate::bindings::$err as i32)); + pub const $err: super::Error = + match super::Error::try_from_errno(-(crate::bindings::$err as i32)) { + Some(err) => err, + None => panic!("Invalid errno in `declare_err!`"), + }; }; }
@@ -88,7 +93,7 @@ pub mod code { /// /// The value is a valid `errno` (i.e. `>= -MAX_ERRNO && < 0`). #[derive(Clone, Copy, PartialEq, Eq)] -pub struct Error(core::ffi::c_int); +pub struct Error(NonZeroI32);
impl Error { /// Creates an [`Error`] from a kernel error code. @@ -107,7 +112,20 @@ impl Error {
// INVARIANT: The check above ensures the type invariant // will hold. - Error(errno) + // SAFETY: `errno` is checked above to be in a valid range. + unsafe { Error::from_errno_unchecked(errno) } + } + + /// Creates an [`Error`] from a kernel error code. + /// + /// Returns [`None`] if `errno` is out-of-range. + const fn try_from_errno(errno: core::ffi::c_int) -> Option<Error> { + if errno < -(bindings::MAX_ERRNO as i32) || errno >= 0 { + return None; + } + + // SAFETY: `errno` is checked above to be in a valid range. + Some(unsafe { Error::from_errno_unchecked(errno) }) }
/// Creates an [`Error`] from a kernel error code. @@ -115,21 +133,22 @@ impl Error { /// # Safety /// /// `errno` must be within error code range (i.e. `>= -MAX_ERRNO && < 0`). - unsafe fn from_errno_unchecked(errno: core::ffi::c_int) -> Error { + const unsafe fn from_errno_unchecked(errno: core::ffi::c_int) -> Error { // INVARIANT: The contract ensures the type invariant // will hold. - Error(errno) + // SAFETY: The caller guarantees `errno` is non-zero. + Error(unsafe { NonZeroI32::new_unchecked(errno) }) }
/// Returns the kernel error code. pub fn to_errno(self) -> core::ffi::c_int { - self.0 + self.0.get() }
#[cfg(CONFIG_BLOCK)] pub(crate) fn to_blk_status(self) -> bindings::blk_status_t { // SAFETY: `self.0` is a valid error due to its invariant. - unsafe { bindings::errno_to_blk_status(self.0) } + unsafe { bindings::errno_to_blk_status(self.0.get()) } }
/// Returns the error encoded as a pointer. @@ -137,7 +156,7 @@ impl Error { #[cfg_attr(target_pointer_width = "32", allow(clippy::useless_conversion))] // SAFETY: `self.0` is a valid error due to its invariant. unsafe { - bindings::ERR_PTR(self.0.into()) as *mut _ + bindings::ERR_PTR(self.0.get().into()) as *mut _ } }
@@ -145,7 +164,7 @@ impl Error { #[cfg(not(testlib))] pub fn name(&self) -> Option<&'static CStr> { // SAFETY: Just an FFI call, there are no extra safety requirements. - let ptr = unsafe { bindings::errname(-self.0) }; + let ptr = unsafe { bindings::errname(-self.0.get()) }; if ptr.is_null() { None } else {
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Danilo Krummrich dakr@kernel.org
commit b7a084ba4fbb8f416ce8d19c93a3a2bee63c9c89 upstream.
Add a kernel-specific `Allocator` trait that, in contrast to the one in Rust's core library, doesn't require unstable features and supports GFP flags.
Subsequent patches add the following trait implementors: `Kmalloc`, `Vmalloc` and `KVmalloc`.
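To show what the trait is shaped for, a hedged sketch of an allocator-generic helper; `alloc_one_u64` is a hypothetical function, not part of the patch:

```rust
use core::alloc::Layout;
use core::ptr::NonNull;
use kernel::alloc::{AllocError, Allocator, Flags};

// Generic over the allocator as a zero-sized type parameter: no allocator
// instance is needed because all trait functions are associated functions.
fn alloc_one_u64<A: Allocator>(flags: Flags) -> Result<NonNull<[u8]>, AllocError> {
    A::alloc(Layout::new::<u64>(), flags)
}
```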
Reviewed-by: Alice Ryhl aliceryhl@google.com Reviewed-by: Benno Lossin benno.lossin@proton.me Reviewed-by: Gary Guo gary@garyguo.net Signed-off-by: Danilo Krummrich dakr@kernel.org Link: https://lore.kernel.org/r/20241004154149.93856-2-dakr@kernel.org [ Fixed typo. - Miguel ] Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/kernel/alloc.rs | 101 +++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 101 insertions(+)
--- a/rust/kernel/alloc.rs +++ b/rust/kernel/alloc.rs @@ -11,6 +11,7 @@ pub mod vec_ext; /// Indicates an allocation error. #[derive(Copy, Clone, PartialEq, Eq, Debug)] pub struct AllocError; +use core::{alloc::Layout, ptr::NonNull};
/// Flags to be used when allocating memory. /// @@ -86,3 +87,103 @@ pub mod flags { /// small allocations. pub const GFP_NOWAIT: Flags = Flags(bindings::GFP_NOWAIT); } + +/// The kernel's [`Allocator`] trait. +/// +/// An implementation of [`Allocator`] can allocate, re-allocate and free memory buffers described +/// via [`Layout`]. +/// +/// [`Allocator`] is designed to be implemented as a ZST; [`Allocator`] functions do not operate on +/// an object instance. +/// +/// In order to be able to support `#[derive(SmartPointer)]` later on, we need to avoid a design +/// that requires an `Allocator` to be instantiated, hence its functions must not contain any kind +/// of `self` parameter. +/// +/// # Safety +/// +/// - A memory allocation returned from an allocator must remain valid until it is explicitly freed. +/// +/// - Any pointer to a valid memory allocation must be valid to be passed to any other [`Allocator`] +/// function of the same type. +/// +/// - Implementers must ensure that all trait functions abide by the guarantees documented in the +/// `# Guarantees` sections. +pub unsafe trait Allocator { + /// Allocate memory based on `layout` and `flags`. + /// + /// On success, returns a buffer represented as `NonNull<[u8]>` that satisfies the layout + /// constraints (i.e. minimum size and alignment as specified by `layout`). + /// + /// This function is equivalent to `realloc` when called with `None`. + /// + /// # Guarantees + /// + /// When the return value is `Ok(ptr)`, then `ptr` is + /// - valid for reads and writes for `layout.size()` bytes, until it is passed to + /// [`Allocator::free`] or [`Allocator::realloc`], + /// - aligned to `layout.align()`, + /// + /// Additionally, `Flags` are honored as documented in + /// https://docs.kernel.org/core-api/mm-api.html#mm-api-gfp-flags. + fn alloc(layout: Layout, flags: Flags) -> Result<NonNull<[u8]>, AllocError> { + // SAFETY: Passing `None` to `realloc` is valid by its safety requirements and asks for a + // new memory allocation. + unsafe { Self::realloc(None, layout, Layout::new::<()>(), flags) } + } + + /// Re-allocate an existing memory allocation to satisfy the requested `layout`. + /// + /// If the requested size is zero, `realloc` behaves equivalent to `free`. + /// + /// If the requested size is larger than the size of the existing allocation, a successful call + /// to `realloc` guarantees that the new or grown buffer has at least `Layout::size` bytes, but + /// may also be larger. + /// + /// If the requested size is smaller than the size of the existing allocation, `realloc` may or + /// may not shrink the buffer; this is implementation specific to the allocator. + /// + /// On allocation failure, the existing buffer, if any, remains valid. + /// + /// The buffer is represented as `NonNull<[u8]>`. + /// + /// # Safety + /// + /// - If `ptr == Some(p)`, then `p` must point to an existing and valid memory allocation + /// created by this [`Allocator`]; if `old_layout` is zero-sized `p` does not need to be a + /// pointer returned by this [`Allocator`]. + /// - `ptr` is allowed to be `None`; in this case a new memory allocation is created and + /// `old_layout` is ignored. + /// - `old_layout` must match the `Layout` the allocation has been created with. + /// + /// # Guarantees + /// + /// This function has the same guarantees as [`Allocator::alloc`]. 
When `ptr == Some(p)`, then + /// it additionally guarantees that: + /// - the contents of the memory pointed to by `p` are preserved up to the lesser of the new + /// and old size, i.e. `ret_ptr[0..min(layout.size(), old_layout.size())] == + /// p[0..min(layout.size(), old_layout.size())]`. + /// - when the return value is `Err(AllocError)`, then `ptr` is still valid. + unsafe fn realloc( + ptr: Option<NonNull<u8>>, + layout: Layout, + old_layout: Layout, + flags: Flags, + ) -> Result<NonNull<[u8]>, AllocError>; + + /// Free an existing memory allocation. + /// + /// # Safety + /// + /// - `ptr` must point to an existing and valid memory allocation created by this [`Allocator`]; + /// if `old_layout` is zero-sized `p` does not need to be a pointer returned by this + /// [`Allocator`]. + /// - `layout` must match the `Layout` the allocation has been created with. + /// - The memory allocation at `ptr` must never again be read from or written to. + unsafe fn free(ptr: NonNull<u8>, layout: Layout) { + // SAFETY: The caller guarantees that `ptr` points at a valid allocation created by this + // allocator. We are passing a `Layout` with the smallest possible alignment, so it is + // smaller than or equal to the alignment previously used with this allocation. + let _ = unsafe { Self::realloc(Some(ptr), Layout::new::<()>(), layout, Flags(0)) }; + } +}
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Danilo Krummrich dakr@kernel.org
commit a654a6e09644266e38ac05415ef7737d299c4497 upstream.
Separate `aligned_size` from `krealloc_aligned`.
Subsequent patches implement `Allocator` derivatives, such as `Kmalloc`, that require `aligned_size` and replace the original `krealloc_aligned`.
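A small userspace sketch of the behaviour `aligned_size()` factors out; it uses only plain `core::alloc::Layout`, no kernel types:

```rust
use core::alloc::Layout;

fn main() {
    // A custom layout may have size < align; `pad_to_align()` rounds the size
    // up to a multiple of the alignment, which is the value `aligned_size()`
    // hands on to `krealloc()`.
    let layout = Layout::from_size_align(3, 8).unwrap();
    assert_eq!(layout.pad_to_align().size(), 8);
}
```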
Reviewed-by: Alice Ryhl aliceryhl@google.com Reviewed-by: Benno Lossin benno.lossin@proton.me Reviewed-by: Gary Guo gary@garyguo.net Signed-off-by: Danilo Krummrich dakr@kernel.org Link: https://lore.kernel.org/r/20241004154149.93856-3-dakr@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/kernel/alloc/allocator.rs | 19 ++++++++++++------- 1 file changed, 12 insertions(+), 7 deletions(-)
--- a/rust/kernel/alloc/allocator.rs +++ b/rust/kernel/alloc/allocator.rs @@ -8,6 +8,17 @@ use core::ptr;
struct KernelAllocator;
+/// Returns a proper size to alloc a new object aligned to `new_layout`'s alignment. +fn aligned_size(new_layout: Layout) -> usize { + // Customized layouts from `Layout::from_size_align()` can have size < align, so pad first. + let layout = new_layout.pad_to_align(); + + // Note that `layout.size()` (after padding) is guaranteed to be a multiple of `layout.align()` + // which together with the slab guarantees means the `krealloc` will return a properly aligned + // object (see comments in `kmalloc()` for more information). + layout.size() +} + /// Calls `krealloc` with a proper size to alloc a new object aligned to `new_layout`'s alignment. /// /// # Safety @@ -15,13 +26,7 @@ struct KernelAllocator; /// - `ptr` can be either null or a pointer which has been allocated by this allocator. /// - `new_layout` must have a non-zero size. pub(crate) unsafe fn krealloc_aligned(ptr: *mut u8, new_layout: Layout, flags: Flags) -> *mut u8 { - // Customized layouts from `Layout::from_size_align()` can have size < align, so pad first. - let layout = new_layout.pad_to_align(); - - // Note that `layout.size()` (after padding) is guaranteed to be a multiple of `layout.align()` - // which together with the slab guarantees means the `krealloc` will return a properly aligned - // object (see comments in `kmalloc()` for more information). - let size = layout.size(); + let size = aligned_size(new_layout);
// SAFETY: // - `ptr` is either null or a pointer returned from a previous `k{re}alloc()` by the
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Danilo Krummrich dakr@kernel.org
commit 941e65531446c1eb5d573c5d30172117ebe96112 upstream.
Subsequent patches implement the `Vmalloc` and `KVmalloc` allocators; hence, rename `KernelAllocator` to `Kmalloc` to align it with this naming scheme.
Reviewed-by: Alice Ryhl aliceryhl@google.com Reviewed-by: Benno Lossin benno.lossin@proton.me Reviewed-by: Gary Guo gary@garyguo.net Signed-off-by: Danilo Krummrich dakr@kernel.org Link: https://lore.kernel.org/r/20241004154149.93856-4-dakr@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/kernel/alloc/allocator.rs | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
--- a/rust/kernel/alloc/allocator.rs +++ b/rust/kernel/alloc/allocator.rs @@ -6,7 +6,7 @@ use super::{flags::*, Flags}; use core::alloc::{GlobalAlloc, Layout}; use core::ptr;
-struct KernelAllocator; +struct Kmalloc;
/// Returns a proper size to alloc a new object aligned to `new_layout`'s alignment. fn aligned_size(new_layout: Layout) -> usize { @@ -37,7 +37,7 @@ pub(crate) unsafe fn krealloc_aligned(pt }
// SAFETY: TODO. -unsafe impl GlobalAlloc for KernelAllocator { +unsafe impl GlobalAlloc for Kmalloc { unsafe fn alloc(&self, layout: Layout) -> *mut u8 { // SAFETY: `ptr::null_mut()` is null and `layout` has a non-zero size by the function safety // requirement. @@ -74,7 +74,7 @@ unsafe impl GlobalAlloc for KernelAlloca }
#[global_allocator] -static ALLOCATOR: KernelAllocator = KernelAllocator; +static ALLOCATOR: Kmalloc = Kmalloc;
// See https://github.com/rust-lang/rust/pull/86844. #[no_mangle]
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Danilo Krummrich dakr@kernel.org
commit 8a799831fc63c988eec90d334fdd68ff5f2c7eb5 upstream.
`ReallocFunc` is an abstraction for the kernel's realloc derivatives, such as `krealloc`, `vrealloc` and `kvrealloc`.
All of the named functions share the same function signature and implement the same semantics. The `ReallocFunc` abstraction provides a generalized wrapper around them, which trivializes the implementation of `Kmalloc`, `Vmalloc` and `KVmalloc` in subsequent patches.
Reviewed-by: Benno Lossin benno.lossin@proton.me Reviewed-by: Gary Guo gary@garyguo.net Signed-off-by: Danilo Krummrich dakr@kernel.org Link: https://lore.kernel.org/r/20241004154149.93856-5-dakr@kernel.org [ Added temporary `allow(dead_code)` for `dangling_from_layout` to clean warning in `rusttest` target as discussed in the list (but it is needed earlier, i.e. in this patch already). Added colon. - Miguel ] Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/kernel/alloc.rs | 9 +++++ rust/kernel/alloc/allocator.rs | 70 +++++++++++++++++++++++++++++++++++++++++ 2 files changed, 79 insertions(+)
--- a/rust/kernel/alloc.rs +++ b/rust/kernel/alloc.rs @@ -187,3 +187,12 @@ pub unsafe trait Allocator { let _ = unsafe { Self::realloc(Some(ptr), Layout::new::<()>(), layout, Flags(0)) }; } } + +#[allow(dead_code)] +/// Returns a properly aligned dangling pointer from the given `layout`. +pub(crate) fn dangling_from_layout(layout: Layout) -> NonNull<u8> { + let ptr = layout.align() as *mut u8; + + // SAFETY: `layout.align()` (and hence `ptr`) is guaranteed to be non-zero. + unsafe { NonNull::new_unchecked(ptr) } +} --- a/rust/kernel/alloc/allocator.rs +++ b/rust/kernel/alloc/allocator.rs @@ -1,10 +1,20 @@ // SPDX-License-Identifier: GPL-2.0
//! Allocator support. +//! +//! Documentation for the kernel's memory allocators can found in the "Memory Allocation Guide" +//! linked below. For instance, this includes the concept of "get free page" (GFP) flags and the +//! typical application of the different kernel allocators. +//! +//! Reference: https://docs.kernel.org/core-api/memory-allocation.html
use super::{flags::*, Flags}; use core::alloc::{GlobalAlloc, Layout}; use core::ptr; +use core::ptr::NonNull; + +use crate::alloc::AllocError; +use crate::bindings;
struct Kmalloc;
@@ -36,6 +46,66 @@ pub(crate) unsafe fn krealloc_aligned(pt unsafe { bindings::krealloc(ptr as *const core::ffi::c_void, size, flags.0) as *mut u8 } }
+/// # Invariants +/// +/// One of the following: `krealloc`, `vrealloc`, `kvrealloc`. +struct ReallocFunc( + unsafe extern "C" fn(*const core::ffi::c_void, usize, u32) -> *mut core::ffi::c_void, +); + +#[expect(dead_code)] +impl ReallocFunc { + /// # Safety + /// + /// This method has the same safety requirements as [`Allocator::realloc`]. + /// + /// # Guarantees + /// + /// This method has the same guarantees as `Allocator::realloc`. Additionally + /// - it accepts any pointer to a valid memory allocation allocated by this function. + /// - memory allocated by this function remains valid until it is passed to this function. + unsafe fn call( + &self, + ptr: Option<NonNull<u8>>, + layout: Layout, + old_layout: Layout, + flags: Flags, + ) -> Result<NonNull<[u8]>, AllocError> { + let size = aligned_size(layout); + let ptr = match ptr { + Some(ptr) => { + if old_layout.size() == 0 { + ptr::null() + } else { + ptr.as_ptr() + } + } + None => ptr::null(), + }; + + // SAFETY: + // - `self.0` is one of `krealloc`, `vrealloc`, `kvrealloc` and thus only requires that + // `ptr` is NULL or valid. + // - `ptr` is either NULL or valid by the safety requirements of this function. + // + // GUARANTEE: + // - `self.0` is one of `krealloc`, `vrealloc`, `kvrealloc`. + // - Those functions provide the guarantees of this function. + let raw_ptr = unsafe { + // If `size == 0` and `ptr != NULL` the memory behind the pointer is freed. + self.0(ptr.cast(), size, flags.0).cast() + }; + + let ptr = if size == 0 { + crate::alloc::dangling_from_layout(layout) + } else { + NonNull::new(raw_ptr).ok_or(AllocError)? + }; + + Ok(NonNull::slice_from_raw_parts(ptr, size)) + } +} + // SAFETY: TODO. unsafe impl GlobalAlloc for Kmalloc { unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Danilo Krummrich dakr@kernel.org
commit a87a36f0bf517dae22f3e3790b05c979070f776a upstream.
Subsequent patches implement allocators such as `Kmalloc`, `Vmalloc`, `KVmalloc`; we need them to be available outside of the kernel crate as well.
Reviewed-by: Benno Lossin benno.lossin@proton.me Reviewed-by: Gary Guo gary@garyguo.net Signed-off-by: Danilo Krummrich dakr@kernel.org Link: https://lore.kernel.org/r/20241004154149.93856-6-dakr@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/kernel/alloc.rs | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/rust/kernel/alloc.rs +++ b/rust/kernel/alloc.rs @@ -4,7 +4,7 @@
#[cfg(not(test))] #[cfg(not(testlib))] -mod allocator; +pub mod allocator; pub mod box_ext; pub mod vec_ext;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Danilo Krummrich dakr@kernel.org
commit a34822d1c4c93085f635b922441a017bd7e959b0 upstream.
Implement `Allocator` for `Kmalloc`, the kernel's default allocator, typically used for objects smaller than page size.
All memory allocations made with `Kmalloc` end up in `krealloc()`.
It serves as the allocator for the subsequently introduced types `KBox` and `KVec`.
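For illustration, a hedged sketch of using the new implementation directly through the trait (most code would instead go through higher-level types); `demo` is a hypothetical function:

```rust
use core::alloc::Layout;
use kernel::alloc::{allocator::Kmalloc, flags::GFP_KERNEL, AllocError, Allocator};

fn demo() -> Result<(), AllocError> {
    let layout = Layout::new::<[u8; 64]>();
    let buf = Kmalloc::alloc(layout, GFP_KERNEL)?;
    // SAFETY: `buf` was just returned by `Kmalloc::alloc` with this `layout`
    // and is not used afterwards.
    unsafe { Kmalloc::free(buf.cast(), layout) };
    Ok(())
}
```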
Reviewed-by: Benno Lossin benno.lossin@proton.me Reviewed-by: Gary Guo gary@garyguo.net Signed-off-by: Danilo Krummrich dakr@kernel.org Link: https://lore.kernel.org/r/20241004154149.93856-7-dakr@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/kernel/alloc/allocator.rs | 31 ++++++++++++++++++++++++++++--- 1 file changed, 28 insertions(+), 3 deletions(-)
--- a/rust/kernel/alloc/allocator.rs +++ b/rust/kernel/alloc/allocator.rs @@ -13,10 +13,16 @@ use core::alloc::{GlobalAlloc, Layout}; use core::ptr; use core::ptr::NonNull;
-use crate::alloc::AllocError; +use crate::alloc::{AllocError, Allocator}; use crate::bindings;
-struct Kmalloc; +/// The contiguous kernel allocator. +/// +/// `Kmalloc` is typically used for physically contiguous allocations up to page size, but also +/// supports larger allocations up to `bindings::KMALLOC_MAX_SIZE`, which is hardware specific. +/// +/// For more details see [self]. +pub struct Kmalloc;
/// Returns a proper size to alloc a new object aligned to `new_layout`'s alignment. fn aligned_size(new_layout: Layout) -> usize { @@ -53,8 +59,10 @@ struct ReallocFunc( unsafe extern "C" fn(*const core::ffi::c_void, usize, u32) -> *mut core::ffi::c_void, );
-#[expect(dead_code)] impl ReallocFunc { + // INVARIANT: `krealloc` satisfies the type invariants. + const KREALLOC: Self = Self(bindings::krealloc); + /// # Safety /// /// This method has the same safety requirements as [`Allocator::realloc`]. @@ -106,6 +114,23 @@ impl ReallocFunc { } }
+// SAFETY: `realloc` delegates to `ReallocFunc::call`, which guarantees that +// - memory remains valid until it is explicitly freed, +// - passing a pointer to a valid memory allocation is OK, +// - `realloc` satisfies the guarantees, since `ReallocFunc::call` has the same. +unsafe impl Allocator for Kmalloc { + #[inline] + unsafe fn realloc( + ptr: Option<NonNull<u8>>, + layout: Layout, + old_layout: Layout, + flags: Flags, + ) -> Result<NonNull<[u8]>, AllocError> { + // SAFETY: `ReallocFunc::call` has the same safety requirements as `Allocator::realloc`. + unsafe { ReallocFunc::KREALLOC.call(ptr, layout, old_layout, flags) } + } +} + // SAFETY: TODO. unsafe impl GlobalAlloc for Kmalloc { unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Danilo Krummrich dakr@kernel.org
commit 5a888c28e3b4ff6f54a53fca33951537d135e7f1 upstream.
`Allocator`s, such as `Kmalloc`, will be used by e.g. `Box` and `Vec` in subsequent patches, and hence this dependency propagates throughout the whole kernel.
Add the `allocator_test` module that provides an empty implementation for all `Allocator`s in the kernel, such that we don't break the `rusttest` make target in subsequent patches.
Reviewed-by: Alice Ryhl aliceryhl@google.com Reviewed-by: Benno Lossin benno.lossin@proton.me Reviewed-by: Gary Guo gary@garyguo.net Signed-off-by: Danilo Krummrich dakr@kernel.org Link: https://lore.kernel.org/r/20241004154149.93856-8-dakr@kernel.org [ Added missing `_old_layout` parameter as discussed. - Miguel ] Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/kernel/alloc.rs | 9 +++++++-- rust/kernel/alloc/allocator_test.rs | 20 ++++++++++++++++++++ 2 files changed, 27 insertions(+), 2 deletions(-) create mode 100644 rust/kernel/alloc/allocator_test.rs
--- a/rust/kernel/alloc.rs +++ b/rust/kernel/alloc.rs @@ -2,12 +2,17 @@
//! Extensions to the [`alloc`] crate.
-#[cfg(not(test))] -#[cfg(not(testlib))] +#[cfg(not(any(test, testlib)))] pub mod allocator; pub mod box_ext; pub mod vec_ext;
+#[cfg(any(test, testlib))] +pub mod allocator_test; + +#[cfg(any(test, testlib))] +pub use self::allocator_test as allocator; + /// Indicates an allocation error. #[derive(Copy, Clone, PartialEq, Eq, Debug)] pub struct AllocError; --- /dev/null +++ b/rust/kernel/alloc/allocator_test.rs @@ -0,0 +1,20 @@ +// SPDX-License-Identifier: GPL-2.0 + +#![allow(missing_docs)] + +use super::{AllocError, Allocator, Flags}; +use core::alloc::Layout; +use core::ptr::NonNull; + +pub struct Kmalloc; + +unsafe impl Allocator for Kmalloc { + unsafe fn realloc( + _ptr: Option<NonNull<u8>>, + _layout: Layout, + _old_layout: Layout, + _flags: Flags, + ) -> Result<NonNull<[u8]>, AllocError> { + panic!(); + } +}
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Danilo Krummrich dakr@kernel.org
commit 61c004781d6b928443052e7a6cf84b35d4f61401 upstream.
Implement `Allocator` for `Vmalloc`, the kernel's virtually contiguous allocator, typically used for large objects that are (much) larger than page size.
All memory allocations made with `Vmalloc` end up in `vrealloc()`.
Reviewed-by: Alice Ryhl aliceryhl@google.com Reviewed-by: Benno Lossin benno.lossin@proton.me Reviewed-by: Gary Guo gary@garyguo.net Signed-off-by: Danilo Krummrich dakr@kernel.org Link: https://lore.kernel.org/r/20241004154149.93856-9-dakr@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/helpers/helpers.c | 1 rust/helpers/vmalloc.c | 9 ++++++++ rust/kernel/alloc/allocator.rs | 37 ++++++++++++++++++++++++++++++++++++ rust/kernel/alloc/allocator_test.rs | 1 4 files changed, 48 insertions(+) create mode 100644 rust/helpers/vmalloc.c
--- a/rust/helpers/helpers.c +++ b/rust/helpers/helpers.c @@ -22,5 +22,6 @@ #include "spinlock.c" #include "task.c" #include "uaccess.c" +#include "vmalloc.c" #include "wait.c" #include "workqueue.c" --- /dev/null +++ b/rust/helpers/vmalloc.c @@ -0,0 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include <linux/vmalloc.h> + +void * __must_check __realloc_size(2) +rust_helper_vrealloc(const void *p, size_t size, gfp_t flags) +{ + return vrealloc(p, size, flags); +} --- a/rust/kernel/alloc/allocator.rs +++ b/rust/kernel/alloc/allocator.rs @@ -15,6 +15,7 @@ use core::ptr::NonNull;
use crate::alloc::{AllocError, Allocator}; use crate::bindings; +use crate::pr_warn;
/// The contiguous kernel allocator. /// @@ -24,6 +25,15 @@ use crate::bindings; /// For more details see [self]. pub struct Kmalloc;
+/// The virtually contiguous kernel allocator. +/// +/// `Vmalloc` allocates pages from the page level allocator and maps them into the contiguous kernel +/// virtual space. It is typically used for large allocations. The memory allocated with this +/// allocator is not physically contiguous. +/// +/// For more details see [self]. +pub struct Vmalloc; + /// Returns a proper size to alloc a new object aligned to `new_layout`'s alignment. fn aligned_size(new_layout: Layout) -> usize { // Customized layouts from `Layout::from_size_align()` can have size < align, so pad first. @@ -63,6 +73,9 @@ impl ReallocFunc { // INVARIANT: `krealloc` satisfies the type invariants. const KREALLOC: Self = Self(bindings::krealloc);
+ // INVARIANT: `vrealloc` satisfies the type invariants. + const VREALLOC: Self = Self(bindings::vrealloc); + /// # Safety /// /// This method has the same safety requirements as [`Allocator::realloc`]. @@ -168,6 +181,30 @@ unsafe impl GlobalAlloc for Kmalloc { } }
+// SAFETY: `realloc` delegates to `ReallocFunc::call`, which guarantees that +// - memory remains valid until it is explicitly freed, +// - passing a pointer to a valid memory allocation is OK, +// - `realloc` satisfies the guarantees, since `ReallocFunc::call` has the same. +unsafe impl Allocator for Vmalloc { + #[inline] + unsafe fn realloc( + ptr: Option<NonNull<u8>>, + layout: Layout, + old_layout: Layout, + flags: Flags, + ) -> Result<NonNull<[u8]>, AllocError> { + // TODO: Support alignments larger than PAGE_SIZE. + if layout.align() > bindings::PAGE_SIZE { + pr_warn!("Vmalloc does not support alignments larger than PAGE_SIZE yet.\n"); + return Err(AllocError); + } + + // SAFETY: If not `None`, `ptr` is guaranteed to point to valid memory, which was previously + // allocated with this `Allocator`. + unsafe { ReallocFunc::VREALLOC.call(ptr, layout, old_layout, flags) } + } +} + #[global_allocator] static ALLOCATOR: Kmalloc = Kmalloc;
--- a/rust/kernel/alloc/allocator_test.rs +++ b/rust/kernel/alloc/allocator_test.rs @@ -7,6 +7,7 @@ use core::alloc::Layout; use core::ptr::NonNull;
pub struct Kmalloc; +pub type Vmalloc = Kmalloc;
unsafe impl Allocator for Kmalloc { unsafe fn realloc(
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Danilo Krummrich dakr@kernel.org
commit 8362c2608ba1be635ffa22a256dfcfe51c6238cc upstream.
Implement `Allocator` for `KVmalloc`, an `Allocator` that tries to allocate memory with `kmalloc` first and, on failure, falls back to `vmalloc`.
All memory allocations made with `KVmalloc` end up in `kvrealloc_noprof()`; all frees in `kvfree()`.
Reviewed-by: Alice Ryhl aliceryhl@google.com Reviewed-by: Benno Lossin benno.lossin@proton.me Reviewed-by: Gary Guo gary@garyguo.net Signed-off-by: Danilo Krummrich dakr@kernel.org Link: https://lore.kernel.org/r/20241004154149.93856-10-dakr@kernel.org [ Reworded typo. - Miguel ] Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/helpers/slab.c | 6 ++++++ rust/kernel/alloc/allocator.rs | 36 ++++++++++++++++++++++++++++++++++++ rust/kernel/alloc/allocator_test.rs | 1 + 3 files changed, 43 insertions(+)
--- a/rust/helpers/slab.c +++ b/rust/helpers/slab.c @@ -7,3 +7,9 @@ rust_helper_krealloc(const void *objp, s { return krealloc(objp, new_size, flags); } + +void * __must_check __realloc_size(2) +rust_helper_kvrealloc(const void *p, size_t size, gfp_t flags) +{ + return kvrealloc(p, size, flags); +} --- a/rust/kernel/alloc/allocator.rs +++ b/rust/kernel/alloc/allocator.rs @@ -34,6 +34,15 @@ pub struct Kmalloc; /// For more details see [self]. pub struct Vmalloc;
+/// The kvmalloc kernel allocator. +/// +/// `KVmalloc` attempts to allocate memory with `Kmalloc` first, but falls back to `Vmalloc` upon +/// failure. This allocator is typically used when the size for the requested allocation is not +/// known and may exceed the capabilities of `Kmalloc`. +/// +/// For more details see [self]. +pub struct KVmalloc; + /// Returns a proper size to alloc a new object aligned to `new_layout`'s alignment. fn aligned_size(new_layout: Layout) -> usize { // Customized layouts from `Layout::from_size_align()` can have size < align, so pad first. @@ -76,6 +85,9 @@ impl ReallocFunc { // INVARIANT: `vrealloc` satisfies the type invariants. const VREALLOC: Self = Self(bindings::vrealloc);
+ // INVARIANT: `kvrealloc` satisfies the type invariants. + const KVREALLOC: Self = Self(bindings::kvrealloc); + /// # Safety /// /// This method has the same safety requirements as [`Allocator::realloc`]. @@ -205,6 +217,30 @@ unsafe impl Allocator for Vmalloc { } }
+// SAFETY: `realloc` delegates to `ReallocFunc::call`, which guarantees that +// - memory remains valid until it is explicitly freed, +// - passing a pointer to a valid memory allocation is OK, +// - `realloc` satisfies the guarantees, since `ReallocFunc::call` has the same. +unsafe impl Allocator for KVmalloc { + #[inline] + unsafe fn realloc( + ptr: Option<NonNull<u8>>, + layout: Layout, + old_layout: Layout, + flags: Flags, + ) -> Result<NonNull<[u8]>, AllocError> { + // TODO: Support alignments larger than PAGE_SIZE. + if layout.align() > bindings::PAGE_SIZE { + pr_warn!("KVmalloc does not support alignments larger than PAGE_SIZE yet.\n"); + return Err(AllocError); + } + + // SAFETY: If not `None`, `ptr` is guaranteed to point to valid memory, which was previously + // allocated with this `Allocator`. + unsafe { ReallocFunc::KVREALLOC.call(ptr, layout, old_layout, flags) } + } +} + #[global_allocator] static ALLOCATOR: Kmalloc = Kmalloc;
--- a/rust/kernel/alloc/allocator_test.rs +++ b/rust/kernel/alloc/allocator_test.rs @@ -8,6 +8,7 @@ use core::ptr::NonNull;
pub struct Kmalloc; pub type Vmalloc = Kmalloc; +pub type KVmalloc = Kmalloc;
unsafe impl Allocator for Kmalloc { unsafe fn realloc(
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Danilo Krummrich dakr@kernel.org
commit 01b2196e5aac8af9343282d0044fa0d6b07d484c upstream.
Some test cases in subsequent patches deliberately provoke allocation failures. Add `__GFP_NOWARN` so that these test cases can silence the resulting allocation-failure warnings.
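A tiny sketch of the intended usage, with the flag or'd into a base flag exactly as the new doc comment says; `quiet_flags` is a hypothetical helper:

```rust
use kernel::alloc::flags::{GFP_KERNEL, __GFP_NOWARN};
use kernel::alloc::Flags;

fn quiet_flags() -> Flags {
    // A failing allocation made with these flags will not be reported
    // in the kernel log.
    GFP_KERNEL | __GFP_NOWARN
}
```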
Reviewed-by: Alice Ryhl aliceryhl@google.com Reviewed-by: Benno Lossin benno.lossin@proton.me Reviewed-by: Gary Guo gary@garyguo.net Signed-off-by: Danilo Krummrich dakr@kernel.org Link: https://lore.kernel.org/r/20241004154149.93856-11-dakr@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/bindings/bindings_helper.h | 1 + rust/kernel/alloc.rs | 5 +++++ 2 files changed, 6 insertions(+)
--- a/rust/bindings/bindings_helper.h +++ b/rust/bindings/bindings_helper.h @@ -31,4 +31,5 @@ const gfp_t RUST_CONST_HELPER_GFP_KERNEL const gfp_t RUST_CONST_HELPER_GFP_NOWAIT = GFP_NOWAIT; const gfp_t RUST_CONST_HELPER___GFP_ZERO = __GFP_ZERO; const gfp_t RUST_CONST_HELPER___GFP_HIGHMEM = ___GFP_HIGHMEM; +const gfp_t RUST_CONST_HELPER___GFP_NOWARN = ___GFP_NOWARN; const blk_features_t RUST_CONST_HELPER_BLK_FEAT_ROTATIONAL = BLK_FEAT_ROTATIONAL; --- a/rust/kernel/alloc.rs +++ b/rust/kernel/alloc.rs @@ -91,6 +91,11 @@ pub mod flags { /// use any filesystem callback. It is very likely to fail to allocate memory, even for very /// small allocations. pub const GFP_NOWAIT: Flags = Flags(bindings::GFP_NOWAIT); + + /// Suppresses allocation failure reports. + /// + /// This is normally or'd with other flags. + pub const __GFP_NOWARN: Flags = Flags(bindings::__GFP_NOWARN); }
/// The kernel's [`Allocator`] trait.
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Danilo Krummrich dakr@kernel.org
commit c8cfa8d0c0b10be216861fe904ea68978b1dcc97 upstream.
`Box` provides the simplest way to allocate memory for a generic type with one of the kernel's allocators, e.g. `Kmalloc`, `Vmalloc` or `KVmalloc`.
In contrast to Rust's `Box` type, the kernel `Box` type considers the kernel's GFP flags for all appropriate functions, always reports allocation failures through `Result<_, AllocError>` and remains independent of unstable features.
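For illustration, a hedged sketch combining the three aliases, mirroring the doc examples the patch adds to `kbox.rs`; the `demo` function is hypothetical:

```rust
use kernel::alloc::{KBox, KVBox, VBox};
use kernel::prelude::*; // `GFP_KERNEL`, `Error`, `Result`

fn demo() -> Result {
    let a = KBox::new(24_u64, GFP_KERNEL)?; // kmalloc-backed
    let b = VBox::new(24_u64, GFP_KERNEL)?; // vmalloc-backed
    let c = KVBox::new(24_u64, GFP_KERNEL)?; // kmalloc, falling back to vmalloc
    assert_eq!(*a + *b + *c, 72);
    Ok(())
}
```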
Reviewed-by: Benno Lossin benno.lossin@proton.me Reviewed-by: Gary Guo gary@garyguo.net Signed-off-by: Danilo Krummrich dakr@kernel.org Link: https://lore.kernel.org/r/20241004154149.93856-12-dakr@kernel.org [ Added backticks, fixed typos. - Miguel ] Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/kernel/alloc.rs | 6 rust/kernel/alloc/kbox.rs | 456 ++++++++++++++++++++++++++++++++++++++++++++++ rust/kernel/prelude.rs | 2 3 files changed, 463 insertions(+), 1 deletion(-) create mode 100644 rust/kernel/alloc/kbox.rs
--- a/rust/kernel/alloc.rs +++ b/rust/kernel/alloc.rs @@ -5,6 +5,7 @@ #[cfg(not(any(test, testlib)))] pub mod allocator; pub mod box_ext; +pub mod kbox; pub mod vec_ext;
#[cfg(any(test, testlib))] @@ -13,6 +14,11 @@ pub mod allocator_test; #[cfg(any(test, testlib))] pub use self::allocator_test as allocator;
+pub use self::kbox::Box; +pub use self::kbox::KBox; +pub use self::kbox::KVBox; +pub use self::kbox::VBox; + /// Indicates an allocation error. #[derive(Copy, Clone, PartialEq, Eq, Debug)] pub struct AllocError; --- /dev/null +++ b/rust/kernel/alloc/kbox.rs @@ -0,0 +1,456 @@ +// SPDX-License-Identifier: GPL-2.0 + +//! Implementation of [`Box`]. + +#[allow(unused_imports)] // Used in doc comments. +use super::allocator::{KVmalloc, Kmalloc, Vmalloc}; +use super::{AllocError, Allocator, Flags}; +use core::alloc::Layout; +use core::fmt; +use core::marker::PhantomData; +use core::mem::ManuallyDrop; +use core::mem::MaybeUninit; +use core::ops::{Deref, DerefMut}; +use core::pin::Pin; +use core::ptr::NonNull; +use core::result::Result; + +use crate::init::{InPlaceInit, InPlaceWrite, Init, PinInit}; +use crate::types::ForeignOwnable; + +/// The kernel's [`Box`] type -- a heap allocation for a single value of type `T`. +/// +/// This is the kernel's version of the Rust stdlib's `Box`. There are several differences, +/// for example no `noalias` attribute is emitted and partially moving out of a `Box` is not +/// supported. There are also several API differences, e.g. `Box` always requires an [`Allocator`] +/// implementation to be passed as generic, page [`Flags`] when allocating memory and all functions +/// that may allocate memory are fallible. +/// +/// `Box` works with any of the kernel's allocators, e.g. [`Kmalloc`], [`Vmalloc`] or [`KVmalloc`]. +/// There are aliases for `Box` with these allocators ([`KBox`], [`VBox`], [`KVBox`]). +/// +/// When dropping a [`Box`], the value is also dropped and the heap memory is automatically freed. +/// +/// # Examples +/// +/// ``` +/// let b = KBox::<u64>::new(24_u64, GFP_KERNEL)?; +/// +/// assert_eq!(*b, 24_u64); +/// # Ok::<(), Error>(()) +/// ``` +/// +/// ``` +/// # use kernel::bindings; +/// const SIZE: usize = bindings::KMALLOC_MAX_SIZE as usize + 1; +/// struct Huge([u8; SIZE]); +/// +/// assert!(KBox::<Huge>::new_uninit(GFP_KERNEL | __GFP_NOWARN).is_err()); +/// ``` +/// +/// ``` +/// # use kernel::bindings; +/// const SIZE: usize = bindings::KMALLOC_MAX_SIZE as usize + 1; +/// struct Huge([u8; SIZE]); +/// +/// assert!(KVBox::<Huge>::new_uninit(GFP_KERNEL).is_ok()); +/// ``` +/// +/// # Invariants +/// +/// `self.0` is always properly aligned and either points to memory allocated with `A` or, for +/// zero-sized types, is a dangling, well aligned pointer. +#[repr(transparent)] +pub struct Box<T: ?Sized, A: Allocator>(NonNull<T>, PhantomData<A>); + +/// Type alias for [`Box`] with a [`Kmalloc`] allocator. +/// +/// # Examples +/// +/// ``` +/// let b = KBox::new(24_u64, GFP_KERNEL)?; +/// +/// assert_eq!(*b, 24_u64); +/// # Ok::<(), Error>(()) +/// ``` +pub type KBox<T> = Box<T, super::allocator::Kmalloc>; + +/// Type alias for [`Box`] with a [`Vmalloc`] allocator. +/// +/// # Examples +/// +/// ``` +/// let b = VBox::new(24_u64, GFP_KERNEL)?; +/// +/// assert_eq!(*b, 24_u64); +/// # Ok::<(), Error>(()) +/// ``` +pub type VBox<T> = Box<T, super::allocator::Vmalloc>; + +/// Type alias for [`Box`] with a [`KVmalloc`] allocator. +/// +/// # Examples +/// +/// ``` +/// let b = KVBox::new(24_u64, GFP_KERNEL)?; +/// +/// assert_eq!(*b, 24_u64); +/// # Ok::<(), Error>(()) +/// ``` +pub type KVBox<T> = Box<T, super::allocator::KVmalloc>; + +// SAFETY: `Box` is `Send` if `T` is `Send` because the `Box` owns a `T`. 
+unsafe impl<T, A> Send for Box<T, A> +where + T: Send + ?Sized, + A: Allocator, +{ +} + +// SAFETY: `Box` is `Sync` if `T` is `Sync` because the `Box` owns a `T`. +unsafe impl<T, A> Sync for Box<T, A> +where + T: Sync + ?Sized, + A: Allocator, +{ +} + +impl<T, A> Box<T, A> +where + T: ?Sized, + A: Allocator, +{ + /// Creates a new `Box<T, A>` from a raw pointer. + /// + /// # Safety + /// + /// For non-ZSTs, `raw` must point at an allocation allocated with `A` that is sufficiently + /// aligned for and holds a valid `T`. The caller passes ownership of the allocation to the + /// `Box`. + /// + /// For ZSTs, `raw` must be a dangling, well aligned pointer. + #[inline] + pub const unsafe fn from_raw(raw: *mut T) -> Self { + // INVARIANT: Validity of `raw` is guaranteed by the safety preconditions of this function. + // SAFETY: By the safety preconditions of this function, `raw` is not a NULL pointer. + Self(unsafe { NonNull::new_unchecked(raw) }, PhantomData) + } + + /// Consumes the `Box<T, A>` and returns a raw pointer. + /// + /// This will not run the destructor of `T` and for non-ZSTs the allocation will stay alive + /// indefinitely. Use [`Box::from_raw`] to recover the [`Box`], drop the value and free the + /// allocation, if any. + /// + /// # Examples + /// + /// ``` + /// let x = KBox::new(24, GFP_KERNEL)?; + /// let ptr = KBox::into_raw(x); + /// // SAFETY: `ptr` comes from a previous call to `KBox::into_raw`. + /// let x = unsafe { KBox::from_raw(ptr) }; + /// + /// assert_eq!(*x, 24); + /// # Ok::<(), Error>(()) + /// ``` + #[inline] + pub fn into_raw(b: Self) -> *mut T { + ManuallyDrop::new(b).0.as_ptr() + } + + /// Consumes and leaks the `Box<T, A>` and returns a mutable reference. + /// + /// See [`Box::into_raw`] for more details. + #[inline] + pub fn leak<'a>(b: Self) -> &'a mut T { + // SAFETY: `Box::into_raw` always returns a properly aligned and dereferenceable pointer + // which points to an initialized instance of `T`. + unsafe { &mut *Box::into_raw(b) } + } +} + +impl<T, A> Box<MaybeUninit<T>, A> +where + A: Allocator, +{ + /// Converts a `Box<MaybeUninit<T>, A>` to a `Box<T, A>`. + /// + /// It is undefined behavior to call this function while the value inside of `b` is not yet + /// fully initialized. + /// + /// # Safety + /// + /// Callers must ensure that the value inside of `b` is in an initialized state. + pub unsafe fn assume_init(self) -> Box<T, A> { + let raw = Self::into_raw(self); + + // SAFETY: `raw` comes from a previous call to `Box::into_raw`. By the safety requirements + // of this function, the value inside the `Box` is in an initialized state. Hence, it is + // safe to reconstruct the `Box` as `Box<T, A>`. + unsafe { Box::from_raw(raw.cast()) } + } + + /// Writes the value and converts to `Box<T, A>`. + pub fn write(mut self, value: T) -> Box<T, A> { + (*self).write(value); + + // SAFETY: We've just initialized `b`'s value. + unsafe { self.assume_init() } + } +} + +impl<T, A> Box<T, A> +where + A: Allocator, +{ + /// Creates a new `Box<T, A>` and initializes its contents with `x`. + /// + /// New memory is allocated with `A`. The allocation may fail, in which case an error is + /// returned. For ZSTs no memory is allocated. + pub fn new(x: T, flags: Flags) -> Result<Self, AllocError> { + let b = Self::new_uninit(flags)?; + Ok(Box::write(b, x)) + } + + /// Creates a new `Box<T, A>` with uninitialized contents. + /// + /// New memory is allocated with `A`. The allocation may fail, in which case an error is + /// returned. 
For ZSTs no memory is allocated. + /// + /// # Examples + /// + /// ``` + /// let b = KBox::<u64>::new_uninit(GFP_KERNEL)?; + /// let b = KBox::write(b, 24); + /// + /// assert_eq!(*b, 24_u64); + /// # Ok::<(), Error>(()) + /// ``` + pub fn new_uninit(flags: Flags) -> Result<Box<MaybeUninit<T>, A>, AllocError> { + let layout = Layout::new::<MaybeUninit<T>>(); + let ptr = A::alloc(layout, flags)?; + + // INVARIANT: `ptr` is either a dangling pointer or points to memory allocated with `A`, + // which is sufficient in size and alignment for storing a `T`. + Ok(Box(ptr.cast(), PhantomData)) + } + + /// Constructs a new `Pin<Box<T, A>>`. If `T` does not implement [`Unpin`], then `x` will be + /// pinned in memory and can't be moved. + #[inline] + pub fn pin(x: T, flags: Flags) -> Result<Pin<Box<T, A>>, AllocError> + where + A: 'static, + { + Ok(Self::new(x, flags)?.into()) + } + + /// Forgets the contents (does not run the destructor), but keeps the allocation. + fn forget_contents(this: Self) -> Box<MaybeUninit<T>, A> { + let ptr = Self::into_raw(this); + + // SAFETY: `ptr` is valid, because it came from `Box::into_raw`. + unsafe { Box::from_raw(ptr.cast()) } + } + + /// Drops the contents, but keeps the allocation. + /// + /// # Examples + /// + /// ``` + /// let value = KBox::new([0; 32], GFP_KERNEL)?; + /// assert_eq!(*value, [0; 32]); + /// let value = KBox::drop_contents(value); + /// // Now we can re-use `value`: + /// let value = KBox::write(value, [1; 32]); + /// assert_eq!(*value, [1; 32]); + /// # Ok::<(), Error>(()) + /// ``` + pub fn drop_contents(this: Self) -> Box<MaybeUninit<T>, A> { + let ptr = this.0.as_ptr(); + + // SAFETY: `ptr` is valid, because it came from `this`. After this call we never access the + // value stored in `this` again. + unsafe { core::ptr::drop_in_place(ptr) }; + + Self::forget_contents(this) + } + + /// Moves the `Box`'s value out of the `Box` and consumes the `Box`. + pub fn into_inner(b: Self) -> T { + // SAFETY: By the type invariant `&*b` is valid for `read`. + let value = unsafe { core::ptr::read(&*b) }; + let _ = Self::forget_contents(b); + value + } +} + +impl<T, A> From<Box<T, A>> for Pin<Box<T, A>> +where + T: ?Sized, + A: Allocator, +{ + /// Converts a `Box<T, A>` into a `Pin<Box<T, A>>`. If `T` does not implement [`Unpin`], then + /// `*b` will be pinned in memory and can't be moved. + /// + /// This moves `b` into `Pin` without moving `*b` or allocating and copying any memory. + fn from(b: Box<T, A>) -> Self { + // SAFETY: The value wrapped inside a `Pin<Box<T, A>>` cannot be moved or replaced as long + // as `T` does not implement `Unpin`. + unsafe { Pin::new_unchecked(b) } + } +} + +impl<T, A> InPlaceWrite<T> for Box<MaybeUninit<T>, A> +where + A: Allocator + 'static, +{ + type Initialized = Box<T, A>; + + fn write_init<E>(mut self, init: impl Init<T, E>) -> Result<Self::Initialized, E> { + let slot = self.as_mut_ptr(); + // SAFETY: When init errors/panics, slot will get deallocated but not dropped, + // slot is valid. + unsafe { init.__init(slot)? }; + // SAFETY: All fields have been initialized. + Ok(unsafe { Box::assume_init(self) }) + } + + fn write_pin_init<E>(mut self, init: impl PinInit<T, E>) -> Result<Pin<Self::Initialized>, E> { + let slot = self.as_mut_ptr(); + // SAFETY: When init errors/panics, slot will get deallocated but not dropped, + // slot is valid and will not be moved, because we pin it later. + unsafe { init.__pinned_init(slot)? }; + // SAFETY: All fields have been initialized.
+ Ok(unsafe { Box::assume_init(self) }.into()) + } +} + +impl<T, A> InPlaceInit<T> for Box<T, A> +where + A: Allocator + 'static, +{ + type PinnedSelf = Pin<Self>; + + #[inline] + fn try_pin_init<E>(init: impl PinInit<T, E>, flags: Flags) -> Result<Pin<Self>, E> + where + E: From<AllocError>, + { + Box::<_, A>::new_uninit(flags)?.write_pin_init(init) + } + + #[inline] + fn try_init<E>(init: impl Init<T, E>, flags: Flags) -> Result<Self, E> + where + E: From<AllocError>, + { + Box::<_, A>::new_uninit(flags)?.write_init(init) + } +} + +impl<T: 'static, A> ForeignOwnable for Box<T, A> +where + A: Allocator, +{ + type Borrowed<'a> = &'a T; + + fn into_foreign(self) -> *const core::ffi::c_void { + Box::into_raw(self) as _ + } + + unsafe fn from_foreign(ptr: *const core::ffi::c_void) -> Self { + // SAFETY: The safety requirements of this function ensure that `ptr` comes from a previous + // call to `Self::into_foreign`. + unsafe { Box::from_raw(ptr as _) } + } + + unsafe fn borrow<'a>(ptr: *const core::ffi::c_void) -> &'a T { + // SAFETY: The safety requirements of this method ensure that the object remains alive and + // immutable for the duration of 'a. + unsafe { &*ptr.cast() } + } +} + +impl<T: 'static, A> ForeignOwnable for Pin<Box<T, A>> +where + A: Allocator, +{ + type Borrowed<'a> = Pin<&'a T>; + + fn into_foreign(self) -> *const core::ffi::c_void { + // SAFETY: We are still treating the box as pinned. + Box::into_raw(unsafe { Pin::into_inner_unchecked(self) }) as _ + } + + unsafe fn from_foreign(ptr: *const core::ffi::c_void) -> Self { + // SAFETY: The safety requirements of this function ensure that `ptr` comes from a previous + // call to `Self::into_foreign`. + unsafe { Pin::new_unchecked(Box::from_raw(ptr as _)) } + } + + unsafe fn borrow<'a>(ptr: *const core::ffi::c_void) -> Pin<&'a T> { + // SAFETY: The safety requirements for this function ensure that the object is still alive, + // so it is safe to dereference the raw pointer. + // The safety requirements of `from_foreign` also ensure that the object remains alive for + // the lifetime of the returned value. + let r = unsafe { &*ptr.cast() }; + + // SAFETY: This pointer originates from a `Pin<Box<T>>`. + unsafe { Pin::new_unchecked(r) } + } +} + +impl<T, A> Deref for Box<T, A> +where + T: ?Sized, + A: Allocator, +{ + type Target = T; + + fn deref(&self) -> &T { + // SAFETY: `self.0` is always properly aligned, dereferenceable and points to an initialized + // instance of `T`. + unsafe { self.0.as_ref() } + } +} + +impl<T, A> DerefMut for Box<T, A> +where + T: ?Sized, + A: Allocator, +{ + fn deref_mut(&mut self) -> &mut T { + // SAFETY: `self.0` is always properly aligned, dereferenceable and points to an initialized + // instance of `T`. + unsafe { self.0.as_mut() } + } +} + +impl<T, A> fmt::Debug for Box<T, A> +where + T: ?Sized + fmt::Debug, + A: Allocator, +{ + fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { + fmt::Debug::fmt(&**self, f) + } +} + +impl<T, A> Drop for Box<T, A> +where + T: ?Sized, + A: Allocator, +{ + fn drop(&mut self) { + let layout = Layout::for_value::<T>(self); + + // SAFETY: The pointer in `self.0` is guaranteed to be valid by the type invariant. + unsafe { core::ptr::drop_in_place::<T>(self.deref_mut()) }; + + // SAFETY: + // - `self.0` was previously allocated with `A`. + // - `layout` is equal to the `Layout´ `self.0` was allocated with. 
+ unsafe { A::free(self.0.cast(), layout) }; + } +} --- a/rust/kernel/prelude.rs +++ b/rust/kernel/prelude.rs @@ -14,7 +14,7 @@ #[doc(no_inline)] pub use core::pin::Pin;
-pub use crate::alloc::{box_ext::BoxExt, flags::*, vec_ext::VecExt}; +pub use crate::alloc::{box_ext::BoxExt, flags::*, vec_ext::VecExt, KBox, KVBox, VBox};
#[doc(no_inline)] pub use alloc::{boxed::Box, vec::Vec};
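The `ForeignOwnable` implementations above let a `KBox` cross the FFI boundary as an opaque pointer. A minimal sketch of the round trip, in the kernel doctest style and assuming the trait is imported from `kernel::types` (illustrative only, not part of the patch):

    use kernel::types::ForeignOwnable;

    let b = KBox::new(42_u32, GFP_KERNEL)?;
    // Hand ownership to the C side as an opaque pointer.
    let ptr = b.into_foreign();
    // SAFETY: `ptr` comes from the `into_foreign()` call above and is not used again afterwards.
    let b = unsafe { <KBox<u32> as ForeignOwnable>::from_foreign(ptr) };
    assert_eq!(*b, 42);
    # Ok::<(), Error>(())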
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Danilo Krummrich dakr@kernel.org
commit 8373147ce4961665c5700016b1c76299e962d077 upstream.
Now that we have the kernel `Box` type in place, convert all existing `Box` users to make use of it.
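The conversion itself is mostly mechanical, e.g. (a sketch mirroring the doc examples updated below; `Example::new()` is the in-place initializer from the mutex documentation):

    // Before: `Box` from the `alloc` crate, extended via `BoxExt`:
    //     let e = Box::pin_init(Example::new(), GFP_KERNEL)?;
    // After: the kernel `Box`, through its `KBox` prelude alias:
    let e = KBox::pin_init(Example::new(), GFP_KERNEL)?;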
Reviewed-by: Alice Ryhl aliceryhl@google.com Reviewed-by: Benno Lossin benno.lossin@proton.me Reviewed-by: Gary Guo gary@garyguo.net Signed-off-by: Danilo Krummrich dakr@kernel.org Link: https://lore.kernel.org/r/20241004154149.93856-13-dakr@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/block/rnull.rs | 4 +- rust/kernel/init.rs | 51 +++++++++++++++++++------------------- rust/kernel/init/__internal.rs | 2 - rust/kernel/rbtree.rs | 49 ++++++++++++++++++++---------------- rust/kernel/sync/arc.rs | 17 +++++------- rust/kernel/sync/condvar.rs | 4 +- rust/kernel/sync/lock/mutex.rs | 2 - rust/kernel/sync/lock/spinlock.rs | 2 - rust/kernel/workqueue.rs | 20 +++++++------- rust/macros/lib.rs | 6 ++-- 10 files changed, 81 insertions(+), 76 deletions(-)
--- a/drivers/block/rnull.rs +++ b/drivers/block/rnull.rs @@ -32,7 +32,7 @@ module! { }
struct NullBlkModule { - _disk: Pin<Box<Mutex<GenDisk<NullBlkDevice>>>>, + _disk: Pin<KBox<Mutex<GenDisk<NullBlkDevice>>>>, }
impl kernel::Module for NullBlkModule { @@ -47,7 +47,7 @@ impl kernel::Module for NullBlkModule { .rotational(false) .build(format_args!("rnullb{}", 0), tagset)?;
- let disk = Box::pin_init(new_mutex!(disk, "nullb:disk"), flags::GFP_KERNEL)?; + let disk = KBox::pin_init(new_mutex!(disk, "nullb:disk"), flags::GFP_KERNEL)?;
Ok(Self { _disk: disk }) } --- a/rust/kernel/init.rs +++ b/rust/kernel/init.rs @@ -13,7 +13,7 @@ //! To initialize a `struct` with an in-place constructor you will need two things: //! - an in-place constructor, //! - a memory location that can hold your `struct` (this can be the [stack], an [`Arc<T>`], -//! [`UniqueArc<T>`], [`Box<T>`] or any other smart pointer that implements [`InPlaceInit`]). +//! [`UniqueArc<T>`], [`KBox<T>`] or any other smart pointer that implements [`InPlaceInit`]). //! //! To get an in-place constructor there are generally three options: //! - directly creating an in-place constructor using the [`pin_init!`] macro, @@ -68,7 +68,7 @@ //! # a <- new_mutex!(42, "Foo::a"), //! # b: 24, //! # }); -//! let foo: Result<Pin<Box<Foo>>> = Box::pin_init(foo, GFP_KERNEL); +//! let foo: Result<Pin<KBox<Foo>>> = KBox::pin_init(foo, GFP_KERNEL); //! ``` //! //! For more information see the [`pin_init!`] macro. @@ -92,14 +92,14 @@ //! struct DriverData { //! #[pin] //! status: Mutex<i32>, -//! buffer: Box<[u8; 1_000_000]>, +//! buffer: KBox<[u8; 1_000_000]>, //! } //! //! impl DriverData { //! fn new() -> impl PinInit<Self, Error> { //! try_pin_init!(Self { //! status <- new_mutex!(0, "DriverData::status"), -//! buffer: Box::init(kernel::init::zeroed(), GFP_KERNEL)?, +//! buffer: KBox::init(kernel::init::zeroed(), GFP_KERNEL)?, //! }) //! } //! } @@ -211,7 +211,7 @@ //! [`pin_init!`]: crate::pin_init!
use crate::{ - alloc::{box_ext::BoxExt, AllocError, Flags}, + alloc::{box_ext::BoxExt, AllocError, Flags, KBox}, error::{self, Error}, sync::Arc, sync::UniqueArc, @@ -298,7 +298,7 @@ macro_rules! stack_pin_init { /// struct Foo { /// #[pin] /// a: Mutex<usize>, -/// b: Box<Bar>, +/// b: KBox<Bar>, /// } /// /// struct Bar { @@ -307,7 +307,7 @@ macro_rules! stack_pin_init { /// /// stack_try_pin_init!(let foo: Result<Pin<&mut Foo>, AllocError> = pin_init!(Foo { /// a <- new_mutex!(42), -/// b: Box::new(Bar { +/// b: KBox::new(Bar { /// x: 64, /// }, GFP_KERNEL)?, /// })); @@ -324,7 +324,7 @@ macro_rules! stack_pin_init { /// struct Foo { /// #[pin] /// a: Mutex<usize>, -/// b: Box<Bar>, +/// b: KBox<Bar>, /// } /// /// struct Bar { @@ -333,7 +333,7 @@ macro_rules! stack_pin_init { /// /// stack_try_pin_init!(let foo: Pin<&mut Foo> =? pin_init!(Foo { /// a <- new_mutex!(42), -/// b: Box::new(Bar { +/// b: KBox::new(Bar { /// x: 64, /// }, GFP_KERNEL)?, /// })); @@ -391,7 +391,7 @@ macro_rules! stack_try_pin_init { /// }, /// }); /// # initializer } -/// # Box::pin_init(demo(), GFP_KERNEL).unwrap(); +/// # KBox::pin_init(demo(), GFP_KERNEL).unwrap(); /// ``` /// /// Arbitrary Rust expressions can be used to set the value of a variable. @@ -460,7 +460,7 @@ macro_rules! stack_try_pin_init { /// # }) /// # } /// # } -/// let foo = Box::pin_init(Foo::new(), GFP_KERNEL); +/// let foo = KBox::pin_init(Foo::new(), GFP_KERNEL); /// ``` /// /// They can also easily embed it into their own `struct`s: @@ -592,7 +592,7 @@ macro_rules! pin_init { /// use kernel::{init::{self, PinInit}, error::Error}; /// #[pin_data] /// struct BigBuf { -/// big: Box<[u8; 1024 * 1024 * 1024]>, +/// big: KBox<[u8; 1024 * 1024 * 1024]>, /// small: [u8; 1024 * 1024], /// ptr: *mut u8, /// } @@ -600,7 +600,7 @@ macro_rules! pin_init { /// impl BigBuf { /// fn new() -> impl PinInit<Self, Error> { /// try_pin_init!(Self { -/// big: Box::init(init::zeroed(), GFP_KERNEL)?, +/// big: KBox::init(init::zeroed(), GFP_KERNEL)?, /// small: [0; 1024 * 1024], /// ptr: core::ptr::null_mut(), /// }? Error) @@ -692,16 +692,16 @@ macro_rules! init { /// # Examples /// /// ```rust -/// use kernel::{init::{PinInit, zeroed}, error::Error}; +/// use kernel::{alloc::KBox, init::{PinInit, zeroed}, error::Error}; /// struct BigBuf { -/// big: Box<[u8; 1024 * 1024 * 1024]>, +/// big: KBox<[u8; 1024 * 1024 * 1024]>, /// small: [u8; 1024 * 1024], /// } /// /// impl BigBuf { /// fn new() -> impl Init<Self, Error> { /// try_init!(Self { -/// big: Box::init(zeroed(), GFP_KERNEL)?, +/// big: KBox::init(zeroed(), GFP_KERNEL)?, /// small: [0; 1024 * 1024], /// }? Error) /// } @@ -812,8 +812,8 @@ macro_rules! assert_pinned { /// A pin-initializer for the type `T`. /// /// To use this initializer, you will need a suitable memory location that can hold a `T`. This can -/// be [`Box<T>`], [`Arc<T>`], [`UniqueArc<T>`] or even the stack (see [`stack_pin_init!`]). Use the -/// [`InPlaceInit::pin_init`] function of a smart pointer like [`Arc<T>`] on this. +/// be [`KBox<T>`], [`Arc<T>`], [`UniqueArc<T>`] or even the stack (see [`stack_pin_init!`]). Use +/// the [`InPlaceInit::pin_init`] function of a smart pointer like [`Arc<T>`] on this. /// /// Also see the [module description](self). /// @@ -893,7 +893,7 @@ pub unsafe trait PinInit<T: ?Sized, E = }
/// An initializer returned by [`PinInit::pin_chain`]. -pub struct ChainPinInit<I, F, T: ?Sized, E>(I, F, __internal::Invariant<(E, Box<T>)>); +pub struct ChainPinInit<I, F, T: ?Sized, E>(I, F, __internal::Invariant<(E, KBox<T>)>);
// SAFETY: The `__pinned_init` function is implemented such that it // - returns `Ok(())` on successful initialization, @@ -919,8 +919,8 @@ where /// An initializer for `T`. /// /// To use this initializer, you will need a suitable memory location that can hold a `T`. This can -/// be [`Box<T>`], [`Arc<T>`], [`UniqueArc<T>`] or even the stack (see [`stack_pin_init!`]). Use the -/// [`InPlaceInit::init`] function of a smart pointer like [`Arc<T>`] on this. Because +/// be [`KBox<T>`], [`Arc<T>`], [`UniqueArc<T>`] or even the stack (see [`stack_pin_init!`]). Use +/// the [`InPlaceInit::init`] function of a smart pointer like [`Arc<T>`] on this. Because /// [`PinInit<T, E>`] is a super trait, you can use every function that takes it as well. /// /// Also see the [module description](self). @@ -992,7 +992,7 @@ pub unsafe trait Init<T: ?Sized, E = Inf }
/// An initializer returned by [`Init::chain`]. -pub struct ChainInit<I, F, T: ?Sized, E>(I, F, __internal::Invariant<(E, Box<T>)>); +pub struct ChainInit<I, F, T: ?Sized, E>(I, F, __internal::Invariant<(E, KBox<T>)>);
// SAFETY: The `__init` function is implemented such that it // - returns `Ok(())` on successful initialization, @@ -1076,8 +1076,9 @@ pub fn uninit<T, E>() -> impl Init<Maybe /// # Examples /// /// ```rust -/// use kernel::{error::Error, init::init_array_from_fn}; -/// let array: Box<[usize; 1_000]> = Box::init::<Error>(init_array_from_fn(|i| i), GFP_KERNEL).unwrap(); +/// use kernel::{alloc::KBox, error::Error, init::init_array_from_fn}; +/// let array: KBox<[usize; 1_000]> = +/// KBox::init::<Error>(init_array_from_fn(|i| i), GFP_KERNEL).unwrap(); /// assert_eq!(array.len(), 1_000); /// ``` pub fn init_array_from_fn<I, const N: usize, T, E>( @@ -1453,7 +1454,7 @@ impl_zeroable! { // // In this case we are allowed to use `T: ?Sized`, since all zeros is the `None` variant. {<T: ?Sized>} Option<NonNull<T>>, - {<T: ?Sized>} Option<Box<T>>, + {<T: ?Sized>} Option<KBox<T>>,
// SAFETY: `null` pointer is valid. // --- a/rust/kernel/init/__internal.rs +++ b/rust/kernel/init/__internal.rs @@ -105,7 +105,7 @@ pub unsafe trait InitData: Copy { } }
-pub struct AllData<T: ?Sized>(PhantomData<fn(Box<T>) -> Box<T>>); +pub struct AllData<T: ?Sized>(PhantomData<fn(KBox<T>) -> KBox<T>>);
impl<T: ?Sized> Clone for AllData<T> { fn clone(&self) -> Self { --- a/rust/kernel/rbtree.rs +++ b/rust/kernel/rbtree.rs @@ -7,7 +7,6 @@ //! Reference: https://docs.kernel.org/core-api/rbtree.html
use crate::{alloc::Flags, bindings, container_of, error::Result, prelude::*}; -use alloc::boxed::Box; use core::{ cmp::{Ord, Ordering}, marker::PhantomData, @@ -497,7 +496,7 @@ impl<K, V> Drop for RBTree<K, V> { // but it is not observable. The loop invariant is still maintained.
// SAFETY: `this` is valid per the loop invariant. - unsafe { drop(Box::from_raw(this.cast_mut())) }; + unsafe { drop(KBox::from_raw(this.cast_mut())) }; } } } @@ -764,7 +763,7 @@ impl<'a, K, V> Cursor<'a, K, V> { // point to the links field of `Node<K, V>` objects. let this = unsafe { container_of!(self.current.as_ptr(), Node<K, V>, links) }.cast_mut(); // SAFETY: `this` is valid by the type invariants as described above. - let node = unsafe { Box::from_raw(this) }; + let node = unsafe { KBox::from_raw(this) }; let node = RBTreeNode { node }; // SAFETY: The reference to the tree used to create the cursor outlives the cursor, so // the tree cannot change. By the tree invariant, all nodes are valid. @@ -809,7 +808,7 @@ impl<'a, K, V> Cursor<'a, K, V> { // point to the links field of `Node<K, V>` objects. let this = unsafe { container_of!(neighbor, Node<K, V>, links) }.cast_mut(); // SAFETY: `this` is valid by the type invariants as described above. - let node = unsafe { Box::from_raw(this) }; + let node = unsafe { KBox::from_raw(this) }; return Some(RBTreeNode { node }); } None @@ -1038,7 +1037,7 @@ impl<K, V> Iterator for IterRaw<K, V> { /// It contains the memory needed to hold a node that can be inserted into a red-black tree. One /// can be obtained by directly allocating it ([`RBTreeNodeReservation::new`]). pub struct RBTreeNodeReservation<K, V> { - node: Box<MaybeUninit<Node<K, V>>>, + node: KBox<MaybeUninit<Node<K, V>>>, }
impl<K, V> RBTreeNodeReservation<K, V> { @@ -1046,7 +1045,7 @@ impl<K, V> RBTreeNodeReservation<K, V> { /// call to [`RBTree::insert`]. pub fn new(flags: Flags) -> Result<RBTreeNodeReservation<K, V>> { Ok(RBTreeNodeReservation { - node: <Box<_> as BoxExt<_>>::new_uninit(flags)?, + node: KBox::new_uninit(flags)?, }) } } @@ -1062,14 +1061,15 @@ impl<K, V> RBTreeNodeReservation<K, V> { /// Initialises a node reservation. /// /// It then becomes an [`RBTreeNode`] that can be inserted into a tree. - pub fn into_node(mut self, key: K, value: V) -> RBTreeNode<K, V> { - self.node.write(Node { - key, - value, - links: bindings::rb_node::default(), - }); - // SAFETY: We just wrote to it. - let node = unsafe { self.node.assume_init() }; + pub fn into_node(self, key: K, value: V) -> RBTreeNode<K, V> { + let node = KBox::write( + self.node, + Node { + key, + value, + links: bindings::rb_node::default(), + }, + ); RBTreeNode { node } } } @@ -1079,7 +1079,7 @@ impl<K, V> RBTreeNodeReservation<K, V> { /// The node is fully initialised (with key and value) and can be inserted into a tree without any /// extra allocations or failure paths. pub struct RBTreeNode<K, V> { - node: Box<Node<K, V>>, + node: KBox<Node<K, V>>, }
impl<K, V> RBTreeNode<K, V> { @@ -1091,7 +1091,9 @@ impl<K, V> RBTreeNode<K, V> {
/// Get the key and value from inside the node. pub fn to_key_value(self) -> (K, V) { - (self.node.key, self.node.value) + let node = KBox::into_inner(self.node); + + (node.key, node.value) } }
@@ -1113,7 +1115,7 @@ impl<K, V> RBTreeNode<K, V> { /// may be freed (but only for the key/value; memory for the node itself is kept for reuse). pub fn into_reservation(self) -> RBTreeNodeReservation<K, V> { RBTreeNodeReservation { - node: Box::drop_contents(self.node), + node: KBox::drop_contents(self.node), } } } @@ -1164,7 +1166,7 @@ impl<'a, K, V> RawVacantEntry<'a, K, V> /// The `node` must have a key such that inserting it here does not break the ordering of this /// [`RBTree`]. fn insert(self, node: RBTreeNode<K, V>) -> &'a mut V { - let node = Box::into_raw(node.node); + let node = KBox::into_raw(node.node);
// SAFETY: `node` is valid at least until we call `Box::from_raw`, which only happens when // the node is removed or replaced. @@ -1238,21 +1240,24 @@ impl<'a, K, V> OccupiedEntry<'a, K, V> { // SAFETY: The node was a node in the tree, but we removed it, so we can convert it // back into a box. node: unsafe { - Box::from_raw(container_of!(self.node_links, Node<K, V>, links).cast_mut()) + KBox::from_raw(container_of!(self.node_links, Node<K, V>, links).cast_mut()) }, } }
/// Takes the value of the entry out of the map, and returns it. pub fn remove(self) -> V { - self.remove_node().node.value + let rb_node = self.remove_node(); + let node = KBox::into_inner(rb_node.node); + + node.value }
/// Swap the current node for the provided node. /// /// The key of both nodes must be equal. fn replace(self, node: RBTreeNode<K, V>) -> RBTreeNode<K, V> { - let node = Box::into_raw(node.node); + let node = KBox::into_raw(node.node);
// SAFETY: `node` is valid at least until we call `Box::from_raw`, which only happens when // the node is removed or replaced. @@ -1268,7 +1273,7 @@ impl<'a, K, V> OccupiedEntry<'a, K, V> { // - `self.node_ptr` produces a valid pointer to a node in the tree. // - Now that we removed this entry from the tree, we can convert the node to a box. let old_node = - unsafe { Box::from_raw(container_of!(self.node_links, Node<K, V>, links).cast_mut()) }; + unsafe { KBox::from_raw(container_of!(self.node_links, Node<K, V>, links).cast_mut()) };
RBTreeNode { node: old_node } } --- a/rust/kernel/sync/arc.rs +++ b/rust/kernel/sync/arc.rs @@ -17,13 +17,12 @@ //! [`Arc`]: https://doc.rust-lang.org/std/sync/struct.Arc.html
use crate::{ - alloc::{box_ext::BoxExt, AllocError, Flags}, + alloc::{AllocError, Flags, KBox}, bindings, init::{self, InPlaceInit, Init, PinInit}, try_init, types::{ForeignOwnable, Opaque}, }; -use alloc::boxed::Box; use core::{ alloc::Layout, fmt, @@ -201,11 +200,11 @@ impl<T> Arc<T> { data: contents, };
- let inner = <Box<_> as BoxExt<_>>::new(value, flags)?; + let inner = KBox::new(value, flags)?;
// SAFETY: We just created `inner` with a reference count of 1, which is owned by the new // `Arc` object. - Ok(unsafe { Self::from_inner(Box::leak(inner).into()) }) + Ok(unsafe { Self::from_inner(KBox::leak(inner).into()) }) } }
@@ -398,8 +397,8 @@ impl<T: ?Sized> Drop for Arc<T> { if is_zero { // The count reached zero, we must free the memory. // - // SAFETY: The pointer was initialised from the result of `Box::leak`. - unsafe { drop(Box::from_raw(self.ptr.as_ptr())) }; + // SAFETY: The pointer was initialised from the result of `KBox::leak`. + unsafe { drop(KBox::from_raw(self.ptr.as_ptr())) }; } } } @@ -641,7 +640,7 @@ impl<T> UniqueArc<T> { /// Tries to allocate a new [`UniqueArc`] instance whose contents are not initialised yet. pub fn new_uninit(flags: Flags) -> Result<UniqueArc<MaybeUninit<T>>, AllocError> { // INVARIANT: The refcount is initialised to a non-zero value. - let inner = Box::try_init::<AllocError>( + let inner = KBox::try_init::<AllocError>( try_init!(ArcInner { // SAFETY: There are no safety requirements for this FFI call. refcount: Opaque::new(unsafe { bindings::REFCOUNT_INIT(1) }), @@ -651,8 +650,8 @@ impl<T> UniqueArc<T> { )?; Ok(UniqueArc { // INVARIANT: The newly-created object has a refcount of 1. - // SAFETY: The pointer from the `Box` is valid. - inner: unsafe { Arc::from_inner(Box::leak(inner).into()) }, + // SAFETY: The pointer from the `KBox` is valid. + inner: unsafe { Arc::from_inner(KBox::leak(inner).into()) }, }) } } --- a/rust/kernel/sync/condvar.rs +++ b/rust/kernel/sync/condvar.rs @@ -70,8 +70,8 @@ pub use new_condvar; /// } /// /// /// Allocates a new boxed `Example`. -/// fn new_example() -> Result<Pin<Box<Example>>> { -/// Box::pin_init(pin_init!(Example { +/// fn new_example() -> Result<Pin<KBox<Example>>> { +/// KBox::pin_init(pin_init!(Example { /// value <- new_mutex!(0), /// value_changed <- new_condvar!(), /// }), GFP_KERNEL) --- a/rust/kernel/sync/lock/mutex.rs +++ b/rust/kernel/sync/lock/mutex.rs @@ -58,7 +58,7 @@ pub use new_mutex; /// } /// /// // Allocate a boxed `Example`. -/// let e = Box::pin_init(Example::new(), GFP_KERNEL)?; +/// let e = KBox::pin_init(Example::new(), GFP_KERNEL)?; /// assert_eq!(e.c, 10); /// assert_eq!(e.d.lock().a, 20); /// assert_eq!(e.d.lock().b, 30); --- a/rust/kernel/sync/lock/spinlock.rs +++ b/rust/kernel/sync/lock/spinlock.rs @@ -56,7 +56,7 @@ pub use new_spinlock; /// } /// /// // Allocate a boxed `Example`. -/// let e = Box::pin_init(Example::new(), GFP_KERNEL)?; +/// let e = KBox::pin_init(Example::new(), GFP_KERNEL)?; /// assert_eq!(e.c, 10); /// assert_eq!(e.d.lock().a, 20); /// assert_eq!(e.d.lock().b, 30); --- a/rust/kernel/workqueue.rs +++ b/rust/kernel/workqueue.rs @@ -216,7 +216,7 @@ impl Queue { func: Some(func), });
- self.enqueue(Box::pin_init(init, flags).map_err(|_| AllocError)?); + self.enqueue(KBox::pin_init(init, flags).map_err(|_| AllocError)?); Ok(()) } } @@ -239,9 +239,9 @@ impl<T> ClosureWork<T> { }
impl<T: FnOnce()> WorkItem for ClosureWork<T> { - type Pointer = Pin<Box<Self>>; + type Pointer = Pin<KBox<Self>>;
- fn run(mut this: Pin<Box<Self>>) { + fn run(mut this: Pin<KBox<Self>>) { if let Some(func) = this.as_mut().project().take() { (func)() } @@ -297,7 +297,7 @@ pub unsafe trait RawWorkItem<const ID: u
/// Defines the method that should be called directly when a work item is executed. /// -/// This trait is implemented by `Pin<Box<T>>` and [`Arc<T>`], and is mainly intended to be +/// This trait is implemented by `Pin<KBox<T>>` and [`Arc<T>`], and is mainly intended to be /// implemented for smart pointer types. For your own structs, you would implement [`WorkItem`] /// instead. The [`run`] method on this trait will usually just perform the appropriate /// `container_of` translation and then call into the [`run`][WorkItem::run] method from the @@ -329,7 +329,7 @@ pub unsafe trait WorkItemPointer<const I /// This trait is used when the `work_struct` field is defined using the [`Work`] helper. pub trait WorkItem<const ID: u64 = 0> { /// The pointer type that this struct is wrapped in. This will typically be `Arc<Self>` or - /// `Pin<Box<Self>>`. + /// `Pin<KBox<Self>>`. type Pointer: WorkItemPointer<ID>;
/// The method that should be called when this work item is executed. @@ -567,7 +567,7 @@ where }
// SAFETY: TODO. -unsafe impl<T, const ID: u64> WorkItemPointer<ID> for Pin<Box<T>> +unsafe impl<T, const ID: u64> WorkItemPointer<ID> for Pin<KBox<T>> where T: WorkItem<ID, Pointer = Self>, T: HasWork<T, ID>, @@ -578,7 +578,7 @@ where // SAFETY: This computes the pointer that `__enqueue` got from `Arc::into_raw`. let ptr = unsafe { T::work_container_of(ptr) }; // SAFETY: This pointer comes from `Arc::into_raw` and we've been given back ownership. - let boxed = unsafe { Box::from_raw(ptr) }; + let boxed = unsafe { KBox::from_raw(ptr) }; // SAFETY: The box was already pinned when it was enqueued. let pinned = unsafe { Pin::new_unchecked(boxed) };
@@ -587,7 +587,7 @@ where }
// SAFETY: TODO. -unsafe impl<T, const ID: u64> RawWorkItem<ID> for Pin<Box<T>> +unsafe impl<T, const ID: u64> RawWorkItem<ID> for Pin<KBox<T>> where T: WorkItem<ID, Pointer = Self>, T: HasWork<T, ID>, @@ -601,9 +601,9 @@ where // SAFETY: We're not going to move `self` or any of its fields, so its okay to temporarily // remove the `Pin` wrapper. let boxed = unsafe { Pin::into_inner_unchecked(self) }; - let ptr = Box::into_raw(boxed); + let ptr = KBox::into_raw(boxed);
- // SAFETY: Pointers into a `Box` point at a valid value. + // SAFETY: Pointers into a `KBox` point at a valid value. let work_ptr = unsafe { T::raw_get_work(ptr) }; // SAFETY: `raw_get_work` returns a pointer to a valid value. let work_ptr = unsafe { Work::raw_get(work_ptr) }; --- a/rust/macros/lib.rs +++ b/rust/macros/lib.rs @@ -243,7 +243,7 @@ pub fn concat_idents(ts: TokenStream) -> /// struct DriverData { /// #[pin] /// queue: Mutex<Vec<Command>>, -/// buf: Box<[u8; 1024 * 1024]>, +/// buf: KBox<[u8; 1024 * 1024]>, /// } /// ``` /// @@ -252,7 +252,7 @@ pub fn concat_idents(ts: TokenStream) -> /// struct DriverData { /// #[pin] /// queue: Mutex<Vec<Command>>, -/// buf: Box<[u8; 1024 * 1024]>, +/// buf: KBox<[u8; 1024 * 1024]>, /// raw_info: *mut Info, /// } /// @@ -282,7 +282,7 @@ pub fn pin_data(inner: TokenStream, item /// struct DriverData { /// #[pin] /// queue: Mutex<Vec<Command>>, -/// buf: Box<[u8; 1024 * 1024]>, +/// buf: KBox<[u8; 1024 * 1024]>, /// raw_info: *mut Info, /// } ///
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Danilo Krummrich dakr@kernel.org
commit e8c6ccdbcaaf31f26c0fffd4073edd0b0147cdc6 upstream.
Now that all existing `Box` users were moved to the kernel `Box` type, remove the `BoxExt` extension and all other related extensions.
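Everything `BoxExt` used to provide is now available directly on the kernel `Box` type, e.g. (illustrative sketch in kernel doctest style, mirroring the `drop_contents` example that already exists on `KBox`):

    let value = KBox::new([0; 32], GFP_KERNEL)?;
    // Keep the allocation, but drop the contents ...
    let value = KBox::drop_contents(value);
    // ... and re-use it for a new value.
    let value = KBox::write(value, [1; 32]);
    assert_eq!(*value, [1; 32]);
    # Ok::<(), Error>(())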
Reviewed-by: Alice Ryhl aliceryhl@google.com Reviewed-by: Benno Lossin benno.lossin@proton.me Reviewed-by: Gary Guo gary@garyguo.net Signed-off-by: Danilo Krummrich dakr@kernel.org Link: https://lore.kernel.org/r/20241004154149.93856-14-dakr@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/kernel/alloc.rs | 1 rust/kernel/alloc/box_ext.rs | 89 ------------------------------------------- rust/kernel/init.rs | 46 ---------------------- rust/kernel/lib.rs | 1 rust/kernel/prelude.rs | 4 - rust/kernel/types.rs | 50 ------------------------ 6 files changed, 3 insertions(+), 188 deletions(-) delete mode 100644 rust/kernel/alloc/box_ext.rs
--- a/rust/kernel/alloc.rs +++ b/rust/kernel/alloc.rs @@ -4,7 +4,6 @@
#[cfg(not(any(test, testlib)))] pub mod allocator; -pub mod box_ext; pub mod kbox; pub mod vec_ext;
--- a/rust/kernel/alloc/box_ext.rs +++ /dev/null @@ -1,89 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 - -//! Extensions to [`Box`] for fallible allocations. - -use super::{AllocError, Flags}; -use alloc::boxed::Box; -use core::{mem::MaybeUninit, ptr, result::Result}; - -/// Extensions to [`Box`]. -pub trait BoxExt<T>: Sized { - /// Allocates a new box. - /// - /// The allocation may fail, in which case an error is returned. - fn new(x: T, flags: Flags) -> Result<Self, AllocError>; - - /// Allocates a new uninitialised box. - /// - /// The allocation may fail, in which case an error is returned. - fn new_uninit(flags: Flags) -> Result<Box<MaybeUninit<T>>, AllocError>; - - /// Drops the contents, but keeps the allocation. - /// - /// # Examples - /// - /// ``` - /// use kernel::alloc::{flags, box_ext::BoxExt}; - /// let value = Box::new([0; 32], flags::GFP_KERNEL)?; - /// assert_eq!(*value, [0; 32]); - /// let mut value = Box::drop_contents(value); - /// // Now we can re-use `value`: - /// value.write([1; 32]); - /// // SAFETY: We just wrote to it. - /// let value = unsafe { value.assume_init() }; - /// assert_eq!(*value, [1; 32]); - /// # Ok::<(), Error>(()) - /// ``` - fn drop_contents(this: Self) -> Box<MaybeUninit<T>>; -} - -impl<T> BoxExt<T> for Box<T> { - fn new(x: T, flags: Flags) -> Result<Self, AllocError> { - let mut b = <Self as BoxExt<_>>::new_uninit(flags)?; - b.write(x); - // SAFETY: We just wrote to it. - Ok(unsafe { b.assume_init() }) - } - - #[cfg(any(test, testlib))] - fn new_uninit(_flags: Flags) -> Result<Box<MaybeUninit<T>>, AllocError> { - Ok(Box::new_uninit()) - } - - #[cfg(not(any(test, testlib)))] - fn new_uninit(flags: Flags) -> Result<Box<MaybeUninit<T>>, AllocError> { - let ptr = if core::mem::size_of::<MaybeUninit<T>>() == 0 { - core::ptr::NonNull::<_>::dangling().as_ptr() - } else { - let layout = core::alloc::Layout::new::<MaybeUninit<T>>(); - - // SAFETY: Memory is being allocated (first arg is null). The only other source of - // safety issues is sleeping on atomic context, which is addressed by klint. Lastly, - // the type is not a SZT (checked above). - let ptr = - unsafe { super::allocator::krealloc_aligned(core::ptr::null_mut(), layout, flags) }; - if ptr.is_null() { - return Err(AllocError); - } - - ptr.cast::<MaybeUninit<T>>() - }; - - // SAFETY: For non-zero-sized types, we allocate above using the global allocator. For - // zero-sized types, we use `NonNull::dangling`. - Ok(unsafe { Box::from_raw(ptr) }) - } - - fn drop_contents(this: Self) -> Box<MaybeUninit<T>> { - let ptr = Box::into_raw(this); - // SAFETY: `ptr` is valid, because it came from `Box::into_raw`. - unsafe { ptr::drop_in_place(ptr) }; - - // CAST: `MaybeUninit<T>` is a transparent wrapper of `T`. - let ptr = ptr.cast::<MaybeUninit<T>>(); - - // SAFETY: `ptr` is valid for writes, because it came from `Box::into_raw` and it is valid for - // reads, since the pointer came from `Box::into_raw` and the type is `MaybeUninit<T>`. - unsafe { Box::from_raw(ptr) } - } -} --- a/rust/kernel/init.rs +++ b/rust/kernel/init.rs @@ -211,13 +211,12 @@ //! [`pin_init!`]: crate::pin_init!
use crate::{ - alloc::{box_ext::BoxExt, AllocError, Flags, KBox}, + alloc::{AllocError, Flags, KBox}, error::{self, Error}, sync::Arc, sync::UniqueArc, types::{Opaque, ScopeGuard}, }; -use alloc::boxed::Box; use core::{ cell::UnsafeCell, convert::Infallible, @@ -588,7 +587,6 @@ macro_rules! pin_init { /// # Examples /// /// ```rust -/// # #![feature(new_uninit)] /// use kernel::{init::{self, PinInit}, error::Error}; /// #[pin_data] /// struct BigBuf { @@ -1245,26 +1243,6 @@ impl<T> InPlaceInit<T> for Arc<T> { } }
-impl<T> InPlaceInit<T> for Box<T> { - type PinnedSelf = Pin<Self>; - - #[inline] - fn try_pin_init<E>(init: impl PinInit<T, E>, flags: Flags) -> Result<Self::PinnedSelf, E> - where - E: From<AllocError>, - { - <Box<_> as BoxExt<_>>::new_uninit(flags)?.write_pin_init(init) - } - - #[inline] - fn try_init<E>(init: impl Init<T, E>, flags: Flags) -> Result<Self, E> - where - E: From<AllocError>, - { - <Box<_> as BoxExt<_>>::new_uninit(flags)?.write_init(init) - } -} - impl<T> InPlaceInit<T> for UniqueArc<T> { type PinnedSelf = Pin<Self>;
@@ -1301,28 +1279,6 @@ pub trait InPlaceWrite<T> { fn write_pin_init<E>(self, init: impl PinInit<T, E>) -> Result<Pin<Self::Initialized>, E>; }
-impl<T> InPlaceWrite<T> for Box<MaybeUninit<T>> { - type Initialized = Box<T>; - - fn write_init<E>(mut self, init: impl Init<T, E>) -> Result<Self::Initialized, E> { - let slot = self.as_mut_ptr(); - // SAFETY: When init errors/panics, slot will get deallocated but not dropped, - // slot is valid. - unsafe { init.__init(slot)? }; - // SAFETY: All fields have been initialized. - Ok(unsafe { self.assume_init() }) - } - - fn write_pin_init<E>(mut self, init: impl PinInit<T, E>) -> Result<Pin<Self::Initialized>, E> { - let slot = self.as_mut_ptr(); - // SAFETY: When init errors/panics, slot will get deallocated but not dropped, - // slot is valid and will not be moved, because we pin it later. - unsafe { init.__pinned_init(slot)? }; - // SAFETY: All fields have been initialized. - Ok(unsafe { self.assume_init() }.into()) - } -} - impl<T> InPlaceWrite<T> for UniqueArc<MaybeUninit<T>> { type Initialized = UniqueArc<T>;
--- a/rust/kernel/lib.rs +++ b/rust/kernel/lib.rs @@ -16,7 +16,6 @@ #![feature(coerce_unsized)] #![feature(dispatch_from_dyn)] #![feature(lint_reasons)] -#![feature(new_uninit)] #![feature(unsize)]
// Ensure conditional compilation based on the kernel configuration works; --- a/rust/kernel/prelude.rs +++ b/rust/kernel/prelude.rs @@ -14,10 +14,10 @@ #[doc(no_inline)] pub use core::pin::Pin;
-pub use crate::alloc::{box_ext::BoxExt, flags::*, vec_ext::VecExt, KBox, KVBox, VBox}; +pub use crate::alloc::{flags::*, vec_ext::VecExt, KBox, KVBox, VBox};
#[doc(no_inline)] -pub use alloc::{boxed::Box, vec::Vec}; +pub use alloc::vec::Vec;
#[doc(no_inline)] pub use macros::{module, pin_data, pinned_drop, vtable, Zeroable}; --- a/rust/kernel/types.rs +++ b/rust/kernel/types.rs @@ -3,13 +3,11 @@ //! Kernel types.
use crate::init::{self, PinInit}; -use alloc::boxed::Box; use core::{ cell::UnsafeCell, marker::{PhantomData, PhantomPinned}, mem::{ManuallyDrop, MaybeUninit}, ops::{Deref, DerefMut}, - pin::Pin, ptr::NonNull, };
@@ -71,54 +69,6 @@ pub trait ForeignOwnable: Sized { } }
-impl<T: 'static> ForeignOwnable for Box<T> { - type Borrowed<'a> = &'a T; - - fn into_foreign(self) -> *const core::ffi::c_void { - Box::into_raw(self) as _ - } - - unsafe fn borrow<'a>(ptr: *const core::ffi::c_void) -> &'a T { - // SAFETY: The safety requirements for this function ensure that the object is still alive, - // so it is safe to dereference the raw pointer. - // The safety requirements of `from_foreign` also ensure that the object remains alive for - // the lifetime of the returned value. - unsafe { &*ptr.cast() } - } - - unsafe fn from_foreign(ptr: *const core::ffi::c_void) -> Self { - // SAFETY: The safety requirements of this function ensure that `ptr` comes from a previous - // call to `Self::into_foreign`. - unsafe { Box::from_raw(ptr as _) } - } -} - -impl<T: 'static> ForeignOwnable for Pin<Box<T>> { - type Borrowed<'a> = Pin<&'a T>; - - fn into_foreign(self) -> *const core::ffi::c_void { - // SAFETY: We are still treating the box as pinned. - Box::into_raw(unsafe { Pin::into_inner_unchecked(self) }) as _ - } - - unsafe fn borrow<'a>(ptr: *const core::ffi::c_void) -> Pin<&'a T> { - // SAFETY: The safety requirements for this function ensure that the object is still alive, - // so it is safe to dereference the raw pointer. - // The safety requirements of `from_foreign` also ensure that the object remains alive for - // the lifetime of the returned value. - let r = unsafe { &*ptr.cast() }; - - // SAFETY: This pointer originates from a `Pin<Box<T>>`. - unsafe { Pin::new_unchecked(r) } - } - - unsafe fn from_foreign(ptr: *const core::ffi::c_void) -> Self { - // SAFETY: The safety requirements of this function ensure that `ptr` comes from a previous - // call to `Self::into_foreign`. - unsafe { Pin::new_unchecked(Box::from_raw(ptr as _)) } - } -} - impl ForeignOwnable for () { type Borrowed<'a> = ();
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Danilo Krummrich dakr@kernel.org
commit e1044c2238f54ae5bd902cac6d12e48835df418b upstream.
Now that we removed `BoxExt` and the corresponding includes in prelude.rs, add the new kernel `Box` type instead.
Reviewed-by: Alice Ryhl aliceryhl@google.com Reviewed-by: Benno Lossin benno.lossin@proton.me Reviewed-by: Gary Guo gary@garyguo.net Signed-off-by: Danilo Krummrich dakr@kernel.org Link: https://lore.kernel.org/r/20241004154149.93856-15-dakr@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/kernel/prelude.rs | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/rust/kernel/prelude.rs +++ b/rust/kernel/prelude.rs @@ -14,7 +14,7 @@ #[doc(no_inline)] pub use core::pin::Pin;
-pub use crate::alloc::{flags::*, vec_ext::VecExt, KBox, KVBox, VBox}; +pub use crate::alloc::{flags::*, vec_ext::VecExt, Box, KBox, KVBox, VBox};
#[doc(no_inline)] pub use alloc::vec::Vec;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Benno Lossin benno.lossin@proton.me
commit 9e7bbfa182767f638ba61dba3518ff78da9f31ff upstream.
When allocating memory for arrays using allocators, the `Layout::array` function is typically used. It returns a `Result`, since the given size might be too big. However, `Vec` and its iterators store their allocated capacity and thus have already checked that the size is not too big.
The `ArrayLayout` type provides this exact behavior, as it can be infallibly converted into a `Layout`. Instead of a `usize` capacity, `Vec` and other similar array-storing types can use `ArrayLayout` instead.
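A hedged sketch of the intended usage (names and paths follow the new file added below; the helper function itself is hypothetical):

    use core::alloc::Layout;
    use kernel::alloc::layout::{ArrayLayout, LayoutError};

    /// Computes the `Layout` for `capacity` elements of `T`, validating the size once.
    fn layout_for_capacity<T>(capacity: usize) -> Result<Layout, LayoutError> {
        // Fallible step: rejects capacities whose total byte size cannot be represented.
        let array_layout = ArrayLayout::<T>::new(capacity)?;
        // Infallible step: the invariant carried by `ArrayLayout` makes this conversion total.
        Ok(array_layout.into())
    }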
Reviewed-by: Gary Guo gary@garyguo.net Signed-off-by: Benno Lossin benno.lossin@proton.me Signed-off-by: Danilo Krummrich dakr@kernel.org Link: https://lore.kernel.org/r/20241004154149.93856-16-dakr@kernel.org [ Formatted a few comments. - Miguel ] Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/kernel/alloc.rs | 1 rust/kernel/alloc/layout.rs | 91 ++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 92 insertions(+) create mode 100644 rust/kernel/alloc/layout.rs
--- a/rust/kernel/alloc.rs +++ b/rust/kernel/alloc.rs @@ -5,6 +5,7 @@ #[cfg(not(any(test, testlib)))] pub mod allocator; pub mod kbox; +pub mod layout; pub mod vec_ext;
#[cfg(any(test, testlib))] --- /dev/null +++ b/rust/kernel/alloc/layout.rs @@ -0,0 +1,91 @@ +// SPDX-License-Identifier: GPL-2.0 + +//! Memory layout. +//! +//! Custom layout types extending or improving [`Layout`]. + +use core::{alloc::Layout, marker::PhantomData}; + +/// Error when constructing an [`ArrayLayout`]. +pub struct LayoutError; + +/// A layout for an array `[T; n]`. +/// +/// # Invariants +/// +/// - `len * size_of::<T>() <= isize::MAX`. +pub struct ArrayLayout<T> { + len: usize, + _phantom: PhantomData<fn() -> T>, +} + +impl<T> Clone for ArrayLayout<T> { + fn clone(&self) -> Self { + *self + } +} +impl<T> Copy for ArrayLayout<T> {} + +const ISIZE_MAX: usize = isize::MAX as usize; + +impl<T> ArrayLayout<T> { + /// Creates a new layout for `[T; 0]`. + pub const fn empty() -> Self { + // INVARIANT: `0 * size_of::<T>() <= isize::MAX`. + Self { + len: 0, + _phantom: PhantomData, + } + } + + /// Creates a new layout for `[T; len]`. + /// + /// # Errors + /// + /// When `len * size_of::<T>()` overflows or when `len * size_of::<T>() > isize::MAX`. + pub const fn new(len: usize) -> Result<Self, LayoutError> { + match len.checked_mul(core::mem::size_of::<T>()) { + Some(len) if len <= ISIZE_MAX => { + // INVARIANT: We checked above that `len * size_of::<T>() <= isize::MAX`. + Ok(Self { + len, + _phantom: PhantomData, + }) + } + _ => Err(LayoutError), + } + } + + /// Creates a new layout for `[T; len]`. + /// + /// # Safety + /// + /// `len` must be a value, for which `len * size_of::<T>() <= isize::MAX` is true. + pub unsafe fn new_unchecked(len: usize) -> Self { + // INVARIANT: By the safety requirements of this function + // `len * size_of::<T>() <= isize::MAX`. + Self { + len, + _phantom: PhantomData, + } + } + + /// Returns the number of array elements represented by this layout. + pub const fn len(&self) -> usize { + self.len + } + + /// Returns `true` when no array elements are represented by this layout. + pub const fn is_empty(&self) -> bool { + self.len == 0 + } +} + +impl<T> From<ArrayLayout<T>> for Layout { + fn from(value: ArrayLayout<T>) -> Self { + let res = Layout::array::<T>(value.len); + // SAFETY: By the type invariant of `ArrayLayout` we have + // `len * size_of::<T>() <= isize::MAX` and thus the result must be `Ok`. + unsafe { res.unwrap_unchecked() } + } +}
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Danilo Krummrich dakr@kernel.org
commit 2aac4cd7dae3d7bb0e0ddec2561b2ee4cbe6c8f6 upstream.
`Vec` provides a contiguous growable array type with contents allocated with the kernel's allocators (e.g. `Kmalloc`, `Vmalloc` or `KVmalloc`).
In contrast to Rust's stdlib `Vec` type, the kernel `Vec` type considers the kernel's GFP flags for all appropriate functions, always reports allocation failures through `Result<_, AllocError>` and remains independent from unstable features.
[ This patch starts using a new unstable feature, `inline_const`, but it was stabilized in Rust 1.79.0, i.e. the next version after the minimum one, thus it will not be an issue. - Miguel ]
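In practice the API looks like the doctests added below; every fallible operation takes GFP flags and reports failure through `Result` (sketch in kernel doctest style):

    let mut v = KVec::new();                    // no allocation yet
    v.push(1, GFP_KERNEL)?;                     // may allocate and therefore may fail
    v.extend_from_slice(&[2, 3], GFP_KERNEL)?;
    assert_eq!(&v, &[1, 2, 3]);
    # Ok::<(), Error>(())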
Reviewed-by: Benno Lossin benno.lossin@proton.me Reviewed-by: Gary Guo gary@garyguo.net Signed-off-by: Danilo Krummrich dakr@kernel.org Link: https://lore.kernel.org/r/20241004154149.93856-17-dakr@kernel.org [ Cleaned `rustdoc` unescaped backtick warning, added a couple more backticks elsewhere, fixed typos, sorted `feature`s, rewrapped documentation lines. - Miguel ] Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/kernel/alloc.rs | 6 rust/kernel/alloc/kvec.rs | 648 ++++++++++++++++++++++++++++++++++++++++++++++ rust/kernel/lib.rs | 1 rust/kernel/prelude.rs | 2 4 files changed, 656 insertions(+), 1 deletion(-) create mode 100644 rust/kernel/alloc/kvec.rs
--- a/rust/kernel/alloc.rs +++ b/rust/kernel/alloc.rs @@ -5,6 +5,7 @@ #[cfg(not(any(test, testlib)))] pub mod allocator; pub mod kbox; +pub mod kvec; pub mod layout; pub mod vec_ext;
@@ -19,6 +20,11 @@ pub use self::kbox::KBox; pub use self::kbox::KVBox; pub use self::kbox::VBox;
+pub use self::kvec::KVVec; +pub use self::kvec::KVec; +pub use self::kvec::VVec; +pub use self::kvec::Vec; + /// Indicates an allocation error. #[derive(Copy, Clone, PartialEq, Eq, Debug)] pub struct AllocError; --- /dev/null +++ b/rust/kernel/alloc/kvec.rs @@ -0,0 +1,648 @@ +// SPDX-License-Identifier: GPL-2.0 + +//! Implementation of [`Vec`]. + +use super::{ + allocator::{KVmalloc, Kmalloc, Vmalloc}, + layout::ArrayLayout, + AllocError, Allocator, Box, Flags, +}; +use core::{ + fmt, + marker::PhantomData, + mem::{ManuallyDrop, MaybeUninit}, + ops::Deref, + ops::DerefMut, + ops::Index, + ops::IndexMut, + ptr, + ptr::NonNull, + slice, + slice::SliceIndex, +}; + +/// Create a [`KVec`] containing the arguments. +/// +/// New memory is allocated with `GFP_KERNEL`. +/// +/// # Examples +/// +/// ``` +/// let mut v = kernel::kvec![]; +/// v.push(1, GFP_KERNEL)?; +/// assert_eq!(v, [1]); +/// +/// let mut v = kernel::kvec![1; 3]?; +/// v.push(4, GFP_KERNEL)?; +/// assert_eq!(v, [1, 1, 1, 4]); +/// +/// let mut v = kernel::kvec![1, 2, 3]?; +/// v.push(4, GFP_KERNEL)?; +/// assert_eq!(v, [1, 2, 3, 4]); +/// +/// # Ok::<(), Error>(()) +/// ``` +#[macro_export] +macro_rules! kvec { + () => ( + $crate::alloc::KVec::new() + ); + ($elem:expr; $n:expr) => ( + $crate::alloc::KVec::from_elem($elem, $n, GFP_KERNEL) + ); + ($($x:expr),+ $(,)?) => ( + match $crate::alloc::KBox::new_uninit(GFP_KERNEL) { + Ok(b) => Ok($crate::alloc::KVec::from($crate::alloc::KBox::write(b, [$($x),+]))), + Err(e) => Err(e), + } + ); +} + +/// The kernel's [`Vec`] type. +/// +/// A contiguous growable array type with contents allocated with the kernel's allocators (e.g. +/// [`Kmalloc`], [`Vmalloc`] or [`KVmalloc`]), written `Vec<T, A>`. +/// +/// For non-zero-sized values, a [`Vec`] will use the given allocator `A` for its allocation. For +/// the most common allocators the type aliases [`KVec`], [`VVec`] and [`KVVec`] exist. +/// +/// For zero-sized types the [`Vec`]'s pointer must be `dangling_mut::<T>`; no memory is allocated. +/// +/// Generally, [`Vec`] consists of a pointer that represents the vector's backing buffer, the +/// capacity of the vector (the number of elements that currently fit into the vector), its length +/// (the number of elements that are currently stored in the vector) and the `Allocator` type used +/// to allocate (and free) the backing buffer. +/// +/// A [`Vec`] can be deconstructed into and (re-)constructed from its previously named raw parts +/// and manually modified. +/// +/// [`Vec`]'s backing buffer gets, if required, automatically increased (re-allocated) when elements +/// are added to the vector. +/// +/// # Invariants +/// +/// - `self.ptr` is always properly aligned and either points to memory allocated with `A` or, for +/// zero-sized types, is a dangling, well aligned pointer. +/// +/// - `self.len` always represents the exact number of elements stored in the vector. +/// +/// - `self.layout` represents the absolute number of elements that can be stored within the vector +/// without re-allocation. For ZSTs `self.layout`'s capacity is zero. However, it is legal for the +/// backing buffer to be larger than `layout`. +/// +/// - The `Allocator` type `A` of the vector is the exact same `Allocator` type the backing buffer +/// was allocated with (and must be freed with). +pub struct Vec<T, A: Allocator> { + ptr: NonNull<T>, + /// Represents the actual buffer size as `cap` times `size_of::<T>` bytes. 
+ /// + /// Note: This isn't quite the same as `Self::capacity`, which in contrast returns the number of + /// elements we can still store without reallocating. + layout: ArrayLayout<T>, + len: usize, + _p: PhantomData<A>, +} + +/// Type alias for [`Vec`] with a [`Kmalloc`] allocator. +/// +/// # Examples +/// +/// ``` +/// let mut v = KVec::new(); +/// v.push(1, GFP_KERNEL)?; +/// assert_eq!(&v, &[1]); +/// +/// # Ok::<(), Error>(()) +/// ``` +pub type KVec<T> = Vec<T, Kmalloc>; + +/// Type alias for [`Vec`] with a [`Vmalloc`] allocator. +/// +/// # Examples +/// +/// ``` +/// let mut v = VVec::new(); +/// v.push(1, GFP_KERNEL)?; +/// assert_eq!(&v, &[1]); +/// +/// # Ok::<(), Error>(()) +/// ``` +pub type VVec<T> = Vec<T, Vmalloc>; + +/// Type alias for [`Vec`] with a [`KVmalloc`] allocator. +/// +/// # Examples +/// +/// ``` +/// let mut v = KVVec::new(); +/// v.push(1, GFP_KERNEL)?; +/// assert_eq!(&v, &[1]); +/// +/// # Ok::<(), Error>(()) +/// ``` +pub type KVVec<T> = Vec<T, KVmalloc>; + +// SAFETY: `Vec` is `Send` if `T` is `Send` because `Vec` owns its elements. +unsafe impl<T, A> Send for Vec<T, A> +where + T: Send, + A: Allocator, +{ +} + +// SAFETY: `Vec` is `Sync` if `T` is `Sync` because `Vec` owns its elements. +unsafe impl<T, A> Sync for Vec<T, A> +where + T: Sync, + A: Allocator, +{ +} + +impl<T, A> Vec<T, A> +where + A: Allocator, +{ + #[inline] + const fn is_zst() -> bool { + core::mem::size_of::<T>() == 0 + } + + /// Returns the number of elements that can be stored within the vector without allocating + /// additional memory. + pub fn capacity(&self) -> usize { + if const { Self::is_zst() } { + usize::MAX + } else { + self.layout.len() + } + } + + /// Returns the number of elements stored within the vector. + #[inline] + pub fn len(&self) -> usize { + self.len + } + + /// Forcefully sets `self.len` to `new_len`. + /// + /// # Safety + /// + /// - `new_len` must be less than or equal to [`Self::capacity`]. + /// - If `new_len` is greater than `self.len`, all elements within the interval + /// [`self.len`,`new_len`) must be initialized. + #[inline] + pub unsafe fn set_len(&mut self, new_len: usize) { + debug_assert!(new_len <= self.capacity()); + self.len = new_len; + } + + /// Returns a slice of the entire vector. + #[inline] + pub fn as_slice(&self) -> &[T] { + self + } + + /// Returns a mutable slice of the entire vector. + #[inline] + pub fn as_mut_slice(&mut self) -> &mut [T] { + self + } + + /// Returns a mutable raw pointer to the vector's backing buffer, or, if `T` is a ZST, a + /// dangling raw pointer. + #[inline] + pub fn as_mut_ptr(&mut self) -> *mut T { + self.ptr.as_ptr() + } + + /// Returns a raw pointer to the vector's backing buffer, or, if `T` is a ZST, a dangling raw + /// pointer. + #[inline] + pub fn as_ptr(&self) -> *const T { + self.ptr.as_ptr() + } + + /// Returns `true` if the vector contains no elements, `false` otherwise. + /// + /// # Examples + /// + /// ``` + /// let mut v = KVec::new(); + /// assert!(v.is_empty()); + /// + /// v.push(1, GFP_KERNEL); + /// assert!(!v.is_empty()); + /// ``` + #[inline] + pub fn is_empty(&self) -> bool { + self.len() == 0 + } + + /// Creates a new, empty `Vec<T, A>`. + /// + /// This method does not allocate by itself. 
+ #[inline] + pub const fn new() -> Self { + // INVARIANT: Since this is a new, empty `Vec` with no backing memory yet, + // - `ptr` is a properly aligned dangling pointer for type `T`, + // - `layout` is an empty `ArrayLayout` (zero capacity) + // - `len` is zero, since no elements can be or have been stored, + // - `A` is always valid. + Self { + ptr: NonNull::dangling(), + layout: ArrayLayout::empty(), + len: 0, + _p: PhantomData::<A>, + } + } + + /// Returns a slice of `MaybeUninit<T>` for the remaining spare capacity of the vector. + pub fn spare_capacity_mut(&mut self) -> &mut [MaybeUninit<T>] { + // SAFETY: + // - `self.len` is smaller than `self.capacity` and hence, the resulting pointer is + // guaranteed to be part of the same allocated object. + // - `self.len` can not overflow `isize`. + let ptr = unsafe { self.as_mut_ptr().add(self.len) } as *mut MaybeUninit<T>; + + // SAFETY: The memory between `self.len` and `self.capacity` is guaranteed to be allocated + // and valid, but uninitialized. + unsafe { slice::from_raw_parts_mut(ptr, self.capacity() - self.len) } + } + + /// Appends an element to the back of the [`Vec`] instance. + /// + /// # Examples + /// + /// ``` + /// let mut v = KVec::new(); + /// v.push(1, GFP_KERNEL)?; + /// assert_eq!(&v, &[1]); + /// + /// v.push(2, GFP_KERNEL)?; + /// assert_eq!(&v, &[1, 2]); + /// # Ok::<(), Error>(()) + /// ``` + pub fn push(&mut self, v: T, flags: Flags) -> Result<(), AllocError> { + self.reserve(1, flags)?; + + // SAFETY: + // - `self.len` is smaller than `self.capacity` and hence, the resulting pointer is + // guaranteed to be part of the same allocated object. + // - `self.len` can not overflow `isize`. + let ptr = unsafe { self.as_mut_ptr().add(self.len) }; + + // SAFETY: + // - `ptr` is properly aligned and valid for writes. + unsafe { core::ptr::write(ptr, v) }; + + // SAFETY: We just initialised the first spare entry, so it is safe to increase the length + // by 1. We also know that the new length is <= capacity because of the previous call to + // `reserve` above. + unsafe { self.set_len(self.len() + 1) }; + Ok(()) + } + + /// Creates a new [`Vec`] instance with at least the given capacity. + /// + /// # Examples + /// + /// ``` + /// let v = KVec::<u32>::with_capacity(20, GFP_KERNEL)?; + /// + /// assert!(v.capacity() >= 20); + /// # Ok::<(), Error>(()) + /// ``` + pub fn with_capacity(capacity: usize, flags: Flags) -> Result<Self, AllocError> { + let mut v = Vec::new(); + + v.reserve(capacity, flags)?; + + Ok(v) + } + + /// Creates a `Vec<T, A>` from a pointer, a length and a capacity using the allocator `A`. + /// + /// # Examples + /// + /// ``` + /// let mut v = kernel::kvec![1, 2, 3]?; + /// v.reserve(1, GFP_KERNEL)?; + /// + /// let (mut ptr, mut len, cap) = v.into_raw_parts(); + /// + /// // SAFETY: We've just reserved memory for another element. + /// unsafe { ptr.add(len).write(4) }; + /// len += 1; + /// + /// // SAFETY: We only wrote an additional element at the end of the `KVec`'s buffer and + /// // correspondingly increased the length of the `KVec` by one. Otherwise, we construct it + /// // from the exact same raw parts. + /// let v = unsafe { KVec::from_raw_parts(ptr, len, cap) }; + /// + /// assert_eq!(v, [1, 2, 3, 4]); + /// + /// # Ok::<(), Error>(()) + /// ``` + /// + /// # Safety + /// + /// If `T` is a ZST: + /// + /// - `ptr` must be a dangling, well aligned pointer. + /// + /// Otherwise: + /// + /// - `ptr` must have been allocated with the allocator `A`. 
+ /// - `ptr` must satisfy or exceed the alignment requirements of `T`. + /// - `ptr` must point to memory with a size of at least `size_of::<T>() * capacity` bytes. + /// - The allocated size in bytes must not be larger than `isize::MAX`. + /// - `length` must be less than or equal to `capacity`. + /// - The first `length` elements must be initialized values of type `T`. + /// + /// It is also valid to create an empty `Vec` passing a dangling pointer for `ptr` and zero for + /// `cap` and `len`. + pub unsafe fn from_raw_parts(ptr: *mut T, length: usize, capacity: usize) -> Self { + let layout = if Self::is_zst() { + ArrayLayout::empty() + } else { + // SAFETY: By the safety requirements of this function, `capacity * size_of::<T>()` is + // smaller than `isize::MAX`. + unsafe { ArrayLayout::new_unchecked(capacity) } + }; + + // INVARIANT: For ZSTs, we store an empty `ArrayLayout`, all other type invariants are + // covered by the safety requirements of this function. + Self { + // SAFETY: By the safety requirements, `ptr` is either dangling or pointing to a valid + // memory allocation, allocated with `A`. + ptr: unsafe { NonNull::new_unchecked(ptr) }, + layout, + len: length, + _p: PhantomData::<A>, + } + } + + /// Consumes the `Vec<T, A>` and returns its raw components `pointer`, `length` and `capacity`. + /// + /// This will not run the destructor of the contained elements and for non-ZSTs the allocation + /// will stay alive indefinitely. Use [`Vec::from_raw_parts`] to recover the [`Vec`], drop the + /// elements and free the allocation, if any. + pub fn into_raw_parts(self) -> (*mut T, usize, usize) { + let mut me = ManuallyDrop::new(self); + let len = me.len(); + let capacity = me.capacity(); + let ptr = me.as_mut_ptr(); + (ptr, len, capacity) + } + + /// Ensures that the capacity exceeds the length by at least `additional` elements. + /// + /// # Examples + /// + /// ``` + /// let mut v = KVec::new(); + /// v.push(1, GFP_KERNEL)?; + /// + /// v.reserve(10, GFP_KERNEL)?; + /// let cap = v.capacity(); + /// assert!(cap >= 10); + /// + /// v.reserve(10, GFP_KERNEL)?; + /// let new_cap = v.capacity(); + /// assert_eq!(new_cap, cap); + /// + /// # Ok::<(), Error>(()) + /// ``` + pub fn reserve(&mut self, additional: usize, flags: Flags) -> Result<(), AllocError> { + let len = self.len(); + let cap = self.capacity(); + + if cap - len >= additional { + return Ok(()); + } + + if Self::is_zst() { + // The capacity is already `usize::MAX` for ZSTs, we can't go higher. + return Err(AllocError); + } + + // We know that `cap <= isize::MAX` because of the type invariants of `Self`. So the + // multiplication by two won't overflow. + let new_cap = core::cmp::max(cap * 2, len.checked_add(additional).ok_or(AllocError)?); + let layout = ArrayLayout::new(new_cap).map_err(|_| AllocError)?; + + // SAFETY: + // - `ptr` is valid because it's either `None` or comes from a previous call to + // `A::realloc`. + // - `self.layout` matches the `ArrayLayout` of the preceding allocation. + let ptr = unsafe { + A::realloc( + Some(self.ptr.cast()), + layout.into(), + self.layout.into(), + flags, + )? + }; + + // INVARIANT: + // - `layout` is some `ArrayLayout::<T>`, + // - `ptr` has been created by `A::realloc` from `layout`. + self.ptr = ptr.cast(); + self.layout = layout; + + Ok(()) + } +} + +impl<T: Clone, A: Allocator> Vec<T, A> { + /// Extend the vector by `n` clones of `value`. 
+ pub fn extend_with(&mut self, n: usize, value: T, flags: Flags) -> Result<(), AllocError> { + if n == 0 { + return Ok(()); + } + + self.reserve(n, flags)?; + + let spare = self.spare_capacity_mut(); + + for item in spare.iter_mut().take(n - 1) { + item.write(value.clone()); + } + + // We can write the last element directly without cloning needlessly. + spare[n - 1].write(value); + + // SAFETY: + // - `self.len() + n < self.capacity()` due to the call to reserve above, + // - the loop and the line above initialized the next `n` elements. + unsafe { self.set_len(self.len() + n) }; + + Ok(()) + } + + /// Pushes clones of the elements of slice into the [`Vec`] instance. + /// + /// # Examples + /// + /// ``` + /// let mut v = KVec::new(); + /// v.push(1, GFP_KERNEL)?; + /// + /// v.extend_from_slice(&[20, 30, 40], GFP_KERNEL)?; + /// assert_eq!(&v, &[1, 20, 30, 40]); + /// + /// v.extend_from_slice(&[50, 60], GFP_KERNEL)?; + /// assert_eq!(&v, &[1, 20, 30, 40, 50, 60]); + /// # Ok::<(), Error>(()) + /// ``` + pub fn extend_from_slice(&mut self, other: &[T], flags: Flags) -> Result<(), AllocError> { + self.reserve(other.len(), flags)?; + for (slot, item) in core::iter::zip(self.spare_capacity_mut(), other) { + slot.write(item.clone()); + } + + // SAFETY: + // - `other.len()` spare entries have just been initialized, so it is safe to increase + // the length by the same number. + // - `self.len() + other.len() <= self.capacity()` is guaranteed by the preceding `reserve` + // call. + unsafe { self.set_len(self.len() + other.len()) }; + Ok(()) + } + + /// Create a new `Vec<T, A>` and extend it by `n` clones of `value`. + pub fn from_elem(value: T, n: usize, flags: Flags) -> Result<Self, AllocError> { + let mut v = Self::with_capacity(n, flags)?; + + v.extend_with(n, value, flags)?; + + Ok(v) + } +} + +impl<T, A> Drop for Vec<T, A> +where + A: Allocator, +{ + fn drop(&mut self) { + // SAFETY: `self.as_mut_ptr` is guaranteed to be valid by the type invariant. + unsafe { + ptr::drop_in_place(core::ptr::slice_from_raw_parts_mut( + self.as_mut_ptr(), + self.len, + )) + }; + + // SAFETY: + // - `self.ptr` was previously allocated with `A`. + // - `self.layout` matches the `ArrayLayout` of the preceding allocation. + unsafe { A::free(self.ptr.cast(), self.layout.into()) }; + } +} + +impl<T, A, const N: usize> From<Box<[T; N], A>> for Vec<T, A> +where + A: Allocator, +{ + fn from(b: Box<[T; N], A>) -> Vec<T, A> { + let len = b.len(); + let ptr = Box::into_raw(b); + + // SAFETY: + // - `b` has been allocated with `A`, + // - `ptr` fulfills the alignment requirements for `T`, + // - `ptr` points to memory with at least a size of `size_of::<T>() * len`, + // - all elements within `b` are initialized values of `T`, + // - `len` does not exceed `isize::MAX`. + unsafe { Vec::from_raw_parts(ptr as _, len, len) } + } +} + +impl<T> Default for KVec<T> { + #[inline] + fn default() -> Self { + Self::new() + } +} + +impl<T: fmt::Debug, A: Allocator> fmt::Debug for Vec<T, A> { + fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { + fmt::Debug::fmt(&**self, f) + } +} + +impl<T, A> Deref for Vec<T, A> +where + A: Allocator, +{ + type Target = [T]; + + #[inline] + fn deref(&self) -> &[T] { + // SAFETY: The memory behind `self.as_ptr()` is guaranteed to contain `self.len` + // initialized elements of type `T`. 
+ unsafe { slice::from_raw_parts(self.as_ptr(), self.len) } + } +} + +impl<T, A> DerefMut for Vec<T, A> +where + A: Allocator, +{ + #[inline] + fn deref_mut(&mut self) -> &mut [T] { + // SAFETY: The memory behind `self.as_ptr()` is guaranteed to contain `self.len` + // initialized elements of type `T`. + unsafe { slice::from_raw_parts_mut(self.as_mut_ptr(), self.len) } + } +} + +impl<T: Eq, A> Eq for Vec<T, A> where A: Allocator {} + +impl<T, I: SliceIndex<[T]>, A> Index<I> for Vec<T, A> +where + A: Allocator, +{ + type Output = I::Output; + + #[inline] + fn index(&self, index: I) -> &Self::Output { + Index::index(&**self, index) + } +} + +impl<T, I: SliceIndex<[T]>, A> IndexMut<I> for Vec<T, A> +where + A: Allocator, +{ + #[inline] + fn index_mut(&mut self, index: I) -> &mut Self::Output { + IndexMut::index_mut(&mut **self, index) + } +} + +macro_rules! impl_slice_eq { + ($([$($vars:tt)*] $lhs:ty, $rhs:ty,)*) => { + $( + impl<T, U, $($vars)*> PartialEq<$rhs> for $lhs + where + T: PartialEq<U>, + { + #[inline] + fn eq(&self, other: &$rhs) -> bool { self[..] == other[..] } + } + )* + } +} + +impl_slice_eq! { + [A1: Allocator, A2: Allocator] Vec<T, A1>, Vec<U, A2>, + [A: Allocator] Vec<T, A>, &[U], + [A: Allocator] Vec<T, A>, &mut [U], + [A: Allocator] &[T], Vec<U, A>, + [A: Allocator] &mut [T], Vec<U, A>, + [A: Allocator] Vec<T, A>, [U], + [A: Allocator] [T], Vec<U, A>, + [A: Allocator, const N: usize] Vec<T, A>, [U; N], + [A: Allocator, const N: usize] Vec<T, A>, &[U; N], +} --- a/rust/kernel/lib.rs +++ b/rust/kernel/lib.rs @@ -15,6 +15,7 @@ #![feature(arbitrary_self_types)] #![feature(coerce_unsized)] #![feature(dispatch_from_dyn)] +#![feature(inline_const)] #![feature(lint_reasons)] #![feature(unsize)]
--- a/rust/kernel/prelude.rs +++ b/rust/kernel/prelude.rs @@ -14,7 +14,7 @@ #[doc(no_inline)] pub use core::pin::Pin;
-pub use crate::alloc::{flags::*, vec_ext::VecExt, Box, KBox, KVBox, VBox}; +pub use crate::alloc::{flags::*, vec_ext::VecExt, Box, KBox, KVBox, KVVec, KVec, VBox, VVec};
#[doc(no_inline)] pub use alloc::vec::Vec;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Danilo Krummrich dakr@kernel.org
commit 1d1d223aa3b37c34271aefc2706340d0843bfcb2 upstream.
Implement `IntoIterator` for `Vec`, add `Vec`'s `IntoIter` type, and implement `Iterator` for `IntoIter`.
`Vec::into_iter` disassembles the `Vec` into its raw parts; additionally, `IntoIter` keeps track of a separate pointer, which is incremented as the iterator advances, while the length, i.e. the count of remaining elements, is decremented.
This also means that `IntoIter` takes ownership of the backing buffer and, when it is dropped, is responsible for dropping the remaining elements and freeing the backing buffer.
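To illustrate the ownership semantics, a minimal sketch in the style of the doctests added below (assuming the kernel prelude and the `kvec!` macro from earlier in this series; `example` is just an illustrative function name):

    fn example() -> Result {
        let v = kernel::kvec![1, 2, 3]?;
        let mut it = v.into_iter();

        // The iterator now owns the backing buffer; `next()` moves elements out
        // one by one, advancing the internal pointer and decrementing the count.
        assert_eq!(it.next(), Some(1));

        // Dropping the iterator here drops the remaining elements (2 and 3) and
        // frees the backing buffer.
        drop(it);

        Ok(())
    }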
Reviewed-by: Alice Ryhl aliceryhl@google.com Reviewed-by: Benno Lossin benno.lossin@proton.me Reviewed-by: Gary Guo gary@garyguo.net Signed-off-by: Danilo Krummrich dakr@kernel.org Link: https://lore.kernel.org/r/20241004154149.93856-18-dakr@kernel.org [ Fixed typos. - Miguel ] Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/kernel/alloc.rs | 1 rust/kernel/alloc/kvec.rs | 170 ++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 171 insertions(+)
--- a/rust/kernel/alloc.rs +++ b/rust/kernel/alloc.rs @@ -20,6 +20,7 @@ pub use self::kbox::KBox; pub use self::kbox::KVBox; pub use self::kbox::VBox;
+pub use self::kvec::IntoIter; pub use self::kvec::KVVec; pub use self::kvec::KVec; pub use self::kvec::VVec; --- a/rust/kernel/alloc/kvec.rs +++ b/rust/kernel/alloc/kvec.rs @@ -646,3 +646,173 @@ impl_slice_eq! { [A: Allocator, const N: usize] Vec<T, A>, [U; N], [A: Allocator, const N: usize] Vec<T, A>, &[U; N], } + +impl<'a, T, A> IntoIterator for &'a Vec<T, A> +where + A: Allocator, +{ + type Item = &'a T; + type IntoIter = slice::Iter<'a, T>; + + fn into_iter(self) -> Self::IntoIter { + self.iter() + } +} + +impl<'a, T, A: Allocator> IntoIterator for &'a mut Vec<T, A> +where + A: Allocator, +{ + type Item = &'a mut T; + type IntoIter = slice::IterMut<'a, T>; + + fn into_iter(self) -> Self::IntoIter { + self.iter_mut() + } +} + +/// An [`Iterator`] implementation for [`Vec`] that moves elements out of a vector. +/// +/// This structure is created by the [`Vec::into_iter`] method on [`Vec`] (provided by the +/// [`IntoIterator`] trait). +/// +/// # Examples +/// +/// ``` +/// let v = kernel::kvec![0, 1, 2]?; +/// let iter = v.into_iter(); +/// +/// # Ok::<(), Error>(()) +/// ``` +pub struct IntoIter<T, A: Allocator> { + ptr: *mut T, + buf: NonNull<T>, + len: usize, + layout: ArrayLayout<T>, + _p: PhantomData<A>, +} + +impl<T, A> Iterator for IntoIter<T, A> +where + A: Allocator, +{ + type Item = T; + + /// # Examples + /// + /// ``` + /// let v = kernel::kvec![1, 2, 3]?; + /// let mut it = v.into_iter(); + /// + /// assert_eq!(it.next(), Some(1)); + /// assert_eq!(it.next(), Some(2)); + /// assert_eq!(it.next(), Some(3)); + /// assert_eq!(it.next(), None); + /// + /// # Ok::<(), Error>(()) + /// ``` + fn next(&mut self) -> Option<T> { + if self.len == 0 { + return None; + } + + let current = self.ptr; + + // SAFETY: We can't overflow; decreasing `self.len` by one every time we advance `self.ptr` + // by one guarantees that. + unsafe { self.ptr = self.ptr.add(1) }; + + self.len -= 1; + + // SAFETY: `current` is guaranteed to point at a valid element within the buffer. + Some(unsafe { current.read() }) + } + + /// # Examples + /// + /// ``` + /// let v: KVec<u32> = kernel::kvec![1, 2, 3]?; + /// let mut iter = v.into_iter(); + /// let size = iter.size_hint().0; + /// + /// iter.next(); + /// assert_eq!(iter.size_hint().0, size - 1); + /// + /// iter.next(); + /// assert_eq!(iter.size_hint().0, size - 2); + /// + /// iter.next(); + /// assert_eq!(iter.size_hint().0, size - 3); + /// + /// # Ok::<(), Error>(()) + /// ``` + fn size_hint(&self) -> (usize, Option<usize>) { + (self.len, Some(self.len)) + } +} + +impl<T, A> Drop for IntoIter<T, A> +where + A: Allocator, +{ + fn drop(&mut self) { + // SAFETY: `self.ptr` is guaranteed to be valid by the type invariant. + unsafe { ptr::drop_in_place(ptr::slice_from_raw_parts_mut(self.ptr, self.len)) }; + + // SAFETY: + // - `self.buf` was previously allocated with `A`. + // - `self.layout` matches the `ArrayLayout` of the preceding allocation. + unsafe { A::free(self.buf.cast(), self.layout.into()) }; + } +} + +impl<T, A> IntoIterator for Vec<T, A> +where + A: Allocator, +{ + type Item = T; + type IntoIter = IntoIter<T, A>; + + /// Consumes the `Vec<T, A>` and creates an `Iterator`, which moves each value out of the + /// vector (from start to end). 
+ /// + /// # Examples + /// + /// ``` + /// let v = kernel::kvec![1, 2]?; + /// let mut v_iter = v.into_iter(); + /// + /// let first_element: Option<u32> = v_iter.next(); + /// + /// assert_eq!(first_element, Some(1)); + /// assert_eq!(v_iter.next(), Some(2)); + /// assert_eq!(v_iter.next(), None); + /// + /// # Ok::<(), Error>(()) + /// ``` + /// + /// ``` + /// let v = kernel::kvec![]; + /// let mut v_iter = v.into_iter(); + /// + /// let first_element: Option<u32> = v_iter.next(); + /// + /// assert_eq!(first_element, None); + /// + /// # Ok::<(), Error>(()) + /// ``` + #[inline] + fn into_iter(self) -> Self::IntoIter { + let buf = self.ptr; + let layout = self.layout; + let (ptr, len, _) = self.into_raw_parts(); + + IntoIter { + ptr, + buf, + len, + layout, + _p: PhantomData::<A>, + } + } +}
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Danilo Krummrich dakr@kernel.org
commit 93e602310f87b7b515b86a8f919cc0799387e5c3 upstream.
Currently, we can't implement `FromIterator`. There are a couple of issues with this trait in the kernel, namely:
- Rust's specialization feature is unstable. This prevents us from optimizing for the special case where `I::IntoIter` equals `Vec`'s `IntoIter` type.
- We can't use `I::IntoIter`'s type ID to work around this either, since `FromIterator` doesn't require this type to be `'static`.
- `FromIterator::from_iter` returns `Self` instead of `Result<Self, AllocError>`, hence we can't properly handle allocation failures.
- Neither `Iterator::collect` nor `FromIterator::from_iter` can handle additional allocation flags.
Instead, provide `IntoIter::collect`, such that we can at least convert an `IntoIter` back into a `Vec`.
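To make the constraint concrete, compare the two signatures (the first is quoted from the standard library; the second is the method introduced below):

    // std: infallible and flag-less, so an allocation failure cannot be
    // reported and no GFP flags can be passed through.
    fn from_iter<T: IntoIterator<Item = A>>(iter: T) -> Self;

    // kernel: reuses the iterator's own buffer and takes `Flags` only for the
    // optional shrink of that buffer, so it cannot fail.
    pub fn collect(self, flags: Flags) -> Vec<T, A>;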
Reviewed-by: Alice Ryhl aliceryhl@google.com Reviewed-by: Benno Lossin benno.lossin@proton.me Signed-off-by: Danilo Krummrich dakr@kernel.org Link: https://lore.kernel.org/r/20241004154149.93856-19-dakr@kernel.org [ Added newline in documentation, changed case of section to be consistent with an existing one, fixed typo. - Miguel ] Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/kernel/alloc/kvec.rs | 95 ++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 95 insertions(+)
--- a/rust/kernel/alloc/kvec.rs +++ b/rust/kernel/alloc/kvec.rs @@ -692,6 +692,101 @@ pub struct IntoIter<T, A: Allocator> { _p: PhantomData<A>, }
+impl<T, A> IntoIter<T, A> +where + A: Allocator, +{ + fn into_raw_parts(self) -> (*mut T, NonNull<T>, usize, usize) { + let me = ManuallyDrop::new(self); + let ptr = me.ptr; + let buf = me.buf; + let len = me.len; + let cap = me.layout.len(); + (ptr, buf, len, cap) + } + + /// Same as `Iterator::collect` but specialized for `Vec`'s `IntoIter`. + /// + /// # Examples + /// + /// ``` + /// let v = kernel::kvec![1, 2, 3]?; + /// let mut it = v.into_iter(); + /// + /// assert_eq!(it.next(), Some(1)); + /// + /// let v = it.collect(GFP_KERNEL); + /// assert_eq!(v, [2, 3]); + /// + /// # Ok::<(), Error>(()) + /// ``` + /// + /// # Implementation details + /// + /// Currently, we can't implement `FromIterator`. There are a couple of issues with this trait + /// in the kernel, namely: + /// + /// - Rust's specialization feature is unstable. This prevents us to optimize for the special + /// case where `I::IntoIter` equals `Vec`'s `IntoIter` type. + /// - We also can't use `I::IntoIter`'s type ID either to work around this, since `FromIterator` + /// doesn't require this type to be `'static`. + /// - `FromIterator::from_iter` does return `Self` instead of `Result<Self, AllocError>`, hence + /// we can't properly handle allocation failures. + /// - Neither `Iterator::collect` nor `FromIterator::from_iter` can handle additional allocation + /// flags. + /// + /// Instead, provide `IntoIter::collect`, such that we can at least convert a `IntoIter` into a + /// `Vec` again. + /// + /// Note that `IntoIter::collect` doesn't require `Flags`, since it re-uses the existing backing + /// buffer. However, this backing buffer may be shrunk to the actual count of elements. + pub fn collect(self, flags: Flags) -> Vec<T, A> { + let old_layout = self.layout; + let (mut ptr, buf, len, mut cap) = self.into_raw_parts(); + let has_advanced = ptr != buf.as_ptr(); + + if has_advanced { + // Copy the contents we have advanced to at the beginning of the buffer. + // + // SAFETY: + // - `ptr` is valid for reads of `len * size_of::<T>()` bytes, + // - `buf.as_ptr()` is valid for writes of `len * size_of::<T>()` bytes, + // - `ptr` and `buf.as_ptr()` are not be subject to aliasing restrictions relative to + // each other, + // - both `ptr` and `buf.ptr()` are properly aligned. + unsafe { ptr::copy(ptr, buf.as_ptr(), len) }; + ptr = buf.as_ptr(); + + // SAFETY: `len` is guaranteed to be smaller than `self.layout.len()`. + let layout = unsafe { ArrayLayout::<T>::new_unchecked(len) }; + + // SAFETY: `buf` points to the start of the backing buffer and `len` is guaranteed to be + // smaller than `cap`. Depending on `alloc` this operation may shrink the buffer or leaves + // it as it is. + ptr = match unsafe { + A::realloc(Some(buf.cast()), layout.into(), old_layout.into(), flags) + } { + // If we fail to shrink, which likely can't even happen, continue with the existing + // buffer. + Err(_) => ptr, + Ok(ptr) => { + cap = len; + ptr.as_ptr().cast() + } + }; + } + + // SAFETY: If the iterator has been advanced, the advanced elements have been copied to + // the beginning of the buffer and `len` has been adjusted accordingly. + // + // - `ptr` is guaranteed to point to the start of the backing buffer. + // - `cap` is either the original capacity or, after shrinking the buffer, equal to `len`. + // - `alloc` is guaranteed to be unchanged since `into_iter` has been called on the original + // `Vec`. + unsafe { Vec::from_raw_parts(ptr, len, cap) } + } +} + impl<T, A> Iterator for IntoIter<T, A> where A: Allocator,
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Danilo Krummrich dakr@kernel.org
commit 58eff8e872bd04ccb3adcf99aec7334ffad06cfd upstream.
Now that we have the kernel `Vec` in place, convert all existing `Vec` users to make use of it.
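The typical conversion is mechanical; a sketch of the before/after pattern (not a verbatim hunk, `data` stands in for an arbitrary byte slice):

    // Before: alloc's `Vec` driven through the `VecExt` extension trait.
    fn fill_old(data: &[u8]) -> Result<Vec<u8>, AllocError> {
        let mut buf = Vec::new();
        <Vec<_> as VecExt<_>>::extend_from_slice(&mut buf, data, GFP_KERNEL)?;
        Ok(buf)
    }

    // After: the kernel's `KVec`, which provides the fallible API natively.
    fn fill_new(data: &[u8]) -> Result<KVec<u8>, AllocError> {
        let mut buf = KVec::new();
        buf.extend_from_slice(data, GFP_KERNEL)?;
        Ok(buf)
    }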
Reviewed-by: Alice Ryhl aliceryhl@google.com Reviewed-by: Benno Lossin benno.lossin@proton.me Reviewed-by: Gary Guo gary@garyguo.net Signed-off-by: Danilo Krummrich dakr@kernel.org Link: https://lore.kernel.org/r/20241004154149.93856-20-dakr@kernel.org [ Converted `kasan_test_rust.rs` too, as discussed. - Miguel ] Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/kasan/kasan_test_rust.rs | 2 +- rust/kernel/str.rs | 12 +++++------- rust/kernel/sync/locked_by.rs | 2 +- rust/kernel/types.rs | 2 +- rust/kernel/uaccess.rs | 17 +++++++---------- rust/macros/lib.rs | 6 +++--- samples/rust/rust_minimal.rs | 4 ++-- 7 files changed, 20 insertions(+), 25 deletions(-)
--- a/mm/kasan/kasan_test_rust.rs +++ b/mm/kasan/kasan_test_rust.rs @@ -11,7 +11,7 @@ use kernel::prelude::*; /// drop the vector, and touch it. #[no_mangle] pub extern "C" fn kasan_test_rust_uaf() -> u8 { - let mut v: Vec<u8> = Vec::new(); + let mut v: KVec<u8> = KVec::new(); for _ in 0..4096 { v.push(0x42, GFP_KERNEL).unwrap(); } --- a/rust/kernel/str.rs +++ b/rust/kernel/str.rs @@ -2,8 +2,7 @@
//! String representations.
-use crate::alloc::{flags::*, vec_ext::VecExt, AllocError}; -use alloc::vec::Vec; +use crate::alloc::{flags::*, AllocError, KVec}; use core::fmt::{self, Write}; use core::ops::{self, Deref, DerefMut, Index};
@@ -791,7 +790,7 @@ impl fmt::Write for Formatter { /// assert_eq!(s.is_ok(), false); /// ``` pub struct CString { - buf: Vec<u8>, + buf: KVec<u8>, }
impl CString { @@ -804,7 +803,7 @@ impl CString { let size = f.bytes_written();
// Allocate a vector with the required number of bytes, and write to it. - let mut buf = <Vec<_> as VecExt<_>>::with_capacity(size, GFP_KERNEL)?; + let mut buf = KVec::with_capacity(size, GFP_KERNEL)?; // SAFETY: The buffer stored in `buf` is at least of size `size` and is valid for writes. let mut f = unsafe { Formatter::from_buffer(buf.as_mut_ptr(), size) }; f.write_fmt(args)?; @@ -851,10 +850,9 @@ impl<'a> TryFrom<&'a CStr> for CString { type Error = AllocError;
fn try_from(cstr: &'a CStr) -> Result<CString, AllocError> { - let mut buf = Vec::new(); + let mut buf = KVec::new();
- <Vec<_> as VecExt<_>>::extend_from_slice(&mut buf, cstr.as_bytes_with_nul(), GFP_KERNEL) - .map_err(|_| AllocError)?; + buf.extend_from_slice(cstr.as_bytes_with_nul(), GFP_KERNEL)?;
// INVARIANT: The `CStr` and `CString` types have the same invariants for // the string data, and we copied it over without changes. --- a/rust/kernel/sync/locked_by.rs +++ b/rust/kernel/sync/locked_by.rs @@ -43,7 +43,7 @@ use core::{cell::UnsafeCell, mem::size_o /// struct InnerDirectory { /// /// The sum of the bytes used by all files. /// bytes_used: u64, -/// _files: Vec<File>, +/// _files: KVec<File>, /// } /// /// struct Directory { --- a/rust/kernel/types.rs +++ b/rust/kernel/types.rs @@ -135,7 +135,7 @@ impl ForeignOwnable for () { /// # use kernel::types::ScopeGuard; /// fn example3(arg: bool) -> Result { /// let mut vec = -/// ScopeGuard::new_with_data(Vec::new(), |v| pr_info!("vec had {} elements\n", v.len())); +/// ScopeGuard::new_with_data(KVec::new(), |v| pr_info!("vec had {} elements\n", v.len())); /// /// vec.push(10u8, GFP_KERNEL)?; /// if arg { --- a/rust/kernel/uaccess.rs +++ b/rust/kernel/uaccess.rs @@ -11,7 +11,6 @@ use crate::{ prelude::*, types::{AsBytes, FromBytes}, }; -use alloc::vec::Vec; use core::ffi::{c_ulong, c_void}; use core::mem::{size_of, MaybeUninit};
@@ -46,7 +45,6 @@ pub type UserPtr = usize; /// every byte in the region. /// /// ```no_run -/// use alloc::vec::Vec; /// use core::ffi::c_void; /// use kernel::error::Result; /// use kernel::uaccess::{UserPtr, UserSlice}; @@ -54,7 +52,7 @@ pub type UserPtr = usize; /// fn bytes_add_one(uptr: UserPtr, len: usize) -> Result<()> { /// let (read, mut write) = UserSlice::new(uptr, len).reader_writer(); /// -/// let mut buf = Vec::new(); +/// let mut buf = KVec::new(); /// read.read_all(&mut buf, GFP_KERNEL)?; /// /// for b in &mut buf { @@ -69,7 +67,6 @@ pub type UserPtr = usize; /// Example illustrating a TOCTOU (time-of-check to time-of-use) bug. /// /// ```no_run -/// use alloc::vec::Vec; /// use core::ffi::c_void; /// use kernel::error::{code::EINVAL, Result}; /// use kernel::uaccess::{UserPtr, UserSlice}; @@ -78,21 +75,21 @@ pub type UserPtr = usize; /// fn is_valid(uptr: UserPtr, len: usize) -> Result<bool> { /// let read = UserSlice::new(uptr, len).reader(); /// -/// let mut buf = Vec::new(); +/// let mut buf = KVec::new(); /// read.read_all(&mut buf, GFP_KERNEL)?; /// /// todo!() /// } /// /// /// Returns the bytes behind this user pointer if they are valid. -/// fn get_bytes_if_valid(uptr: UserPtr, len: usize) -> Result<Vec<u8>> { +/// fn get_bytes_if_valid(uptr: UserPtr, len: usize) -> Result<KVec<u8>> { /// if !is_valid(uptr, len)? { /// return Err(EINVAL); /// } /// /// let read = UserSlice::new(uptr, len).reader(); /// -/// let mut buf = Vec::new(); +/// let mut buf = KVec::new(); /// read.read_all(&mut buf, GFP_KERNEL)?; /// /// // THIS IS A BUG! The bytes could have changed since we checked them. @@ -130,7 +127,7 @@ impl UserSlice { /// Reads the entirety of the user slice, appending it to the end of the provided buffer. /// /// Fails with [`EFAULT`] if the read happens on a bad address. - pub fn read_all(self, buf: &mut Vec<u8>, flags: Flags) -> Result { + pub fn read_all(self, buf: &mut KVec<u8>, flags: Flags) -> Result { self.reader().read_all(buf, flags) }
@@ -291,9 +288,9 @@ impl UserSliceReader { /// Reads the entirety of the user slice, appending it to the end of the provided buffer. /// /// Fails with [`EFAULT`] if the read happens on a bad address. - pub fn read_all(mut self, buf: &mut Vec<u8>, flags: Flags) -> Result { + pub fn read_all(mut self, buf: &mut KVec<u8>, flags: Flags) -> Result { let len = self.length; - VecExt::<u8>::reserve(buf, len, flags)?; + buf.reserve(len, flags)?;
// The call to `try_reserve` was successful, so the spare capacity is at least `len` bytes // long. --- a/rust/macros/lib.rs +++ b/rust/macros/lib.rs @@ -242,7 +242,7 @@ pub fn concat_idents(ts: TokenStream) -> /// #[pin_data] /// struct DriverData { /// #[pin] -/// queue: Mutex<Vec<Command>>, +/// queue: Mutex<KVec<Command>>, /// buf: KBox<[u8; 1024 * 1024]>, /// } /// ``` @@ -251,7 +251,7 @@ pub fn concat_idents(ts: TokenStream) -> /// #[pin_data(PinnedDrop)] /// struct DriverData { /// #[pin] -/// queue: Mutex<Vec<Command>>, +/// queue: Mutex<KVec<Command>>, /// buf: KBox<[u8; 1024 * 1024]>, /// raw_info: *mut Info, /// } @@ -281,7 +281,7 @@ pub fn pin_data(inner: TokenStream, item /// #[pin_data(PinnedDrop)] /// struct DriverData { /// #[pin] -/// queue: Mutex<Vec<Command>>, +/// queue: Mutex<KVec<Command>>, /// buf: KBox<[u8; 1024 * 1024]>, /// raw_info: *mut Info, /// } --- a/samples/rust/rust_minimal.rs +++ b/samples/rust/rust_minimal.rs @@ -13,7 +13,7 @@ module! { }
struct RustMinimal { - numbers: Vec<i32>, + numbers: KVec<i32>, }
impl kernel::Module for RustMinimal { @@ -21,7 +21,7 @@ impl kernel::Module for RustMinimal { pr_info!("Rust minimal sample (init)\n"); pr_info!("Am I built-in? {}\n", !cfg!(MODULE));
- let mut numbers = Vec::new(); + let mut numbers = KVec::new(); numbers.push(72, GFP_KERNEL)?; numbers.push(108, GFP_KERNEL)?; numbers.push(200, GFP_KERNEL)?;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Danilo Krummrich dakr@kernel.org
commit 405966efc789888c3e1a53cd09d2c2b338064438 upstream.
Now that all existing `Vec` users have been moved to the kernel `Vec` type, remove the `VecExt` extension.
Reviewed-by: Alice Ryhl aliceryhl@google.com Reviewed-by: Benno Lossin benno.lossin@proton.me Reviewed-by: Gary Guo gary@garyguo.net Signed-off-by: Danilo Krummrich dakr@kernel.org Link: https://lore.kernel.org/r/20241004154149.93856-21-dakr@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/kernel/alloc.rs | 1 rust/kernel/alloc/vec_ext.rs | 185 ------------------------------------------- rust/kernel/prelude.rs | 5 - 3 files changed, 1 insertion(+), 190 deletions(-) delete mode 100644 rust/kernel/alloc/vec_ext.rs
--- a/rust/kernel/alloc.rs +++ b/rust/kernel/alloc.rs @@ -7,7 +7,6 @@ pub mod allocator; pub mod kbox; pub mod kvec; pub mod layout; -pub mod vec_ext;
#[cfg(any(test, testlib))] pub mod allocator_test; --- a/rust/kernel/alloc/vec_ext.rs +++ /dev/null @@ -1,185 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 - -//! Extensions to [`Vec`] for fallible allocations. - -use super::{AllocError, Flags}; -use alloc::vec::Vec; - -/// Extensions to [`Vec`]. -pub trait VecExt<T>: Sized { - /// Creates a new [`Vec`] instance with at least the given capacity. - /// - /// # Examples - /// - /// ``` - /// let v = Vec::<u32>::with_capacity(20, GFP_KERNEL)?; - /// - /// assert!(v.capacity() >= 20); - /// # Ok::<(), Error>(()) - /// ``` - fn with_capacity(capacity: usize, flags: Flags) -> Result<Self, AllocError>; - - /// Appends an element to the back of the [`Vec`] instance. - /// - /// # Examples - /// - /// ``` - /// let mut v = Vec::new(); - /// v.push(1, GFP_KERNEL)?; - /// assert_eq!(&v, &[1]); - /// - /// v.push(2, GFP_KERNEL)?; - /// assert_eq!(&v, &[1, 2]); - /// # Ok::<(), Error>(()) - /// ``` - fn push(&mut self, v: T, flags: Flags) -> Result<(), AllocError>; - - /// Pushes clones of the elements of slice into the [`Vec`] instance. - /// - /// # Examples - /// - /// ``` - /// let mut v = Vec::new(); - /// v.push(1, GFP_KERNEL)?; - /// - /// v.extend_from_slice(&[20, 30, 40], GFP_KERNEL)?; - /// assert_eq!(&v, &[1, 20, 30, 40]); - /// - /// v.extend_from_slice(&[50, 60], GFP_KERNEL)?; - /// assert_eq!(&v, &[1, 20, 30, 40, 50, 60]); - /// # Ok::<(), Error>(()) - /// ``` - fn extend_from_slice(&mut self, other: &[T], flags: Flags) -> Result<(), AllocError> - where - T: Clone; - - /// Ensures that the capacity exceeds the length by at least `additional` elements. - /// - /// # Examples - /// - /// ``` - /// let mut v = Vec::new(); - /// v.push(1, GFP_KERNEL)?; - /// - /// v.reserve(10, GFP_KERNEL)?; - /// let cap = v.capacity(); - /// assert!(cap >= 10); - /// - /// v.reserve(10, GFP_KERNEL)?; - /// let new_cap = v.capacity(); - /// assert_eq!(new_cap, cap); - /// - /// # Ok::<(), Error>(()) - /// ``` - fn reserve(&mut self, additional: usize, flags: Flags) -> Result<(), AllocError>; -} - -impl<T> VecExt<T> for Vec<T> { - fn with_capacity(capacity: usize, flags: Flags) -> Result<Self, AllocError> { - let mut v = Vec::new(); - <Self as VecExt<_>>::reserve(&mut v, capacity, flags)?; - Ok(v) - } - - fn push(&mut self, v: T, flags: Flags) -> Result<(), AllocError> { - <Self as VecExt<_>>::reserve(self, 1, flags)?; - let s = self.spare_capacity_mut(); - s[0].write(v); - - // SAFETY: We just initialised the first spare entry, so it is safe to increase the length - // by 1. We also know that the new length is <= capacity because of the previous call to - // `reserve` above. - unsafe { self.set_len(self.len() + 1) }; - Ok(()) - } - - fn extend_from_slice(&mut self, other: &[T], flags: Flags) -> Result<(), AllocError> - where - T: Clone, - { - <Self as VecExt<_>>::reserve(self, other.len(), flags)?; - for (slot, item) in core::iter::zip(self.spare_capacity_mut(), other) { - slot.write(item.clone()); - } - - // SAFETY: We just initialised the `other.len()` spare entries, so it is safe to increase - // the length by the same amount. We also know that the new length is <= capacity because - // of the previous call to `reserve` above. 
- unsafe { self.set_len(self.len() + other.len()) }; - Ok(()) - } - - #[cfg(any(test, testlib))] - fn reserve(&mut self, additional: usize, _flags: Flags) -> Result<(), AllocError> { - Vec::reserve(self, additional); - Ok(()) - } - - #[cfg(not(any(test, testlib)))] - fn reserve(&mut self, additional: usize, flags: Flags) -> Result<(), AllocError> { - let len = self.len(); - let cap = self.capacity(); - - if cap - len >= additional { - return Ok(()); - } - - if core::mem::size_of::<T>() == 0 { - // The capacity is already `usize::MAX` for SZTs, we can't go higher. - return Err(AllocError); - } - - // We know cap is <= `isize::MAX` because `Layout::array` fails if the resulting byte size - // is greater than `isize::MAX`. So the multiplication by two won't overflow. - let new_cap = core::cmp::max(cap * 2, len.checked_add(additional).ok_or(AllocError)?); - let layout = core::alloc::Layout::array::<T>(new_cap).map_err(|_| AllocError)?; - - let (old_ptr, len, cap) = destructure(self); - - // We need to make sure that `ptr` is either NULL or comes from a previous call to - // `krealloc_aligned`. A `Vec<T>`'s `ptr` value is not guaranteed to be NULL and might be - // dangling after being created with `Vec::new`. Instead, we can rely on `Vec<T>`'s capacity - // to be zero if no memory has been allocated yet. - let ptr = if cap == 0 { - core::ptr::null_mut() - } else { - old_ptr - }; - - // SAFETY: `ptr` is valid because it's either NULL or comes from a previous call to - // `krealloc_aligned`. We also verified that the type is not a ZST. - let new_ptr = unsafe { super::allocator::krealloc_aligned(ptr.cast(), layout, flags) }; - if new_ptr.is_null() { - // SAFETY: We are just rebuilding the existing `Vec` with no changes. - unsafe { rebuild(self, old_ptr, len, cap) }; - Err(AllocError) - } else { - // SAFETY: `ptr` has been reallocated with the layout for `new_cap` elements. New cap - // is greater than `cap`, so it continues to be >= `len`. - unsafe { rebuild(self, new_ptr.cast::<T>(), len, new_cap) }; - Ok(()) - } - } -} - -#[cfg(not(any(test, testlib)))] -fn destructure<T>(v: &mut Vec<T>) -> (*mut T, usize, usize) { - let mut tmp = Vec::new(); - core::mem::swap(&mut tmp, v); - let mut tmp = core::mem::ManuallyDrop::new(tmp); - let len = tmp.len(); - let cap = tmp.capacity(); - (tmp.as_mut_ptr(), len, cap) -} - -/// Rebuilds a `Vec` from a pointer, length, and capacity. -/// -/// # Safety -/// -/// The same as [`Vec::from_raw_parts`]. -#[cfg(not(any(test, testlib)))] -unsafe fn rebuild<T>(v: &mut Vec<T>, ptr: *mut T, len: usize, cap: usize) { - // SAFETY: The safety requirements from this function satisfy those of `from_raw_parts`. - let mut tmp = unsafe { Vec::from_raw_parts(ptr, len, cap) }; - core::mem::swap(&mut tmp, v); -} --- a/rust/kernel/prelude.rs +++ b/rust/kernel/prelude.rs @@ -14,10 +14,7 @@ #[doc(no_inline)] pub use core::pin::Pin;
-pub use crate::alloc::{flags::*, vec_ext::VecExt, Box, KBox, KVBox, KVVec, KVec, VBox, VVec}; - -#[doc(no_inline)] -pub use alloc::vec::Vec; +pub use crate::alloc::{flags::*, Box, KBox, KVBox, KVVec, KVec, VBox, VVec};
#[doc(no_inline)] pub use macros::{module, pin_data, pinned_drop, vtable, Zeroable};
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Danilo Krummrich dakr@kernel.org
commit 3145dc91c3c0ad945f06354385a6eb89d22becdb upstream.
Now that we have removed `VecExt` and the corresponding includes in prelude.rs, add the new kernel `Vec` type to the prelude instead.
Reviewed-by: Alice Ryhl aliceryhl@google.com Reviewed-by: Benno Lossin benno.lossin@proton.me Reviewed-by: Gary Guo gary@garyguo.net Signed-off-by: Danilo Krummrich dakr@kernel.org Link: https://lore.kernel.org/r/20241004154149.93856-22-dakr@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/kernel/prelude.rs | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/rust/kernel/prelude.rs +++ b/rust/kernel/prelude.rs @@ -14,7 +14,7 @@ #[doc(no_inline)] pub use core::pin::Pin;
-pub use crate::alloc::{flags::*, Box, KBox, KVBox, KVVec, KVec, VBox, VVec}; +pub use crate::alloc::{flags::*, Box, KBox, KVBox, KVVec, KVec, VBox, VVec, Vec};
#[doc(no_inline)] pub use macros::{module, pin_data, pinned_drop, vtable, Zeroable};
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Danilo Krummrich dakr@kernel.org
commit 29a48d25ff53c183482dc88a99133a0fb5aa541a upstream.
Use `core::alloc::LayoutError` instead of `alloc::alloc::LayoutError` in preparation for getting rid of Rust's alloc crate.
Reviewed-by: Alice Ryhl aliceryhl@google.com Reviewed-by: Benno Lossin benno.lossin@proton.me Reviewed-by: Gary Guo gary@garyguo.net Signed-off-by: Danilo Krummrich dakr@kernel.org Link: https://lore.kernel.org/r/20241004154149.93856-23-dakr@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/kernel/error.rs | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/rust/kernel/error.rs +++ b/rust/kernel/error.rs @@ -6,7 +6,7 @@
use crate::{alloc::AllocError, str::CStr};
-use alloc::alloc::LayoutError; +use core::alloc::LayoutError;
use core::fmt; use core::num::NonZeroI32;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Danilo Krummrich dakr@kernel.org
commit 4a28ab469ff01855eb819dfd94754d1792f03f2a upstream.
In addition to `testlib`, also check for `test` in `Error::name`. This is required by a subsequent patch that (indirectly) uses `Error` in test cases.
Reviewed-by: Alice Ryhl aliceryhl@google.com Reviewed-by: Benno Lossin benno.lossin@proton.me Reviewed-by: Gary Guo gary@garyguo.net Signed-off-by: Danilo Krummrich dakr@kernel.org Link: https://lore.kernel.org/r/20241004154149.93856-24-dakr@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/kernel/error.rs | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/rust/kernel/error.rs +++ b/rust/kernel/error.rs @@ -161,7 +161,7 @@ impl Error { }
/// Returns a string representing the error, if one exists. - #[cfg(not(testlib))] + #[cfg(not(any(test, testlib)))] pub fn name(&self) -> Option<&'static CStr> { // SAFETY: Just an FFI call, there are no extra safety requirements. let ptr = unsafe { bindings::errname(-self.0.get()) }; @@ -178,7 +178,7 @@ impl Error { /// When `testlib` is configured, this always returns `None` to avoid the dependency on a /// kernel function so that tests that use this (e.g., by calling [`Result::unwrap`]) can still /// run in userspace. - #[cfg(testlib)] + #[cfg(any(test, testlib))] pub fn name(&self) -> Option<&'static CStr> { None }
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Danilo Krummrich dakr@kernel.org
commit 909037ce0369bc3f4fd31743fd2d8d7096f06002 upstream.
Provide a simple helper function to check whether the given flags contain one or more other flags.
This is used by a subsequent patch implementing the Cmalloc `Allocator` to check for __GFP_ZERO.
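A minimal sketch of the intended use, assuming the kernel prelude (`wants_zeroing` is a hypothetical helper, not part of the patch):

    fn wants_zeroing(flags: Flags) -> bool {
        // `contains` is a mask-and-compare: every bit set in the argument must
        // also be set in `flags`, i.e. `(flags & __GFP_ZERO) == __GFP_ZERO`.
        flags.contains(__GFP_ZERO)
    }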
Reviewed-by: Alice Ryhl aliceryhl@google.com Reviewed-by: Benno Lossin benno.lossin@proton.me Reviewed-by: Gary Guo gary@garyguo.net Signed-off-by: Danilo Krummrich dakr@kernel.org Link: https://lore.kernel.org/r/20241004154149.93856-25-dakr@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/kernel/alloc.rs | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-)
--- a/rust/kernel/alloc.rs +++ b/rust/kernel/alloc.rs @@ -35,7 +35,7 @@ use core::{alloc::Layout, ptr::NonNull}; /// They can be combined with the operators `|`, `&`, and `!`. /// /// Values can be used from the [`flags`] module. -#[derive(Clone, Copy)] +#[derive(Clone, Copy, PartialEq)] pub struct Flags(u32);
impl Flags { @@ -43,6 +43,11 @@ impl Flags { pub(crate) fn as_raw(self) -> u32 { self.0 } + + /// Check whether `flags` is contained in `self`. + pub fn contains(self, flags: Flags) -> bool { + (self & flags) == flags + } }
impl core::ops::BitOr for Flags {
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Danilo Krummrich dakr@kernel.org
commit dd09538fb4093176a818fcecd45114430cc5840f upstream.
So far the kernel's `Box` and `Vec` types can't be used by userspace test cases, since all users of those types (e.g. `CString`) use kernel allocators for instantiation.
In order to allow userspace test cases to make use of such types as well, implement the `Cmalloc` allocator within the allocator_test module and alias all kernel allocator types to `Cmalloc`. The `Cmalloc` allocator implements the `Allocator::realloc` interface on top of libc's `aligned_alloc()` and `free()`.
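With the aliases in place, a userspace test can exercise the real `KVec` code paths; a hedged sketch (`cmalloc_smoke` is a hypothetical test, not part of the patch):

    #[test]
    fn cmalloc_smoke() {
        // In `rusttest`/doctest builds this allocation is served by libc via
        // `Cmalloc` rather than by `krealloc()`.
        let mut v: KVec<u8> = KVec::new();
        v.push(0x42, GFP_KERNEL).unwrap();
        assert_eq!(v.len(), 1);
    }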
Reviewed-by: Benno Lossin benno.lossin@proton.me Reviewed-by: Gary Guo gary@garyguo.net Signed-off-by: Danilo Krummrich dakr@kernel.org Link: https://lore.kernel.org/r/20241004154149.93856-26-dakr@kernel.org [ Removed the temporary `allow(dead_code)` as discussed in the list and fixed typo, added backticks. - Miguel ] Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/kernel/alloc.rs | 1 rust/kernel/alloc/allocator_test.rs | 89 ++++++++++++++++++++++++++++++++---- 2 files changed, 81 insertions(+), 9 deletions(-)
--- a/rust/kernel/alloc.rs +++ b/rust/kernel/alloc.rs @@ -215,7 +215,6 @@ pub unsafe trait Allocator { } }
-#[allow(dead_code)] /// Returns a properly aligned dangling pointer from the given `layout`. pub(crate) fn dangling_from_layout(layout: Layout) -> NonNull<u8> { let ptr = layout.align() as *mut u8; --- a/rust/kernel/alloc/allocator_test.rs +++ b/rust/kernel/alloc/allocator_test.rs @@ -1,22 +1,95 @@ // SPDX-License-Identifier: GPL-2.0
+//! So far the kernel's `Box` and `Vec` types can't be used by userspace test cases, since all users +//! of those types (e.g. `CString`) use kernel allocators for instantiation. +//! +//! In order to allow userspace test cases to make use of such types as well, implement the +//! `Cmalloc` allocator within the allocator_test module and type alias all kernel allocators to +//! `Cmalloc`. The `Cmalloc` allocator uses libc's `realloc()` function as allocator backend. + #![allow(missing_docs)]
-use super::{AllocError, Allocator, Flags}; +use super::{flags::*, AllocError, Allocator, Flags}; use core::alloc::Layout; +use core::cmp; +use core::ptr; use core::ptr::NonNull;
-pub struct Kmalloc; +/// The userspace allocator based on libc. +pub struct Cmalloc; + +pub type Kmalloc = Cmalloc; pub type Vmalloc = Kmalloc; pub type KVmalloc = Kmalloc;
-unsafe impl Allocator for Kmalloc { +extern "C" { + #[link_name = "aligned_alloc"] + fn libc_aligned_alloc(align: usize, size: usize) -> *mut core::ffi::c_void; + + #[link_name = "free"] + fn libc_free(ptr: *mut core::ffi::c_void); +} + +// SAFETY: +// - memory remains valid until it is explicitly freed, +// - passing a pointer to a valid memory allocation created by this `Allocator` is always OK, +// - `realloc` provides the guarantees as provided in the `# Guarantees` section. +unsafe impl Allocator for Cmalloc { unsafe fn realloc( - _ptr: Option<NonNull<u8>>, - _layout: Layout, - _old_layout: Layout, - _flags: Flags, + ptr: Option<NonNull<u8>>, + layout: Layout, + old_layout: Layout, + flags: Flags, ) -> Result<NonNull<[u8]>, AllocError> { - panic!(); + let src = match ptr { + Some(src) => { + if old_layout.size() == 0 { + ptr::null_mut() + } else { + src.as_ptr() + } + } + None => ptr::null_mut(), + }; + + if layout.size() == 0 { + // SAFETY: `src` is either NULL or was previously allocated with this `Allocator` + unsafe { libc_free(src.cast()) }; + + return Ok(NonNull::slice_from_raw_parts( + crate::alloc::dangling_from_layout(layout), + 0, + )); + } + + // SAFETY: Returns either NULL or a pointer to a memory allocation that satisfies or + // exceeds the given size and alignment requirements. + let dst = unsafe { libc_aligned_alloc(layout.align(), layout.size()) } as *mut u8; + let dst = NonNull::new(dst).ok_or(AllocError)?; + + if flags.contains(__GFP_ZERO) { + // SAFETY: The preceding calls to `libc_aligned_alloc` and `NonNull::new` + // guarantee that `dst` points to memory of at least `layout.size()` bytes. + unsafe { dst.as_ptr().write_bytes(0, layout.size()) }; + } + + if !src.is_null() { + // SAFETY: + // - `src` has previously been allocated with this `Allocator`; `dst` has just been + // newly allocated, hence the memory regions do not overlap. + // - both` src` and `dst` are properly aligned and valid for reads and writes + unsafe { + ptr::copy_nonoverlapping( + src, + dst.as_ptr(), + cmp::min(layout.size(), old_layout.size()), + ) + }; + } + + // SAFETY: `src` is either NULL or was previously allocated with this `Allocator` + unsafe { libc_free(src.cast()) }; + + Ok(NonNull::slice_from_raw_parts(dst, layout.size())) } }
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Danilo Krummrich dakr@kernel.org
commit eb6f92cd3f755c179204ea1f933b07cf992892fd upstream.
The current implementation of the tests in str.rs uses `format!` to format strings for comparison, which internally creates a new `String`.
In order to prepare for getting rid of Rust's alloc crate, we have to cut this dependency. Instead, implement `format!` for `CString`.
Note that for userspace tests, `Kmalloc`, which is backing `CString`'s memory, is just a type alias to `Cmalloc`.
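A hedged sketch of how the shim is used (a hypothetical extra test; the existing tests use the macro the same way):

    #[test]
    fn format_shim_example() {
        // The module-local `format!` yields a `&str` backed by a `CString`
        // allocated through `Kmalloc` (i.e. `Cmalloc` in userspace builds).
        assert_eq!(format!("{} {}", "hello", 42), "hello 42");
    }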
Reviewed-by: Alice Ryhl aliceryhl@google.com Reviewed-by: Benno Lossin benno.lossin@proton.me Reviewed-by: Gary Guo gary@garyguo.net Signed-off-by: Danilo Krummrich dakr@kernel.org Link: https://lore.kernel.org/r/20241004154149.93856-27-dakr@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/kernel/str.rs | 23 ++++++++++++++++++++++- 1 file changed, 22 insertions(+), 1 deletion(-)
--- a/rust/kernel/str.rs +++ b/rust/kernel/str.rs @@ -524,7 +524,28 @@ macro_rules! c_str { #[cfg(test)] mod tests { use super::*; - use alloc::format; + + struct String(CString); + + impl String { + fn from_fmt(args: fmt::Arguments<'_>) -> Self { + String(CString::try_from_fmt(args).unwrap()) + } + } + + impl Deref for String { + type Target = str; + + fn deref(&self) -> &str { + self.0.to_str().unwrap() + } + } + + macro_rules! format { + ($($f:tt)*) => ({ + &*String::from_fmt(kernel::fmt!($($f)*)) + }) + }
const ALL_ASCII_CHARS: &'static str = "\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f\
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Danilo Krummrich dakr@kernel.org
commit 8ae740c3917ff92108df17236b3cf1b9a74bd359 upstream.
Before we remove Rust's alloc crate, rewrite the module comment in alloc.rs to avoid a rustdoc warning.
Besides that, the module comment in alloc.rs isn't correct anymore, since we're no longer extending Rust's alloc crate.
Reviewed-by: Alice Ryhl aliceryhl@google.com Reviewed-by: Benno Lossin benno.lossin@proton.me Reviewed-by: Gary Guo gary@garyguo.net Signed-off-by: Danilo Krummrich dakr@kernel.org Link: https://lore.kernel.org/r/20241004154149.93856-28-dakr@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/kernel/alloc.rs | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/rust/kernel/alloc.rs +++ b/rust/kernel/alloc.rs @@ -1,6 +1,6 @@ // SPDX-License-Identifier: GPL-2.0
-//! Extensions to the [`alloc`] crate. +//! Implementation of the kernel's memory allocation infrastructure.
#[cfg(not(any(test, testlib)))] pub mod allocator;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Danilo Krummrich dakr@kernel.org
commit 392e34b6bc22077ef63abf62387ea3e9f39418c1 upstream.
Now that we have our own `Allocator`, `Box` and `Vec` types, we can remove Rust's `alloc` crate and the `new_uninit` unstable feature.
Also remove `Kmalloc`'s `GlobalAlloc` implementation -- we can't remove this in a separate patch, since the `alloc` crate requires a `#[global_allocator]` to be set that implements `GlobalAlloc`.
Signed-off-by: Danilo Krummrich dakr@kernel.org Link: https://lore.kernel.org/r/20241004154149.93856-29-dakr@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/Makefile | 43 +++++-------------------- rust/exports.c | 1 rust/kernel/alloc/allocator.rs | 65 +------------------------------------- scripts/Makefile.build | 4 +- scripts/generate_rust_analyzer.py | 11 +----- 5 files changed, 16 insertions(+), 108 deletions(-)
--- a/rust/Makefile +++ b/rust/Makefile @@ -15,8 +15,8 @@ always-$(CONFIG_RUST) += libmacros.so no-clean-files += libmacros.so
always-$(CONFIG_RUST) += bindings/bindings_generated.rs bindings/bindings_helpers_generated.rs -obj-$(CONFIG_RUST) += alloc.o bindings.o kernel.o -always-$(CONFIG_RUST) += exports_alloc_generated.h exports_helpers_generated.h \ +obj-$(CONFIG_RUST) += bindings.o kernel.o +always-$(CONFIG_RUST) += exports_helpers_generated.h \ exports_bindings_generated.h exports_kernel_generated.h
always-$(CONFIG_RUST) += uapi/uapi_generated.rs @@ -53,11 +53,6 @@ endif core-cfgs = \ --cfg no_fp_fmt_parse
-alloc-cfgs = \ - --cfg no_global_oom_handling \ - --cfg no_rc \ - --cfg no_sync - quiet_cmd_rustdoc = RUSTDOC $(if $(rustdoc_host),H, ) $< cmd_rustdoc = \ OBJTREE=$(abspath $(objtree)) \ @@ -81,7 +76,7 @@ quiet_cmd_rustdoc = RUSTDOC $(if $(rustd # command-like flags to solve the issue. Meanwhile, we use the non-custom case # and then retouch the generated files. rustdoc: rustdoc-core rustdoc-macros rustdoc-compiler_builtins \ - rustdoc-alloc rustdoc-kernel + rustdoc-kernel $(Q)cp $(srctree)/Documentation/images/logo.svg $(rustdoc_output)/static.files/ $(Q)cp $(srctree)/Documentation/images/COPYING-logo $(rustdoc_output)/static.files/ $(Q)find $(rustdoc_output) -name '*.html' -type f -print0 | xargs -0 sed -Ei \ @@ -108,20 +103,11 @@ rustdoc-core: $(RUST_LIB_SRC)/core/src/l rustdoc-compiler_builtins: $(src)/compiler_builtins.rs rustdoc-core FORCE +$(call if_changed,rustdoc)
-# We need to allow `rustdoc::broken_intra_doc_links` because some -# `no_global_oom_handling` functions refer to non-`no_global_oom_handling` -# functions. Ideally `rustdoc` would have a way to distinguish broken links -# due to things that are "configured out" vs. entirely non-existing ones. -rustdoc-alloc: private rustc_target_flags = $(alloc-cfgs) \ - -Arustdoc::broken_intra_doc_links -rustdoc-alloc: $(RUST_LIB_SRC)/alloc/src/lib.rs rustdoc-core rustdoc-compiler_builtins FORCE - +$(call if_changed,rustdoc) - -rustdoc-kernel: private rustc_target_flags = --extern alloc \ +rustdoc-kernel: private rustc_target_flags = \ --extern build_error --extern macros=$(objtree)/$(obj)/libmacros.so \ --extern bindings --extern uapi rustdoc-kernel: $(src)/kernel/lib.rs rustdoc-core rustdoc-macros \ - rustdoc-compiler_builtins rustdoc-alloc $(obj)/libmacros.so \ + rustdoc-compiler_builtins $(obj)/libmacros.so \ $(obj)/bindings.o FORCE +$(call if_changed,rustdoc)
@@ -165,7 +151,7 @@ quiet_cmd_rustdoc_test_kernel = RUSTDOC mkdir -p $(objtree)/$(obj)/test/doctests/kernel; \ OBJTREE=$(abspath $(objtree)) \ $(RUSTDOC) --test $(rust_flags) \ - -L$(objtree)/$(obj) --extern alloc --extern kernel \ + -L$(objtree)/$(obj) --extern kernel \ --extern build_error --extern macros \ --extern bindings --extern uapi \ --no-run --crate-name kernel -Zunstable-options \ @@ -201,7 +187,7 @@ rusttest-macros: $(src)/macros/lib.rs FO +$(call if_changed,rustc_test) +$(call if_changed,rustdoc_test)
-rusttest-kernel: private rustc_target_flags = --extern alloc \ +rusttest-kernel: private rustc_target_flags = \ --extern build_error --extern macros --extern bindings --extern uapi rusttest-kernel: $(src)/kernel/lib.rs \ rusttestlib-build_error rusttestlib-macros rusttestlib-bindings \ @@ -328,9 +314,6 @@ quiet_cmd_exports = EXPORTS $@ $(obj)/exports_core_generated.h: $(obj)/core.o FORCE $(call if_changed,exports)
-$(obj)/exports_alloc_generated.h: $(obj)/alloc.o FORCE - $(call if_changed,exports) - # Even though Rust kernel modules should never use the bindings directly, # symbols from the `bindings` crate and the C helpers need to be exported # because Rust generics and inlined functions may not get their code generated @@ -377,7 +360,7 @@ quiet_cmd_rustc_library = $(if $(skip_cl
rust-analyzer: $(Q)$(srctree)/scripts/generate_rust_analyzer.py \ - --cfgs='core=$(core-cfgs)' --cfgs='alloc=$(alloc-cfgs)' \ + --cfgs='core=$(core-cfgs)' \ $(realpath $(srctree)) $(realpath $(objtree)) \ $(rustc_sysroot) $(RUST_LIB_SRC) $(KBUILD_EXTMOD) > \ $(if $(KBUILD_EXTMOD),$(extmod_prefix),$(objtree))/rust-project.json @@ -415,12 +398,6 @@ $(obj)/compiler_builtins.o: private rust $(obj)/compiler_builtins.o: $(src)/compiler_builtins.rs $(obj)/core.o FORCE +$(call if_changed_rule,rustc_library)
-$(obj)/alloc.o: private skip_clippy = 1 -$(obj)/alloc.o: private skip_flags = -Wunreachable_pub -$(obj)/alloc.o: private rustc_target_flags = $(alloc-cfgs) -$(obj)/alloc.o: $(RUST_LIB_SRC)/alloc/src/lib.rs $(obj)/compiler_builtins.o FORCE - +$(call if_changed_rule,rustc_library) - $(obj)/build_error.o: $(src)/build_error.rs $(obj)/compiler_builtins.o FORCE +$(call if_changed_rule,rustc_library)
@@ -435,9 +412,9 @@ $(obj)/uapi.o: $(src)/uapi/lib.rs \ $(obj)/uapi/uapi_generated.rs FORCE +$(call if_changed_rule,rustc_library)
-$(obj)/kernel.o: private rustc_target_flags = --extern alloc \ +$(obj)/kernel.o: private rustc_target_flags = \ --extern build_error --extern macros --extern bindings --extern uapi -$(obj)/kernel.o: $(src)/kernel/lib.rs $(obj)/alloc.o $(obj)/build_error.o \ +$(obj)/kernel.o: $(src)/kernel/lib.rs $(obj)/build_error.o \ $(obj)/libmacros.so $(obj)/bindings.o $(obj)/uapi.o FORCE +$(call if_changed_rule,rustc_library)
--- a/rust/exports.c +++ b/rust/exports.c @@ -16,7 +16,6 @@ #define EXPORT_SYMBOL_RUST_GPL(sym) extern int sym; EXPORT_SYMBOL_GPL(sym)
#include "exports_core_generated.h" -#include "exports_alloc_generated.h" #include "exports_helpers_generated.h" #include "exports_bindings_generated.h" #include "exports_kernel_generated.h" --- a/rust/kernel/alloc/allocator.rs +++ b/rust/kernel/alloc/allocator.rs @@ -8,8 +8,8 @@ //! //! Reference: https://docs.kernel.org/core-api/memory-allocation.html
-use super::{flags::*, Flags}; -use core::alloc::{GlobalAlloc, Layout}; +use super::Flags; +use core::alloc::Layout; use core::ptr; use core::ptr::NonNull;
@@ -54,23 +54,6 @@ fn aligned_size(new_layout: Layout) -> u layout.size() }
-/// Calls `krealloc` with a proper size to alloc a new object aligned to `new_layout`'s alignment. -/// -/// # Safety -/// -/// - `ptr` can be either null or a pointer which has been allocated by this allocator. -/// - `new_layout` must have a non-zero size. -pub(crate) unsafe fn krealloc_aligned(ptr: *mut u8, new_layout: Layout, flags: Flags) -> *mut u8 { - let size = aligned_size(new_layout); - - // SAFETY: - // - `ptr` is either null or a pointer returned from a previous `k{re}alloc()` by the - // function safety requirement. - // - `size` is greater than 0 since it's from `layout.size()` (which cannot be zero according - // to the function safety requirement) - unsafe { bindings::krealloc(ptr as *const core::ffi::c_void, size, flags.0) as *mut u8 } -} - /// # Invariants /// /// One of the following: `krealloc`, `vrealloc`, `kvrealloc`. @@ -156,43 +139,6 @@ unsafe impl Allocator for Kmalloc { } }
-// SAFETY: TODO. -unsafe impl GlobalAlloc for Kmalloc { - unsafe fn alloc(&self, layout: Layout) -> *mut u8 { - // SAFETY: `ptr::null_mut()` is null and `layout` has a non-zero size by the function safety - // requirement. - unsafe { krealloc_aligned(ptr::null_mut(), layout, GFP_KERNEL) } - } - - unsafe fn dealloc(&self, ptr: *mut u8, _layout: Layout) { - // SAFETY: TODO. - unsafe { - bindings::kfree(ptr as *const core::ffi::c_void); - } - } - - unsafe fn realloc(&self, ptr: *mut u8, layout: Layout, new_size: usize) -> *mut u8 { - // SAFETY: - // - `new_size`, when rounded up to the nearest multiple of `layout.align()`, will not - // overflow `isize` by the function safety requirement. - // - `layout.align()` is a proper alignment (i.e. not zero and must be a power of two). - let layout = unsafe { Layout::from_size_align_unchecked(new_size, layout.align()) }; - - // SAFETY: - // - `ptr` is either null or a pointer allocated by this allocator by the function safety - // requirement. - // - the size of `layout` is not zero because `new_size` is not zero by the function safety - // requirement. - unsafe { krealloc_aligned(ptr, layout, GFP_KERNEL) } - } - - unsafe fn alloc_zeroed(&self, layout: Layout) -> *mut u8 { - // SAFETY: `ptr::null_mut()` is null and `layout` has a non-zero size by the function safety - // requirement. - unsafe { krealloc_aligned(ptr::null_mut(), layout, GFP_KERNEL | __GFP_ZERO) } - } -} - // SAFETY: `realloc` delegates to `ReallocFunc::call`, which guarantees that // - memory remains valid until it is explicitly freed, // - passing a pointer to a valid memory allocation is OK, @@ -240,10 +186,3 @@ unsafe impl Allocator for KVmalloc { unsafe { ReallocFunc::KVREALLOC.call(ptr, layout, old_layout, flags) } } } - -#[global_allocator] -static ALLOCATOR: Kmalloc = Kmalloc; - -// See https://github.com/rust-lang/rust/pull/86844. -#[no_mangle] -static __rust_no_alloc_shim_is_unstable: u8 = 0; --- a/scripts/Makefile.build +++ b/scripts/Makefile.build @@ -248,7 +248,7 @@ $(obj)/%.lst: $(obj)/%.c FORCE # Compile Rust sources (.rs) # ---------------------------------------------------------------------------
-rust_allowed_features := arbitrary_self_types,lint_reasons,new_uninit +rust_allowed_features := arbitrary_self_types,lint_reasons
# `--out-dir` is required to avoid temporaries being created by `rustc` in the # current working directory, which may be not accessible in the out-of-tree @@ -258,7 +258,7 @@ rust_common_cmd = \ -Zallow-features=$(rust_allowed_features) \ -Zcrate-attr=no_std \ -Zcrate-attr='feature($(rust_allowed_features))' \ - -Zunstable-options --extern force:alloc --extern kernel \ + -Zunstable-options --extern kernel \ --crate-type rlib -L $(objtree)/rust/ \ --crate-name $(basename $(notdir $@)) \ --sysroot=/dev/null \ --- a/scripts/generate_rust_analyzer.py +++ b/scripts/generate_rust_analyzer.py @@ -65,13 +65,6 @@ def generate_crates(srctree, objtree, sy )
append_crate( - "alloc", - sysroot_src / "alloc" / "src" / "lib.rs", - ["core", "compiler_builtins"], - cfg=crates_cfgs.get("alloc", []), - ) - - append_crate( "macros", srctree / "rust" / "macros" / "lib.rs", [], @@ -96,7 +89,7 @@ def generate_crates(srctree, objtree, sy append_crate( "kernel", srctree / "rust" / "kernel" / "lib.rs", - ["core", "alloc", "macros", "build_error", "bindings"], + ["core", "macros", "build_error", "bindings"], cfg=cfg, ) crates[-1]["source"] = { @@ -133,7 +126,7 @@ def generate_crates(srctree, objtree, sy append_crate( name, path, - ["core", "alloc", "kernel"], + ["core", "kernel"], cfg=cfg, )
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Danilo Krummrich dakr@kernel.org
commit 6ce162a002657910104c7a07fb50017681bc476c upstream.
Add a MAINTAINERS entry for the Rust `alloc` module.
Currently, this includes the `Allocator` API itself, `Allocator` implementations, such as `Kmalloc` or `Vmalloc`, as well as the kernel's implementation of the primary memory allocation data structures, `Box` and `Vec`.
Signed-off-by: Danilo Krummrich dakr@kernel.org Link: https://lore.kernel.org/r/20241004154149.93856-30-dakr@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- MAINTAINERS | 7 +++++++ 1 file changed, 7 insertions(+)
--- a/MAINTAINERS +++ b/MAINTAINERS @@ -20183,6 +20183,13 @@ F: scripts/*rust* F: tools/testing/selftests/rust/ K: \b(?i:rust)\b
+RUST [ALLOC] +M: Danilo Krummrich dakr@kernel.org +L: rust-for-linux@vger.kernel.org +S: Maintained +F: rust/kernel/alloc.rs +F: rust/kernel/alloc/ + RXRPC SOCKETS (AF_RXRPC) M: David Howells dhowells@redhat.com M: Marc Dionne marc.dionne@auristor.com
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Böhler witcher@wiredspace.de
commit c408dd81678bb0a957eae96962c913c242e069f7 upstream.
The `std::iter::Iterator` trait from Rust's standard library provides a function `find` that returns the first element satisfying a predicate. The function `Version::from_segments` does the same thing but implements the logic manually.
Clippy complains about this in the `manual_find` lint:
error: manual implementation of `Iterator::find` --> drivers/gpu/drm/drm_panic_qr.rs:212:9 | 212 | / for v in (1..=40).map(|k| Version(k)) { 213 | | if v.max_data() * 8 >= segments.iter().map(|s| s.total_size_bits(v)).sum() { 214 | | return Some(v); 215 | | } 216 | | } 217 | | None | |____________^ help: replace with an iterator: `(1..=40).map(|k| Version(k)).find(|&v| v.max_data() * 8 >= segments.iter().map(|s| s.total_size_bits(v)).sum())` | = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#manual_find = note: `-D clippy::manual-find` implied by `-D warnings` = help: to override `-D warnings` add `#[allow(clippy::manual_find)]`
Use `Iterator::find` instead to make the intention clearer.
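For reference, the general shape of the transformation (a generic sketch, not the driver code):

    // Manual loop that `clippy::manual_find` flags ...
    fn manual() -> Option<u32> {
        for x in 0..100 {
            if x * x > 50 {
                return Some(x);
            }
        }
        None
    }

    // ... and the equivalent single combinator chain.
    fn idiomatic() -> Option<u32> {
        (0..100).find(|&x| x * x > 50)
    }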
At the same time, clean up the redundant closure that Clippy warns about too:
error: redundant closure --> drivers/gpu/drm/drm_panic_qr.rs:212:31 | 212 | for v in (1..=40).map(|k| Version(k)) { | ^^^^^^^^^^^^^^ help: replace the closure with the function itself: `Version` | = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#redundant_closure = note: `-D clippy::redundant-closure` implied by `-D warnings` = help: to override `-D warnings` add `#[allow(clippy::redundant_closure)]`
Fixes: cb5164ac43d0 ("drm/panic: Add a QR code panic screen") Reported-by: Miguel Ojeda ojeda@kernel.org Link: https://github.com/Rust-for-Linux/linux/issues/1123 Signed-off-by: Thomas Böhler witcher@wiredspace.de Reviewed-by: Jocelyn Falempe jfalempe@redhat.com Link: https://lore.kernel.org/r/20241019084048.22336-2-witcher@wiredspace.de [ Reworded to mention the redundant closure cleanup too. - Miguel ] Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/drm_panic_qr.rs | 9 +++------ 1 file changed, 3 insertions(+), 6 deletions(-)
--- a/drivers/gpu/drm/drm_panic_qr.rs +++ b/drivers/gpu/drm/drm_panic_qr.rs @@ -209,12 +209,9 @@ const FORMAT_INFOS_QR_L: [u16; 8] = [ impl Version { /// Returns the smallest QR version than can hold these segments. fn from_segments(segments: &[&Segment<'_>]) -> Option<Version> { - for v in (1..=40).map(|k| Version(k)) { - if v.max_data() * 8 >= segments.iter().map(|s| s.total_size_bits(v)).sum() { - return Some(v); - } - } - None + (1..=40) + .map(Version) + .find(|&v| v.max_data() * 8 >= segments.iter().map(|s| s.total_size_bits(v)).sum()) }
fn width(&self) -> u8 {
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Böhler witcher@wiredspace.de
commit 7b6de57e0b2d1e62becfa3aac063c4c58d2c2c42 upstream.
The function `alignment_pattern` returns a static reference to a `u8` slice. The element returned from `ALIGNMENT_PATTERNS` is already a reference, as defined in the array definition above, so the extra borrow is unnecessary and is removed by the compiler. Clippy notes this in the `needless_borrow` lint:
error: this expression creates a reference which is immediately dereferenced by the compiler --> drivers/gpu/drm/drm_panic_qr.rs:245:9 | 245 | &ALIGNMENT_PATTERNS[self.0 - 1] | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: change this to: `ALIGNMENT_PATTERNS[self.0 - 1]` | = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_borrow = note: `-D clippy::needless-borrow` implied by `-D warnings` = help: to override `-D warnings` add `#[allow(clippy::needless_borrow)]`
Remove the unnecessary borrow.
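A generic sketch of the pattern (not the driver code):

    static TABLE: [&[u8]; 2] = [&[1, 2], &[3]];

    fn pattern(i: usize) -> &'static [u8] {
        // The element is already `&'static [u8]`; writing `&TABLE[i]` would
        // create a `&&[u8]` that the compiler immediately dereferences again.
        TABLE[i]
    }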
Fixes: cb5164ac43d0 ("drm/panic: Add a QR code panic screen") Reported-by: Miguel Ojeda ojeda@kernel.org Link: https://github.com/Rust-for-Linux/linux/issues/1123 Signed-off-by: Thomas Böhler witcher@wiredspace.de Reviewed-by: Jocelyn Falempe jfalempe@redhat.com Link: https://lore.kernel.org/r/20241019084048.22336-3-witcher@wiredspace.de Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/drm_panic_qr.rs | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/gpu/drm/drm_panic_qr.rs +++ b/drivers/gpu/drm/drm_panic_qr.rs @@ -239,7 +239,7 @@ impl Version { }
fn alignment_pattern(&self) -> &'static [u8] { - &ALIGNMENT_PATTERNS[self.0 - 1] + ALIGNMENT_PATTERNS[self.0 - 1] }
fn poly(&self) -> &'static [u8] {
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Böhler witcher@wiredspace.de
commit ae75c40117b53ae3d91dfc9d0bf06984a079f044 upstream.
Eliding lifetimes when possible instead of specifying them directly is both shorter and easier to read. Clippy notes this in the `needless_lifetimes` lint:
error: the following explicit lifetimes could be elided: 'b --> drivers/gpu/drm/drm_panic_qr.rs:479:16 | 479 | fn new<'a, 'b>(segments: &[&Segment<'b>], data: &'a mut [u8]) -> Option<EncodedMsg<'a>> { | ^^ ^^ | = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_lifetimes = note: `-D clippy::needless-lifetimes` implied by `-D warnings` = help: to override `-D warnings` add `#[allow(clippy::needless_lifetimes)]` help: elide the lifetimes | 479 - fn new<'a, 'b>(segments: &[&Segment<'b>], data: &'a mut [u8]) -> Option<EncodedMsg<'a>> { 479 + fn new<'a>(segments: &[&Segment<'_>], data: &'a mut [u8]) -> Option<EncodedMsg<'a>> { |
Remove the explicit lifetime annotation in favour of an elided lifetime.
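A minimal standalone sketch of the elision (hypothetical helper, not the kernel function); the second lifetime can be elided because it is not tied to the return type:

  // Toy type for illustration only.
  struct Segment<'a>(&'a [u8]);

  // Explicit form that `clippy::needless_lifetimes` warns about:
  //     fn prefix<'a, 'b>(segments: &[&Segment<'b>], out: &'a mut [u8]) -> Option<&'a [u8]>
  // Elided form:
  fn prefix<'a>(segments: &[&Segment<'_>], out: &'a mut [u8]) -> Option<&'a [u8]> {
      let len = segments.first()?.0.len().min(out.len());
      Some(&out[..len])
  }

  fn main() {
      let seg = Segment(&[1, 2, 3]);
      let mut buf = [0u8; 8];
      assert_eq!(prefix(&[&seg], &mut buf).map(|s| s.len()), Some(3));
  }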
Fixes: cb5164ac43d0 ("drm/panic: Add a QR code panic screen") Reported-by: Miguel Ojeda ojeda@kernel.org Link: https://github.com/Rust-for-Linux/linux/issues/1123 Signed-off-by: Thomas Böhler witcher@wiredspace.de Reviewed-by: Jocelyn Falempe jfalempe@redhat.com Link: https://lore.kernel.org/r/20241019084048.22336-4-witcher@wiredspace.de Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/drm_panic_qr.rs | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/gpu/drm/drm_panic_qr.rs +++ b/drivers/gpu/drm/drm_panic_qr.rs @@ -476,7 +476,7 @@ struct EncodedMsg<'a> { /// Data to be put in the QR code, with correct segment encoding, padding, and /// Error Code Correction. impl EncodedMsg<'_> { - fn new<'a, 'b>(segments: &[&Segment<'b>], data: &'a mut [u8]) -> Option<EncodedMsg<'a>> { + fn new<'a>(segments: &[&Segment<'_>], data: &'a mut [u8]) -> Option<EncodedMsg<'a>> { let version = Version::from_segments(segments)?; let ec_size = version.ec_size(); let g1_blocks = version.g1_blocks();
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Böhler witcher@wiredspace.de
commit da13129a3f2a75d49469e1d6f7dcefac2d11d205 upstream.
Rust allows initializing struct fields with a shorthand that omits the field name when the local variable being assigned has the same name. In this instance the shorthand is already used for all other fields of the struct except for `data`. Clippy notes the redundant field name:
error: redundant field names in struct initialization --> drivers/gpu/drm/drm_panic_qr.rs:495:13 | 495 | data: data, | ^^^^^^^^^^ help: replace it with: `data` | = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#redundant_field_na... = note: `-D clippy::redundant-field-names` implied by `-D warnings` = help: to override `-D warnings` add `#[allow(clippy::redundant_field_names)]`
Remove the redundant `data` in the assignment to be consistent.
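A minimal standalone sketch of the field init shorthand (toy struct, not the kernel type):

  // Toy struct for illustration only.
  struct EncodedMsg<'a> {
      data: &'a mut [u8],
      ec_size: usize,
  }

  fn encoded_msg(data: &mut [u8], ec_size: usize) -> EncodedMsg<'_> {
      // `data: data` would trip `clippy::redundant_field_names`; the
      // shorthand below is equivalent and consistent with the other field.
      EncodedMsg { data, ec_size }
  }

  fn main() {
      let mut buf = [0u8; 4];
      let em = encoded_msg(&mut buf, 2);
      assert_eq!(em.data.len() + em.ec_size, 6);
  }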
Fixes: cb5164ac43d0 ("drm/panic: Add a QR code panic screen") Reported-by: Miguel Ojeda ojeda@kernel.org Link: https://github.com/Rust-for-Linux/linux/issues/1123 Signed-off-by: Thomas Böhler witcher@wiredspace.de Reviewed-by: Jocelyn Falempe jfalempe@redhat.com Link: https://lore.kernel.org/r/20241019084048.22336-5-witcher@wiredspace.de [ Reworded to add Clippy warning like it is done in the rest of the series. - Miguel ] Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/drm_panic_qr.rs | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/gpu/drm/drm_panic_qr.rs +++ b/drivers/gpu/drm/drm_panic_qr.rs @@ -489,7 +489,7 @@ impl EncodedMsg<'_> { data.fill(0);
let mut em = EncodedMsg { - data: data, + data, ec_size, g1_blocks, g2_blocks,
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Böhler witcher@wiredspace.de
commit 5bb698e6fc514ddd9e23b6649b29a0934d8d8586 upstream.
It is common practice in Rust to indent a continuation line by the same amount of space as the previous one when both belong to the same doc-comment list item. Clippy checks for this with the lint `doc_lazy_continuation`.
error: doc list item without indentation --> drivers/gpu/drm/drm_panic_qr.rs:979:5 | 979 | /// conversion to numeric segments. | ^ | = help: if this is supposed to be its own paragraph, add a blank line = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#doc_lazy_continuat... = note: `-D clippy::doc-lazy-continuation` implied by `-D warnings` = help: to override `-D warnings` add `#[allow(clippy::doc_lazy_continuation)]` help: indent this line | 979 | /// conversion to numeric segments. | ++
Indent the offending line by 2 more spaces to remove this Clippy error.
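A minimal standalone sketch of the rule (hypothetical item, not the kernel function):

  /// Returns a placeholder value. Notes:
  ///
  /// * A list item whose text wraps onto the next line must indent the
  ///   continuation line, otherwise Clippy reports `doc_lazy_continuation`.
  /// * Single-line items need no special treatment.
  pub fn doc_example() -> usize {
      0
  }

  fn main() {
      assert_eq!(doc_example(), 0);
  }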
Fixes: cb5164ac43d0 ("drm/panic: Add a QR code panic screen") Reported-by: Miguel Ojeda ojeda@kernel.org Link: https://github.com/Rust-for-Linux/linux/issues/1123 Signed-off-by: Thomas Böhler witcher@wiredspace.de Reviewed-by: Jocelyn Falempe jfalempe@redhat.com Link: https://lore.kernel.org/r/20241019084048.22336-6-witcher@wiredspace.de [ Reworded to indent Clippy's message. - Miguel ] Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/drm_panic_qr.rs | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/gpu/drm/drm_panic_qr.rs +++ b/drivers/gpu/drm/drm_panic_qr.rs @@ -975,7 +975,7 @@ pub unsafe extern "C" fn drm_panic_qr_ge /// * `url_len`: Length of the URL. /// /// * If `url_len` > 0, remove the 2 segments header/length and also count the -/// conversion to numeric segments. +/// conversion to numeric segments. /// * If `url_len` = 0, only removes 3 bytes for 1 binary segment. #[no_mangle] pub extern "C" fn drm_panic_qr_max_data_size(version: u8, url_len: usize) -> usize {
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Böhler witcher@wiredspace.de
commit 27aef8a52e4b7f120ce47cd638d9d83065b759d2 upstream.
Clippy complains about a non-minimal boolean expression with `nonminimal_bool`:
error: this boolean expression can be simplified --> drivers/gpu/drm/drm_panic_qr.rs:722:9 | 722 | (x < 8 && y < 8) || (x < 8 && y >= end) || (x >= end && y < 8) | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#nonminimal_bool = note: `-D clippy::nonminimal-bool` implied by `-D warnings` = help: to override `-D warnings` add `#[allow(clippy::nonminimal_bool)]` help: try | 722 | !(x >= 8 || y >= 8 && y < end) || (x >= end && y < 8) | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 722 | (y >= end || y < 8) && x < 8 || (x >= end && y < 8) | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
While this suggestion can be useful in a lot of cases, it isn't here because the line clearly expresses the intention as written. Simplifying the expression would mean losing that clarity, so opt out of this lint for the offending line.
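A minimal standalone sketch of scoping the opt-out to just that expression (toy corner check, not the kernel code; `#[expect]` needs a toolchain where it is stable, Rust 1.81+):

  fn is_corner(x: u8, y: u8, end: u8) -> bool {
      #[expect(clippy::nonminimal_bool)]
      {
          (x < 8 && y < 8) || (x < 8 && y >= end) || (x >= end && y < 8)
      }
  }

  fn main() {
      assert!(is_corner(0, 0, 20));
      assert!(is_corner(0, 25, 20));
      assert!(!is_corner(10, 10, 20));
  }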
Fixes: cb5164ac43d0 ("drm/panic: Add a QR code panic screen") Reported-by: Miguel Ojeda ojeda@kernel.org Link: https://github.com/Rust-for-Linux/linux/issues/1123 Signed-off-by: Thomas Böhler witcher@wiredspace.de Reviewed-by: Jocelyn Falempe jfalempe@redhat.com Link: https://lore.kernel.org/r/20241019084048.22336-7-witcher@wiredspace.de Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/drm_panic_qr.rs | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
--- a/drivers/gpu/drm/drm_panic_qr.rs +++ b/drivers/gpu/drm/drm_panic_qr.rs @@ -719,7 +719,10 @@ impl QrImage<'_> {
fn is_finder(&self, x: u8, y: u8) -> bool { let end = self.width - 8; - (x < 8 && y < 8) || (x < 8 && y >= end) || (x >= end && y < 8) + #[expect(clippy::nonminimal_bool)] + { + (x < 8 && y < 8) || (x < 8 && y >= end) || (x >= end && y < 8) + } }
// Alignment pattern: 5x5 squares in a grid.
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Böhler witcher@wiredspace.de
commit 06b919e3fedf4798a1f0f60e0b67caa192f724a7 upstream.
Clippy warns about a reimplementation of `RangeInclusive::contains`:
error: manual `!RangeInclusive::contains` implementation --> drivers/gpu/drm/drm_panic_qr.rs:986:8 | 986 | if version < 1 || version > 40 { | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: use: `!(1..=40).contains(&version)` | = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#manual_range_conta... = note: `-D clippy::manual-range-contains` implied by `-D warnings` = help: to override `-D warnings` add `#[allow(clippy::manual_range_contains)]`
Ignore this and keep the current implementation as that makes it easier to read.
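A minimal standalone sketch of keeping the comparison form while acknowledging the lint (made-up capacity value, not the kernel formula):

  fn max_data_size(version: u8) -> usize {
      #[expect(clippy::manual_range_contains)]
      if version < 1 || version > 40 {
          return 0;
      }
      // Clippy's suggestion would be: if !(1..=40).contains(&version) { return 0; }
      usize::from(version) * 10 // made-up capacity, for illustration only
  }

  fn main() {
      assert_eq!(max_data_size(0), 0);
      assert_eq!(max_data_size(41), 0);
      assert_eq!(max_data_size(4), 40);
  }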
Fixes: cb5164ac43d0 ("drm/panic: Add a QR code panic screen") Reported-by: Miguel Ojeda ojeda@kernel.org Link: https://github.com/Rust-for-Linux/linux/issues/1123 Signed-off-by: Thomas Böhler witcher@wiredspace.de Reviewed-by: Jocelyn Falempe jfalempe@redhat.com Link: https://lore.kernel.org/r/20241019084048.22336-8-witcher@wiredspace.de Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/drm_panic_qr.rs | 1 + 1 file changed, 1 insertion(+)
--- a/drivers/gpu/drm/drm_panic_qr.rs +++ b/drivers/gpu/drm/drm_panic_qr.rs @@ -982,6 +982,7 @@ pub unsafe extern "C" fn drm_panic_qr_ge /// * If `url_len` = 0, only removes 3 bytes for 1 binary segment. #[no_mangle] pub extern "C" fn drm_panic_qr_max_data_size(version: u8, url_len: usize) -> usize { + #[expect(clippy::manual_range_contains)] if version < 1 || version > 40 { return 0; }
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: "Ethan D. Twardy" ethan.twardy@gmail.com
commit b2c261fa8629dff2bd1143fa790797a773ace102 upstream.
Previously, the rusttest target for the macros crate did not specify the dependencies necessary to run the rustdoc tests. These tests rely on the kernel crate, so add the dependencies.
Signed-off-by: Ethan D. Twardy ethan.twardy@gmail.com Link: https://github.com/Rust-for-Linux/linux/issues/1076 Link: https://lore.kernel.org/r/20240704145607.17732-2-ethan.twardy@gmail.com [ Rebased (`alloc` is gone nowadays, sysroot handling is simpler) and simplified (reused `rustdoc_test` rule instead of adding a new one, no need for `rustdoc-compiler_builtins`, removed unneeded `macros` explicit path). Made `vtable` example fail (avoiding to increase the complexity in the `rusttest` target). Removed unstable `-Zproc-macro-backtrace` option. Reworded accordingly. - Miguel ] Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/Makefile | 17 +++++++++++++---- rust/macros/lib.rs | 2 +- 2 files changed, 14 insertions(+), 5 deletions(-)
--- a/rust/Makefile +++ b/rust/Makefile @@ -129,6 +129,14 @@ rusttestlib-macros: private rustc_test_l rusttestlib-macros: $(src)/macros/lib.rs FORCE +$(call if_changed,rustc_test_library)
+rusttestlib-kernel: private rustc_target_flags = \ + --extern build_error --extern macros \ + --extern bindings --extern uapi +rusttestlib-kernel: $(src)/kernel/lib.rs \ + rusttestlib-bindings rusttestlib-uapi rusttestlib-build_error \ + $(obj)/libmacros.so $(obj)/bindings.o FORCE + +$(call if_changed,rustc_test_library) + rusttestlib-bindings: $(src)/bindings/lib.rs FORCE +$(call if_changed,rustc_test_library)
@@ -181,19 +189,20 @@ quiet_cmd_rustc_test = RUSTC T $<
rusttest: rusttest-macros rusttest-kernel
-rusttest-macros: private rustc_target_flags = --extern proc_macro +rusttest-macros: private rustc_target_flags = --extern proc_macro \ + --extern macros --extern kernel rusttest-macros: private rustdoc_test_target_flags = --crate-type proc-macro -rusttest-macros: $(src)/macros/lib.rs FORCE +rusttest-macros: $(src)/macros/lib.rs \ + rusttestlib-macros rusttestlib-kernel FORCE +$(call if_changed,rustc_test) +$(call if_changed,rustdoc_test)
rusttest-kernel: private rustc_target_flags = \ --extern build_error --extern macros --extern bindings --extern uapi -rusttest-kernel: $(src)/kernel/lib.rs \ +rusttest-kernel: $(src)/kernel/lib.rs rusttestlib-kernel \ rusttestlib-build_error rusttestlib-macros rusttestlib-bindings \ rusttestlib-uapi FORCE +$(call if_changed,rustc_test) - +$(call if_changed,rustc_test_library)
ifdef CONFIG_CC_IS_CLANG bindgen_c_flags = $(c_flags) --- a/rust/macros/lib.rs +++ b/rust/macros/lib.rs @@ -132,7 +132,7 @@ pub fn module(ts: TokenStream) -> TokenS /// calls to this function at compile time: /// /// ```compile_fail -/// # use kernel::error::VTABLE_DEFAULT_ERROR; +/// # // Intentionally missing `use`s to simplify `rusttest`. /// kernel::build_error(VTABLE_DEFAULT_ERROR) /// ``` ///
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Gary Guo gary@garyguo.net
commit 75c1fd41a671a0843b89d1526411a837a7163fa2 upstream.
Without `-fno-builtin`, for functions like memcpy/memmove (and many others), bindgen seems to be using the clang-provided prototype. This prototype is ABI-wise compatible, but the issue is that it does not have the same information as the source code w.r.t. typedefs.
For example, bindgen generates the following:
extern "C" { pub fn strlen(s: *const core::ffi::c_char) -> core::ffi::c_ulong; }
Note that the return type is `c_ulong` (i.e. unsigned long), despite the size_t-is-usize behavior (this is the default, and we have not opted out of it using --no-size_t-is-usize).
Similarly, memchr's size argument should be of type `__kernel_size_t`, but bindgen generates `c_ulong` directly.
We want to ensure any `size_t` is translated to Rust `usize` so that it does not end up as a different type on 32-bit and 64-bit architectures, which would require a lot of excessive type casts when calling FFI functions.
I found that this bindgen behavior (which is probably caused by libclang) can be disabled by `-fno-builtin`. Using the flag for compiled code can result in less optimisation because the compiler can no longer make assumptions about these functions' properties, but this should not affect bindgen.
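As a rough sketch of the cast burden being avoided (hypothetical caller code, not from the patch): with a `c_ulong` length the caller needs an explicit conversion before indexing, while a `usize` length can be used directly on every architecture.

  use core::ffi::c_ulong;

  // Hypothetical lengths as the two prototypes would hand them back.
  fn use_len_from_c_ulong(buf: &[u8], len: c_ulong) -> &[u8] {
      &buf[..len as usize] // cast required, and `c_ulong` differs per target
  }

  fn use_len_from_usize(buf: &[u8], len: usize) -> &[u8] {
      &buf[..len] // no cast needed
  }

  fn main() {
      let buf = [0u8; 8];
      assert_eq!(use_len_from_c_ulong(&buf, 4).len(), 4);
      assert_eq!(use_len_from_usize(&buf, 4).len(), 4);
  }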
[ Trevor asked: "I wonder how reliable this behavior is. Maybe bindgen could do a better job controlling this, is there an open issue?".
Gary replied: ..."apparently this is indeed the suggested approach in https://github.com/rust-lang/rust-bindgen/issues/1770". - Miguel ]
Signed-off-by: Gary Guo gary@garyguo.net Link: https://lore.kernel.org/r/20240913213041.395655-2-gary@garyguo.net [ Formatted comment. - Miguel ] Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/Makefile | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
--- a/rust/Makefile +++ b/rust/Makefile @@ -264,7 +264,11 @@ else bindgen_c_flags_lto = $(bindgen_c_flags) endif
-bindgen_c_flags_final = $(bindgen_c_flags_lto) -D__BINDGEN__ +# `-fno-builtin` is passed to avoid `bindgen` from using `clang` builtin +# prototypes for functions like `memcpy` -- if this flag is not passed, +# `bindgen`-generated prototypes use `c_ulong` or `c_uint` depending on +# architecture instead of generating `usize`. +bindgen_c_flags_final = $(bindgen_c_flags_lto) -fno-builtin -D__BINDGEN__
# Each `bindgen` release may upgrade the list of Rust target versions. By # default, the highest stable release in their list is used. Thus we need to set
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Gary Guo gary@garyguo.net
commit 2fd6f55c048d0c863ffbc8590b1bd2edb5ff13e5 upstream.
Currently bindgen has special logic to recognise `size_t` and `ssize_t` and map them to Rust `usize` and `isize`. Similarly, `ptrdiff_t` is mapped to `isize`.
However, this falls short for `__kernel_size_t`, `__kernel_ssize_t` and `__kernel_ptrdiff_t`. To ensure that they are mapped to usize/isize rather than to 32-bit or 64-bit integers depending on the platform, blocklist them in the bindgen parameters and provide their definitions manually.
Signed-off-by: Gary Guo gary@garyguo.net Reviewed-by: Alice Ryhl aliceryhl@google.com Reviewed-by: Trevor Gross tmgross@umich.edu Link: https://lore.kernel.org/r/20240913213041.395655-3-gary@garyguo.net [ Formatted comment. - Miguel ] Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/bindgen_parameters | 5 +++++ rust/bindings/lib.rs | 5 +++++ rust/uapi/lib.rs | 5 +++++ 3 files changed, 15 insertions(+)
--- a/rust/bindgen_parameters +++ b/rust/bindgen_parameters @@ -1,5 +1,10 @@ # SPDX-License-Identifier: GPL-2.0
+# We want to map these types to `isize`/`usize` manually, instead of +# define them as `int`/`long` depending on platform bitwidth. +--blocklist-type __kernel_s?size_t +--blocklist-type __kernel_ptrdiff_t + --opaque-type xregs_state --opaque-type desc_struct --opaque-type arch_lbr_state --- a/rust/bindings/lib.rs +++ b/rust/bindings/lib.rs @@ -27,6 +27,11 @@ #[allow(dead_code)] #[allow(clippy::undocumented_unsafe_blocks)] mod bindings_raw { + // Manual definition for blocklisted types. + type __kernel_size_t = usize; + type __kernel_ssize_t = isize; + type __kernel_ptrdiff_t = isize; + // Use glob import here to expose all helpers. // Symbols defined within the module will take precedence to the glob import. pub use super::bindings_helper::*; --- a/rust/uapi/lib.rs +++ b/rust/uapi/lib.rs @@ -25,4 +25,9 @@ unsafe_op_in_unsafe_fn )]
+// Manual definition of blocklisted types. +type __kernel_size_t = usize; +type __kernel_ssize_t = isize; +type __kernel_ptrdiff_t = isize; + include!(concat!(env!("OBJTREE"), "/rust/uapi/uapi_generated.rs"));
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Gary Guo gary@garyguo.net
commit d072acda4862f095ec9056979b654cc06a22cc68 upstream.
Currently FFI integer types are defined in libcore. This commit creates the `ffi` crate and asks bindgen to use that crate for FFI integer types instead of `core::ffi`.
This commit is preparatory; no type changes are made yet.
Signed-off-by: Gary Guo gary@garyguo.net Link: https://lore.kernel.org/r/20240913213041.395655-4-gary@garyguo.net [ Added `rustdoc`, `rusttest` and KUnit tests support. Rebased on top of `rust-next` (e.g. migrated more `core::ffi` cases). Reworded crate docs slightly and formatted. - Miguel ] Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/Makefile | 39 ++++++++++++++++++++++++------------ rust/ffi.rs | 13 ++++++++++++ rust/kernel/alloc/allocator.rs | 2 - rust/kernel/alloc/allocator_test.rs | 4 +-- rust/kernel/alloc/kbox.rs | 12 +++++------ rust/kernel/block/mq/operations.rs | 18 ++++++++-------- rust/kernel/block/mq/raw_writer.rs | 2 - rust/kernel/block/mq/tag_set.rs | 2 - rust/kernel/error.rs | 20 +++++++++--------- rust/kernel/init.rs | 2 - rust/kernel/lib.rs | 2 + rust/kernel/net/phy.rs | 16 +++++++------- rust/kernel/str.rs | 4 +-- rust/kernel/sync/arc.rs | 6 ++--- rust/kernel/sync/condvar.rs | 2 - rust/kernel/sync/lock.rs | 2 - rust/kernel/sync/lock/mutex.rs | 2 - rust/kernel/sync/lock/spinlock.rs | 2 - rust/kernel/task.rs | 8 +------ rust/kernel/time.rs | 4 +-- rust/kernel/types.rs | 14 ++++++------ rust/kernel/uaccess.rs | 6 ++--- rust/macros/module.rs | 8 +++---- 23 files changed, 107 insertions(+), 83 deletions(-) create mode 100644 rust/ffi.rs
--- a/rust/Makefile +++ b/rust/Makefile @@ -3,7 +3,7 @@ # Where to place rustdoc generated documentation rustdoc_output := $(objtree)/Documentation/output/rust/rustdoc
-obj-$(CONFIG_RUST) += core.o compiler_builtins.o +obj-$(CONFIG_RUST) += core.o compiler_builtins.o ffi.o always-$(CONFIG_RUST) += exports_core_generated.h
# Missing prototypes are expected in the helpers since these are exported @@ -103,10 +103,13 @@ rustdoc-core: $(RUST_LIB_SRC)/core/src/l rustdoc-compiler_builtins: $(src)/compiler_builtins.rs rustdoc-core FORCE +$(call if_changed,rustdoc)
-rustdoc-kernel: private rustc_target_flags = \ +rustdoc-ffi: $(src)/ffi.rs rustdoc-core FORCE + +$(call if_changed,rustdoc) + +rustdoc-kernel: private rustc_target_flags = --extern ffi \ --extern build_error --extern macros=$(objtree)/$(obj)/libmacros.so \ --extern bindings --extern uapi -rustdoc-kernel: $(src)/kernel/lib.rs rustdoc-core rustdoc-macros \ +rustdoc-kernel: $(src)/kernel/lib.rs rustdoc-core rustdoc-ffi rustdoc-macros \ rustdoc-compiler_builtins $(obj)/libmacros.so \ $(obj)/bindings.o FORCE +$(call if_changed,rustdoc) @@ -124,12 +127,15 @@ quiet_cmd_rustc_test_library = RUSTC TL rusttestlib-build_error: $(src)/build_error.rs FORCE +$(call if_changed,rustc_test_library)
+rusttestlib-ffi: $(src)/ffi.rs FORCE + +$(call if_changed,rustc_test_library) + rusttestlib-macros: private rustc_target_flags = --extern proc_macro rusttestlib-macros: private rustc_test_library_proc = yes rusttestlib-macros: $(src)/macros/lib.rs FORCE +$(call if_changed,rustc_test_library)
-rusttestlib-kernel: private rustc_target_flags = \ +rusttestlib-kernel: private rustc_target_flags = --extern ffi \ --extern build_error --extern macros \ --extern bindings --extern uapi rusttestlib-kernel: $(src)/kernel/lib.rs \ @@ -137,10 +143,12 @@ rusttestlib-kernel: $(src)/kernel/lib.rs $(obj)/libmacros.so $(obj)/bindings.o FORCE +$(call if_changed,rustc_test_library)
-rusttestlib-bindings: $(src)/bindings/lib.rs FORCE +rusttestlib-bindings: private rustc_target_flags = --extern ffi +rusttestlib-bindings: $(src)/bindings/lib.rs rusttestlib-ffi FORCE +$(call if_changed,rustc_test_library)
-rusttestlib-uapi: $(src)/uapi/lib.rs FORCE +rusttestlib-uapi: private rustc_target_flags = --extern ffi +rusttestlib-uapi: $(src)/uapi/lib.rs rusttestlib-ffi FORCE +$(call if_changed,rustc_test_library)
quiet_cmd_rustdoc_test = RUSTDOC T $< @@ -159,7 +167,7 @@ quiet_cmd_rustdoc_test_kernel = RUSTDOC mkdir -p $(objtree)/$(obj)/test/doctests/kernel; \ OBJTREE=$(abspath $(objtree)) \ $(RUSTDOC) --test $(rust_flags) \ - -L$(objtree)/$(obj) --extern kernel \ + -L$(objtree)/$(obj) --extern ffi --extern kernel \ --extern build_error --extern macros \ --extern bindings --extern uapi \ --no-run --crate-name kernel -Zunstable-options \ @@ -197,9 +205,9 @@ rusttest-macros: $(src)/macros/lib.rs \ +$(call if_changed,rustc_test) +$(call if_changed,rustdoc_test)
-rusttest-kernel: private rustc_target_flags = \ +rusttest-kernel: private rustc_target_flags = --extern ffi \ --extern build_error --extern macros --extern bindings --extern uapi -rusttest-kernel: $(src)/kernel/lib.rs rusttestlib-kernel \ +rusttest-kernel: $(src)/kernel/lib.rs rusttestlib-ffi rusttestlib-kernel \ rusttestlib-build_error rusttestlib-macros rusttestlib-bindings \ rusttestlib-uapi FORCE +$(call if_changed,rustc_test) @@ -286,7 +294,7 @@ bindgen_c_flags_final = $(bindgen_c_flag quiet_cmd_bindgen = BINDGEN $@ cmd_bindgen = \ $(BINDGEN) $< $(bindgen_target_flags) --rust-target 1.68 \ - --use-core --with-derive-default --ctypes-prefix core::ffi --no-layout-tests \ + --use-core --with-derive-default --ctypes-prefix ffi --no-layout-tests \ --no-debug '.*' --enable-function-attribute-detection \ -o $@ -- $(bindgen_c_flags_final) -DMODULE \ $(bindgen_target_cflags) $(bindgen_target_extra) @@ -414,18 +422,23 @@ $(obj)/compiler_builtins.o: $(src)/compi $(obj)/build_error.o: $(src)/build_error.rs $(obj)/compiler_builtins.o FORCE +$(call if_changed_rule,rustc_library)
+$(obj)/ffi.o: $(src)/ffi.rs $(obj)/compiler_builtins.o FORCE + +$(call if_changed_rule,rustc_library) + +$(obj)/bindings.o: private rustc_target_flags = --extern ffi $(obj)/bindings.o: $(src)/bindings/lib.rs \ - $(obj)/compiler_builtins.o \ + $(obj)/ffi.o \ $(obj)/bindings/bindings_generated.rs \ $(obj)/bindings/bindings_helpers_generated.rs FORCE +$(call if_changed_rule,rustc_library)
+$(obj)/uapi.o: private rustc_target_flags = --extern ffi $(obj)/uapi.o: $(src)/uapi/lib.rs \ - $(obj)/compiler_builtins.o \ + $(obj)/ffi.o \ $(obj)/uapi/uapi_generated.rs FORCE +$(call if_changed_rule,rustc_library)
-$(obj)/kernel.o: private rustc_target_flags = \ +$(obj)/kernel.o: private rustc_target_flags = --extern ffi \ --extern build_error --extern macros --extern bindings --extern uapi $(obj)/kernel.o: $(src)/kernel/lib.rs $(obj)/build_error.o \ $(obj)/libmacros.so $(obj)/bindings.o $(obj)/uapi.o FORCE --- /dev/null +++ b/rust/ffi.rs @@ -0,0 +1,13 @@ +// SPDX-License-Identifier: GPL-2.0 + +//! Foreign function interface (FFI) types. +//! +//! This crate provides mapping from C primitive types to Rust ones. +//! +//! The Rust [`core`] crate provides [`core::ffi`], which maps integer types to the platform default +//! C ABI. The kernel does not use [`core::ffi`], so it can customise the mapping that deviates from +//! the platform default. + +#![no_std] + +pub use core::ffi::*; --- a/rust/kernel/alloc/allocator.rs +++ b/rust/kernel/alloc/allocator.rs @@ -58,7 +58,7 @@ fn aligned_size(new_layout: Layout) -> u /// /// One of the following: `krealloc`, `vrealloc`, `kvrealloc`. struct ReallocFunc( - unsafe extern "C" fn(*const core::ffi::c_void, usize, u32) -> *mut core::ffi::c_void, + unsafe extern "C" fn(*const crate::ffi::c_void, usize, u32) -> *mut crate::ffi::c_void, );
impl ReallocFunc { --- a/rust/kernel/alloc/allocator_test.rs +++ b/rust/kernel/alloc/allocator_test.rs @@ -24,10 +24,10 @@ pub type KVmalloc = Kmalloc;
extern "C" { #[link_name = "aligned_alloc"] - fn libc_aligned_alloc(align: usize, size: usize) -> *mut core::ffi::c_void; + fn libc_aligned_alloc(align: usize, size: usize) -> *mut crate::ffi::c_void;
#[link_name = "free"] - fn libc_free(ptr: *mut core::ffi::c_void); + fn libc_free(ptr: *mut crate::ffi::c_void); }
// SAFETY: --- a/rust/kernel/alloc/kbox.rs +++ b/rust/kernel/alloc/kbox.rs @@ -355,17 +355,17 @@ where { type Borrowed<'a> = &'a T;
- fn into_foreign(self) -> *const core::ffi::c_void { + fn into_foreign(self) -> *const crate::ffi::c_void { Box::into_raw(self) as _ }
- unsafe fn from_foreign(ptr: *const core::ffi::c_void) -> Self { + unsafe fn from_foreign(ptr: *const crate::ffi::c_void) -> Self { // SAFETY: The safety requirements of this function ensure that `ptr` comes from a previous // call to `Self::into_foreign`. unsafe { Box::from_raw(ptr as _) } }
- unsafe fn borrow<'a>(ptr: *const core::ffi::c_void) -> &'a T { + unsafe fn borrow<'a>(ptr: *const crate::ffi::c_void) -> &'a T { // SAFETY: The safety requirements of this method ensure that the object remains alive and // immutable for the duration of 'a. unsafe { &*ptr.cast() } @@ -378,18 +378,18 @@ where { type Borrowed<'a> = Pin<&'a T>;
- fn into_foreign(self) -> *const core::ffi::c_void { + fn into_foreign(self) -> *const crate::ffi::c_void { // SAFETY: We are still treating the box as pinned. Box::into_raw(unsafe { Pin::into_inner_unchecked(self) }) as _ }
- unsafe fn from_foreign(ptr: *const core::ffi::c_void) -> Self { + unsafe fn from_foreign(ptr: *const crate::ffi::c_void) -> Self { // SAFETY: The safety requirements of this function ensure that `ptr` comes from a previous // call to `Self::into_foreign`. unsafe { Pin::new_unchecked(Box::from_raw(ptr as _)) } }
- unsafe fn borrow<'a>(ptr: *const core::ffi::c_void) -> Pin<&'a T> { + unsafe fn borrow<'a>(ptr: *const crate::ffi::c_void) -> Pin<&'a T> { // SAFETY: The safety requirements for this function ensure that the object is still alive, // so it is safe to dereference the raw pointer. // The safety requirements of `from_foreign` also ensure that the object remains alive for --- a/rust/kernel/block/mq/operations.rs +++ b/rust/kernel/block/mq/operations.rs @@ -131,7 +131,7 @@ impl<T: Operations> OperationsVTable<T> unsafe extern "C" fn poll_callback( _hctx: *mut bindings::blk_mq_hw_ctx, _iob: *mut bindings::io_comp_batch, - ) -> core::ffi::c_int { + ) -> crate::ffi::c_int { T::poll().into() }
@@ -145,9 +145,9 @@ impl<T: Operations> OperationsVTable<T> /// for the same context. unsafe extern "C" fn init_hctx_callback( _hctx: *mut bindings::blk_mq_hw_ctx, - _tagset_data: *mut core::ffi::c_void, - _hctx_idx: core::ffi::c_uint, - ) -> core::ffi::c_int { + _tagset_data: *mut crate::ffi::c_void, + _hctx_idx: crate::ffi::c_uint, + ) -> crate::ffi::c_int { from_result(|| Ok(0)) }
@@ -159,7 +159,7 @@ impl<T: Operations> OperationsVTable<T> /// This function may only be called by blk-mq C infrastructure. unsafe extern "C" fn exit_hctx_callback( _hctx: *mut bindings::blk_mq_hw_ctx, - _hctx_idx: core::ffi::c_uint, + _hctx_idx: crate::ffi::c_uint, ) { }
@@ -176,9 +176,9 @@ impl<T: Operations> OperationsVTable<T> unsafe extern "C" fn init_request_callback( _set: *mut bindings::blk_mq_tag_set, rq: *mut bindings::request, - _hctx_idx: core::ffi::c_uint, - _numa_node: core::ffi::c_uint, - ) -> core::ffi::c_int { + _hctx_idx: crate::ffi::c_uint, + _numa_node: crate::ffi::c_uint, + ) -> crate::ffi::c_int { from_result(|| { // SAFETY: By the safety requirements of this function, `rq` points // to a valid allocation. @@ -203,7 +203,7 @@ impl<T: Operations> OperationsVTable<T> unsafe extern "C" fn exit_request_callback( _set: *mut bindings::blk_mq_tag_set, rq: *mut bindings::request, - _hctx_idx: core::ffi::c_uint, + _hctx_idx: crate::ffi::c_uint, ) { // SAFETY: The tagset invariants guarantee that all requests are allocated with extra memory // for the request data. --- a/rust/kernel/block/mq/raw_writer.rs +++ b/rust/kernel/block/mq/raw_writer.rs @@ -25,7 +25,7 @@ impl<'a> RawWriter<'a> { }
pub(crate) fn from_array<const N: usize>( - a: &'a mut [core::ffi::c_char; N], + a: &'a mut [crate::ffi::c_char; N], ) -> Result<RawWriter<'a>> { Self::new( // SAFETY: the buffer of `a` is valid for read and write as `u8` for --- a/rust/kernel/block/mq/tag_set.rs +++ b/rust/kernel/block/mq/tag_set.rs @@ -53,7 +53,7 @@ impl<T: Operations> TagSet<T> { queue_depth: num_tags, cmd_size, flags: bindings::BLK_MQ_F_SHOULD_MERGE, - driver_data: core::ptr::null_mut::core::ffi::c_void(), + driver_data: core::ptr::null_mut::crate::ffi::c_void(), nr_maps: num_maps, ..tag_set } --- a/rust/kernel/error.rs +++ b/rust/kernel/error.rs @@ -100,7 +100,7 @@ impl Error { /// /// It is a bug to pass an out-of-range `errno`. `EINVAL` would /// be returned in such a case. - pub fn from_errno(errno: core::ffi::c_int) -> Error { + pub fn from_errno(errno: crate::ffi::c_int) -> Error { if errno < -(bindings::MAX_ERRNO as i32) || errno >= 0 { // TODO: Make it a `WARN_ONCE` once available. crate::pr_warn!( @@ -119,7 +119,7 @@ impl Error { /// Creates an [`Error`] from a kernel error code. /// /// Returns [`None`] if `errno` is out-of-range. - const fn try_from_errno(errno: core::ffi::c_int) -> Option<Error> { + const fn try_from_errno(errno: crate::ffi::c_int) -> Option<Error> { if errno < -(bindings::MAX_ERRNO as i32) || errno >= 0 { return None; } @@ -133,7 +133,7 @@ impl Error { /// # Safety /// /// `errno` must be within error code range (i.e. `>= -MAX_ERRNO && < 0`). - const unsafe fn from_errno_unchecked(errno: core::ffi::c_int) -> Error { + const unsafe fn from_errno_unchecked(errno: crate::ffi::c_int) -> Error { // INVARIANT: The contract ensures the type invariant // will hold. // SAFETY: The caller guarantees `errno` is non-zero. @@ -141,7 +141,7 @@ impl Error { }
/// Returns the kernel error code. - pub fn to_errno(self) -> core::ffi::c_int { + pub fn to_errno(self) -> crate::ffi::c_int { self.0.get() }
@@ -259,7 +259,7 @@ pub type Result<T = (), E = Error> = cor
/// Converts an integer as returned by a C kernel function to an error if it's negative, and /// `Ok(())` otherwise. -pub fn to_result(err: core::ffi::c_int) -> Result { +pub fn to_result(err: crate::ffi::c_int) -> Result { if err < 0 { Err(Error::from_errno(err)) } else { @@ -282,15 +282,15 @@ pub fn to_result(err: core::ffi::c_int) /// fn devm_platform_ioremap_resource( /// pdev: &mut PlatformDevice, /// index: u32, -/// ) -> Result<*mut core::ffi::c_void> { +/// ) -> Result<*mut kernel::ffi::c_void> { /// // SAFETY: `pdev` points to a valid platform device. There are no safety requirements /// // on `index`. /// from_err_ptr(unsafe { bindings::devm_platform_ioremap_resource(pdev.to_ptr(), index) }) /// } /// ``` pub fn from_err_ptr<T>(ptr: *mut T) -> Result<*mut T> { - // CAST: Casting a pointer to `*const core::ffi::c_void` is always valid. - let const_ptr: *const core::ffi::c_void = ptr.cast(); + // CAST: Casting a pointer to `*const crate::ffi::c_void` is always valid. + let const_ptr: *const crate::ffi::c_void = ptr.cast(); // SAFETY: The FFI function does not deref the pointer. if unsafe { bindings::IS_ERR(const_ptr) } { // SAFETY: The FFI function does not deref the pointer. @@ -306,7 +306,7 @@ pub fn from_err_ptr<T>(ptr: *mut T) -> R // // SAFETY: `IS_ERR()` ensures `err` is a // negative value greater-or-equal to `-bindings::MAX_ERRNO`. - return Err(unsafe { Error::from_errno_unchecked(err as core::ffi::c_int) }); + return Err(unsafe { Error::from_errno_unchecked(err as crate::ffi::c_int) }); } Ok(ptr) } @@ -326,7 +326,7 @@ pub fn from_err_ptr<T>(ptr: *mut T) -> R /// # use kernel::bindings; /// unsafe extern "C" fn probe_callback( /// pdev: *mut bindings::platform_device, -/// ) -> core::ffi::c_int { +/// ) -> kernel::ffi::c_int { /// from_result(|| { /// let ptr = devm_alloc(pdev)?; /// bindings::platform_set_drvdata(pdev, ptr); --- a/rust/kernel/init.rs +++ b/rust/kernel/init.rs @@ -133,7 +133,7 @@ //! # } //! # // `Error::from_errno` is `pub(crate)` in the `kernel` crate, thus provide a workaround. //! # trait FromErrno { -//! # fn from_errno(errno: core::ffi::c_int) -> Error { +//! # fn from_errno(errno: kernel::ffi::c_int) -> Error { //! # // Dummy error that can be constructed outside the `kernel` crate. //! # Error::from(core::fmt::Error) //! # } --- a/rust/kernel/lib.rs +++ b/rust/kernel/lib.rs @@ -27,6 +27,8 @@ compile_error!("Missing kernel configura // Allow proc-macros to refer to `::kernel` inside the `kernel` crate (this crate). extern crate self as kernel;
+pub use ffi; + pub mod alloc; #[cfg(CONFIG_BLOCK)] pub mod block; --- a/rust/kernel/net/phy.rs +++ b/rust/kernel/net/phy.rs @@ -314,7 +314,7 @@ impl<T: Driver> Adapter<T> { /// `phydev` must be passed by the corresponding callback in `phy_driver`. unsafe extern "C" fn soft_reset_callback( phydev: *mut bindings::phy_device, - ) -> core::ffi::c_int { + ) -> crate::ffi::c_int { from_result(|| { // SAFETY: This callback is called only in contexts // where we hold `phy_device->lock`, so the accessors on @@ -328,7 +328,7 @@ impl<T: Driver> Adapter<T> { /// # Safety /// /// `phydev` must be passed by the corresponding callback in `phy_driver`. - unsafe extern "C" fn probe_callback(phydev: *mut bindings::phy_device) -> core::ffi::c_int { + unsafe extern "C" fn probe_callback(phydev: *mut bindings::phy_device) -> crate::ffi::c_int { from_result(|| { // SAFETY: This callback is called only in contexts // where we can exclusively access `phy_device` because @@ -345,7 +345,7 @@ impl<T: Driver> Adapter<T> { /// `phydev` must be passed by the corresponding callback in `phy_driver`. unsafe extern "C" fn get_features_callback( phydev: *mut bindings::phy_device, - ) -> core::ffi::c_int { + ) -> crate::ffi::c_int { from_result(|| { // SAFETY: This callback is called only in contexts // where we hold `phy_device->lock`, so the accessors on @@ -359,7 +359,7 @@ impl<T: Driver> Adapter<T> { /// # Safety /// /// `phydev` must be passed by the corresponding callback in `phy_driver`. - unsafe extern "C" fn suspend_callback(phydev: *mut bindings::phy_device) -> core::ffi::c_int { + unsafe extern "C" fn suspend_callback(phydev: *mut bindings::phy_device) -> crate::ffi::c_int { from_result(|| { // SAFETY: The C core code ensures that the accessors on // `Device` are okay to call even though `phy_device->lock` @@ -373,7 +373,7 @@ impl<T: Driver> Adapter<T> { /// # Safety /// /// `phydev` must be passed by the corresponding callback in `phy_driver`. - unsafe extern "C" fn resume_callback(phydev: *mut bindings::phy_device) -> core::ffi::c_int { + unsafe extern "C" fn resume_callback(phydev: *mut bindings::phy_device) -> crate::ffi::c_int { from_result(|| { // SAFETY: The C core code ensures that the accessors on // `Device` are okay to call even though `phy_device->lock` @@ -389,7 +389,7 @@ impl<T: Driver> Adapter<T> { /// `phydev` must be passed by the corresponding callback in `phy_driver`. unsafe extern "C" fn config_aneg_callback( phydev: *mut bindings::phy_device, - ) -> core::ffi::c_int { + ) -> crate::ffi::c_int { from_result(|| { // SAFETY: This callback is called only in contexts // where we hold `phy_device->lock`, so the accessors on @@ -405,7 +405,7 @@ impl<T: Driver> Adapter<T> { /// `phydev` must be passed by the corresponding callback in `phy_driver`. unsafe extern "C" fn read_status_callback( phydev: *mut bindings::phy_device, - ) -> core::ffi::c_int { + ) -> crate::ffi::c_int { from_result(|| { // SAFETY: This callback is called only in contexts // where we hold `phy_device->lock`, so the accessors on @@ -421,7 +421,7 @@ impl<T: Driver> Adapter<T> { /// `phydev` must be passed by the corresponding callback in `phy_driver`. unsafe extern "C" fn match_phy_device_callback( phydev: *mut bindings::phy_device, - ) -> core::ffi::c_int { + ) -> crate::ffi::c_int { // SAFETY: This callback is called only in contexts // where we hold `phy_device->lock`, so the accessors on // `Device` are okay to call. --- a/rust/kernel/str.rs +++ b/rust/kernel/str.rs @@ -184,7 +184,7 @@ impl CStr { /// last at least `'a`. 
When `CStr` is alive, the memory pointed by `ptr` /// must not be mutated. #[inline] - pub unsafe fn from_char_ptr<'a>(ptr: *const core::ffi::c_char) -> &'a Self { // SAFETY: The safety precondition guarantees `ptr` is a valid pointer // to a `NUL`-terminated C string. let len = unsafe { bindings::strlen(ptr) } + 1; @@ -247,7 +247,7 @@ impl CStr {
/// Returns a C pointer to the string. #[inline] - pub const fn as_char_ptr(&self) -> *const core::ffi::c_char { + pub const fn as_char_ptr(&self) -> *const crate::ffi::c_char { self.0.as_ptr() as _ }
--- a/rust/kernel/sync/arc.rs +++ b/rust/kernel/sync/arc.rs @@ -332,11 +332,11 @@ impl<T: ?Sized> Arc<T> { impl<T: 'static> ForeignOwnable for Arc<T> { type Borrowed<'a> = ArcBorrow<'a, T>;
- fn into_foreign(self) -> *const core::ffi::c_void { + fn into_foreign(self) -> *const crate::ffi::c_void { ManuallyDrop::new(self).ptr.as_ptr() as _ }
- unsafe fn borrow<'a>(ptr: *const core::ffi::c_void) -> ArcBorrow<'a, T> { + unsafe fn borrow<'a>(ptr: *const crate::ffi::c_void) -> ArcBorrow<'a, T> { // By the safety requirement of this function, we know that `ptr` came from // a previous call to `Arc::into_foreign`. let inner = NonNull::new(ptr as *mut ArcInner<T>).unwrap(); @@ -346,7 +346,7 @@ impl<T: 'static> ForeignOwnable for Arc< unsafe { ArcBorrow::new(inner) } }
- unsafe fn from_foreign(ptr: *const core::ffi::c_void) -> Self { + unsafe fn from_foreign(ptr: *const crate::ffi::c_void) -> Self { // SAFETY: By the safety requirement of this function, we know that `ptr` came from // a previous call to `Arc::into_foreign`, which guarantees that `ptr` is valid and // holds a reference count increment that is transferrable to us. --- a/rust/kernel/sync/condvar.rs +++ b/rust/kernel/sync/condvar.rs @@ -7,6 +7,7 @@
use super::{lock::Backend, lock::Guard, LockClassKey}; use crate::{ + ffi::{c_int, c_long}, init::PinInit, pin_init, str::CStr, @@ -14,7 +15,6 @@ use crate::{ time::Jiffies, types::Opaque, }; -use core::ffi::{c_int, c_long}; use core::marker::PhantomPinned; use core::ptr; use macros::pin_data; --- a/rust/kernel/sync/lock.rs +++ b/rust/kernel/sync/lock.rs @@ -46,7 +46,7 @@ pub unsafe trait Backend { /// remain valid for read indefinitely. unsafe fn init( ptr: *mut Self::State, - name: *const core::ffi::c_char, + name: *const crate::ffi::c_char, key: *mut bindings::lock_class_key, );
--- a/rust/kernel/sync/lock/mutex.rs +++ b/rust/kernel/sync/lock/mutex.rs @@ -96,7 +96,7 @@ unsafe impl super::Backend for MutexBack
unsafe fn init( ptr: *mut Self::State, - name: *const core::ffi::c_char, + name: *const crate::ffi::c_char, key: *mut bindings::lock_class_key, ) { // SAFETY: The safety requirements ensure that `ptr` is valid for writes, and `name` and --- a/rust/kernel/sync/lock/spinlock.rs +++ b/rust/kernel/sync/lock/spinlock.rs @@ -95,7 +95,7 @@ unsafe impl super::Backend for SpinLockB
unsafe fn init( ptr: *mut Self::State, - name: *const core::ffi::c_char, + name: *const crate::ffi::c_char, key: *mut bindings::lock_class_key, ) { // SAFETY: The safety requirements ensure that `ptr` is valid for writes, and `name` and --- a/rust/kernel/task.rs +++ b/rust/kernel/task.rs @@ -4,13 +4,9 @@ //! //! C header: [`include/linux/sched.h`](srctree/include/linux/sched.h).
+use crate::ffi::{c_int, c_long, c_uint}; use crate::types::Opaque; -use core::{ - ffi::{c_int, c_long, c_uint}, - marker::PhantomData, - ops::Deref, - ptr, -}; +use core::{marker::PhantomData, ops::Deref, ptr};
/// A sentinel value used for infinite timeouts. pub const MAX_SCHEDULE_TIMEOUT: c_long = c_long::MAX; --- a/rust/kernel/time.rs +++ b/rust/kernel/time.rs @@ -12,10 +12,10 @@ pub const NSEC_PER_MSEC: i64 = bindings::NSEC_PER_MSEC as i64;
/// The time unit of Linux kernel. One jiffy equals (1/HZ) second. -pub type Jiffies = core::ffi::c_ulong; +pub type Jiffies = crate::ffi::c_ulong;
/// The millisecond time unit. -pub type Msecs = core::ffi::c_uint; +pub type Msecs = crate::ffi::c_uint;
/// Converts milliseconds to jiffies. #[inline] --- a/rust/kernel/types.rs +++ b/rust/kernel/types.rs @@ -29,7 +29,7 @@ pub trait ForeignOwnable: Sized { /// For example, it might be invalid, dangling or pointing to uninitialized memory. Using it in /// any way except for [`ForeignOwnable::from_foreign`], [`ForeignOwnable::borrow`], /// [`ForeignOwnable::try_from_foreign`] can result in undefined behavior. - fn into_foreign(self) -> *const core::ffi::c_void; + fn into_foreign(self) -> *const crate::ffi::c_void;
/// Borrows a foreign-owned object. /// @@ -37,7 +37,7 @@ pub trait ForeignOwnable: Sized { /// /// `ptr` must have been returned by a previous call to [`ForeignOwnable::into_foreign`] for /// which a previous matching [`ForeignOwnable::from_foreign`] hasn't been called yet. - unsafe fn borrow<'a>(ptr: *const core::ffi::c_void) -> Self::Borrowed<'a>; + unsafe fn borrow<'a>(ptr: *const crate::ffi::c_void) -> Self::Borrowed<'a>;
/// Converts a foreign-owned object back to a Rust-owned one. /// @@ -47,7 +47,7 @@ pub trait ForeignOwnable: Sized { /// which a previous matching [`ForeignOwnable::from_foreign`] hasn't been called yet. /// Additionally, all instances (if any) of values returned by [`ForeignOwnable::borrow`] for /// this object must have been dropped. - unsafe fn from_foreign(ptr: *const core::ffi::c_void) -> Self; + unsafe fn from_foreign(ptr: *const crate::ffi::c_void) -> Self;
/// Tries to convert a foreign-owned object back to a Rust-owned one. /// @@ -58,7 +58,7 @@ pub trait ForeignOwnable: Sized { /// /// `ptr` must either be null or satisfy the safety requirements for /// [`ForeignOwnable::from_foreign`]. - unsafe fn try_from_foreign(ptr: *const core::ffi::c_void) -> Option<Self> { + unsafe fn try_from_foreign(ptr: *const crate::ffi::c_void) -> Option<Self> { if ptr.is_null() { None } else { @@ -72,13 +72,13 @@ pub trait ForeignOwnable: Sized { impl ForeignOwnable for () { type Borrowed<'a> = ();
- fn into_foreign(self) -> *const core::ffi::c_void { + fn into_foreign(self) -> *const crate::ffi::c_void { core::ptr::NonNull::dangling().as_ptr() }
- unsafe fn borrow<'a>(_: *const core::ffi::c_void) -> Self::Borrowed<'a> {} + unsafe fn borrow<'a>(_: *const crate::ffi::c_void) -> Self::Borrowed<'a> {}
- unsafe fn from_foreign(_: *const core::ffi::c_void) -> Self {} + unsafe fn from_foreign(_: *const crate::ffi::c_void) -> Self {} }
/// Runs a cleanup function/closure when dropped. --- a/rust/kernel/uaccess.rs +++ b/rust/kernel/uaccess.rs @@ -8,10 +8,10 @@ use crate::{ alloc::Flags, bindings, error::Result, + ffi::{c_ulong, c_void}, prelude::*, types::{AsBytes, FromBytes}, }; -use core::ffi::{c_ulong, c_void}; use core::mem::{size_of, MaybeUninit};
/// The type used for userspace addresses. @@ -45,7 +45,7 @@ pub type UserPtr = usize; /// every byte in the region. /// /// ```no_run -/// use core::ffi::c_void; +/// use kernel::ffi::c_void; /// use kernel::error::Result; /// use kernel::uaccess::{UserPtr, UserSlice}; /// @@ -67,7 +67,7 @@ pub type UserPtr = usize; /// Example illustrating a TOCTOU (time-of-check to time-of-use) bug. /// /// ```no_run -/// use core::ffi::c_void; +/// use kernel::ffi::c_void; /// use kernel::error::{code::EINVAL, Result}; /// use kernel::uaccess::{UserPtr, UserSlice}; /// --- a/rust/macros/module.rs +++ b/rust/macros/module.rs @@ -253,7 +253,7 @@ pub(crate) fn module(ts: TokenStream) -> #[doc(hidden)] #[no_mangle] #[link_section = ".init.text"] - pub unsafe extern "C" fn init_module() -> core::ffi::c_int {{ + pub unsafe extern "C" fn init_module() -> kernel::ffi::c_int {{ // SAFETY: This function is inaccessible to the outside due to the double // module wrapping it. It is called exactly once by the C side via its // unique name. @@ -292,7 +292,7 @@ pub(crate) fn module(ts: TokenStream) -> #[doc(hidden)] #[link_section = "{initcall_section}"] #[used] - pub static __{name}_initcall: extern "C" fn() -> core::ffi::c_int = __{name}_init; + pub static __{name}_initcall: extern "C" fn() -> kernel::ffi::c_int = __{name}_init;
#[cfg(not(MODULE))] #[cfg(CONFIG_HAVE_ARCH_PREL32_RELOCATIONS)] @@ -307,7 +307,7 @@ pub(crate) fn module(ts: TokenStream) -> #[cfg(not(MODULE))] #[doc(hidden)] #[no_mangle] - pub extern "C" fn __{name}_init() -> core::ffi::c_int {{ + pub extern "C" fn __{name}_init() -> kernel::ffi::c_int {{ // SAFETY: This function is inaccessible to the outside due to the double // module wrapping it. It is called exactly once by the C side via its // placement above in the initcall section. @@ -330,7 +330,7 @@ pub(crate) fn module(ts: TokenStream) -> /// # Safety /// /// This function must only be called once. - unsafe fn __init() -> core::ffi::c_int {{ + unsafe fn __init() -> kernel::ffi::c_int {{ match <{type_} as kernel::Module>::init(&super::super::THIS_MODULE) {{ Ok(m) => {{ // SAFETY: No data race, since `__MOD` can only be accessed by this
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Asahi Lina lina@asahilina.net
commit b7ed2b6f4e8d7f64649795e76ee9db67300de8eb upstream.
We were accidentally allocating a layout for the *square* of the object size due to a variable shadowing mishap.
Fixes memory bloat and page allocation failures in drm/asahi.
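A minimal standalone sketch of the shadowing pattern (toy struct, not the kernel's `ArrayLayout`): the match binding reuses the name `len`, so the byte size ends up stored where the element count belongs, and any later `len * size_of::<T>()` step effectively squares the size.

  use core::mem::size_of;

  struct ArrayLen {
      len: usize, // intended to be the element count
  }

  fn new_buggy<T>(len: usize) -> Option<ArrayLen> {
      match len.checked_mul(size_of::<T>()) {
          // `len` here shadows the element count with the byte size.
          Some(len) if len <= isize::MAX as usize => Some(ArrayLen { len }),
          _ => None,
      }
  }

  fn new_fixed<T>(len: usize) -> Option<ArrayLen> {
      match len.checked_mul(size_of::<T>()) {
          // A distinct name keeps the outer `len` (the element count) intact.
          Some(size) if size <= isize::MAX as usize => Some(ArrayLen { len }),
          _ => None,
      }
  }

  fn main() {
      assert_eq!(new_buggy::<u32>(10).unwrap().len, 40); // byte size: wrong
      assert_eq!(new_fixed::<u32>(10).unwrap().len, 10); // element count: right
  }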
Reported-by: Janne Grunau j@jannau.net Fixes: 9e7bbfa18276 ("rust: alloc: introduce `ArrayLayout`") Signed-off-by: Asahi Lina lina@asahilina.net Acked-by: Danilo Krummrich dakr@kernel.org Reviewed-by: Neal Gompa neal@gompa.dev Link: https://lore.kernel.org/r/20241123-rust-fix-arraylayout-v1-1-197e64c95bd4@as... Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/kernel/alloc/layout.rs | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/rust/kernel/alloc/layout.rs +++ b/rust/kernel/alloc/layout.rs @@ -45,7 +45,7 @@ impl<T> ArrayLayout<T> { /// When `len * size_of::<T>()` overflows or when `len * size_of::<T>() > isize::MAX`. pub const fn new(len: usize) -> Result<Self, LayoutError> { match len.checked_mul(core::mem::size_of::<T>()) { - Some(len) if len <= ISIZE_MAX => { + Some(size) if size <= ISIZE_MAX => { // INVARIANT: We checked above that `len * size_of::<T>() <= isize::MAX`. Ok(Self { len,
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Rob Herring (Arm) robh@kernel.org
commit 75f1f311d883dfaffb98be3c1da208d6ed5d4df9 upstream.
This reverts commit 267b21d0bef8e67dbe6c591c9991444e58237ec9.
Turns out some DTs do depend on this behavior. Specifically, a downstream Pixel 6 DT. Revert the change at least until we can decide if the DT spec can be changed instead.
Cc: stable@vger.kernel.org Signed-off-by: Rob Herring (Arm) robh@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/of/of_reserved_mem.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/of/of_reserved_mem.c +++ b/drivers/of/of_reserved_mem.c @@ -360,12 +360,12 @@ static int __init __reserved_mem_alloc_s
prop = of_get_flat_dt_prop(node, "alignment", &len); if (prop) { - if (len != dt_root_size_cells * sizeof(__be32)) { + if (len != dt_root_addr_cells * sizeof(__be32)) { pr_err("invalid alignment property in '%s' node.\n", uname); return -EINVAL; } - align = dt_mem_next_cell(dt_root_size_cells, &prop); + align = dt_mem_next_cell(dt_root_addr_cells, &prop); }
nomap = of_get_flat_dt_prop(node, "no-map", NULL) != NULL;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Masami Hiramatsu (Google) mhiramat@kernel.org
commit ac965d7d88fc36fb42e3d50225c0a44dd8326da4 upstream.
Fix a memory leak when a tprobe is defined with $retval. This combination is not allowed, but parse_symbol_and_return() does not free *symbol, which must not be used if the function returns an error. Thus, the *symbol memory is leaked in that error path.
Link: https://lore.kernel.org/all/174055072650.4079315.3063014346697447838.stgit@m...
Fixes: ce51e6153f77 ("tracing: fprobe-event: Fix to check tracepoint event and return") Signed-off-by: Masami Hiramatsu (Google) mhiramat@kernel.org Reviewed-by: Steven Rostedt (Google) rostedt@goodmis.org Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- kernel/trace/trace_fprobe.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/kernel/trace/trace_fprobe.c +++ b/kernel/trace/trace_fprobe.c @@ -1025,6 +1025,8 @@ static int parse_symbol_and_return(int a if (is_tracepoint) { trace_probe_log_set_index(i); trace_probe_log_err(tmp - argv[i], RETVAL_ON_PROBE); + kfree(*symbol); + *symbol = NULL; return -EINVAL; } *is_return = true;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Masami Hiramatsu (Google) mhiramat@kernel.org
commit d0453655b6ddc685a4837f3cc0776ae8eef62d01 upstream.
Commit 57a7e6de9e30 ("tracing/fprobe: Support raw tracepoints on future loaded modules") allows the user to set a tprobe on a tracepoint that does not exist yet, but it does not check whether the tracepoint name is acceptable. As a result, a tprobe can be created with invalid characters in its event name (e.g. with a subsystem prefix), and in that case the event is not shown in the events directory.
Reject such invalid tracepoint names.
The tracepoint name must consist only of alphanumeric characters and '_'.
Link: https://lore.kernel.org/all/174055073461.4079315.15875502830565214255.stgit@...
Fixes: 57a7e6de9e30 ("tracing/fprobe: Support raw tracepoints on future loaded modules") Signed-off-by: Masami Hiramatsu (Google) mhiramat@kernel.org Reviewed-by: Steven Rostedt (Google) rostedt@goodmis.org Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- kernel/trace/trace_fprobe.c | 13 +++++++++++++ kernel/trace/trace_probe.h | 1 + 2 files changed, 14 insertions(+)
--- a/kernel/trace/trace_fprobe.c +++ b/kernel/trace/trace_fprobe.c @@ -1018,6 +1018,19 @@ static int parse_symbol_and_return(int a if (*is_return) return 0;
+ if (is_tracepoint) { + tmp = *symbol; + while (*tmp && (isalnum(*tmp) || *tmp == '_')) + tmp++; + if (*tmp) { + /* find a wrong character. */ + trace_probe_log_err(tmp - *symbol, BAD_TP_NAME); + kfree(*symbol); + *symbol = NULL; + return -EINVAL; + } + } + /* If there is $retval, this should be a return fprobe. */ for (i = 2; i < argc; i++) { tmp = strstr(argv[i], "$retval"); --- a/kernel/trace/trace_probe.h +++ b/kernel/trace/trace_probe.h @@ -481,6 +481,7 @@ extern int traceprobe_define_arg_fields( C(NON_UNIQ_SYMBOL, "The symbol is not unique"), \ C(BAD_RETPROBE, "Retprobe address must be an function entry"), \ C(NO_TRACEPOINT, "Tracepoint is not found"), \ + C(BAD_TP_NAME, "Invalid character in tracepoint name"),\ C(BAD_ADDR_SUFFIX, "Invalid probed address suffix"), \ C(NO_GROUP_NAME, "Group name is not specified"), \ C(GROUP_TOO_LONG, "Group name is too long"), \
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Philipp Stanner phasta@kernel.org
commit 00371a3f48775967950c2fe3ec97b7c786ca956d upstream.
pcim_iomap_regions() should receive the driver's name as its third parameter, not the PCI device's name.
Define the driver name with a macro and use it at the appropriate places, including pcim_iomap_regions().
Cc: stable@vger.kernel.org # v5.14+ Fixes: 30bba69d7db4 ("stmmac: pci: Add dwmac support for Loongson") Signed-off-by: Philipp Stanner phasta@kernel.org Reviewed-by: Andrew Lunn andrew@lunn.ch Reviewed-by: Yanteng Si si.yanteng@linux.dev Tested-by: Henry Chen chenx97@aosc.io Link: https://patch.msgid.link/20250226085208.97891-2-phasta@kernel.org Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c @@ -11,6 +11,8 @@ #include "dwmac_dma.h" #include "dwmac1000.h"
+#define DRIVER_NAME "dwmac-loongson-pci" + /* Normal Loongson Tx Summary */ #define DMA_INTR_ENA_NIE_TX_LOONGSON 0x00040000 /* Normal Loongson Rx Summary */ @@ -568,7 +570,7 @@ static int loongson_dwmac_probe(struct p for (i = 0; i < PCI_STD_NUM_BARS; i++) { if (pci_resource_len(pdev, i) == 0) continue; - ret = pcim_iomap_regions(pdev, BIT(0), pci_name(pdev)); + ret = pcim_iomap_regions(pdev, BIT(0), DRIVER_NAME); if (ret) goto err_disable_device; break; @@ -687,7 +689,7 @@ static const struct pci_device_id loongs MODULE_DEVICE_TABLE(pci, loongson_dwmac_id_table);
static struct pci_driver loongson_dwmac_driver = { - .name = "dwmac-loongson-pci", + .name = DRIVER_NAME, .id_table = loongson_dwmac_id_table, .probe = loongson_dwmac_probe, .remove = loongson_dwmac_remove,
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tiezhu Yang yangtiezhu@loongson.cn
commit da64a2359092ceec4f9dea5b329d0aef20104217 upstream.
When compiling on LoongArch, the following objtool warning is produced for arch/loongarch/kernel/machine_kexec.o:
kexec_reboot() falls through to next function crash_shutdown_secondary()
Avoid using unreachable() as it can (and will in the absence of UBSAN) generate fall-through code. Use BUG() so we get a "break BRK_BUG" trap (with unreachable annotation).
Cc: stable@vger.kernel.org # 6.12+ Acked-by: Josh Poimboeuf jpoimboe@kernel.org Signed-off-by: Tiezhu Yang yangtiezhu@loongson.cn Signed-off-by: Huacai Chen chenhuacai@loongson.cn Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/loongarch/kernel/machine_kexec.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/arch/loongarch/kernel/machine_kexec.c +++ b/arch/loongarch/kernel/machine_kexec.c @@ -126,14 +126,14 @@ void kexec_reboot(void) /* All secondary cpus go to kexec_smp_wait */ if (smp_processor_id() > 0) { relocated_kexec_smp_wait(NULL); - unreachable(); + BUG(); } #endif
do_kexec = (void *)reboot_code_buffer; do_kexec(efi_boot, cmdline_ptr, systable_ptr, start_addr, first_ind_entry);
- unreachable(); + BUG(); }
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Huacai Chen chenhuacai@loongson.cn
commit c9117434c8f7523f0b77db4c5766f5011cc94677 upstream.
When CONFIG_RANDOM_KMALLOC_CACHES or other randomization infrastructure is enabled, the idle_task's stack may differ between the booting kernel and the target kernel. So when resuming from hibernation, an ACTION_BOOT_CPU IPI wakes up the idle instruction in arch_cpu_idle_dead() and jumps to the interrupt handler. But since the stack pointer has changed, the interrupt handler cannot restore the correct context.
So rename the current arch_cpu_idle_dead() to idle_play_dead(), make it the default version of play_dead(), and have the new arch_cpu_idle_dead() call play_dead() directly. For hibernation, implement an arch-specific hibernate_resume_nonboot_cpu_disable() that uses the polling version of play_dead(), i.e. poll_play_dead() (the idle instruction is replaced by nop, and irqs are disabled), to avoid the IPI handler corrupting the idle_task's stack when resuming from hibernation.
This solution is a little similar to commit 406f992e4a372dafbe3c ("x86 / hibernate: Use hlt_play_dead() when resuming from hibernation").
Cc: stable@vger.kernel.org Tested-by: Erpeng Xu xuerpeng@uniontech.com Tested-by: Yuli Wang wangyuli@uniontech.com Signed-off-by: Huacai Chen chenhuacai@loongson.cn Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/loongarch/kernel/smp.c | 47 +++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 46 insertions(+), 1 deletion(-)
--- a/arch/loongarch/kernel/smp.c +++ b/arch/loongarch/kernel/smp.c @@ -19,6 +19,7 @@ #include <linux/smp.h> #include <linux/threads.h> #include <linux/export.h> +#include <linux/suspend.h> #include <linux/syscore_ops.h> #include <linux/time.h> #include <linux/tracepoint.h> @@ -423,7 +424,7 @@ void loongson_cpu_die(unsigned int cpu) mb(); }
-void __noreturn arch_cpu_idle_dead(void) +static void __noreturn idle_play_dead(void) { register uint64_t addr; register void (*init_fn)(void); @@ -447,6 +448,50 @@ void __noreturn arch_cpu_idle_dead(void) BUG(); }
+#ifdef CONFIG_HIBERNATION +static void __noreturn poll_play_dead(void) +{ + register uint64_t addr; + register void (*init_fn)(void); + + idle_task_exit(); + __this_cpu_write(cpu_state, CPU_DEAD); + + __smp_mb(); + do { + __asm__ __volatile__("nop\n\t"); + addr = iocsr_read64(LOONGARCH_IOCSR_MBUF0); + } while (addr == 0); + + init_fn = (void *)TO_CACHE(addr); + iocsr_write32(0xffffffff, LOONGARCH_IOCSR_IPI_CLEAR); + + init_fn(); + BUG(); +} +#endif + +static void (*play_dead)(void) = idle_play_dead; + +void __noreturn arch_cpu_idle_dead(void) +{ + play_dead(); + BUG(); /* play_dead() doesn't return */ +} + +#ifdef CONFIG_HIBERNATION +int hibernate_resume_nonboot_cpu_disable(void) +{ + int ret; + + play_dead = poll_play_dead; + ret = suspend_disable_secondary_cpus(); + play_dead = idle_play_dead; + + return ret; +} +#endif + #endif
/*
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bibo Mao maobibo@loongson.cn
commit c8477bb0a8e7f6b2e47952b403c5cb67a6929e55 upstream.
The current max_pfn equals zero. In this case, userspace cannot get some page information through the /proc filesystem, such as kpagecount. The following messages are displayed by the stress-ng test suite with the command "stress-ng --verbose --physpage 1 -t 1".
# stress-ng --verbose --physpage 1 -t 1 stress-ng: error: [1691] physpage: cannot read page count for address 0x134ac000 in /proc/kpagecount, errno=22 (Invalid argument) stress-ng: error: [1691] physpage: cannot read page count for address 0x7ffff207c3a8 in /proc/kpagecount, errno=22 (Invalid argument) stress-ng: error: [1691] physpage: cannot read page count for address 0x134b0000 in /proc/kpagecount, errno=22 (Invalid argument) ...
After applying this patch, the kernel can pass the test.
# stress-ng --verbose --physpage 1 -t 1 stress-ng: debug: [1701] physpage: [1701] started (instance 0 on CPU 3) stress-ng: debug: [1701] physpage: [1701] exited (instance 0 on CPU 3) stress-ng: debug: [1700] physpage: [1701] terminated (success)
Cc: stable@vger.kernel.org # 6.8+ Fixes: ff6c3d81f2e8 ("NUMA: optimize detection of memory with no node id assigned by firmware") Signed-off-by: Bibo Mao maobibo@loongson.cn Signed-off-by: Huacai Chen chenhuacai@loongson.cn Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/loongarch/kernel/setup.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/arch/loongarch/kernel/setup.c +++ b/arch/loongarch/kernel/setup.c @@ -387,6 +387,9 @@ static void __init check_kernel_sections */ static void __init arch_mem_init(char **cmdline_p) { + /* Recalculate max_low_pfn for "mem=xxx" */ + max_pfn = max_low_pfn = PHYS_PFN(memblock_end_of_DRAM()); + if (usermem) pr_info("User-defined physical RAM map overwrite\n");
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bibo Mao maobibo@loongson.cn
commit 6fb1867d5a44b0a061cf39d2492d23d314bcb8ce upstream.
There is a newly added macro INT_AVEC for the CSR ESTAT register, which is bit 14 used for LoongArch AVEC support. AVEC interrupt status bit 14 is covered by the macro CSR_ESTAT_IS, so replace the hard-coded value 0x1fff with CSR_ESTAT_IS here so that the AVEC interrupt status is also handled by KVM.
Cc: stable@vger.kernel.org Signed-off-by: Bibo Mao maobibo@loongson.cn Signed-off-by: Huacai Chen chenhuacai@loongson.cn Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/loongarch/kvm/vcpu.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/loongarch/kvm/vcpu.c +++ b/arch/loongarch/kvm/vcpu.c @@ -311,7 +311,7 @@ static int kvm_handle_exit(struct kvm_ru { int ret = RESUME_GUEST; unsigned long estat = vcpu->arch.host_estat; - u32 intr = estat & 0x1fff; /* Ignore NMI */ + u32 intr = estat & CSR_ESTAT_IS; u32 ecode = (estat & CSR_ESTAT_EXC) >> CSR_ESTAT_EXC_SHIFT;
vcpu->mode = OUTSIDE_GUEST_MODE;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bibo Mao maobibo@loongson.cn
commit 78d7bc5a02e1468df53896df354fa80727f35b7d upstream.
On the host, the HW guest CSR registers are lost after a suspend and resume operation. Since last_vcpu of the boot CPU still records the latest vCPU pointer, reloading of the guest CSR registers is skipped when the boot CPU resumes and a vCPU is scheduled.
Here last_vcpu is cleared so that the guest CSR registers will be reloaded from the scheduled vCPU context after suspend and resume.
Cc: stable@vger.kernel.org Signed-off-by: Bibo Mao maobibo@loongson.cn Signed-off-by: Huacai Chen chenhuacai@loongson.cn Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/loongarch/kvm/main.c | 7 +++++++ 1 file changed, 7 insertions(+)
--- a/arch/loongarch/kvm/main.c +++ b/arch/loongarch/kvm/main.c @@ -297,6 +297,13 @@ int kvm_arch_enable_virtualization_cpu(v kvm_debug("GCFG:%lx GSTAT:%lx GINTC:%lx GTLBC:%lx", read_csr_gcfg(), read_csr_gstat(), read_csr_gintc(), read_csr_gtlbc());
+ /* + * HW Guest CSR registers are lost after CPU suspend and resume. + * Clear last_vcpu so that Guest CSR registers forced to reload + * from vCPU SW state. + */ + this_cpu_ptr(vmcs)->last_vcpu = NULL; + return 0; }
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bibo Mao maobibo@loongson.cn
commit 6bdbb73dc8d99fbb77f5db79dbb6f108708090b4 upstream.
The physical address space is 48 bits on a Loongson-3A5000 physical machine, however it is only 47 bits for a VM on a Loongson-3A5000 system. The size of the VM's physical address space is the same as the size of the virtual user space (a half of the total) of the physical machine.
The variable cpu_vabits represents the user address space only; the kernel address space is not included (user space and kernel space are each a half of the total). So cpu_vabits, rather than cpu_vabits - 1, should be used to represent the size of the guest physical address space.
Also add strict checking of the page fault GPA address, injecting an error if it is larger than the maximum GPA address of the VM.
Cc: stable@vger.kernel.org Signed-off-by: Bibo Mao maobibo@loongson.cn Signed-off-by: Huacai Chen chenhuacai@loongson.cn Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/loongarch/kvm/exit.c | 6 ++++++ arch/loongarch/kvm/vm.c | 6 +++++- 2 files changed, 11 insertions(+), 1 deletion(-)
--- a/arch/loongarch/kvm/exit.c +++ b/arch/loongarch/kvm/exit.c @@ -624,6 +624,12 @@ static int kvm_handle_rdwr_fault(struct struct kvm_run *run = vcpu->run; unsigned long badv = vcpu->arch.badv;
+ /* Inject ADE exception if exceed max GPA size */ + if (unlikely(badv >= vcpu->kvm->arch.gpa_size)) { + kvm_queue_exception(vcpu, EXCCODE_ADE, EXSUBCODE_ADEM); + return RESUME_GUEST; + } + ret = kvm_handle_mm_fault(vcpu, badv, write); if (ret) { /* Treat as MMIO */ --- a/arch/loongarch/kvm/vm.c +++ b/arch/loongarch/kvm/vm.c @@ -46,7 +46,11 @@ int kvm_arch_init_vm(struct kvm *kvm, un if (kvm_pvtime_supported()) kvm->arch.pv_features |= BIT(KVM_FEATURE_STEAL_TIME);
- kvm->arch.gpa_size = BIT(cpu_vabits - 1); + /* + * cpu_vabits means user address space only (a half of total). + * GPA size of VM is the same with the size of user address space. + */ + kvm->arch.gpa_size = BIT(cpu_vabits); kvm->arch.root_level = CONFIG_PGTABLE_LEVELS - 1; kvm->arch.invalid_ptes[0] = 0; kvm->arch.invalid_ptes[1] = (unsigned long)invalid_pte_table;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Daniil Dulov d.dulov@aladdin.ru
commit 2ff5baa9b5275e3acafdf7f2089f74cccb2f38d1 upstream.
Syzkaller reports a NULL pointer dereference issue in input_event().
BUG: KASAN: null-ptr-deref in instrument_atomic_read include/linux/instrumented.h:68 [inline] BUG: KASAN: null-ptr-deref in _test_bit include/asm-generic/bitops/instrumented-non-atomic.h:141 [inline] BUG: KASAN: null-ptr-deref in is_event_supported drivers/input/input.c:67 [inline] BUG: KASAN: null-ptr-deref in input_event+0x42/0xa0 drivers/input/input.c:395 Read of size 8 at addr 0000000000000028 by task syz-executor199/2949
CPU: 0 UID: 0 PID: 2949 Comm: syz-executor199 Not tainted 6.13.0-rc4-syzkaller-00076-gf097a36ef88d #0 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024 Call Trace: <IRQ> __dump_stack lib/dump_stack.c:94 [inline] dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120 kasan_report+0xd9/0x110 mm/kasan/report.c:602 check_region_inline mm/kasan/generic.c:183 [inline] kasan_check_range+0xef/0x1a0 mm/kasan/generic.c:189 instrument_atomic_read include/linux/instrumented.h:68 [inline] _test_bit include/asm-generic/bitops/instrumented-non-atomic.h:141 [inline] is_event_supported drivers/input/input.c:67 [inline] input_event+0x42/0xa0 drivers/input/input.c:395 input_report_key include/linux/input.h:439 [inline] key_down drivers/hid/hid-appleir.c:159 [inline] appleir_raw_event+0x3e5/0x5e0 drivers/hid/hid-appleir.c:232 __hid_input_report.constprop.0+0x312/0x440 drivers/hid/hid-core.c:2111 hid_ctrl+0x49f/0x550 drivers/hid/usbhid/hid-core.c:484 __usb_hcd_giveback_urb+0x389/0x6e0 drivers/usb/core/hcd.c:1650 usb_hcd_giveback_urb+0x396/0x450 drivers/usb/core/hcd.c:1734 dummy_timer+0x17f7/0x3960 drivers/usb/gadget/udc/dummy_hcd.c:1993 __run_hrtimer kernel/time/hrtimer.c:1739 [inline] __hrtimer_run_queues+0x20a/0xae0 kernel/time/hrtimer.c:1803 hrtimer_run_softirq+0x17d/0x350 kernel/time/hrtimer.c:1820 handle_softirqs+0x206/0x8d0 kernel/softirq.c:561 __do_softirq kernel/softirq.c:595 [inline] invoke_softirq kernel/softirq.c:435 [inline] __irq_exit_rcu+0xfa/0x160 kernel/softirq.c:662 irq_exit_rcu+0x9/0x30 kernel/softirq.c:678 instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1049 [inline] sysvec_apic_timer_interrupt+0x90/0xb0 arch/x86/kernel/apic/apic.c:1049 </IRQ> <TASK> asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:702 __mod_timer+0x8f6/0xdc0 kernel/time/timer.c:1185 add_timer+0x62/0x90 kernel/time/timer.c:1295 schedule_timeout+0x11f/0x280 kernel/time/sleep_timeout.c:98 usbhid_wait_io+0x1c7/0x380 drivers/hid/usbhid/hid-core.c:645 usbhid_init_reports+0x19f/0x390 drivers/hid/usbhid/hid-core.c:784 hiddev_ioctl+0x1133/0x15b0 drivers/hid/usbhid/hiddev.c:794 vfs_ioctl fs/ioctl.c:51 [inline] __do_sys_ioctl fs/ioctl.c:906 [inline] __se_sys_ioctl fs/ioctl.c:892 [inline] __x64_sys_ioctl+0x190/0x200 fs/ioctl.c:892 do_syscall_x64 arch/x86/entry/common.c:52 [inline] do_syscall_64+0xcd/0x250 arch/x86/entry/common.c:83 entry_SYSCALL_64_after_hwframe+0x77/0x7f </TASK>
This happens due to malformed report items sent by the emulated device, which results in a report that has no fields being added to the report list. Because of this, appleir_input_configured() is never called and hidinput_connect() fails, which results in the HID_CLAIMED_INPUT flag not being set. However, this does not make appleir_probe() fail and lets the event callback be called without the associated input device.
Thus, add a check for the HID_CLAIMED_INPUT flag and leave the event hook early if the driver didn't claim any input_dev for some reason. Moreover, some other hid drivers accessing input_dev in their event callbacks do have similar checks, too.
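A minimal sketch of that guard pattern (hypothetical driver callback; only hid->claimed and HID_CLAIMED_INPUT are taken from the real HID API, the rest is illustrative):

#include <linux/hid.h>

static int example_raw_event(struct hid_device *hid,
			     struct hid_report *report, u8 *data, int size)
{
	/*
	 * If hidinput_connect() failed, no input device was claimed and
	 * any input_report_*() call would dereference a NULL input_dev.
	 */
	if (!(hid->claimed & HID_CLAIMED_INPUT))
		return 0;

	/* ... safe to forward events to the claimed input device here ... */
	return 0;
}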
Found by Linux Verification Center (linuxtesting.org) with Syzkaller.
Fixes: 9a4a5574ce42 ("HID: appleir: add support for Apple ir devices") Cc: stable@vger.kernel.org Signed-off-by: Daniil Dulov d.dulov@aladdin.ru Signed-off-by: Jiri Kosina jkosina@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/hid/hid-appleir.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/hid/hid-appleir.c +++ b/drivers/hid/hid-appleir.c @@ -188,7 +188,7 @@ static int appleir_raw_event(struct hid_ static const u8 flatbattery[] = { 0x25, 0x87, 0xe0 }; unsigned long flags;
- if (len != 5) + if (len != 5 || !(hid->claimed & HID_CLAIMED_INPUT)) goto out;
if (!memcmp(data, keydown, sizeof(keydown))) {
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Namjae Jeon linkinjeon@kernel.org
commit e2ff19f0b7a30e03516e6eb73b948e27a55bc9d2 upstream.
req->handle is allocated using ksmbd_acquire_id(&ipc_ida), based on ida_alloc. The req->handle from ksmbd_ipc_login_request and from the FSCTL_PIPE_TRANSCEIVE ioctl can be the same, which could lead to type confusion between messages, resulting in access to unexpected parts of memory after an incorrect delivery. ksmbd checks the type of the ipc response, but a continue is missing to go on and check the next ipc response.
Cc: stable@vger.kernel.org Reported-by: Norbert Szetei norbert@doyensec.com Tested-by: Norbert Szetei norbert@doyensec.com Signed-off-by: Namjae Jeon linkinjeon@kernel.org Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/smb/server/transport_ipc.c | 1 + 1 file changed, 1 insertion(+)
--- a/fs/smb/server/transport_ipc.c +++ b/fs/smb/server/transport_ipc.c @@ -281,6 +281,7 @@ static int handle_response(int type, voi if (entry->type + 1 != type) { pr_err("Waiting for IPC type %d, got %d. Ignore.\n", entry->type + 1, type); + continue; }
entry->response = kvzalloc(sz, GFP_KERNEL);
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Namjae Jeon linkinjeon@kernel.org
commit d6e13e19063db24f94b690159d0633aaf72a0f03 upstream.
osidoffset, gsidoffset and dacloffset should not be smaller than the size of the smb_ntsd struct; if one of them is smaller, it could cause a slab-out-of-bounds access. And when validating a sid, the check needs to include the size of the subauth array.
Cc: stable@vger.kernel.org Reported-by: Norbert Szetei norbert@doyensec.com Tested-by: Norbert Szetei norbert@doyensec.com Signed-off-by: Namjae Jeon linkinjeon@kernel.org Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/smb/server/smbacl.c | 16 ++++++++++++++++ 1 file changed, 16 insertions(+)
--- a/fs/smb/server/smbacl.c +++ b/fs/smb/server/smbacl.c @@ -807,6 +807,13 @@ static int parse_sid(struct smb_sid *psi return -EINVAL; }
+ if (!psid->num_subauth) + return 0; + + if (psid->num_subauth > SID_MAX_SUB_AUTHORITIES || + end_of_acl < (char *)psid + 8 + sizeof(__le32) * psid->num_subauth) + return -EINVAL; + return 0; }
@@ -848,6 +855,9 @@ int parse_sec_desc(struct mnt_idmap *idm pntsd->type = cpu_to_le16(DACL_PRESENT);
if (pntsd->osidoffset) { + if (le32_to_cpu(pntsd->osidoffset) < sizeof(struct smb_ntsd)) + return -EINVAL; + rc = parse_sid(owner_sid_ptr, end_of_acl); if (rc) { pr_err("%s: Error %d parsing Owner SID\n", __func__, rc); @@ -863,6 +873,9 @@ int parse_sec_desc(struct mnt_idmap *idm }
if (pntsd->gsidoffset) { + if (le32_to_cpu(pntsd->gsidoffset) < sizeof(struct smb_ntsd)) + return -EINVAL; + rc = parse_sid(group_sid_ptr, end_of_acl); if (rc) { pr_err("%s: Error %d mapping Owner SID to gid\n", @@ -884,6 +897,9 @@ int parse_sec_desc(struct mnt_idmap *idm pntsd->type |= cpu_to_le16(DACL_PROTECTED);
if (dacloffset) { + if (dacloffset < sizeof(struct smb_ntsd)) + return -EINVAL; + parse_dacl(idmap, dacl_ptr, end_of_acl, owner_sid_ptr, group_sid_ptr, fattr); }
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Namjae Jeon linkinjeon@kernel.org
commit 84d2d1641b71dec326e8736a749b7ee76a9599fc upstream.
If smb_lock->zero_len is set, ->llist of smb_lock is not deleted and flock is the old one. This will cause a use-after-free in the error handling routine.
Cc: stable@vger.kernel.org Reported-by: Norbert Szetei norbert@doyensec.com Tested-by: Norbert Szetei norbert@doyensec.com Signed-off-by: Namjae Jeon linkinjeon@kernel.org Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/smb/server/smb2pdu.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
--- a/fs/smb/server/smb2pdu.c +++ b/fs/smb/server/smb2pdu.c @@ -7441,13 +7441,13 @@ out_check_cl: }
no_check_cl: + flock = smb_lock->fl; + list_del(&smb_lock->llist); + if (smb_lock->zero_len) { err = 0; goto skip; } - - flock = smb_lock->fl; - list_del(&smb_lock->llist); retry: rc = vfs_lock_file(filp, smb_lock->cmd, flock, NULL); skip:
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Namjae Jeon linkinjeon@kernel.org
commit e26e2d2e15daf1ab33e0135caf2304a0cfa2744b upstream.
If the lock count is greater than 1, flags could hold an old value. The check should use the flags of smb_lock, not flags. Otherwise this will cause a bug-on trap from locks_free_lock in the error handling routine.
Cc: stable@vger.kernel.org Reported-by: Norbert Szetei norbert@doyensec.com Tested-by: Norbert Szetei norbert@doyensec.com Signed-off-by: Namjae Jeon linkinjeon@kernel.org Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/smb/server/smb2pdu.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/fs/smb/server/smb2pdu.c +++ b/fs/smb/server/smb2pdu.c @@ -7451,7 +7451,7 @@ no_check_cl: retry: rc = vfs_lock_file(filp, smb_lock->cmd, flock, NULL); skip: - if (flags & SMB2_LOCKFLAG_UNLOCK) { + if (smb_lock->flags & SMB2_LOCKFLAG_UNLOCK) { if (!rc) { ksmbd_debug(SMB, "File unlocked\n"); } else if (rc == -ENOENT) {
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Niklas Söderlund niklas.soderlund+renesas@ragnatech.se
commit f02c41f87cfe61440c18bf77d1ef0a884b9ee2b5 upstream.
Use raw_spinlock in order to fix spurious messages about invalid context when spinlock debugging is enabled. The lock is only used to serialize register access.
[ 4.239592] ============================= [ 4.239595] [ BUG: Invalid wait context ] [ 4.239599] 6.13.0-rc7-arm64-renesas-05496-gd088502a519f #35 Not tainted [ 4.239603] ----------------------------- [ 4.239606] kworker/u8:5/76 is trying to lock: [ 4.239609] ffff0000091898a0 (&p->lock){....}-{3:3}, at: gpio_rcar_config_interrupt_input_mode+0x34/0x164 [ 4.239641] other info that might help us debug this: [ 4.239643] context-{5:5} [ 4.239646] 5 locks held by kworker/u8:5/76: [ 4.239651] #0: ffff0000080fb148 ((wq_completion)async){+.+.}-{0:0}, at: process_one_work+0x190/0x62c [ 4.250180] OF: /soc/sound@ec500000/ports/port@0/endpoint: Read of boolean property 'frame-master' with a value. [ 4.254094] #1: ffff80008299bd80 ((work_completion)(&entry->work)){+.+.}-{0:0}, at: process_one_work+0x1b8/0x62c [ 4.254109] #2: ffff00000920c8f8 [ 4.258345] OF: /soc/sound@ec500000/ports/port@1/endpoint: Read of boolean property 'bitclock-master' with a value. [ 4.264803] (&dev->mutex){....}-{4:4}, at: __device_attach_async_helper+0x3c/0xdc [ 4.264820] #3: ffff00000a50ca40 (request_class#2){+.+.}-{4:4}, at: __setup_irq+0xa0/0x690 [ 4.264840] #4: [ 4.268872] OF: /soc/sound@ec500000/ports/port@1/endpoint: Read of boolean property 'frame-master' with a value. [ 4.273275] ffff00000a50c8c8 (lock_class){....}-{2:2}, at: __setup_irq+0xc4/0x690 [ 4.296130] renesas_sdhi_internal_dmac ee100000.mmc: mmc1 base at 0x00000000ee100000, max clock rate 200 MHz [ 4.304082] stack backtrace: [ 4.304086] CPU: 1 UID: 0 PID: 76 Comm: kworker/u8:5 Not tainted 6.13.0-rc7-arm64-renesas-05496-gd088502a519f #35 [ 4.304092] Hardware name: Renesas Salvator-X 2nd version board based on r8a77965 (DT) [ 4.304097] Workqueue: async async_run_entry_fn [ 4.304106] Call trace: [ 4.304110] show_stack+0x14/0x20 (C) [ 4.304122] dump_stack_lvl+0x6c/0x90 [ 4.304131] dump_stack+0x14/0x1c [ 4.304138] __lock_acquire+0xdfc/0x1584 [ 4.426274] lock_acquire+0x1c4/0x33c [ 4.429942] _raw_spin_lock_irqsave+0x5c/0x80 [ 4.434307] gpio_rcar_config_interrupt_input_mode+0x34/0x164 [ 4.440061] gpio_rcar_irq_set_type+0xd4/0xd8 [ 4.444422] __irq_set_trigger+0x5c/0x178 [ 4.448435] __setup_irq+0x2e4/0x690 [ 4.452012] request_threaded_irq+0xc4/0x190 [ 4.456285] devm_request_threaded_irq+0x7c/0xf4 [ 4.459398] ata1: link resume succeeded after 1 retries [ 4.460902] mmc_gpiod_request_cd_irq+0x68/0xe0 [ 4.470660] mmc_start_host+0x50/0xac [ 4.474327] mmc_add_host+0x80/0xe4 [ 4.477817] tmio_mmc_host_probe+0x2b0/0x440 [ 4.482094] renesas_sdhi_probe+0x488/0x6f4 [ 4.486281] renesas_sdhi_internal_dmac_probe+0x60/0x78 [ 4.491509] platform_probe+0x64/0xd8 [ 4.495178] really_probe+0xb8/0x2a8 [ 4.498756] __driver_probe_device+0x74/0x118 [ 4.503116] driver_probe_device+0x3c/0x154 [ 4.507303] __device_attach_driver+0xd4/0x160 [ 4.511750] bus_for_each_drv+0x84/0xe0 [ 4.515588] __device_attach_async_helper+0xb0/0xdc [ 4.520470] async_run_entry_fn+0x30/0xd8 [ 4.524481] process_one_work+0x210/0x62c [ 4.528494] worker_thread+0x1ac/0x340 [ 4.532245] kthread+0x10c/0x110 [ 4.535476] ret_from_fork+0x10/0x20
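As a minimal sketch of why the conversion helps (hypothetical register helper, not the gpio-rcar code): with lock debugging or PREEMPT_RT, a plain spinlock_t can sleep, so taking it from irq-chip callbacks raises the "Invalid wait context" splat above, while a raw_spinlock_t stays a true spinning lock and is fine for short register accesses.

#include <linux/spinlock.h>
#include <linux/io.h>

static DEFINE_RAW_SPINLOCK(example_reg_lock);

static void example_write_reg(void __iomem *base, u32 offset, u32 val)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&example_reg_lock, flags);	/* never sleeps */
	writel(val, base + offset);
	raw_spin_unlock_irqrestore(&example_reg_lock, flags);
}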
Signed-off-by: Niklas Söderlund niklas.soderlund+renesas@ragnatech.se Reviewed-by: Geert Uytterhoeven geert+renesas@glider.be Tested-by: Geert Uytterhoeven geert+renesas@glider.be Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20250121135833.3769310-1-niklas.soderlund+renesas@... Signed-off-by: Bartosz Golaszewski bartosz.golaszewski@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpio/gpio-rcar.c | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-)
--- a/drivers/gpio/gpio-rcar.c +++ b/drivers/gpio/gpio-rcar.c @@ -40,7 +40,7 @@ struct gpio_rcar_info {
struct gpio_rcar_priv { void __iomem *base; - spinlock_t lock; + raw_spinlock_t lock; struct device *dev; struct gpio_chip gpio_chip; unsigned int irq_parent; @@ -123,7 +123,7 @@ static void gpio_rcar_config_interrupt_i * "Setting Level-Sensitive Interrupt Input Mode" */
- spin_lock_irqsave(&p->lock, flags); + raw_spin_lock_irqsave(&p->lock, flags);
/* Configure positive or negative logic in POSNEG */ gpio_rcar_modify_bit(p, POSNEG, hwirq, !active_high_rising_edge); @@ -142,7 +142,7 @@ static void gpio_rcar_config_interrupt_i if (!level_trigger) gpio_rcar_write(p, INTCLR, BIT(hwirq));
- spin_unlock_irqrestore(&p->lock, flags); + raw_spin_unlock_irqrestore(&p->lock, flags); }
static int gpio_rcar_irq_set_type(struct irq_data *d, unsigned int type) @@ -246,7 +246,7 @@ static void gpio_rcar_config_general_inp * "Setting General Input Mode" */
- spin_lock_irqsave(&p->lock, flags); + raw_spin_lock_irqsave(&p->lock, flags);
/* Configure positive logic in POSNEG */ gpio_rcar_modify_bit(p, POSNEG, gpio, false); @@ -261,7 +261,7 @@ static void gpio_rcar_config_general_inp if (p->info.has_outdtsel && output) gpio_rcar_modify_bit(p, OUTDTSEL, gpio, false);
- spin_unlock_irqrestore(&p->lock, flags); + raw_spin_unlock_irqrestore(&p->lock, flags); }
static int gpio_rcar_request(struct gpio_chip *chip, unsigned offset) @@ -347,7 +347,7 @@ static int gpio_rcar_get_multiple(struct return 0; }
- spin_lock_irqsave(&p->lock, flags); + raw_spin_lock_irqsave(&p->lock, flags); outputs = gpio_rcar_read(p, INOUTSEL); m = outputs & bankmask; if (m) @@ -356,7 +356,7 @@ static int gpio_rcar_get_multiple(struct m = ~outputs & bankmask; if (m) val |= gpio_rcar_read(p, INDT) & m; - spin_unlock_irqrestore(&p->lock, flags); + raw_spin_unlock_irqrestore(&p->lock, flags);
bits[0] = val; return 0; @@ -367,9 +367,9 @@ static void gpio_rcar_set(struct gpio_ch struct gpio_rcar_priv *p = gpiochip_get_data(chip); unsigned long flags;
- spin_lock_irqsave(&p->lock, flags); + raw_spin_lock_irqsave(&p->lock, flags); gpio_rcar_modify_bit(p, OUTDT, offset, value); - spin_unlock_irqrestore(&p->lock, flags); + raw_spin_unlock_irqrestore(&p->lock, flags); }
static void gpio_rcar_set_multiple(struct gpio_chip *chip, unsigned long *mask, @@ -386,12 +386,12 @@ static void gpio_rcar_set_multiple(struc if (!bankmask) return;
- spin_lock_irqsave(&p->lock, flags); + raw_spin_lock_irqsave(&p->lock, flags); val = gpio_rcar_read(p, OUTDT); val &= ~bankmask; val |= (bankmask & bits[0]); gpio_rcar_write(p, OUTDT, val); - spin_unlock_irqrestore(&p->lock, flags); + raw_spin_unlock_irqrestore(&p->lock, flags); }
static int gpio_rcar_direction_output(struct gpio_chip *chip, unsigned offset, @@ -505,7 +505,7 @@ static int gpio_rcar_probe(struct platfo return -ENOMEM;
p->dev = dev; - spin_lock_init(&p->lock); + raw_spin_lock_init(&p->lock);
/* Get device configuration from DT node */ ret = gpio_rcar_parse_dt(p, &npins);
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Koichiro Den koichiro.den@canonical.com
commit 12f65d1203507f7db3ba59930fe29a3b8eee9945 upstream.
Both new_device_store and delete_device_store touch module global resources (e.g. gpio_aggregator_lock). To prevent race conditions with module unload, a reference needs to be held.
Add try_module_get() in these handlers.
For new_device_store, this eliminates what appears to be the most dangerous scenario: if an id is allocated from gpio_aggregator_idr but platform_device_register has not yet been called or completed, a concurrent module unload could fail to unregister/delete the device, leaving behind a dangling platform device/GPIO forwarder. This can result in various issues. The following simple reproducer demonstrates these problems:
#!/bin/bash
while :; do
	# note: whether 'gpiochip0 0' exists or not does not matter.
	echo 'gpiochip0 0' > /sys/bus/platform/drivers/gpio-aggregator/new_device
done &
while :; do
	modprobe gpio-aggregator
	modprobe -r gpio-aggregator
done &
wait
Starting with the following warning, several kinds of warnings will appear and the system may become unstable:
------------[ cut here ]------------ list_del corruption, ffff888103e2e980->next is LIST_POISON1 (dead000000000100) WARNING: CPU: 1 PID: 1327 at lib/list_debug.c:56 __list_del_entry_valid_or_report+0xa3/0x120 [...] RIP: 0010:__list_del_entry_valid_or_report+0xa3/0x120 [...] Call Trace: <TASK> ? __list_del_entry_valid_or_report+0xa3/0x120 ? __warn.cold+0x93/0xf2 ? __list_del_entry_valid_or_report+0xa3/0x120 ? report_bug+0xe6/0x170 ? __irq_work_queue_local+0x39/0xe0 ? handle_bug+0x58/0x90 ? exc_invalid_op+0x13/0x60 ? asm_exc_invalid_op+0x16/0x20 ? __list_del_entry_valid_or_report+0xa3/0x120 gpiod_remove_lookup_table+0x22/0x60 new_device_store+0x315/0x350 [gpio_aggregator] kernfs_fop_write_iter+0x137/0x1f0 vfs_write+0x262/0x430 ksys_write+0x60/0xd0 do_syscall_64+0x6c/0x180 entry_SYSCALL_64_after_hwframe+0x76/0x7e [...] </TASK> ---[ end trace 0000000000000000 ]---
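A rough sketch of the pattern the patch adds (hypothetical sysfs driver-attribute handler, not the gpio-aggregator code): the store callback pins its own module for its whole duration, so a concurrent rmmod cannot tear down the global state it is using.

#include <linux/module.h>
#include <linux/device.h>

static ssize_t example_store(struct device_driver *drv,
			     const char *buf, size_t count)
{
	int res = 0;

	if (!try_module_get(THIS_MODULE))	/* module is being unloaded */
		return -ENOENT;

	/* ... parse buf, allocate and register the new device here ... */

	module_put(THIS_MODULE);
	return res < 0 ? res : count;
}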
Fixes: 828546e24280 ("gpio: Add GPIO Aggregator") Cc: stable@vger.kernel.org Signed-off-by: Koichiro Den koichiro.den@canonical.com Link: https://lore.kernel.org/r/20250224143134.3024598-2-koichiro.den@canonical.co... Signed-off-by: Bartosz Golaszewski bartosz.golaszewski@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpio/gpio-aggregator.c | 20 +++++++++++++++++--- 1 file changed, 17 insertions(+), 3 deletions(-)
--- a/drivers/gpio/gpio-aggregator.c +++ b/drivers/gpio/gpio-aggregator.c @@ -121,10 +121,15 @@ static ssize_t new_device_store(struct d struct platform_device *pdev; int res, id;
+ if (!try_module_get(THIS_MODULE)) + return -ENOENT; + /* kernfs guarantees string termination, so count + 1 is safe */ aggr = kzalloc(sizeof(*aggr) + count + 1, GFP_KERNEL); - if (!aggr) - return -ENOMEM; + if (!aggr) { + res = -ENOMEM; + goto put_module; + }
memcpy(aggr->args, buf, count + 1);
@@ -163,6 +168,7 @@ static ssize_t new_device_store(struct d }
aggr->pdev = pdev; + module_put(THIS_MODULE); return count;
remove_table: @@ -177,6 +183,8 @@ free_table: kfree(aggr->lookups); free_ga: kfree(aggr); +put_module: + module_put(THIS_MODULE); return res; }
@@ -205,13 +213,19 @@ static ssize_t delete_device_store(struc if (error) return error;
+ if (!try_module_get(THIS_MODULE)) + return -ENOENT; + mutex_lock(&gpio_aggregator_lock); aggr = idr_remove(&gpio_aggregator_idr, id); mutex_unlock(&gpio_aggregator_lock); - if (!aggr) + if (!aggr) { + module_put(THIS_MODULE); return -ENOENT; + }
gpio_aggregator_free(aggr); + module_put(THIS_MODULE); return count; } static DRIVER_ATTR_WO(delete_device);
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Takashi Iwai tiwai@suse.de
commit c9ce148ea753bef66686460fa3cec6641cdfbb9f upstream.
snd_seq_client_use_ptr() is supposed to return the snd_seq_client object for the given client ID, and it tries to handle the module auto-loading when no matching object is found. Although the module handling is performed only conditionally with "!in_interrupt()", this condition may be fragile, e.g. when the code is called from the ALSA timer callback where the spinlock is temporarily disabled while the irq is disabled. Then this doesn't fit well and spews the error about sleep from invalid context, as complained recently by syzbot.
Also, in general, handling the module loading each time no matching object is found is really overkill. It can still be useful when performed at the top-level ioctl or proc reads, but it shouldn't be done at event delivery at all.
To address the issues above, this patch disables the module handling in snd_seq_client_use_ptr() in normal cases like event deliveries, and allows it only in limited and safe situations. A new function client_load_and_use_ptr() is used instead for the cases where the module loading can be done safely.
Reported-by: syzbot+4cb9fad083898f54c517@syzkaller.appspotmail.com Closes: https://lore.kernel.org/67c272e5.050a0220.dc10f.0159.GAE@google.com Cc: stable@vger.kernel.org Link: https://patch.msgid.link/20250301114530.8975-1-tiwai@suse.de Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- sound/core/seq/seq_clientmgr.c | 46 ++++++++++++++++++++++++++--------------- 1 file changed, 30 insertions(+), 16 deletions(-)
--- a/sound/core/seq/seq_clientmgr.c +++ b/sound/core/seq/seq_clientmgr.c @@ -106,7 +106,7 @@ static struct snd_seq_client *clientptr( return clienttab[clientid]; }
-struct snd_seq_client *snd_seq_client_use_ptr(int clientid) +static struct snd_seq_client *client_use_ptr(int clientid, bool load_module) { unsigned long flags; struct snd_seq_client *client; @@ -126,7 +126,7 @@ struct snd_seq_client *snd_seq_client_us } spin_unlock_irqrestore(&clients_lock, flags); #ifdef CONFIG_MODULES - if (!in_interrupt()) { + if (load_module) { static DECLARE_BITMAP(client_requested, SNDRV_SEQ_GLOBAL_CLIENTS); static DECLARE_BITMAP(card_requested, SNDRV_CARDS);
@@ -168,6 +168,20 @@ struct snd_seq_client *snd_seq_client_us return client; }
+/* get snd_seq_client object for the given id quickly */ +struct snd_seq_client *snd_seq_client_use_ptr(int clientid) +{ + return client_use_ptr(clientid, false); +} + +/* get snd_seq_client object for the given id; + * if not found, retry after loading the modules + */ +static struct snd_seq_client *client_load_and_use_ptr(int clientid) +{ + return client_use_ptr(clientid, IS_ENABLED(CONFIG_MODULES)); +} + /* Take refcount and perform ioctl_mutex lock on the given client; * used only for OSS sequencer * Unlock via snd_seq_client_ioctl_unlock() below @@ -176,7 +190,7 @@ bool snd_seq_client_ioctl_lock(int clien { struct snd_seq_client *client;
- client = snd_seq_client_use_ptr(clientid); + client = client_load_and_use_ptr(clientid); if (!client) return false; mutex_lock(&client->ioctl_mutex); @@ -1195,7 +1209,7 @@ static int snd_seq_ioctl_running_mode(st int err = 0;
/* requested client number */ - cptr = snd_seq_client_use_ptr(info->client); + cptr = client_load_and_use_ptr(info->client); if (cptr == NULL) return -ENOENT; /* don't change !!! */
@@ -1257,7 +1271,7 @@ static int snd_seq_ioctl_get_client_info struct snd_seq_client *cptr;
/* requested client number */ - cptr = snd_seq_client_use_ptr(client_info->client); + cptr = client_load_and_use_ptr(client_info->client); if (cptr == NULL) return -ENOENT; /* don't change !!! */
@@ -1392,7 +1406,7 @@ static int snd_seq_ioctl_get_port_info(s struct snd_seq_client *cptr; struct snd_seq_client_port *port;
- cptr = snd_seq_client_use_ptr(info->addr.client); + cptr = client_load_and_use_ptr(info->addr.client); if (cptr == NULL) return -ENXIO;
@@ -1496,10 +1510,10 @@ static int snd_seq_ioctl_subscribe_port( struct snd_seq_client *receiver = NULL, *sender = NULL; struct snd_seq_client_port *sport = NULL, *dport = NULL;
- receiver = snd_seq_client_use_ptr(subs->dest.client); + receiver = client_load_and_use_ptr(subs->dest.client); if (!receiver) goto __end; - sender = snd_seq_client_use_ptr(subs->sender.client); + sender = client_load_and_use_ptr(subs->sender.client); if (!sender) goto __end; sport = snd_seq_port_use_ptr(sender, subs->sender.port); @@ -1864,7 +1878,7 @@ static int snd_seq_ioctl_get_client_pool struct snd_seq_client_pool *info = arg; struct snd_seq_client *cptr;
- cptr = snd_seq_client_use_ptr(info->client); + cptr = client_load_and_use_ptr(info->client); if (cptr == NULL) return -ENOENT; memset(info, 0, sizeof(*info)); @@ -1968,7 +1982,7 @@ static int snd_seq_ioctl_get_subscriptio struct snd_seq_client_port *sport = NULL;
result = -EINVAL; - sender = snd_seq_client_use_ptr(subs->sender.client); + sender = client_load_and_use_ptr(subs->sender.client); if (!sender) goto __end; sport = snd_seq_port_use_ptr(sender, subs->sender.port); @@ -1999,7 +2013,7 @@ static int snd_seq_ioctl_query_subs(stru struct list_head *p; int i;
- cptr = snd_seq_client_use_ptr(subs->root.client); + cptr = client_load_and_use_ptr(subs->root.client); if (!cptr) goto __end; port = snd_seq_port_use_ptr(cptr, subs->root.port); @@ -2066,7 +2080,7 @@ static int snd_seq_ioctl_query_next_clie if (info->client < 0) info->client = 0; for (; info->client < SNDRV_SEQ_MAX_CLIENTS; info->client++) { - cptr = snd_seq_client_use_ptr(info->client); + cptr = client_load_and_use_ptr(info->client); if (cptr) break; /* found */ } @@ -2089,7 +2103,7 @@ static int snd_seq_ioctl_query_next_port struct snd_seq_client *cptr; struct snd_seq_client_port *port = NULL;
- cptr = snd_seq_client_use_ptr(info->addr.client); + cptr = client_load_and_use_ptr(info->addr.client); if (cptr == NULL) return -ENXIO;
@@ -2186,7 +2200,7 @@ static int snd_seq_ioctl_client_ump_info size = sizeof(struct snd_ump_endpoint_info); else size = sizeof(struct snd_ump_block_info); - cptr = snd_seq_client_use_ptr(client); + cptr = client_load_and_use_ptr(client); if (!cptr) return -ENOENT;
@@ -2458,7 +2472,7 @@ int snd_seq_kernel_client_enqueue(int cl if (check_event_type_and_length(ev)) return -EINVAL;
- cptr = snd_seq_client_use_ptr(client); + cptr = client_load_and_use_ptr(client); if (cptr == NULL) return -EINVAL; @@ -2690,7 +2704,7 @@ void snd_seq_info_clients_read(struct sn
/* list the client table */ for (c = 0; c < SNDRV_SEQ_MAX_CLIENTS; c++) { - client = snd_seq_client_use_ptr(c); + client = client_load_and_use_ptr(c); if (client == NULL) continue; if (client->type == NO_CLIENT) {
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hoku Ishibe me@hokuishi.be
commit 1ee5aa765c22a0577ec552d460bf2035300b4b51 upstream.
Dell XPS 13 7390 with the Realtek ALC3271 codec experiences persistent humming noise when the power_save mode is enabled. This issue occurs when the codec enters power saving mode, leading to unwanted noise from the speakers.
This patch adds the affected model (PCI ID 0x1028:0x0962) to the power_save denylist to ensure power_save is disabled by default, preventing power-off related noise issues.
Steps to Reproduce
1. Boot the system with `snd_hda_intel` loaded.
2. Verify that `power_save` mode is enabled:
```sh
cat /sys/module/snd_hda_intel/parameters/power_save
```
output: 10 (default power save timeout)
3. Wait for the power save timeout
4. Observe a persistent humming noise from the speakers
5. Disable `power_save` manually:
```sh
echo 0 | sudo tee /sys/module/snd_hda_intel/parameters/power_save
```
6. Confirm that the noise disappears immediately.
This issue has been observed on my system, and this patch successfully eliminates the unwanted noise. If other users experience similar issues, additional reports would be helpful.
Signed-off-by: Hoku Ishibe me@hokuishi.be Cc: stable@vger.kernel.org Link: https://patch.msgid.link/20250224020517.51035-1-me@hokuishi.be Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- sound/pci/hda/hda_intel.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/sound/pci/hda/hda_intel.c +++ b/sound/pci/hda/hda_intel.c @@ -2242,6 +2242,8 @@ static const struct snd_pci_quirk power_ SND_PCI_QUIRK(0x1631, 0xe017, "Packard Bell NEC IMEDIA 5204", 0), /* KONTRON SinglePC may cause a stall at runtime resume */ SND_PCI_QUIRK(0x1734, 0x1232, "KONTRON SinglePC", 0), + /* Dell ALC3271 */ + SND_PCI_QUIRK(0x1028, 0x0962, "Dell ALC3271", 0), {} };
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kailang Yang kailang@realtek.com
commit f603b159231b0c58f0c27ab39348534063d38223 upstream.
Support Mic Mute LED for ThinkCentre M series.
Signed-off-by: Kailang Yang kailang@realtek.com Cc: stable@vger.kernel.org Link: https://lore.kernel.org/c211a2702f1f411e86bd7420d7eebc03@realtek.com Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- sound/pci/hda/patch_realtek.c | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+)
--- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -5055,6 +5055,16 @@ static void alc269_fixup_hp_line1_mic1_l } }
+static void alc233_fixup_lenovo_low_en_micmute_led(struct hda_codec *codec, + const struct hda_fixup *fix, int action) +{ + struct alc_spec *spec = codec->spec; + + if (action == HDA_FIXUP_ACT_PRE_PROBE) + spec->micmute_led_polarity = 1; + alc233_fixup_lenovo_line2_mic_hotkey(codec, fix, action); +} + static void alc_hp_mute_disable(struct hda_codec *codec, unsigned int delay) { if (delay <= 0) @@ -7588,6 +7598,7 @@ enum { ALC275_FIXUP_DELL_XPS, ALC293_FIXUP_LENOVO_SPK_NOISE, ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY, + ALC233_FIXUP_LENOVO_L2MH_LOW_ENLED, ALC255_FIXUP_DELL_SPK_NOISE, ALC225_FIXUP_DISABLE_MIC_VREF, ALC225_FIXUP_DELL1_MIC_NO_PRESENCE, @@ -8574,6 +8585,10 @@ static const struct hda_fixup alc269_fix .type = HDA_FIXUP_FUNC, .v.func = alc233_fixup_lenovo_line2_mic_hotkey, }, + [ALC233_FIXUP_LENOVO_L2MH_LOW_ENLED] = { + .type = HDA_FIXUP_FUNC, + .v.func = alc233_fixup_lenovo_low_en_micmute_led, + }, [ALC233_FIXUP_INTEL_NUC8_DMIC] = { .type = HDA_FIXUP_FUNC, .v.func = alc_fixup_inv_dmic, @@ -10852,6 +10867,9 @@ static const struct hda_quirk alc269_fix SND_PCI_QUIRK(0x17aa, 0x3178, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC), SND_PCI_QUIRK(0x17aa, 0x31af, "ThinkCentre Station", ALC623_FIXUP_LENOVO_THINKSTATION_P340), SND_PCI_QUIRK(0x17aa, 0x334b, "Lenovo ThinkCentre M70 Gen5", ALC283_FIXUP_HEADSET_MIC), + SND_PCI_QUIRK(0x17aa, 0x3384, "ThinkCentre M90a PRO", ALC233_FIXUP_LENOVO_L2MH_LOW_ENLED), + SND_PCI_QUIRK(0x17aa, 0x3386, "ThinkCentre M90a Gen6", ALC233_FIXUP_LENOVO_L2MH_LOW_ENLED), + SND_PCI_QUIRK(0x17aa, 0x3387, "ThinkCentre M70a Gen6", ALC233_FIXUP_LENOVO_L2MH_LOW_ENLED), SND_PCI_QUIRK(0x17aa, 0x3801, "Lenovo Yoga9 14IAP7", ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN), HDA_CODEC_QUIRK(0x17aa, 0x3802, "DuetITL 2021", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS), SND_PCI_QUIRK(0x17aa, 0x3802, "Lenovo Yoga Pro 9 14IRP8", ALC287_FIXUP_TAS2781_I2C),
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kailang Yang kailang@realtek.com
commit ca0dedaff92307591f66c9206933fbdfe87add10 upstream.
Add ALC222's own depop functions for alc_init and alc_shutup.
[note: this fixes pop noise issues on the models with two headphone jacks -- tiwai ]
Signed-off-by: Kailang Yang kailang@realtek.com Cc: stable@vger.kernel.org Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- sound/pci/hda/patch_realtek.c | 76 ++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 76 insertions(+)
--- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -3845,6 +3845,79 @@ static void alc225_shutup(struct hda_cod } }
+static void alc222_init(struct hda_codec *codec) +{ + struct alc_spec *spec = codec->spec; + hda_nid_t hp_pin = alc_get_hp_pin(spec); + bool hp1_pin_sense, hp2_pin_sense; + + if (!hp_pin) + return; + + msleep(30); + + hp1_pin_sense = snd_hda_jack_detect(codec, hp_pin); + hp2_pin_sense = snd_hda_jack_detect(codec, 0x14); + + if (hp1_pin_sense || hp2_pin_sense) { + msleep(2); + + if (hp1_pin_sense) + snd_hda_codec_write(codec, hp_pin, 0, + AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT); + if (hp2_pin_sense) + snd_hda_codec_write(codec, 0x14, 0, + AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT); + msleep(75); + + if (hp1_pin_sense) + snd_hda_codec_write(codec, hp_pin, 0, + AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE); + if (hp2_pin_sense) + snd_hda_codec_write(codec, 0x14, 0, + AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE); + + msleep(75); + } +} + +static void alc222_shutup(struct hda_codec *codec) +{ + struct alc_spec *spec = codec->spec; + hda_nid_t hp_pin = alc_get_hp_pin(spec); + bool hp1_pin_sense, hp2_pin_sense; + + if (!hp_pin) + hp_pin = 0x21; + + hp1_pin_sense = snd_hda_jack_detect(codec, hp_pin); + hp2_pin_sense = snd_hda_jack_detect(codec, 0x14); + + if (hp1_pin_sense || hp2_pin_sense) { + msleep(2); + + if (hp1_pin_sense) + snd_hda_codec_write(codec, hp_pin, 0, + AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE); + if (hp2_pin_sense) + snd_hda_codec_write(codec, 0x14, 0, + AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE); + + msleep(75); + + if (hp1_pin_sense) + snd_hda_codec_write(codec, hp_pin, 0, + AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0); + if (hp2_pin_sense) + snd_hda_codec_write(codec, 0x14, 0, + AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0); + + msleep(75); + } + alc_auto_setup_eapd(codec, false); + alc_shutup_pins(codec); +} + static void alc_default_init(struct hda_codec *codec) { struct alc_spec *spec = codec->spec; @@ -11856,8 +11929,11 @@ static int patch_alc269(struct hda_codec spec->codec_variant = ALC269_TYPE_ALC300; spec->gen.mixer_nid = 0; /* no loopback on ALC300 */ break; + case 0x10ec0222: case 0x10ec0623: spec->codec_variant = ALC269_TYPE_ALC623; + spec->shutup = alc222_shutup; + spec->init_hook = alc222_init; break; case 0x10ec0700: case 0x10ec0701:
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Haoxiang Li haoxiang_li2024@163.com
commit 35d99c68af40a8ca175babc5a89ef7e2226fb3ca upstream.
Add btrfs_free_chunk_map() to free the memory allocated by btrfs_alloc_chunk_map() if btrfs_add_chunk_map() fails.
Fixes: 7dc66abb5a47 ("btrfs: use a dedicated data structure for chunk maps") CC: stable@vger.kernel.org Reviewed-by: Qu Wenruo wqu@suse.com Reviewed-by: Filipe Manana fdmanana@suse.com Signed-off-by: Haoxiang Li haoxiang_li2024@163.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/volumes.c | 1 + 1 file changed, 1 insertion(+)
--- a/fs/btrfs/volumes.c +++ b/fs/btrfs/volumes.c @@ -7094,6 +7094,7 @@ static int read_one_chunk(struct btrfs_k btrfs_err(fs_info, "failed to add chunk map, start=%llu len=%llu: %d", map->start, map->chunk_len, ret); + btrfs_free_chunk_map(map); }
return ret;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Paul Fertser fercerpav@gmail.com
commit 5797c04400ee117bfe459ff1e468d0ea38054ab4 upstream.
When an Icelake or Sapphire Rapids CPU isn't providing the maximum and critical thresholds for a particular DIMM, the driver should return an error to userspace instead of giving it stale (best case) or wrong (the structure contains all zeros after the kzalloc() call) data.
The issue can be reproduced by binding the peci driver while the host is fully booted and idle; this makes the PECI interaction unreliable enough.
Fixes: 73bc1b885dae ("hwmon: peci: Add dimmtemp driver") Fixes: 621995b6d795 ("hwmon: (peci/dimmtemp) Add Sapphire Rapids support") Cc: stable@vger.kernel.org Signed-off-by: Paul Fertser fercerpav@gmail.com Reviewed-by: Iwona Winiarska iwona.winiarska@intel.com Link: https://lore.kernel.org/r/20250123122003.6010-1-fercerpav@gmail.com Signed-off-by: Guenter Roeck linux@roeck-us.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/hwmon/peci/dimmtemp.c | 10 ++++------ 1 file changed, 4 insertions(+), 6 deletions(-)
--- a/drivers/hwmon/peci/dimmtemp.c +++ b/drivers/hwmon/peci/dimmtemp.c @@ -127,8 +127,6 @@ static int update_thresholds(struct peci return 0;
ret = priv->gen_info->read_thresholds(priv, dimm_order, chan_rank, &data); - if (ret == -ENODATA) /* Use default or previous value */ - return 0; if (ret) return ret;
@@ -509,11 +507,11 @@ read_thresholds_icx(struct peci_dimmtemp
ret = peci_ep_pci_local_read(priv->peci_dev, 0, 13, 0, 2, 0xd4, ®_val); if (ret || !(reg_val & BIT(31))) - return -ENODATA; /* Use default or previous value */ + return -ENODATA;
ret = peci_ep_pci_local_read(priv->peci_dev, 0, 13, 0, 2, 0xd0, ®_val); if (ret) - return -ENODATA; /* Use default or previous value */ + return -ENODATA;
/* * Device 26, Offset 224e0: IMC 0 channel 0 -> rank 0 @@ -546,11 +544,11 @@ read_thresholds_spr(struct peci_dimmtemp
ret = peci_ep_pci_local_read(priv->peci_dev, 0, 30, 0, 2, 0xd4, ®_val); if (ret || !(reg_val & BIT(31))) - return -ENODATA; /* Use default or previous value */ + return -ENODATA;
ret = peci_ep_pci_local_read(priv->peci_dev, 0, 30, 0, 2, 0xd0, ®_val); if (ret) - return -ENODATA; /* Use default or previous value */ + return -ENODATA;
/* * Device 26, Offset 219a8: IMC 0 channel 0 -> rank 0
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ma Ke make24@iscas.ac.cn
commit 374c9faac5a763a05bc3f68ad9f73dab3c6aec90 upstream.
A null pointer dereference could occur when pipe_ctx->plane_state is null. The fix adds a check to ensure 'pipe_ctx->plane_state' is not null before accessing it, preventing the null pointer dereference.
Found by code review.
Fixes: 3be5262e353b ("drm/amd/display: Rename more dc_surface stuff to plane_state") Reviewed-by: Alex Hung alex.hung@amd.com Signed-off-by: Ma Ke make24@iscas.ac.cn Signed-off-by: Alex Deucher alexander.deucher@amd.com (cherry picked from commit 63e6a77ccf239337baa9b1e7787cde9fa0462092) Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/display/dc/core/dc_resource.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c +++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c @@ -1455,7 +1455,8 @@ bool resource_build_scaling_params(struc DC_LOGGER_INIT(pipe_ctx->stream->ctx->logger);
/* Invalid input */ - if (!plane_state->dst_rect.width || + if (!plane_state || + !plane_state->dst_rect.width || !plane_state->dst_rect.height || !plane_state->src_rect.width || !plane_state->src_rect.height) {
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andrew Martin Andrew.Martin@amd.com
commit fd617ea3b79d2116d53f76cdb5a3601c0ba6e42f upstream.
Through KFD IOCTL fuzzing we encountered a NULL pointer dereference when calling kfd_queue_acquire_buffers.
Fixes: 629568d25fea ("drm/amdkfd: Validate queue cwsr area and eop buffer size") Signed-off-by: Andrew Martin Andrew.Martin@amd.com Reviewed-by: Philip Yang Philip.Yang@amd.com Signed-off-by: Andrew Martin Andrew.Martin@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com (cherry picked from commit 049e5bf3c8406f87c3d8e1958e0a16804fa1d530) Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/amdkfd/kfd_queue.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_queue.c b/drivers/gpu/drm/amd/amdkfd/kfd_queue.c index ecccd7adbab4..24396a2c77bd 100644 --- a/drivers/gpu/drm/amd/amdkfd/kfd_queue.c +++ b/drivers/gpu/drm/amd/amdkfd/kfd_queue.c @@ -266,8 +266,8 @@ int kfd_queue_acquire_buffers(struct kfd_process_device *pdd, struct queue_prope /* EOP buffer is not required for all ASICs */ if (properties->eop_ring_buffer_address) { if (properties->eop_ring_buffer_size != topo_dev->node_props.eop_buffer_size) { - pr_debug("queue eop bo size 0x%lx not equal to node eop buf size 0x%x\n", - properties->eop_buf_bo->tbo.base.size, + pr_debug("queue eop bo size 0x%x not equal to node eop buf size 0x%x\n", + properties->eop_ring_buffer_size, topo_dev->node_props.eop_buffer_size); err = -EINVAL; goto out_err_unreserve;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kenneth Feng kenneth.feng@amd.com
commit da552bda987420e877500fdd90bd0172e3bf412b upstream.
Always allow the ih interrupt from the fw on smu v14, based on the interface requirement.
Signed-off-by: Kenneth Feng kenneth.feng@amd.com Reviewed-by: Yang Wang kevinyang.wang@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com (cherry picked from commit a3199eba46c54324193607d9114a1e321292d7a1) Cc: stable@vger.kernel.org # 6.12.x Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0.c | 12 +----------- 1 file changed, 1 insertion(+), 11 deletions(-)
--- a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0.c +++ b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0.c @@ -1883,16 +1883,6 @@ static int smu_v14_0_allow_ih_interrupt( NULL); }
-static int smu_v14_0_process_pending_interrupt(struct smu_context *smu) -{ - int ret = 0; - - if (smu_cmn_feature_is_enabled(smu, SMU_FEATURE_ACDC_BIT)) - ret = smu_v14_0_allow_ih_interrupt(smu); - - return ret; -} - int smu_v14_0_enable_thermal_alert(struct smu_context *smu) { int ret = 0; @@ -1904,7 +1894,7 @@ int smu_v14_0_enable_thermal_alert(struc if (ret) return ret;
- return smu_v14_0_process_pending_interrupt(smu); + return smu_v14_0_allow_ih_interrupt(smu); }
int smu_v14_0_disable_thermal_alert(struct smu_context *smu)
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Brendan King Brendan.King@imgtec.com
commit df1a1ed5e1bdd9cc13148e0e5549f5ebcf76cf13 upstream.
Do scheduler queue fence release processing on a workqueue, rather than in the release function itself.
Fixes deadlock issues such as the following:
[ 607.400437] ============================================ [ 607.405755] WARNING: possible recursive locking detected [ 607.415500] -------------------------------------------- [ 607.420817] weston:zfq0/24149 is trying to acquire lock: [ 607.426131] ffff000017d041a0 (reservation_ww_class_mutex){+.+.}-{3:3}, at: pvr_gem_object_vunmap+0x40/0xc0 [powervr] [ 607.436728] but task is already holding lock: [ 607.442554] ffff000017d105a0 (reservation_ww_class_mutex){+.+.}-{3:3}, at: dma_buf_ioctl+0x250/0x554 [ 607.451727] other info that might help us debug this: [ 607.458245] Possible unsafe locking scenario:
[ 607.464155] CPU0 [ 607.466601] ---- [ 607.469044] lock(reservation_ww_class_mutex); [ 607.473584] lock(reservation_ww_class_mutex); [ 607.478114] *** DEADLOCK ***
Cc: stable@vger.kernel.org Fixes: eaf01ee5ba28 ("drm/imagination: Implement job submission and scheduling") Signed-off-by: Brendan King brendan.king@imgtec.com Reviewed-by: Matt Coster matt.coster@imgtec.com Link: https://patchwork.freedesktop.org/patch/msgid/20250226-fence-release-deadloc... Signed-off-by: Matt Coster matt.coster@imgtec.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/imagination/pvr_queue.c | 13 +++++++++++-- drivers/gpu/drm/imagination/pvr_queue.h | 4 ++++ 2 files changed, 15 insertions(+), 2 deletions(-)
--- a/drivers/gpu/drm/imagination/pvr_queue.c +++ b/drivers/gpu/drm/imagination/pvr_queue.c @@ -109,12 +109,20 @@ pvr_queue_fence_get_driver_name(struct d return PVR_DRIVER_NAME; }
+static void pvr_queue_fence_release_work(struct work_struct *w) +{ + struct pvr_queue_fence *fence = container_of(w, struct pvr_queue_fence, release_work); + + pvr_context_put(fence->queue->ctx); + dma_fence_free(&fence->base); +} + static void pvr_queue_fence_release(struct dma_fence *f) { struct pvr_queue_fence *fence = container_of(f, struct pvr_queue_fence, base); + struct pvr_device *pvr_dev = fence->queue->ctx->pvr_dev;
- pvr_context_put(fence->queue->ctx); - dma_fence_free(f); + queue_work(pvr_dev->sched_wq, &fence->release_work); }
static const char * @@ -268,6 +276,7 @@ pvr_queue_fence_init(struct dma_fence *f
pvr_context_get(queue->ctx); fence->queue = queue; + INIT_WORK(&fence->release_work, pvr_queue_fence_release_work); dma_fence_init(&fence->base, fence_ops, &fence_ctx->lock, fence_ctx->id, atomic_inc_return(&fence_ctx->seqno)); --- a/drivers/gpu/drm/imagination/pvr_queue.h +++ b/drivers/gpu/drm/imagination/pvr_queue.h @@ -5,6 +5,7 @@ #define PVR_QUEUE_H
#include <drm/gpu_scheduler.h> +#include <linux/workqueue.h>
#include "pvr_cccb.h" #include "pvr_device.h" @@ -63,6 +64,9 @@ struct pvr_queue_fence {
/** @queue: Queue that created this fence. */ struct pvr_queue *queue; + + /** @release_work: Fence release work structure. */ + struct work_struct release_work; };
/**
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Brendan King Brendan.King@imgtec.com
commit a5c4c3ba95a52d66315acdfbaba9bd82ed39c250 upstream.
Avoid a warning from drm_gem_gpuva_assert_lock_held in drm_gpuva_unlink.
The Imagination driver uses the GEM object reservation lock to protect the gpuva list, but the GEM object was not always known in the code paths that ended up calling drm_gpuva_unlink. When the GEM object isn't known, it is found by calling drm_gpuva_find to look up the object associated with a given virtual address range, or by calling drm_gpuva_find_first when removing all mappings.
Cc: stable@vger.kernel.org Fixes: 4bc736f890ce ("drm/imagination: vm: make use of GPUVM's drm_exec helper") Signed-off-by: Brendan King brendan.king@imgtec.com Reviewed-by: Matt Coster matt.coster@imgtec.com Link: https://patchwork.freedesktop.org/patch/msgid/20250226-hold-drm_gem_gpuva-lo... Signed-off-by: Matt Coster matt.coster@imgtec.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/imagination/pvr_fw_meta.c | 6 - drivers/gpu/drm/imagination/pvr_vm.c | 134 ++++++++++++++++++++++++------ drivers/gpu/drm/imagination/pvr_vm.h | 3 3 files changed, 115 insertions(+), 28 deletions(-)
--- a/drivers/gpu/drm/imagination/pvr_fw_meta.c +++ b/drivers/gpu/drm/imagination/pvr_fw_meta.c @@ -527,8 +527,10 @@ pvr_meta_vm_map(struct pvr_device *pvr_d static void pvr_meta_vm_unmap(struct pvr_device *pvr_dev, struct pvr_fw_object *fw_obj) { - pvr_vm_unmap(pvr_dev->kernel_vm_ctx, fw_obj->fw_mm_node.start, - fw_obj->fw_mm_node.size); + struct pvr_gem_object *pvr_obj = fw_obj->gem; + + pvr_vm_unmap_obj(pvr_dev->kernel_vm_ctx, pvr_obj, + fw_obj->fw_mm_node.start, fw_obj->fw_mm_node.size); }
static bool --- a/drivers/gpu/drm/imagination/pvr_vm.c +++ b/drivers/gpu/drm/imagination/pvr_vm.c @@ -293,8 +293,9 @@ err_bind_op_fini:
static int pvr_vm_bind_op_unmap_init(struct pvr_vm_bind_op *bind_op, - struct pvr_vm_context *vm_ctx, u64 device_addr, - u64 size) + struct pvr_vm_context *vm_ctx, + struct pvr_gem_object *pvr_obj, + u64 device_addr, u64 size) { int err;
@@ -318,6 +319,7 @@ pvr_vm_bind_op_unmap_init(struct pvr_vm_ goto err_bind_op_fini; }
+ bind_op->pvr_obj = pvr_obj; bind_op->vm_ctx = vm_ctx; bind_op->device_addr = device_addr; bind_op->size = size; @@ -598,20 +600,6 @@ err_free: }
/** - * pvr_vm_unmap_all() - Unmap all mappings associated with a VM context. - * @vm_ctx: Target VM context. - * - * This function ensures that no mappings are left dangling by unmapping them - * all in order of ascending device-virtual address. - */ -void -pvr_vm_unmap_all(struct pvr_vm_context *vm_ctx) -{ - WARN_ON(pvr_vm_unmap(vm_ctx, vm_ctx->gpuvm_mgr.mm_start, - vm_ctx->gpuvm_mgr.mm_range)); -} - -/** * pvr_vm_context_release() - Teardown a VM context. * @ref_count: Pointer to reference counter of the VM context. * @@ -703,11 +691,7 @@ pvr_vm_lock_extra(struct drm_gpuvm_exec struct pvr_vm_bind_op *bind_op = vm_exec->extra.priv; struct pvr_gem_object *pvr_obj = bind_op->pvr_obj;
- /* Unmap operations don't have an object to lock. */ - if (!pvr_obj) - return 0; - - /* Acquire lock on the GEM being mapped. */ + /* Acquire lock on the GEM object being mapped/unmapped. */ return drm_exec_lock_obj(&vm_exec->exec, gem_from_pvr_gem(pvr_obj)); }
@@ -772,8 +756,10 @@ err_cleanup: }
/** - * pvr_vm_unmap() - Unmap an already mapped section of device-virtual memory. + * pvr_vm_unmap_obj_locked() - Unmap an already mapped section of device-virtual + * memory. * @vm_ctx: Target VM context. + * @pvr_obj: Target PowerVR memory object. * @device_addr: Virtual device address at the start of the target mapping. * @size: Size of the target mapping. * @@ -784,9 +770,13 @@ err_cleanup: * * Any error encountered while performing internal operations required to * destroy the mapping (returned from pvr_vm_gpuva_unmap or * pvr_vm_gpuva_remap). + * + * The vm_ctx->lock must be held when calling this function. */ -int -pvr_vm_unmap(struct pvr_vm_context *vm_ctx, u64 device_addr, u64 size) +static int +pvr_vm_unmap_obj_locked(struct pvr_vm_context *vm_ctx, + struct pvr_gem_object *pvr_obj, + u64 device_addr, u64 size) { struct pvr_vm_bind_op bind_op = {0}; struct drm_gpuvm_exec vm_exec = { @@ -799,11 +789,13 @@ pvr_vm_unmap(struct pvr_vm_context *vm_c }, };
- int err = pvr_vm_bind_op_unmap_init(&bind_op, vm_ctx, device_addr, - size); + int err = pvr_vm_bind_op_unmap_init(&bind_op, vm_ctx, pvr_obj, + device_addr, size); if (err) return err;
+ pvr_gem_object_get(pvr_obj); + err = drm_gpuvm_exec_lock(&vm_exec); if (err) goto err_cleanup; @@ -818,6 +810,96 @@ err_cleanup: return err; }
+/** + * pvr_vm_unmap_obj() - Unmap an already mapped section of device-virtual + * memory. + * @vm_ctx: Target VM context. + * @pvr_obj: Target PowerVR memory object. + * @device_addr: Virtual device address at the start of the target mapping. + * @size: Size of the target mapping. + * + * Return: + * * 0 on success, + * * Any error encountered by pvr_vm_unmap_obj_locked. + */ +int +pvr_vm_unmap_obj(struct pvr_vm_context *vm_ctx, struct pvr_gem_object *pvr_obj, + u64 device_addr, u64 size) +{ + int err; + + mutex_lock(&vm_ctx->lock); + err = pvr_vm_unmap_obj_locked(vm_ctx, pvr_obj, device_addr, size); + mutex_unlock(&vm_ctx->lock); + + return err; +} + +/** + * pvr_vm_unmap() - Unmap an already mapped section of device-virtual memory. + * @vm_ctx: Target VM context. + * @device_addr: Virtual device address at the start of the target mapping. + * @size: Size of the target mapping. + * + * Return: + * * 0 on success, + * * Any error encountered by drm_gpuva_find, + * * Any error encountered by pvr_vm_unmap_obj_locked. + */ +int +pvr_vm_unmap(struct pvr_vm_context *vm_ctx, u64 device_addr, u64 size) +{ + struct pvr_gem_object *pvr_obj; + struct drm_gpuva *va; + int err; + + mutex_lock(&vm_ctx->lock); + + va = drm_gpuva_find(&vm_ctx->gpuvm_mgr, device_addr, size); + if (va) { + pvr_obj = gem_to_pvr_gem(va->gem.obj); + err = pvr_vm_unmap_obj_locked(vm_ctx, pvr_obj, + va->va.addr, va->va.range); + } else { + err = -ENOENT; + } + + mutex_unlock(&vm_ctx->lock); + + return err; +} + +/** + * pvr_vm_unmap_all() - Unmap all mappings associated with a VM context. + * @vm_ctx: Target VM context. + * + * This function ensures that no mappings are left dangling by unmapping them + * all in order of ascending device-virtual address. + */ +void +pvr_vm_unmap_all(struct pvr_vm_context *vm_ctx) +{ + mutex_lock(&vm_ctx->lock); + + for (;;) { + struct pvr_gem_object *pvr_obj; + struct drm_gpuva *va; + + va = drm_gpuva_find_first(&vm_ctx->gpuvm_mgr, + vm_ctx->gpuvm_mgr.mm_start, + vm_ctx->gpuvm_mgr.mm_range); + if (!va) + break; + + pvr_obj = gem_to_pvr_gem(va->gem.obj); + + WARN_ON(pvr_vm_unmap_obj_locked(vm_ctx, pvr_obj, + va->va.addr, va->va.range)); + } + + mutex_unlock(&vm_ctx->lock); +} + /* Static data areas are determined by firmware. */ static const struct drm_pvr_static_data_area static_data_areas[] = { { --- a/drivers/gpu/drm/imagination/pvr_vm.h +++ b/drivers/gpu/drm/imagination/pvr_vm.h @@ -38,6 +38,9 @@ struct pvr_vm_context *pvr_vm_create_con int pvr_vm_map(struct pvr_vm_context *vm_ctx, struct pvr_gem_object *pvr_obj, u64 pvr_obj_offset, u64 device_addr, u64 size); +int pvr_vm_unmap_obj(struct pvr_vm_context *vm_ctx, + struct pvr_gem_object *pvr_obj, + u64 device_addr, u64 size); int pvr_vm_unmap(struct pvr_vm_context *vm_ctx, u64 device_addr, u64 size); void pvr_vm_unmap_all(struct pvr_vm_context *vm_ctx);
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Brendan King Brendan.King@imgtec.com
commit 68c3de7f707e8a70e0a6d8087cf0fe4a3d5dbfb0 upstream.
Ensure job done fences are only initialised once.
This fixes a "memory manager not clean" warning from drm_mm_takedown on module unload.
Cc: stable@vger.kernel.org Fixes: eaf01ee5ba28 ("drm/imagination: Implement job submission and scheduling") Signed-off-by: Brendan King brendan.king@imgtec.com Reviewed-by: Matt Coster matt.coster@imgtec.com Link: https://patchwork.freedesktop.org/patch/msgid/20250226-init-done-fences-once... Signed-off-by: Matt Coster matt.coster@imgtec.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/imagination/pvr_queue.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)
--- a/drivers/gpu/drm/imagination/pvr_queue.c +++ b/drivers/gpu/drm/imagination/pvr_queue.c @@ -313,8 +313,9 @@ pvr_queue_cccb_fence_init(struct dma_fen static void pvr_queue_job_fence_init(struct dma_fence *fence, struct pvr_queue *queue) { - pvr_queue_fence_init(fence, queue, &pvr_queue_job_fence_ops, - &queue->job_fence_ctx); + if (!fence->ops) + pvr_queue_fence_init(fence, queue, &pvr_queue_job_fence_ops, + &queue->job_fence_ctx); }
/**
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Richard Thier u9vata@gmail.com
commit 29ffeb73b216ce3eff10229eb077cf9b7812119d upstream.
num_gb_pipes was set to a wrong value using r420_pipe_config.
This has led to HyperZ glitches on fast Z clearing.
Closes: https://bugs.freedesktop.org/show_bug.cgi?id=110897 Reviewed-by: Marek Olšák marek.olsak@amd.com Signed-off-by: Richard Thier u9vata@gmail.com Signed-off-by: Alex Deucher alexander.deucher@amd.com (cherry picked from commit 044e59a85c4d84e3c8d004c486e5c479640563a6) Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/radeon/r300.c | 3 ++- drivers/gpu/drm/radeon/radeon_asic.h | 1 + drivers/gpu/drm/radeon/rs400.c | 18 ++++++++++++++++-- 3 files changed, 19 insertions(+), 3 deletions(-)
--- a/drivers/gpu/drm/radeon/r300.c +++ b/drivers/gpu/drm/radeon/r300.c @@ -359,7 +359,8 @@ int r300_mc_wait_for_idle(struct radeon_ return -1; }
-static void r300_gpu_init(struct radeon_device *rdev) +/* rs400_gpu_init also calls this! */ +void r300_gpu_init(struct radeon_device *rdev) { uint32_t gb_tile_config, tmp;
--- a/drivers/gpu/drm/radeon/radeon_asic.h +++ b/drivers/gpu/drm/radeon/radeon_asic.h @@ -165,6 +165,7 @@ void r200_set_safe_registers(struct rade */ extern int r300_init(struct radeon_device *rdev); extern void r300_fini(struct radeon_device *rdev); +extern void r300_gpu_init(struct radeon_device *rdev); extern int r300_suspend(struct radeon_device *rdev); extern int r300_resume(struct radeon_device *rdev); extern int r300_asic_reset(struct radeon_device *rdev, bool hard); --- a/drivers/gpu/drm/radeon/rs400.c +++ b/drivers/gpu/drm/radeon/rs400.c @@ -256,8 +256,22 @@ int rs400_mc_wait_for_idle(struct radeon
static void rs400_gpu_init(struct radeon_device *rdev) { - /* FIXME: is this correct ? */ - r420_pipes_init(rdev); + /* Earlier code was calling r420_pipes_init and then + * rs400_mc_wait_for_idle(rdev). The problem is that + * at least on my Mobility Radeon Xpress 200M RC410 card + * that ends up in this code path ends up num_gb_pipes == 3 + * while the card seems to have only one pipe. With the + * r420 pipe initialization method. + * + * Problems shown up as HyperZ glitches, see: + * https://bugs.freedesktop.org/show_bug.cgi?id=110897 + * + * Delegating initialization to r300 code seems to work + * and results in proper pipe numbers. The rs400 cards + * are said to be not r400, but r300 kind of cards. + */ + r300_gpu_init(rdev); + if (rs400_mc_wait_for_idle(rdev)) { pr_warn("rs400: Failed to wait MC idle while programming pipes. Bad things might happen. %08x\n", RREG32(RADEON_MC_STATUS));
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Gabriel Krisman Bertazi krisman@suse.de
commit eae116d1f0449ade3269ca47a67432622f5c6438 upstream.
Commit 96a5c186efff ("mm/page_alloc.c: don't show protection in zone's ->lowmem_reserve[] for empty zone") removes the protection of lower zones from allocations targeting memory-less high zones. This had an unintended impact on the pattern of reclaims because it makes the high-zone-targeted allocation more likely to succeed in lower zones, which adds pressure to said zones. I.e., the following corresponding checks in zone_watermark_ok/zone_watermark_fast are less likely to trigger:
if (free_pages <= min + z->lowmem_reserve[highest_zoneidx])
        return false;
As a result, we are observing an increase in reclaim and kswapd scans, due to the increased pressure. This was initially observed as increased latency in filesystem operations when benchmarking with fio on a machine with some memory-less zones, but it has since been associated with increased contention in locks related to memory reclaim. By reverting this patch, the original performance was recovered on that machine.
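To make the effect concrete, here is a small illustration of the check quoted above with made-up numbers (not taken from the report); the only point is how zeroing the reserve flips the outcome:

	/* Simplified from zone_watermark_ok(); the numbers are invented. */
	static bool watermark_ok(unsigned long free_pages, unsigned long min,
				 unsigned long lowmem_reserve)
	{
		return free_pages > min + lowmem_reserve;
	}

	/* With 96a5c186efff, an empty high zone gets lowmem_reserve == 0:
	 *   watermark_ok(10000, 2000, 0)     -> true, the lower zone is used up
	 * With this revert, the protection is kept:
	 *   watermark_ok(10000, 2000, 12000) -> false, the allocation falls back
	 *   and reclaim is directed at the intended zone instead. */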
The original commit was introduced as a clarification of the /proc/zoneinfo output, so it doesn't seem there are usecases depending on it, making the revert a simple solution.
For reference, I collected vmstat with and without this patch on a freshly booted system running intensive randread io from an nvme for 5 minutes. I got:
rpm-6.12.0-slfo.1.2 -> pgscan_kswapd 5629543865
Patched             -> pgscan_kswapd 33580844
33M scans is similar to what we had in kernels predating this patch. These numbers are fairly representative of the workload on this machine, as measured in several runs. So we are talking about a two-orders-of-magnitude increase.
Link: https://lkml.kernel.org/r/20250226032258.234099-1-krisman@suse.de Fixes: 96a5c186efff ("mm/page_alloc.c: don't show protection in zone's ->lowmem_reserve[] for empty zone") Signed-off-by: Gabriel Krisman Bertazi krisman@suse.de Reviewed-by: Vlastimil Babka vbabka@suse.cz Acked-by: Michal Hocko mhocko@suse.com Acked-by: Mel Gorman mgorman@suse.de Cc: Baoquan He bhe@redhat.com Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/page_alloc.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
--- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -5991,11 +5991,10 @@ static void setup_per_zone_lowmem_reserv
for (j = i + 1; j < MAX_NR_ZONES; j++) { struct zone *upper_zone = &pgdat->node_zones[j]; - bool empty = !zone_managed_pages(upper_zone);
managed_pages += zone_managed_pages(upper_zone);
- if (clear || empty) + if (clear) zone->lowmem_reserve[j] = 0; else zone->lowmem_reserve[j] = managed_pages / ratio;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: John Hubbard jhubbard@nvidia.com
commit 0a7565ee6ec31eb16c0476adbfc1af3f2271cb6b upstream.
This reverts commit a5c6bc590094a1a73cf6fa3f505e1945d2bf2461.
The general approach described in commit e076eaca5906 ("selftests: break the dependency upon local header files") was taken one step too far here: it should not have been extended to include the syscall numbers. This is because doing so would require per-arch support in tools/include/uapi, and no such support exists.
This revert fixes two separate reports of test failures, from Dave Hansen[1], and Li Wang[2]. An excerpt of Dave's report:
Before this commit (a5c6bc590094a1a73cf6fa3f505e1945d2bf2461) things are fine. But after, I get:
running PKEY tests for unsupported CPU/OS
An excerpt of Li's report:
I just found that mlock2_() return a wrong value in mlock2-test
[1] https://lore.kernel.org/dc585017-6740-4cab-a536-b12b37a7582d@intel.com [2] https://lore.kernel.org/CAEemH2eW=UMu9+turT2jRie7+6ewUazXmA6kL+VBo3cGDGU6RA@...
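The pattern the revert restores in the affected tests is the usual one for optional syscall numbers: take __NR_* from the libc-provided <unistd.h>/<sys/syscall.h>, and when a number is not defined for the build architecture, compile the test body out and report a skip. A minimal sketch of that pattern follows; this is not one of the files touched here, and KSFT_* comes from the selftests' kselftest.h:

	#include <stdio.h>
	#include <unistd.h>
	#include <sys/syscall.h>
	#include "../kselftest.h"

	#ifdef __NR_userfaultfd
	int main(void)
	{
		long fd = syscall(__NR_userfaultfd, 0);

		/* ... the real test body would go here ... */
		return fd < 0 ? KSFT_FAIL : KSFT_PASS;
	}
	#else
	int main(void)
	{
		printf("skip: missing __NR_userfaultfd definition\n");
		return KSFT_SKIP;
	}
	#endif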
Link: https://lkml.kernel.org/r/20250214033850.235171-1-jhubbard@nvidia.com Fixes: a5c6bc590094 ("selftests/mm: remove local __NR_* definitions") Signed-off-by: John Hubbard jhubbard@nvidia.com Cc: Dave Hansen dave.hansen@intel.com Cc: Li Wang liwang@redhat.com Cc: David Hildenbrand david@redhat.com Cc: Jeff Xu jeffxu@chromium.org Cc: Andrei Vagin avagin@google.com Cc: Axel Rasmussen axelrasmussen@google.com Cc: Christian Brauner brauner@kernel.org Cc: Kees Cook kees@kernel.org Cc: Kent Overstreet kent.overstreet@linux.dev Cc: Liam R. Howlett Liam.Howlett@oracle.com Cc: Muhammad Usama Anjum usama.anjum@collabora.com Cc: Peter Xu peterx@redhat.com Cc: Rich Felker dalias@libc.org Cc: Shuah Khan shuah@kernel.org Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- tools/testing/selftests/mm/hugepage-mremap.c | 2 +- tools/testing/selftests/mm/ksm_functional_tests.c | 8 +++++++- tools/testing/selftests/mm/memfd_secret.c | 14 +++++++++++++- tools/testing/selftests/mm/mkdirty.c | 8 +++++++- tools/testing/selftests/mm/mlock2.h | 1 - tools/testing/selftests/mm/protection_keys.c | 2 +- tools/testing/selftests/mm/uffd-common.c | 4 ++++ tools/testing/selftests/mm/uffd-stress.c | 15 ++++++++++++++- tools/testing/selftests/mm/uffd-unit-tests.c | 14 +++++++++++++- 9 files changed, 60 insertions(+), 8 deletions(-)
--- a/tools/testing/selftests/mm/hugepage-mremap.c +++ b/tools/testing/selftests/mm/hugepage-mremap.c @@ -15,7 +15,7 @@ #define _GNU_SOURCE #include <stdlib.h> #include <stdio.h> -#include <asm-generic/unistd.h> +#include <unistd.h> #include <sys/mman.h> #include <errno.h> #include <fcntl.h> /* Definition of O_* constants */ --- a/tools/testing/selftests/mm/ksm_functional_tests.c +++ b/tools/testing/selftests/mm/ksm_functional_tests.c @@ -11,7 +11,7 @@ #include <string.h> #include <stdbool.h> #include <stdint.h> -#include <asm-generic/unistd.h> +#include <unistd.h> #include <errno.h> #include <fcntl.h> #include <sys/mman.h> @@ -369,6 +369,7 @@ unmap: munmap(map, size); }
+#ifdef __NR_userfaultfd static void test_unmerge_uffd_wp(void) { struct uffdio_writeprotect uffd_writeprotect; @@ -429,6 +430,7 @@ close_uffd: unmap: munmap(map, size); } +#endif
/* Verify that KSM can be enabled / queried with prctl. */ static void test_prctl(void) @@ -684,7 +686,9 @@ int main(int argc, char **argv) exit(test_child_ksm()); }
+#ifdef __NR_userfaultfd tests++; +#endif
ksft_print_header(); ksft_set_plan(tests); @@ -696,7 +700,9 @@ int main(int argc, char **argv) test_unmerge(); test_unmerge_zero_pages(); test_unmerge_discarded(); +#ifdef __NR_userfaultfd test_unmerge_uffd_wp(); +#endif
test_prot_none();
--- a/tools/testing/selftests/mm/memfd_secret.c +++ b/tools/testing/selftests/mm/memfd_secret.c @@ -17,7 +17,7 @@
#include <stdlib.h> #include <string.h> -#include <asm-generic/unistd.h> +#include <unistd.h> #include <errno.h> #include <stdio.h> #include <fcntl.h> @@ -28,6 +28,8 @@ #define pass(fmt, ...) ksft_test_result_pass(fmt, ##__VA_ARGS__) #define skip(fmt, ...) ksft_test_result_skip(fmt, ##__VA_ARGS__)
+#ifdef __NR_memfd_secret + #define PATTERN 0x55
static const int prot = PROT_READ | PROT_WRITE; @@ -332,3 +334,13 @@ int main(int argc, char *argv[])
ksft_finished(); } + +#else /* __NR_memfd_secret */ + +int main(int argc, char *argv[]) +{ + printf("skip: skipping memfd_secret test (missing __NR_memfd_secret)\n"); + return KSFT_SKIP; +} + +#endif /* __NR_memfd_secret */ --- a/tools/testing/selftests/mm/mkdirty.c +++ b/tools/testing/selftests/mm/mkdirty.c @@ -9,7 +9,7 @@ */ #include <fcntl.h> #include <signal.h> -#include <asm-generic/unistd.h> +#include <unistd.h> #include <string.h> #include <errno.h> #include <stdlib.h> @@ -265,6 +265,7 @@ munmap: munmap(mmap_mem, mmap_size); }
+#ifdef __NR_userfaultfd static void test_uffdio_copy(void) { struct uffdio_register uffdio_register; @@ -321,6 +322,7 @@ munmap: munmap(dst, pagesize); free(src); } +#endif /* __NR_userfaultfd */
int main(void) { @@ -333,7 +335,9 @@ int main(void) thpsize / 1024); tests += 3; } +#ifdef __NR_userfaultfd tests += 1; +#endif /* __NR_userfaultfd */
ksft_print_header(); ksft_set_plan(tests); @@ -363,7 +367,9 @@ int main(void) if (thpsize) test_pte_mapped_thp(); /* Placing a fresh page via userfaultfd may set the PTE dirty. */ +#ifdef __NR_userfaultfd test_uffdio_copy(); +#endif /* __NR_userfaultfd */
err = ksft_get_fail_cnt(); if (err) --- a/tools/testing/selftests/mm/mlock2.h +++ b/tools/testing/selftests/mm/mlock2.h @@ -3,7 +3,6 @@ #include <errno.h> #include <stdio.h> #include <stdlib.h> -#include <asm-generic/unistd.h>
static int mlock2_(void *start, size_t len, int flags) { --- a/tools/testing/selftests/mm/protection_keys.c +++ b/tools/testing/selftests/mm/protection_keys.c @@ -42,7 +42,7 @@ #include <sys/wait.h> #include <sys/stat.h> #include <fcntl.h> -#include <asm-generic/unistd.h> +#include <unistd.h> #include <sys/ptrace.h> #include <setjmp.h>
--- a/tools/testing/selftests/mm/uffd-common.c +++ b/tools/testing/selftests/mm/uffd-common.c @@ -673,7 +673,11 @@ int uffd_open_dev(unsigned int flags)
int uffd_open_sys(unsigned int flags) { +#ifdef __NR_userfaultfd return syscall(__NR_userfaultfd, flags); +#else + return -1; +#endif }
int uffd_open(unsigned int flags) --- a/tools/testing/selftests/mm/uffd-stress.c +++ b/tools/testing/selftests/mm/uffd-stress.c @@ -33,10 +33,11 @@ * pthread_mutex_lock will also verify the atomicity of the memory * transfer (UFFDIO_COPY). */ -#include <asm-generic/unistd.h> + #include "uffd-common.h"
uint64_t features; +#ifdef __NR_userfaultfd
#define BOUNCE_RANDOM (1<<0) #define BOUNCE_RACINGFAULTS (1<<1) @@ -471,3 +472,15 @@ int main(int argc, char **argv) nr_pages, nr_pages_per_cpu); return userfaultfd_stress(); } + +#else /* __NR_userfaultfd */ + +#warning "missing __NR_userfaultfd definition" + +int main(void) +{ + printf("skip: Skipping userfaultfd test (missing __NR_userfaultfd)\n"); + return KSFT_SKIP; +} + +#endif /* __NR_userfaultfd */ --- a/tools/testing/selftests/mm/uffd-unit-tests.c +++ b/tools/testing/selftests/mm/uffd-unit-tests.c @@ -5,11 +5,12 @@ * Copyright (C) 2015-2023 Red Hat, Inc. */
-#include <asm-generic/unistd.h> #include "uffd-common.h"
#include "../../../../mm/gup_test.h"
+#ifdef __NR_userfaultfd + /* The unit test doesn't need a large or random size, make it 32MB for now */ #define UFFD_TEST_MEM_SIZE (32UL << 20)
@@ -1558,3 +1559,14 @@ int main(int argc, char *argv[]) return ksft_get_fail_cnt() ? KSFT_FAIL : KSFT_PASS; }
+#else /* __NR_userfaultfd */ + +#warning "missing __NR_userfaultfd definition" + +int main(void) +{ + printf("Skipping %s (missing __NR_userfaultfd)\n", __file__); + return KSFT_SKIP; +} + +#endif /* __NR_userfaultfd */
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mingcong Bai jeffbai@aosc.io
commit d0d10eaedcb53740883d7e5d53c5e15c879b48fb upstream.
Based on the dmesg messages from the original reporter:
[ 4.964073] ACPI: _SB_.PCI0.LPCB.EC__.HKEY: BCTG evaluated but flagged as error
[ 4.964083] thinkpad_acpi: Error probing battery 2
Lenovo ThinkPad X131e also needs this battery quirk.
Reported-by: Fan Yang 804284660@qq.com Tested-by: Fan Yang 804284660@qq.com Co-developed-by: Xi Ruoyao xry111@xry111.site Signed-off-by: Xi Ruoyao xry111@xry111.site Signed-off-by: Mingcong Bai jeffbai@aosc.io Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20250221164825.77315-1-jeffbai@aosc.io Reviewed-by: Ilpo Järvinen ilpo.jarvinen@linux.intel.com Signed-off-by: Ilpo Järvinen ilpo.jarvinen@linux.intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/platform/x86/thinkpad_acpi.c | 1 + 1 file changed, 1 insertion(+)
--- a/drivers/platform/x86/thinkpad_acpi.c +++ b/drivers/platform/x86/thinkpad_acpi.c @@ -9958,6 +9958,7 @@ static const struct tpacpi_quirk battery * Individual addressing is broken on models that expose the * primary battery as BAT1. */ + TPACPI_Q_LNV('G', '8', true), /* ThinkPad X131e */ TPACPI_Q_LNV('8', 'F', true), /* Thinkpad X120e */ TPACPI_Q_LNV('J', '7', true), /* B5400 */ TPACPI_Q_LNV('J', 'I', true), /* Thinkpad 11e */
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ard Biesheuvel ardb@kernel.org
commit c00b413a96261faef4ce22329153c6abd4acef25 upstream.
The 5-level paging code parses the command line to look for the 'no5lvl' string, and does so very early, before sanitize_boot_params() has been called and has been given the opportunity to wipe bogus data from the fields in boot_params that are not covered by struct setup_header, and are therefore supposed to be initialized to zero by the bootloader.
This triggers an early boot crash when using syslinux-efi to boot a recent kernel built with CONFIG_X86_5LEVEL=y and CONFIG_EFI_STUB=n, as the 0xff padding that now fills the unused PE/COFF header is copied into boot_params by the bootloader, and interpreted as the top half of the command line pointer.
Fix this by sanitizing the boot_params before use. Note that there is no harm in calling this more than once; subsequent invocations are able to spot that the boot_params have already been cleaned up.
Signed-off-by: Ard Biesheuvel ardb@kernel.org Signed-off-by: Ingo Molnar mingo@kernel.org Cc: "H. Peter Anvin" hpa@zytor.com Cc: Linus Torvalds torvalds@linux-foundation.org Cc: stable@vger.kernel.org # v6.1+ Link: https://lore.kernel.org/r/20250306155915.342465-2-ardb+git@google.com Closes: https://lore.kernel.org/all/202503041549.35913.ulrich.gemkow@ikr.uni-stuttga... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/boot/compressed/pgtable_64.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/arch/x86/boot/compressed/pgtable_64.c +++ b/arch/x86/boot/compressed/pgtable_64.c @@ -1,6 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 #include "misc.h" #include <asm/bootparam.h> +#include <asm/bootparam_utils.h> #include <asm/e820/types.h> #include <asm/processor.h> #include "pgtable.h" @@ -107,6 +108,7 @@ asmlinkage void configure_5level_paging( bool l5_required = false;
/* Initialize boot_params. Required for cmdline_find_option_bool(). */ + sanitize_boot_params(bp); boot_params_ptr = bp;
/*
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ahmed S. Darwish darwi@linutronix.de
commit 8177c6bedb7013cf736137da586cf783922309dd upstream.
CPUID leaf 0x2 emits one-byte descriptors in its four output registers EAX, EBX, ECX, and EDX. For these descriptors to be valid, the most significant bit (MSB) of each register must be clear.
The historical Git commit:
019361a20f016 ("- pre6: Intel: start to add Pentium IV specific stuff (128-byte cacheline etc)...")
introduced leaf 0x2 output parsing. It only validated the MSBs of EAX, EBX, and ECX, but left EDX unchecked.
Validate EDX's most-significant bit.
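For reference, the same validity rule can be demonstrated from user space; a minimal sketch (x86 GCC/Clang, using the <cpuid.h> intrinsic, so not kernel code):

	#include <stdio.h>
	#include <stdint.h>
	#include <cpuid.h>

	int main(void)
	{
		uint32_t regs[4];

		/* Leaf 0x2 packs one-byte descriptors into EAX, EBX, ECX, EDX. */
		__cpuid(2, regs[0], regs[1], regs[2], regs[3]);

		/* A register only holds valid descriptors if bit 31 is clear;
		 * the fix below applies this test to all four registers. */
		for (int j = 0; j < 4; j++)
			if (regs[j] & (1u << 31))
				regs[j] = 0;

		printf("%08x %08x %08x %08x\n", regs[0], regs[1], regs[2], regs[3]);
		return 0;
	}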
Signed-off-by: Ahmed S. Darwish darwi@linutronix.de Signed-off-by: Ingo Molnar mingo@kernel.org Cc: stable@vger.kernel.org Cc: "H. Peter Anvin" hpa@zytor.com Cc: Linus Torvalds torvalds@linux-foundation.org Link: https://lore.kernel.org/r/20250304085152.51092-2-darwi@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/cacheinfo.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/x86/kernel/cpu/cacheinfo.c +++ b/arch/x86/kernel/cpu/cacheinfo.c @@ -808,7 +808,7 @@ void init_intel_cacheinfo(struct cpuinfo cpuid(2, ®s[0], ®s[1], ®s[2], ®s[3]);
/* If bit 31 is set, this is an unknown format */ - for (j = 0 ; j < 3 ; j++) + for (j = 0 ; j < 4 ; j++) if (regs[j] & (1 << 31)) regs[j] = 0;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ahmed S. Darwish darwi@linutronix.de
commit 1881148215c67151b146450fb89ec22fd92337a7 upstream.
CPUID leaf 0x2 emits one-byte descriptors in its four output registers EAX, EBX, ECX, and EDX. For these descriptors to be valid, the most significant bit (MSB) of each register must be clear.
Leaf 0x2 parsing at intel.c only validated the MSBs of EAX, EBX, and ECX, but left EDX unchecked.
Validate EDX's most-significant bit as well.
Fixes: e0ba94f14f74 ("x86/tlb_info: get last level TLB entry number of CPU") Signed-off-by: Ahmed S. Darwish darwi@linutronix.de Signed-off-by: Ingo Molnar mingo@kernel.org Cc: stable@kernel.org Cc: "H. Peter Anvin" hpa@zytor.com Cc: Linus Torvalds torvalds@linux-foundation.org Link: https://lore.kernel.org/r/20250304085152.51092-3-darwi@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/intel.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/x86/kernel/cpu/intel.c +++ b/arch/x86/kernel/cpu/intel.c @@ -836,7 +836,7 @@ static void intel_detect_tlb(struct cpui cpuid(2, ®s[0], ®s[1], ®s[2], ®s[3]);
/* If bit 31 is set, this is an unknown format */ - for (j = 0 ; j < 3 ; j++) + for (j = 0 ; j < 4 ; j++) if (regs[j] & (1 << 31)) regs[j] = 0;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ahmed S. Darwish darwi@linutronix.de
commit f6bdaab79ee4228a143ee1b4cb80416d6ffc0c63 upstream.
CPUID leaf 0x2's one-byte TLB descriptors report the number of entries for specific TLB types, among other properties.
Typically, each emitted descriptor implies the same number of entries for its respective TLB type(s). An emitted 0x63 descriptor is an exception: it implies 4 data TLB entries for 1GB pages and 32 data TLB entries for 2MB or 4MB pages.
For the TLB descriptor parsing code, the entry count for 1GB pages is encoded in the intel_tlb_table[] mapping, but the 2MB/4MB entry count is totally ignored.
Update leaf 0x2's parsing logic to account for the 32 data TLB entries for 2MB/4MB pages implied by the 0x63 descriptor.
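In other words, after this change a single emitted 0x63 descriptor contributes to three of the reported dTLB counts rather than one; roughly:

	/*
	 * Net effect of parsing an emitted 0x63 descriptor with this patch
	 * (per the descriptor semantics described above):
	 *
	 *   tlb_lld_1g[ENTRIES] >= 4     - from intel_tlb_table[], as before
	 *   tlb_lld_2m[ENTRIES] >= 32    - new, via TLB_0x63_2M_4M_ENTRIES
	 *   tlb_lld_4m[ENTRIES] >= 32    - new, via TLB_0x63_2M_4M_ENTRIES
	 */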
Fixes: e0ba94f14f74 ("x86/tlb_info: get last level TLB entry number of CPU") Signed-off-by: Ahmed S. Darwish darwi@linutronix.de Signed-off-by: Ingo Molnar mingo@kernel.org Cc: stable@kernel.org Cc: "H. Peter Anvin" hpa@zytor.com Cc: Linus Torvalds torvalds@linux-foundation.org Link: https://lore.kernel.org/r/20250304085152.51092-4-darwi@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/cpu/intel.c | 60 ++++++++++++++++++++++++++++---------------- 1 file changed, 39 insertions(+), 21 deletions(-)
--- a/arch/x86/kernel/cpu/intel.c +++ b/arch/x86/kernel/cpu/intel.c @@ -672,26 +672,37 @@ static unsigned int intel_size_cache(str } #endif
-#define TLB_INST_4K 0x01 -#define TLB_INST_4M 0x02 -#define TLB_INST_2M_4M 0x03 - -#define TLB_INST_ALL 0x05 -#define TLB_INST_1G 0x06 - -#define TLB_DATA_4K 0x11 -#define TLB_DATA_4M 0x12 -#define TLB_DATA_2M_4M 0x13 -#define TLB_DATA_4K_4M 0x14 - -#define TLB_DATA_1G 0x16 - -#define TLB_DATA0_4K 0x21 -#define TLB_DATA0_4M 0x22 -#define TLB_DATA0_2M_4M 0x23 - -#define STLB_4K 0x41 -#define STLB_4K_2M 0x42 +#define TLB_INST_4K 0x01 +#define TLB_INST_4M 0x02 +#define TLB_INST_2M_4M 0x03 + +#define TLB_INST_ALL 0x05 +#define TLB_INST_1G 0x06 + +#define TLB_DATA_4K 0x11 +#define TLB_DATA_4M 0x12 +#define TLB_DATA_2M_4M 0x13 +#define TLB_DATA_4K_4M 0x14 + +#define TLB_DATA_1G 0x16 +#define TLB_DATA_1G_2M_4M 0x17 + +#define TLB_DATA0_4K 0x21 +#define TLB_DATA0_4M 0x22 +#define TLB_DATA0_2M_4M 0x23 + +#define STLB_4K 0x41 +#define STLB_4K_2M 0x42 + +/* + * All of leaf 0x2's one-byte TLB descriptors implies the same number of + * entries for their respective TLB types. The 0x63 descriptor is an + * exception: it implies 4 dTLB entries for 1GB pages 32 dTLB entries + * for 2MB or 4MB pages. Encode descriptor 0x63 dTLB entry count for + * 2MB/4MB pages here, as its count for dTLB 1GB pages is already at the + * intel_tlb_table[] mapping. + */ +#define TLB_0x63_2M_4M_ENTRIES 32
static const struct _tlb_table intel_tlb_table[] = { { 0x01, TLB_INST_4K, 32, " TLB_INST 4 KByte pages, 4-way set associative" }, @@ -713,7 +724,8 @@ static const struct _tlb_table intel_tlb { 0x5c, TLB_DATA_4K_4M, 128, " TLB_DATA 4 KByte and 4 MByte pages" }, { 0x5d, TLB_DATA_4K_4M, 256, " TLB_DATA 4 KByte and 4 MByte pages" }, { 0x61, TLB_INST_4K, 48, " TLB_INST 4 KByte pages, full associative" }, - { 0x63, TLB_DATA_1G, 4, " TLB_DATA 1 GByte pages, 4-way set associative" }, + { 0x63, TLB_DATA_1G_2M_4M, 4, " TLB_DATA 1 GByte pages, 4-way set associative" + " (plus 32 entries TLB_DATA 2 MByte or 4 MByte pages, not encoded here)" }, { 0x6b, TLB_DATA_4K, 256, " TLB_DATA 4 KByte pages, 8-way associative" }, { 0x6c, TLB_DATA_2M_4M, 128, " TLB_DATA 2 MByte or 4 MByte pages, 8-way associative" }, { 0x6d, TLB_DATA_1G, 16, " TLB_DATA 1 GByte pages, fully associative" }, @@ -813,6 +825,12 @@ static void intel_tlb_lookup(const unsig if (tlb_lld_4m[ENTRIES] < intel_tlb_table[k].entries) tlb_lld_4m[ENTRIES] = intel_tlb_table[k].entries; break; + case TLB_DATA_1G_2M_4M: + if (tlb_lld_2m[ENTRIES] < TLB_0x63_2M_4M_ENTRIES) + tlb_lld_2m[ENTRIES] = TLB_0x63_2M_4M_ENTRIES; + if (tlb_lld_4m[ENTRIES] < TLB_0x63_2M_4M_ENTRIES) + tlb_lld_4m[ENTRIES] = TLB_0x63_2M_4M_ENTRIES; + fallthrough; case TLB_DATA_1G: if (tlb_lld_1g[ENTRIES] < intel_tlb_table[k].entries) tlb_lld_1g[ENTRIES] = intel_tlb_table[k].entries;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Matthew Brost matthew.brost@intel.com
commit ae482ec8cd1a85bde3307f71921a7780086fbec0 upstream.
Concurrent VM bind staging and zapping of PTEs from a userptr notifier do not work because the view of PTEs is not stable. VM binds cannot acquire the notifier lock during staging, as memory allocations are required. To resolve this race condition, use a staging tree for VM binds that is committed only under the userptr notifier lock during the final step of the bind. This ensures a consistent view of the PTEs in the userptr notifier.
A follow-up may restrict staging to VMs in fault mode, as this is the only mode in which the above race exists.
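The shape of the fix is a stage-then-publish split: the bind builds its updates into a shadow view without the notifier lock (memory allocation is allowed there), and only the final commit, done under the userptr notifier lock, copies the staged pointers into the live tree. A rough sketch of that split, using the staging[] array added by this patch; the helper names are invented for illustration and are not the actual driver code, which follows in the diff:

	/* Staging: no notifier lock held, allocations are fine here.
	 * (Illustrative helper, not part of the patch.) */
	static void xe_pt_stage_child(struct xe_pt_dir *dir, unsigned int idx,
				      struct xe_ptw *child)
	{
		dir->staging[idx] = child;
	}

	/* Commit: caller holds vm->userptr.notifier_lock, so the userptr
	 * notifier always sees either the old tree or the new one.
	 * (Illustrative helper, not part of the patch.) */
	static void xe_pt_commit_children(struct xe_pt_dir *dir,
					  unsigned int first, unsigned int count)
	{
		for (unsigned int i = first; i < first + count; i++)
			dir->children[i] = dir->staging[i];
	}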
v3: - Drop zap PTE change (Thomas) - s/xe_pt_entry/xe_pt_entry_staging (Thomas)
Suggested-by: Thomas Hellström thomas.hellstrom@linux.intel.com Cc: stable@vger.kernel.org Fixes: e8babb280b5e ("drm/xe: Convert multiple bind ops into single job") Fixes: a708f6501c69 ("drm/xe: Update PT layer with better error handling") Signed-off-by: Matthew Brost matthew.brost@intel.com Reviewed-by: Thomas Hellström thomas.hellstrom@linux.intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20250228073058.59510-5-thomas.... Signed-off-by: Thomas Hellström thomas.hellstrom@linux.intel.com (cherry picked from commit 6f39b0c5ef0385eae586760d10b9767168037aa5) Signed-off-by: Rodrigo Vivi rodrigo.vivi@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/xe/xe_pt.c | 58 +++++++++++++++++++++++++++------------- drivers/gpu/drm/xe/xe_pt_walk.c | 3 +- drivers/gpu/drm/xe/xe_pt_walk.h | 4 ++ 3 files changed, 46 insertions(+), 19 deletions(-)
--- a/drivers/gpu/drm/xe/xe_pt.c +++ b/drivers/gpu/drm/xe/xe_pt.c @@ -28,6 +28,8 @@ struct xe_pt_dir { struct xe_pt pt; /** @children: Array of page-table child nodes */ struct xe_ptw *children[XE_PDES]; + /** @staging: Array of page-table staging nodes */ + struct xe_ptw *staging[XE_PDES]; };
#if IS_ENABLED(CONFIG_DRM_XE_DEBUG_VM) @@ -48,9 +50,10 @@ static struct xe_pt_dir *as_xe_pt_dir(st return container_of(pt, struct xe_pt_dir, pt); }
-static struct xe_pt *xe_pt_entry(struct xe_pt_dir *pt_dir, unsigned int index) +static struct xe_pt * +xe_pt_entry_staging(struct xe_pt_dir *pt_dir, unsigned int index) { - return container_of(pt_dir->children[index], struct xe_pt, base); + return container_of(pt_dir->staging[index], struct xe_pt, base); }
static u64 __xe_pt_empty_pte(struct xe_tile *tile, struct xe_vm *vm, @@ -125,6 +128,7 @@ struct xe_pt *xe_pt_create(struct xe_vm } pt->bo = bo; pt->base.children = level ? as_xe_pt_dir(pt)->children : NULL; + pt->base.staging = level ? as_xe_pt_dir(pt)->staging : NULL;
if (vm->xef) xe_drm_client_add_bo(vm->xef->client, pt->bo); @@ -205,8 +209,8 @@ void xe_pt_destroy(struct xe_pt *pt, u32 struct xe_pt_dir *pt_dir = as_xe_pt_dir(pt);
for (i = 0; i < XE_PDES; i++) { - if (xe_pt_entry(pt_dir, i)) - xe_pt_destroy(xe_pt_entry(pt_dir, i), flags, + if (xe_pt_entry_staging(pt_dir, i)) + xe_pt_destroy(xe_pt_entry_staging(pt_dir, i), flags, deferred); } } @@ -375,8 +379,10 @@ xe_pt_insert_entry(struct xe_pt_stage_bi /* Continue building a non-connected subtree. */ struct iosys_map *map = &parent->bo->vmap;
- if (unlikely(xe_child)) + if (unlikely(xe_child)) { parent->base.children[offset] = &xe_child->base; + parent->base.staging[offset] = &xe_child->base; + }
xe_pt_write(xe_walk->vm->xe, map, offset, pte); parent->num_live++; @@ -613,6 +619,7 @@ xe_pt_stage_bind(struct xe_tile *tile, s .ops = &xe_pt_stage_bind_ops, .shifts = xe_normal_pt_shifts, .max_level = XE_PT_HIGHEST_LEVEL, + .staging = true, }, .vm = xe_vma_vm(vma), .tile = tile, @@ -872,7 +879,7 @@ static void xe_pt_cancel_bind(struct xe_ } }
-static void xe_pt_commit_locks_assert(struct xe_vma *vma) +static void xe_pt_commit_prepare_locks_assert(struct xe_vma *vma) { struct xe_vm *vm = xe_vma_vm(vma);
@@ -884,6 +891,16 @@ static void xe_pt_commit_locks_assert(st xe_vm_assert_held(vm); }
+static void xe_pt_commit_locks_assert(struct xe_vma *vma) +{ + struct xe_vm *vm = xe_vma_vm(vma); + + xe_pt_commit_prepare_locks_assert(vma); + + if (xe_vma_is_userptr(vma)) + lockdep_assert_held_read(&vm->userptr.notifier_lock); +} + static void xe_pt_commit(struct xe_vma *vma, struct xe_vm_pgtable_update *entries, u32 num_entries, struct llist_head *deferred) @@ -894,13 +911,17 @@ static void xe_pt_commit(struct xe_vma *
for (i = 0; i < num_entries; i++) { struct xe_pt *pt = entries[i].pt; + struct xe_pt_dir *pt_dir;
if (!pt->level) continue;
+ pt_dir = as_xe_pt_dir(pt); for (j = 0; j < entries[i].qwords; j++) { struct xe_pt *oldpte = entries[i].pt_entries[j].pt; + int j_ = j + entries[i].ofs;
+ pt_dir->children[j_] = pt_dir->staging[j_]; xe_pt_destroy(oldpte, xe_vma_vm(vma)->flags, deferred); } } @@ -912,7 +933,7 @@ static void xe_pt_abort_bind(struct xe_v { int i, j;
- xe_pt_commit_locks_assert(vma); + xe_pt_commit_prepare_locks_assert(vma);
for (i = num_entries - 1; i >= 0; --i) { struct xe_pt *pt = entries[i].pt; @@ -927,10 +948,10 @@ static void xe_pt_abort_bind(struct xe_v pt_dir = as_xe_pt_dir(pt); for (j = 0; j < entries[i].qwords; j++) { u32 j_ = j + entries[i].ofs; - struct xe_pt *newpte = xe_pt_entry(pt_dir, j_); + struct xe_pt *newpte = xe_pt_entry_staging(pt_dir, j_); struct xe_pt *oldpte = entries[i].pt_entries[j].pt;
- pt_dir->children[j_] = oldpte ? &oldpte->base : 0; + pt_dir->staging[j_] = oldpte ? &oldpte->base : 0; xe_pt_destroy(newpte, xe_vma_vm(vma)->flags, NULL); } } @@ -942,7 +963,7 @@ static void xe_pt_commit_prepare_bind(st { u32 i, j;
- xe_pt_commit_locks_assert(vma); + xe_pt_commit_prepare_locks_assert(vma);
for (i = 0; i < num_entries; i++) { struct xe_pt *pt = entries[i].pt; @@ -960,10 +981,10 @@ static void xe_pt_commit_prepare_bind(st struct xe_pt *newpte = entries[i].pt_entries[j].pt; struct xe_pt *oldpte = NULL;
- if (xe_pt_entry(pt_dir, j_)) - oldpte = xe_pt_entry(pt_dir, j_); + if (xe_pt_entry_staging(pt_dir, j_)) + oldpte = xe_pt_entry_staging(pt_dir, j_);
- pt_dir->children[j_] = &newpte->base; + pt_dir->staging[j_] = &newpte->base; entries[i].pt_entries[j].pt = oldpte; } } @@ -1513,6 +1534,7 @@ static unsigned int xe_pt_stage_unbind(s .ops = &xe_pt_stage_unbind_ops, .shifts = xe_normal_pt_shifts, .max_level = XE_PT_HIGHEST_LEVEL, + .staging = true, }, .tile = tile, .modified_start = xe_vma_start(vma), @@ -1554,7 +1576,7 @@ static void xe_pt_abort_unbind(struct xe { int i, j;
- xe_pt_commit_locks_assert(vma); + xe_pt_commit_prepare_locks_assert(vma);
for (i = num_entries - 1; i >= 0; --i) { struct xe_vm_pgtable_update *entry = &entries[i]; @@ -1567,7 +1589,7 @@ static void xe_pt_abort_unbind(struct xe continue;
for (j = entry->ofs; j < entry->ofs + entry->qwords; j++) - pt_dir->children[j] = + pt_dir->staging[j] = entries[i].pt_entries[j - entry->ofs].pt ? &entries[i].pt_entries[j - entry->ofs].pt->base : NULL; } @@ -1580,7 +1602,7 @@ xe_pt_commit_prepare_unbind(struct xe_vm { int i, j;
- xe_pt_commit_locks_assert(vma); + xe_pt_commit_prepare_locks_assert(vma);
for (i = 0; i < num_entries; ++i) { struct xe_vm_pgtable_update *entry = &entries[i]; @@ -1594,8 +1616,8 @@ xe_pt_commit_prepare_unbind(struct xe_vm pt_dir = as_xe_pt_dir(pt); for (j = entry->ofs; j < entry->ofs + entry->qwords; j++) { entry->pt_entries[j - entry->ofs].pt = - xe_pt_entry(pt_dir, j); - pt_dir->children[j] = NULL; + xe_pt_entry_staging(pt_dir, j); + pt_dir->staging[j] = NULL; } } } --- a/drivers/gpu/drm/xe/xe_pt_walk.c +++ b/drivers/gpu/drm/xe/xe_pt_walk.c @@ -74,7 +74,8 @@ int xe_pt_walk_range(struct xe_ptw *pare u64 addr, u64 end, struct xe_pt_walk *walk) { pgoff_t offset = xe_pt_offset(addr, level, walk); - struct xe_ptw **entries = parent->children ? parent->children : NULL; + struct xe_ptw **entries = walk->staging ? (parent->staging ?: NULL) : + (parent->children ?: NULL); const struct xe_pt_walk_ops *ops = walk->ops; enum page_walk_action action; struct xe_ptw *child; --- a/drivers/gpu/drm/xe/xe_pt_walk.h +++ b/drivers/gpu/drm/xe/xe_pt_walk.h @@ -11,12 +11,14 @@ /** * struct xe_ptw - base class for driver pagetable subclassing. * @children: Pointer to an array of children if any. + * @staging: Pointer to an array of staging if any. * * Drivers could subclass this, and if it's a page-directory, typically * embed an array of xe_ptw pointers. */ struct xe_ptw { struct xe_ptw **children; + struct xe_ptw **staging; };
/** @@ -41,6 +43,8 @@ struct xe_pt_walk { * as shared pagetables. */ bool shared_pt_mode; + /** @staging: Walk staging PT structure */ + bool staging; };
/**
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Hellström thomas.hellstrom@linux.intel.com
commit e3e2e7fc4cd8414c9a966ef1b344db543f8614f4 upstream.
Add a proper #ifndef guard around the xe_hmm.h header, fix the spacing, and, since the documentation mostly follows kerneldoc format, make it kerneldoc. Also prepare for upcoming -stable fixes.
Fixes: 81e058a3e7fd ("drm/xe: Introduce helper to populate userptr") Cc: Oak Zeng oak.zeng@intel.com Cc: stable@vger.kernel.org # v6.10+ Signed-off-by: Thomas Hellström thomas.hellstrom@linux.intel.com Reviewed-by: Matthew Auld matthew.auld@intel.com Acked-by: Matthew Brost matthew.brost@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20250304173342.22009-2-thomas.... (cherry picked from commit bbe2b06b55bc061c8fcec034ed26e88287f39143) Signed-off-by: Rodrigo Vivi rodrigo.vivi@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/xe/xe_hmm.c | 9 +++------ drivers/gpu/drm/xe/xe_hmm.h | 5 +++++ 2 files changed, 8 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_hmm.c b/drivers/gpu/drm/xe/xe_hmm.c index 2e4ae61567d8..6ddcf88d8a39 100644 --- a/drivers/gpu/drm/xe/xe_hmm.c +++ b/drivers/gpu/drm/xe/xe_hmm.c @@ -19,11 +19,10 @@ static u64 xe_npages_in_range(unsigned long start, unsigned long end) return (end - start) >> PAGE_SHIFT; }
-/* +/** * xe_mark_range_accessed() - mark a range is accessed, so core mm * have such information for memory eviction or write back to * hard disk - * * @range: the range to mark * @write: if write to this range, we mark pages in this range * as dirty @@ -43,11 +42,10 @@ static void xe_mark_range_accessed(struct hmm_range *range, bool write) } }
-/* +/** * xe_build_sg() - build a scatter gather table for all the physical pages/pfn * in a hmm_range. dma-map pages if necessary. dma-address is save in sg table * and will be used to program GPU page table later. - * * @xe: the xe device who will access the dma-address in sg table * @range: the hmm range that we build the sg table from. range->hmm_pfns[] * has the pfn numbers of pages that back up this hmm address range. @@ -112,9 +110,8 @@ static int xe_build_sg(struct xe_device *xe, struct hmm_range *range, return ret; }
-/* +/** * xe_hmm_userptr_free_sg() - Free the scatter gather table of userptr - * * @uvma: the userptr vma which hold the scatter gather table * * With function xe_userptr_populate_range, we allocate storage of diff --git a/drivers/gpu/drm/xe/xe_hmm.h b/drivers/gpu/drm/xe/xe_hmm.h index 909dc2bdcd97..9602cb7d976d 100644 --- a/drivers/gpu/drm/xe/xe_hmm.h +++ b/drivers/gpu/drm/xe/xe_hmm.h @@ -3,9 +3,14 @@ * Copyright © 2024 Intel Corporation */
+#ifndef _XE_HMM_H_ +#define _XE_HMM_H_ + #include <linux/types.h>
struct xe_userptr_vma;
int xe_hmm_userptr_populate_range(struct xe_userptr_vma *uvma, bool is_mm_mmap_locked); + void xe_hmm_userptr_free_sg(struct xe_userptr_vma *uvma); +#endif
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Hellström thomas.hellstrom@linux.intel.com
commit 0a98219bcc961edd3388960576e4353e123b4a51 upstream.
The pfns that we obtain from hmm_range_fault() point to pages that we don't have a reference on, and the guarantee that they are still in the CPU page-tables is that the notifier lock must be held and the notifier seqno is still valid.
So while building the sg table and marking the pages accessed / dirty we need to hold this lock with a validated seqno.
However, the lock is reclaim-tainted, which makes sg_alloc_table_from_pages_segment() unusable, since it internally allocates memory.
Instead build the sg-table manually. For the non-iommu case this might lead to fewer coalesces, but if that's a problem it can be fixed up later in the resource cursor code. For the iommu case, the whole sg-table may still be coalesced to a single contiguous device va region.
This avoids marking pages that we don't own dirty and accessed, and it also avoids dereferencing struct pages that we don't own.
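Put together, the resulting ordering is: size and allocate the sg-table outside the notifier lock, then take the lock, re-check the notifier sequence, and only then fill and dma-map the table. Condensed from the hunks below, with error handling and the interruptible lock variants trimmed:

	ret = xe_alloc_sg(vm->xe, &userptr->sgt, &hmm_range,
			  &vm->userptr.notifier_lock);	/* may allocate memory */
	if (ret)
		return ret;

	down_read(&vm->userptr.notifier_lock);
	if (mmu_interval_read_retry(hmm_range.notifier, hmm_range.notifier_seq)) {
		up_read(&vm->userptr.notifier_lock);
		return -EAGAIN;		/* range changed, caller retries */
	}

	/* No allocations from here on: just fill the already-sized table. */
	ret = xe_build_sg(vm->xe, &hmm_range, &userptr->sgt,
			  &vm->userptr.notifier_lock, write);
	if (!ret)
		xe_mark_range_accessed(&hmm_range, write);
	up_read(&vm->userptr.notifier_lock);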
v2: - Use assert to check whether hmm pfns are valid (Matthew Auld) - Take into account that large pages may cross range boundaries (Matthew Auld)
v3: - Don't unnecessarily check for a non-freed sg-table. (Matthew Auld) - Add a missing up_read() in an error path. (Matthew Auld)
Fixes: 81e058a3e7fd ("drm/xe: Introduce helper to populate userptr") Cc: Oak Zeng oak.zeng@intel.com Cc: stable@vger.kernel.org # v6.10+ Signed-off-by: Thomas Hellström thomas.hellstrom@linux.intel.com Reviewed-by: Matthew Auld matthew.auld@intel.com Acked-by: Matthew Brost matthew.brost@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20250304173342.22009-3-thomas.... (cherry picked from commit ea3e66d280ce2576664a862693d1da8fd324c317) Signed-off-by: Rodrigo Vivi rodrigo.vivi@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/xe/xe_hmm.c | 120 +++++++++++++++++++++++++++++++++----------- 1 file changed, 90 insertions(+), 30 deletions(-)
--- a/drivers/gpu/drm/xe/xe_hmm.c +++ b/drivers/gpu/drm/xe/xe_hmm.c @@ -42,6 +42,42 @@ static void xe_mark_range_accessed(struc } }
+static int xe_alloc_sg(struct xe_device *xe, struct sg_table *st, + struct hmm_range *range, struct rw_semaphore *notifier_sem) +{ + unsigned long i, npages, hmm_pfn; + unsigned long num_chunks = 0; + int ret; + + /* HMM docs says this is needed. */ + ret = down_read_interruptible(notifier_sem); + if (ret) + return ret; + + if (mmu_interval_read_retry(range->notifier, range->notifier_seq)) { + up_read(notifier_sem); + return -EAGAIN; + } + + npages = xe_npages_in_range(range->start, range->end); + for (i = 0; i < npages;) { + unsigned long len; + + hmm_pfn = range->hmm_pfns[i]; + xe_assert(xe, hmm_pfn & HMM_PFN_VALID); + + len = 1UL << hmm_pfn_to_map_order(hmm_pfn); + + /* If order > 0 the page may extend beyond range->start */ + len -= (hmm_pfn & ~HMM_PFN_FLAGS) & (len - 1); + i += len; + num_chunks++; + } + up_read(notifier_sem); + + return sg_alloc_table(st, num_chunks, GFP_KERNEL); +} + /** * xe_build_sg() - build a scatter gather table for all the physical pages/pfn * in a hmm_range. dma-map pages if necessary. dma-address is save in sg table @@ -50,6 +86,7 @@ static void xe_mark_range_accessed(struc * @range: the hmm range that we build the sg table from. range->hmm_pfns[] * has the pfn numbers of pages that back up this hmm address range. * @st: pointer to the sg table. + * @notifier_sem: The xe notifier lock. * @write: whether we write to this range. This decides dma map direction * for system pages. If write we map it bi-diretional; otherwise * DMA_TO_DEVICE @@ -76,38 +113,41 @@ static void xe_mark_range_accessed(struc * Returns 0 if successful; -ENOMEM if fails to allocate memory */ static int xe_build_sg(struct xe_device *xe, struct hmm_range *range, - struct sg_table *st, bool write) + struct sg_table *st, + struct rw_semaphore *notifier_sem, + bool write) { + unsigned long npages = xe_npages_in_range(range->start, range->end); struct device *dev = xe->drm.dev; - struct page **pages; - u64 i, npages; - int ret; - - npages = xe_npages_in_range(range->start, range->end); - pages = kvmalloc_array(npages, sizeof(*pages), GFP_KERNEL); - if (!pages) - return -ENOMEM; - - for (i = 0; i < npages; i++) { - pages[i] = hmm_pfn_to_page(range->hmm_pfns[i]); - xe_assert(xe, !is_device_private_page(pages[i])); - } - - ret = sg_alloc_table_from_pages_segment(st, pages, npages, 0, npages << PAGE_SHIFT, - xe_sg_segment_size(dev), GFP_KERNEL); - if (ret) - goto free_pages; - - ret = dma_map_sgtable(dev, st, write ? DMA_BIDIRECTIONAL : DMA_TO_DEVICE, - DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_NO_KERNEL_MAPPING); - if (ret) { - sg_free_table(st); - st = NULL; + struct scatterlist *sgl; + struct page *page; + unsigned long i, j; + + lockdep_assert_held(notifier_sem); + + i = 0; + for_each_sg(st->sgl, sgl, st->nents, j) { + unsigned long hmm_pfn, size; + + hmm_pfn = range->hmm_pfns[i]; + page = hmm_pfn_to_page(hmm_pfn); + xe_assert(xe, !is_device_private_page(page)); + + size = 1UL << hmm_pfn_to_map_order(hmm_pfn); + size -= page_to_pfn(page) & (size - 1); + i += size; + + if (unlikely(j == st->nents - 1)) { + if (i > npages) + size -= (i - npages); + sg_mark_end(sgl); + } + sg_set_page(sgl, page, size << PAGE_SHIFT, 0); } + xe_assert(xe, i == npages);
-free_pages: - kvfree(pages); - return ret; + return dma_map_sgtable(dev, st, write ? DMA_BIDIRECTIONAL : DMA_TO_DEVICE, + DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_NO_KERNEL_MAPPING); }
/** @@ -235,16 +275,36 @@ int xe_hmm_userptr_populate_range(struct if (ret) goto free_pfns;
- ret = xe_build_sg(vm->xe, &hmm_range, &userptr->sgt, write); + ret = xe_alloc_sg(vm->xe, &userptr->sgt, &hmm_range, &vm->userptr.notifier_lock); if (ret) goto free_pfns;
+ ret = down_read_interruptible(&vm->userptr.notifier_lock); + if (ret) + goto free_st; + + if (mmu_interval_read_retry(hmm_range.notifier, hmm_range.notifier_seq)) { + ret = -EAGAIN; + goto out_unlock; + } + + ret = xe_build_sg(vm->xe, &hmm_range, &userptr->sgt, + &vm->userptr.notifier_lock, write); + if (ret) + goto out_unlock; + xe_mark_range_accessed(&hmm_range, write); userptr->sg = &userptr->sgt; userptr->notifier_seq = hmm_range.notifier_seq; + up_read(&vm->userptr.notifier_lock); + kvfree(pfns); + return 0;
+out_unlock: + up_read(&vm->userptr.notifier_lock); +free_st: + sg_free_table(&userptr->sgt); free_pfns: kvfree(pfns); return ret; } -
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Hellström thomas.hellstrom@linux.intel.com
commit 1414d95d5805b1dc221d22db9b8dc5287ef083bc upstream.
Fix a (harmless) misplaced #endif leading to declarations appearing multiple times.
Fixes: 0eb2a18a8fad ("drm/xe: Implement VM snapshot support for BO's and userptr") Cc: Maarten Lankhorst maarten.lankhorst@linux.intel.com Cc: José Roberto de Souza jose.souza@intel.com Cc: stable@vger.kernel.org # v6.12+ Signed-off-by: Thomas Hellström thomas.hellstrom@linux.intel.com Reviewed-by: Lucas De Marchi lucas.demarchi@intel.com Reviewed-by: Tejas Upadhyay tejas.upadhyay@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20250228073058.59510-3-thomas.... (cherry picked from commit fcc20a4c752214b3e25632021c57d7d1d71ee1dd) Signed-off-by: Rodrigo Vivi rodrigo.vivi@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/xe/xe_vm.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/gpu/drm/xe/xe_vm.h +++ b/drivers/gpu/drm/xe/xe_vm.h @@ -275,9 +275,9 @@ static inline void vm_dbg(const struct d const char *format, ...) { /* noop */ } #endif -#endif
struct xe_vm_snapshot *xe_vm_snapshot_capture(struct xe_vm *vm); void xe_vm_snapshot_capture_delayed(struct xe_vm_snapshot *snap); void xe_vm_snapshot_print(struct xe_vm_snapshot *snap, struct drm_printer *p); void xe_vm_snapshot_free(struct xe_vm_snapshot *snap); +#endif
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Hellström thomas.hellstrom@linux.intel.com
commit e775e2a060d99180edc5366fb9f4299d0f07b66c upstream.
If a userptr vma subject to prefetching was already invalidated or invalidated during the prefetch operation, the operation would repeatedly return -EAGAIN, which would typically cause an infinite loop.
Validate the userptr to ensure this doesn't happen.
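Concretely, the prefetch op now re-pins the userptr while the bind ioctl ops are parsed, so a stale notifier sequence is refreshed up front instead of being rediscovered as -EAGAIN on every retry. In condensed form (a simplified view of the hunk below):

	case DRM_GPUVA_OP_PREFETCH:
		vma = gpuva_to_vma(op->base.prefetch.va);

		if (xe_vma_is_userptr(vma)) {
			/* Re-validate an already-invalidated userptr here so
			 * the later bind step cannot loop on -EAGAIN. */
			err = xe_vma_userptr_pin_pages(to_userptr_vma(vma));
			if (err)
				return err;
		}
		break;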
v2: - Don't fallthrough from UNMAP to PREFETCH (Matthew Brost)
Fixes: 5bd24e78829a ("drm/xe/vm: Subclass userptr vmas") Fixes: 617eebb9c480 ("drm/xe: Fix array of binds") Cc: Matthew Brost matthew.brost@intel.com Cc: stable@vger.kernel.org # v6.9+ Suggested-by: Matthew Brost matthew.brost@intel.com Signed-off-by: Thomas Hellström thomas.hellstrom@linux.intel.com Reviewed-by: Matthew Brost matthew.brost@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20250228073058.59510-2-thomas.... (cherry picked from commit 03c346d4d0d85d210d549d43c8cfb3dfb7f20e0a) Signed-off-by: Rodrigo Vivi rodrigo.vivi@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/xe/xe_vm.c | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-)
--- a/drivers/gpu/drm/xe/xe_vm.c +++ b/drivers/gpu/drm/xe/xe_vm.c @@ -2284,8 +2284,17 @@ static int vm_bind_ioctl_ops_parse(struc break; } case DRM_GPUVA_OP_UNMAP: + xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask); + break; case DRM_GPUVA_OP_PREFETCH: - /* FIXME: Need to skip some prefetch ops */ + vma = gpuva_to_vma(op->base.prefetch.va); + + if (xe_vma_is_userptr(vma)) { + err = xe_vma_userptr_pin_pages(to_userptr_vma(vma)); + if (err) + return err; + } + xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask); break; default:
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Krister Johansen kjlx@templeofstupid.com
commit 022bfe24aad8937705704ff2e414b100cf0f2e1a upstream.
If multiple connection requests attempt to create an implicit mptcp endpoint in parallel, more than one caller may end up in mptcp_pm_nl_append_new_local_addr because none found the address in local_addr_list during their call to mptcp_pm_nl_get_local_id. In this case, the concurrent new_local_addr calls may delete the address entry created by the previous caller. These deletes use synchronize_rcu, but this is not permitted in some of the contexts where this function may be called. During packet recv, the caller may be in an RCU read critical section and have preemption disabled.
An example stack:
BUG: scheduling while atomic: swapper/2/0/0x00000302
Call Trace:
 <IRQ>
 dump_stack_lvl (lib/dump_stack.c:117 (discriminator 1))
 dump_stack (lib/dump_stack.c:124)
 __schedule_bug (kernel/sched/core.c:5943)
 schedule_debug.constprop.0 (arch/x86/include/asm/preempt.h:33 kernel/sched/core.c:5970)
 __schedule (arch/x86/include/asm/jump_label.h:27 include/linux/jump_label.h:207 kernel/sched/features.h:29 kernel/sched/core.c:6621)
 schedule (arch/x86/include/asm/preempt.h:84 kernel/sched/core.c:6804 kernel/sched/core.c:6818)
 schedule_timeout (kernel/time/timer.c:2160)
 wait_for_completion (kernel/sched/completion.c:96 kernel/sched/completion.c:116 kernel/sched/completion.c:127 kernel/sched/completion.c:148)
 __wait_rcu_gp (include/linux/rcupdate.h:311 kernel/rcu/update.c:444)
 synchronize_rcu (kernel/rcu/tree.c:3609)
 mptcp_pm_nl_append_new_local_addr (net/mptcp/pm_netlink.c:966 net/mptcp/pm_netlink.c:1061)
 mptcp_pm_nl_get_local_id (net/mptcp/pm_netlink.c:1164)
 mptcp_pm_get_local_id (net/mptcp/pm.c:420)
 subflow_check_req (net/mptcp/subflow.c:98 net/mptcp/subflow.c:213)
 subflow_v4_route_req (net/mptcp/subflow.c:305)
 tcp_conn_request (net/ipv4/tcp_input.c:7216)
 subflow_v4_conn_request (net/mptcp/subflow.c:651)
 tcp_rcv_state_process (net/ipv4/tcp_input.c:6709)
 tcp_v4_do_rcv (net/ipv4/tcp_ipv4.c:1934)
 tcp_v4_rcv (net/ipv4/tcp_ipv4.c:2334)
 ip_protocol_deliver_rcu (net/ipv4/ip_input.c:205 (discriminator 1))
 ip_local_deliver_finish (include/linux/rcupdate.h:813 net/ipv4/ip_input.c:234)
 ip_local_deliver (include/linux/netfilter.h:314 include/linux/netfilter.h:308 net/ipv4/ip_input.c:254)
 ip_sublist_rcv_finish (include/net/dst.h:461 net/ipv4/ip_input.c:580)
 ip_sublist_rcv (net/ipv4/ip_input.c:640)
 ip_list_rcv (net/ipv4/ip_input.c:675)
 __netif_receive_skb_list_core (net/core/dev.c:5583 net/core/dev.c:5631)
 netif_receive_skb_list_internal (net/core/dev.c:5685 net/core/dev.c:5774)
 napi_complete_done (include/linux/list.h:37 include/net/gro.h:449 include/net/gro.h:444 net/core/dev.c:6114)
 igb_poll (drivers/net/ethernet/intel/igb/igb_main.c:8244) igb
 __napi_poll (net/core/dev.c:6582)
 net_rx_action (net/core/dev.c:6653 net/core/dev.c:6787)
 handle_softirqs (kernel/softirq.c:553)
 __irq_exit_rcu (kernel/softirq.c:588 kernel/softirq.c:427 kernel/softirq.c:636)
 irq_exit_rcu (kernel/softirq.c:651)
 common_interrupt (arch/x86/kernel/irq.c:247 (discriminator 14))
 </IRQ>
This problem seems particularly prevalent if the user advertises an endpoint that has a different external vs internal address. In the case where the external address is advertised and multiple connections already exist, multiple subflow SYNs arrive in parallel which tends to trigger the race during creation of the first local_addr_list entries which have the internal address instead.
Fix by skipping the replacement of an existing implicit local address if called via mptcp_pm_nl_get_local_id.
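In condensed form, the new 'replace' argument makes the recv-path caller reuse the existing implicit entry's id instead of deleting and re-adding it (the part that needed synchronize_rcu); only the netlink add path keeps the replacement behaviour. Simplified from the hunk below:

	/* Condensed from mptcp_pm_nl_append_new_local_addr() below. */
	if (!replace) {
		/* mptcp_pm_nl_get_local_id() path (packet recv, possibly under
		 * rcu_read_lock()): just return the existing implicit id. */
		kfree(entry);
		ret = cur->addr.id;
		goto out;
	}
	/* Netlink add path (replace == true): the existing delete-and-replace
	 * logic runs and may call synchronize_rcu(), which is fine there. */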
Fixes: d045b9eb95a9 ("mptcp: introduce implicit endpoints") Cc: stable@vger.kernel.org Suggested-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Krister Johansen kjlx@templeofstupid.com Reviewed-by: Matthieu Baerts (NGI0) matttbe@kernel.org Signed-off-by: Matthieu Baerts (NGI0) matttbe@kernel.org Link: https://patch.msgid.link/20250303-net-mptcp-fix-sched-while-atomic-v1-1-f6a2... Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/mptcp/pm_netlink.c | 18 +++++++++++++++--- 1 file changed, 15 insertions(+), 3 deletions(-)
--- a/net/mptcp/pm_netlink.c +++ b/net/mptcp/pm_netlink.c @@ -968,7 +968,7 @@ static void __mptcp_pm_release_addr_entr
static int mptcp_pm_nl_append_new_local_addr(struct pm_nl_pernet *pernet, struct mptcp_pm_addr_entry *entry, - bool needs_id) + bool needs_id, bool replace) { struct mptcp_pm_addr_entry *cur, *del_entry = NULL; unsigned int addr_max; @@ -1008,6 +1008,17 @@ static int mptcp_pm_nl_append_new_local_ if (entry->addr.id) goto out;
+ /* allow callers that only need to look up the local + * addr's id to skip replacement. This allows them to + * avoid calling synchronize_rcu in the packet recv + * path. + */ + if (!replace) { + kfree(entry); + ret = cur->addr.id; + goto out; + } + pernet->addrs--; entry->addr.id = cur->addr.id; list_del_rcu(&cur->list); @@ -1160,7 +1171,7 @@ int mptcp_pm_nl_get_local_id(struct mptc entry->ifindex = 0; entry->flags = MPTCP_PM_ADDR_FLAG_IMPLICIT; entry->lsk = NULL; - ret = mptcp_pm_nl_append_new_local_addr(pernet, entry, true); + ret = mptcp_pm_nl_append_new_local_addr(pernet, entry, true, false); if (ret < 0) kfree(entry);
@@ -1432,7 +1443,8 @@ int mptcp_pm_nl_add_addr_doit(struct sk_ } } ret = mptcp_pm_nl_append_new_local_addr(pernet, entry, - !mptcp_pm_has_addr_attr_id(attr, info)); + !mptcp_pm_has_addr_attr_id(attr, info), + true); if (ret < 0) { GENL_SET_ERR_MSG_FMT(info, "too many addresses or duplicate one: %d", ret); goto out_free;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tvrtko Ursulin tvrtko.ursulin@igalia.com
commit 54f94dc7f6b4db45dbc23b4db3d20c7194e2c54f upstream.
Any rules using engine matching are currently broken due to RTP processing happening too early in init, before the list of hardware engines has been initialised.
Fix this by moving workaround processing to later in the driver probe sequence, to just before the processed list is used for the first time.
Looking at the debugfs gt0/workarounds entry on ADL-P, we notice that 14011060649 should be present. Before the patch we see:
GT Workarounds 14011059788 14015795083
And with the patch:
GT Workarounds 14011060649 14011059788 14015795083
Signed-off-by: Tvrtko Ursulin tvrtko.ursulin@igalia.com Cc: Lucas De Marchi lucas.demarchi@intel.com Cc: Matt Roper matthew.d.roper@intel.com Cc: stable@vger.kernel.org # v6.11+ Reviewed-by: Lucas De Marchi lucas.demarchi@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20250227101304.46660-2-tvrtko.... Signed-off-by: Lucas De Marchi lucas.demarchi@intel.com (cherry picked from commit 25d434cef791e03cf40680f5441b576c639bfa84) Signed-off-by: Rodrigo Vivi rodrigo.vivi@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/xe/xe_gt.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/gpu/drm/xe/xe_gt.c +++ b/drivers/gpu/drm/xe/xe_gt.c @@ -379,9 +379,7 @@ int xe_gt_init_early(struct xe_gt *gt) if (err) return err;
- xe_wa_process_gt(gt); xe_wa_process_oob(gt); - xe_tuning_process_gt(gt);
xe_force_wake_init_gt(gt, gt_to_fw(gt)); spin_lock_init(>->global_invl_lock); @@ -469,6 +467,8 @@ static int all_fw_domain_init(struct xe_ goto err_hw_fence_irq;
xe_gt_mcr_set_implicit_defaults(gt); + xe_wa_process_gt(gt); + xe_tuning_process_gt(gt); xe_reg_sr_apply_mmio(>->reg_sr, gt);
err = xe_gt_clock_init(gt);
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Hellström thomas.hellstrom@linux.intel.com
commit 84211b1c0db6b9dbe0020fa97192fb9661617f24 upstream.
Fix fault mode invalidation racing with unbind leading to the PTE zapping potentially traversing an invalid page-table tree. Do this by holding the notifier lock across PTE zapping. This might transfer any contention waiting on the notifier seqlock read side to the notifier lock read side, but that shouldn't be a major problem.
At the same time get rid of the open-coded invalidation in the bind code by relying on the notifier even when the vma bind is not yet committed.
Finally let userptr invalidation call a dedicated xe_vm function performing a full invalidation.
Fixes: e8babb280b5e ("drm/xe: Convert multiple bind ops into single job") Cc: Thomas Hellström thomas.hellstrom@linux.intel.com Cc: Matthew Brost matthew.brost@intel.com Cc: Matthew Auld matthew.auld@intel.com Cc: stable@vger.kernel.org # v6.12+ Signed-off-by: Thomas Hellström thomas.hellstrom@linux.intel.com Reviewed-by: Matthew Brost matthew.brost@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20250228073058.59510-4-thomas.... (cherry picked from commit 100a5b8dadfca50d91d9a4c9fc01431b42a25cab) Signed-off-by: Rodrigo Vivi rodrigo.vivi@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/xe/xe_pt.c | 38 ++++------------- drivers/gpu/drm/xe/xe_vm.c | 85 +++++++++++++++++++++++++-------------- drivers/gpu/drm/xe/xe_vm.h | 8 +++ drivers/gpu/drm/xe/xe_vm_types.h | 4 - 4 files changed, 75 insertions(+), 60 deletions(-)
--- a/drivers/gpu/drm/xe/xe_pt.c +++ b/drivers/gpu/drm/xe/xe_pt.c @@ -1233,42 +1233,22 @@ static int vma_check_userptr(struct xe_v return 0;
uvma = to_userptr_vma(vma); - notifier_seq = uvma->userptr.notifier_seq; + if (xe_pt_userptr_inject_eagain(uvma)) + xe_vma_userptr_force_invalidate(uvma);
- if (uvma->userptr.initial_bind && !xe_vm_in_fault_mode(vm)) - return 0; + notifier_seq = uvma->userptr.notifier_seq;
if (!mmu_interval_read_retry(&uvma->userptr.notifier, - notifier_seq) && - !xe_pt_userptr_inject_eagain(uvma)) + notifier_seq)) return 0;
- if (xe_vm_in_fault_mode(vm)) { + if (xe_vm_in_fault_mode(vm)) return -EAGAIN; - } else { - spin_lock(&vm->userptr.invalidated_lock); - list_move_tail(&uvma->userptr.invalidate_link, - &vm->userptr.invalidated); - spin_unlock(&vm->userptr.invalidated_lock); - - if (xe_vm_in_preempt_fence_mode(vm)) { - struct dma_resv_iter cursor; - struct dma_fence *fence; - long err; - - dma_resv_iter_begin(&cursor, xe_vm_resv(vm), - DMA_RESV_USAGE_BOOKKEEP); - dma_resv_for_each_fence_unlocked(&cursor, fence) - dma_fence_enable_sw_signaling(fence); - dma_resv_iter_end(&cursor); - - err = dma_resv_wait_timeout(xe_vm_resv(vm), - DMA_RESV_USAGE_BOOKKEEP, - false, MAX_SCHEDULE_TIMEOUT); - XE_WARN_ON(err <= 0); - } - }
+ /* + * Just continue the operation since exec or rebind worker + * will take care of rebinding. + */ return 0; }
--- a/drivers/gpu/drm/xe/xe_vm.c +++ b/drivers/gpu/drm/xe/xe_vm.c @@ -580,51 +580,26 @@ out_unlock_outer: trace_xe_vm_rebind_worker_exit(vm); }
-static bool vma_userptr_invalidate(struct mmu_interval_notifier *mni, - const struct mmu_notifier_range *range, - unsigned long cur_seq) +static void __vma_userptr_invalidate(struct xe_vm *vm, struct xe_userptr_vma *uvma) { - struct xe_userptr *userptr = container_of(mni, typeof(*userptr), notifier); - struct xe_userptr_vma *uvma = container_of(userptr, typeof(*uvma), userptr); + struct xe_userptr *userptr = &uvma->userptr; struct xe_vma *vma = &uvma->vma; - struct xe_vm *vm = xe_vma_vm(vma); struct dma_resv_iter cursor; struct dma_fence *fence; long err;
- xe_assert(vm->xe, xe_vma_is_userptr(vma)); - trace_xe_vma_userptr_invalidate(vma); - - if (!mmu_notifier_range_blockable(range)) - return false; - - vm_dbg(&xe_vma_vm(vma)->xe->drm, - "NOTIFIER: addr=0x%016llx, range=0x%016llx", - xe_vma_start(vma), xe_vma_size(vma)); - - down_write(&vm->userptr.notifier_lock); - mmu_interval_set_seq(mni, cur_seq); - - /* No need to stop gpu access if the userptr is not yet bound. */ - if (!userptr->initial_bind) { - up_write(&vm->userptr.notifier_lock); - return true; - } - /* * Tell exec and rebind worker they need to repin and rebind this * userptr. */ if (!xe_vm_in_fault_mode(vm) && - !(vma->gpuva.flags & XE_VMA_DESTROYED) && vma->tile_present) { + !(vma->gpuva.flags & XE_VMA_DESTROYED)) { spin_lock(&vm->userptr.invalidated_lock); list_move_tail(&userptr->invalidate_link, &vm->userptr.invalidated); spin_unlock(&vm->userptr.invalidated_lock); }
- up_write(&vm->userptr.notifier_lock); - /* * Preempt fences turn into schedule disables, pipeline these. * Note that even in fault mode, we need to wait for binds and @@ -642,11 +617,35 @@ static bool vma_userptr_invalidate(struc false, MAX_SCHEDULE_TIMEOUT); XE_WARN_ON(err <= 0);
- if (xe_vm_in_fault_mode(vm)) { + if (xe_vm_in_fault_mode(vm) && userptr->initial_bind) { err = xe_vm_invalidate_vma(vma); XE_WARN_ON(err); } +} + +static bool vma_userptr_invalidate(struct mmu_interval_notifier *mni, + const struct mmu_notifier_range *range, + unsigned long cur_seq) +{ + struct xe_userptr_vma *uvma = container_of(mni, typeof(*uvma), userptr.notifier); + struct xe_vma *vma = &uvma->vma; + struct xe_vm *vm = xe_vma_vm(vma); + + xe_assert(vm->xe, xe_vma_is_userptr(vma)); + trace_xe_vma_userptr_invalidate(vma);
+ if (!mmu_notifier_range_blockable(range)) + return false; + + vm_dbg(&xe_vma_vm(vma)->xe->drm, + "NOTIFIER: addr=0x%016llx, range=0x%016llx", + xe_vma_start(vma), xe_vma_size(vma)); + + down_write(&vm->userptr.notifier_lock); + mmu_interval_set_seq(mni, cur_seq); + + __vma_userptr_invalidate(vm, uvma); + up_write(&vm->userptr.notifier_lock); trace_xe_vma_userptr_invalidate_complete(vma);
return true; @@ -656,6 +655,34 @@ static const struct mmu_interval_notifie .invalidate = vma_userptr_invalidate, };
+#if IS_ENABLED(CONFIG_DRM_XE_USERPTR_INVAL_INJECT) +/** + * xe_vma_userptr_force_invalidate() - force invalidate a userptr + * @uvma: The userptr vma to invalidate + * + * Perform a forced userptr invalidation for testing purposes. + */ +void xe_vma_userptr_force_invalidate(struct xe_userptr_vma *uvma) +{ + struct xe_vm *vm = xe_vma_vm(&uvma->vma); + + /* Protect against concurrent userptr pinning */ + lockdep_assert_held(&vm->lock); + /* Protect against concurrent notifiers */ + lockdep_assert_held(&vm->userptr.notifier_lock); + /* + * Protect against concurrent instances of this function and + * the critical exec sections + */ + xe_vm_assert_held(vm); + + if (!mmu_interval_read_retry(&uvma->userptr.notifier, + uvma->userptr.notifier_seq)) + uvma->userptr.notifier_seq -= 2; + __vma_userptr_invalidate(vm, uvma); +} +#endif + int xe_vm_userptr_pin(struct xe_vm *vm) { struct xe_userptr_vma *uvma, *next; --- a/drivers/gpu/drm/xe/xe_vm.h +++ b/drivers/gpu/drm/xe/xe_vm.h @@ -280,4 +280,12 @@ struct xe_vm_snapshot *xe_vm_snapshot_ca void xe_vm_snapshot_capture_delayed(struct xe_vm_snapshot *snap); void xe_vm_snapshot_print(struct xe_vm_snapshot *snap, struct drm_printer *p); void xe_vm_snapshot_free(struct xe_vm_snapshot *snap); + +#if IS_ENABLED(CONFIG_DRM_XE_USERPTR_INVAL_INJECT) +void xe_vma_userptr_force_invalidate(struct xe_userptr_vma *uvma); +#else +static inline void xe_vma_userptr_force_invalidate(struct xe_userptr_vma *uvma) +{ +} +#endif #endif --- a/drivers/gpu/drm/xe/xe_vm_types.h +++ b/drivers/gpu/drm/xe/xe_vm_types.h @@ -227,8 +227,8 @@ struct xe_vm { * up for revalidation. Protected from access with the * @invalidated_lock. Removing items from the list * additionally requires @lock in write mode, and adding - * items to the list requires the @userptr.notifer_lock in - * write mode. + * items to the list requires either the @userptr.notifer_lock in + * write mode, OR @lock in write mode. */ struct list_head invalidated; } userptr;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Matthew Auld matthew.auld@intel.com
commit 475d06e00b7496c7915d87f7ae67af26738e4649 upstream.
Currently we just leave it uninitialised, which at first looks harmless; however, we also don't zero out the pfn array, and with pfn_flags_mask the idea is to be able to set individual flags for a given range of pfns or completely ignore them, outside of default_flags. So here we end up with pfn[i] & pfn_flags_mask, and if both are uninitialised we might get back an unexpected flags value, like asking for read only with default_flags, but getting back write on top, leading to potentially bogus behaviour.
To fix this ensure we zero the pfn_flags_mask, such that hmm only considers the default_flags and not also the initial pfn[i] value.
v2 (Thomas): - Prefer proper initializer.
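For illustration only, here is a plain C sketch (not the xe code itself) of why the designated initializer is preferred: every member that is not named, pfn_flags_mask included, is guaranteed to be zero-initialized, so no stack garbage can leak into the flag computation.

#include <stdio.h>

/* Simplified stand-in for struct hmm_range, for demonstration purposes. */
struct demo_range {
        unsigned long default_flags;
        unsigned long pfn_flags_mask;
        unsigned long start;
        unsigned long end;
};

int main(void)
{
        struct demo_range range = {
                .default_flags = 0x1UL,         /* e.g. "fault required" */
                /* .pfn_flags_mask not listed, so it is implicitly zero */
        };

        printf("pfn_flags_mask = %lu\n", range.pfn_flags_mask);        /* prints 0 */
        return 0;
}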
Fixes: 81e058a3e7fd ("drm/xe: Introduce helper to populate userptr") Signed-off-by: Matthew Auld matthew.auld@intel.com Cc: Matthew Brost matthew.brost@intel.com Cc: Thomas Hellström thomas.hellstrom@intel.com Cc: stable@vger.kernel.org # v6.10+ Reviewed-by: Thomas Hellström thomas.hellstrom@linux.intel.com Reviewed-by: Tejas Upadhyay tejas.upadhyay@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20250226174748.294285-2-matthe... (cherry picked from commit dd8c01e42f4c5c1eaf02f003d7d588ba6706aa71) Signed-off-by: Rodrigo Vivi rodrigo.vivi@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/xe/xe_hmm.c | 18 ++++++++++-------- 1 file changed, 10 insertions(+), 8 deletions(-)
--- a/drivers/gpu/drm/xe/xe_hmm.c +++ b/drivers/gpu/drm/xe/xe_hmm.c @@ -203,13 +203,20 @@ int xe_hmm_userptr_populate_range(struct { unsigned long timeout = jiffies + msecs_to_jiffies(HMM_RANGE_DEFAULT_TIMEOUT); - unsigned long *pfns, flags = HMM_PFN_REQ_FAULT; + unsigned long *pfns; struct xe_userptr *userptr; struct xe_vma *vma = &uvma->vma; u64 userptr_start = xe_vma_userptr(vma); u64 userptr_end = userptr_start + xe_vma_size(vma); struct xe_vm *vm = xe_vma_vm(vma); - struct hmm_range hmm_range; + struct hmm_range hmm_range = { + .pfn_flags_mask = 0, /* ignore pfns */ + .default_flags = HMM_PFN_REQ_FAULT, + .start = userptr_start, + .end = userptr_end, + .notifier = &uvma->userptr.notifier, + .dev_private_owner = vm->xe, + }; bool write = !xe_vma_read_only(vma); unsigned long notifier_seq; u64 npages; @@ -236,19 +243,14 @@ int xe_hmm_userptr_populate_range(struct return -ENOMEM;
if (write) - flags |= HMM_PFN_REQ_WRITE; + hmm_range.default_flags |= HMM_PFN_REQ_WRITE;
if (!mmget_not_zero(userptr->notifier.mm)) { ret = -EFAULT; goto free_pfns; }
- hmm_range.default_flags = flags; hmm_range.hmm_pfns = pfns; - hmm_range.notifier = &userptr->notifier; - hmm_range.start = userptr_start; - hmm_range.end = userptr_end; - hmm_range.dev_private_owner = vm->xe;
while (true) { hmm_range.notifier_seq = mmu_interval_read_begin(&userptr->notifier);
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Hellström thomas.hellstrom@linux.intel.com
commit 333b8906336174478efbbfc1e24a89e3397ffe65 upstream.
If userptr pages are freed after a call to the xe mmu notifier, the device will not be blocked out from theoretically accessing these pages unless they are also unmapped from the iommu, and this violates some aspects of the iommu-imposed security.
Ensure that userptrs are unmapped in the mmu notifier to mitigate this. A naive attempt would try to free the sg table, but the sg table itself may be accessed by a concurrent bind operation, so settle for only unmapping.
v3: - Update lockdep asserts. - Fix a typo (Matthew Auld)
Fixes: 81e058a3e7fd ("drm/xe: Introduce helper to populate userptr") Cc: Oak Zeng oak.zeng@intel.com Cc: Matthew Auld matthew.auld@intel.com Cc: stable@vger.kernel.org # v6.10+ Signed-off-by: Thomas Hellström thomas.hellstrom@linux.intel.com Reviewed-by: Matthew Auld matthew.auld@intel.com Acked-by: Matthew Brost matthew.brost@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20250304173342.22009-4-thomas.... (cherry picked from commit ba767b9d01a2c552d76cf6f46b125d50ec4147a6) Signed-off-by: Rodrigo Vivi rodrigo.vivi@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/xe/xe_hmm.c | 51 ++++++++++++++++++++++++++++++++------- drivers/gpu/drm/xe/xe_hmm.h | 2 + drivers/gpu/drm/xe/xe_vm.c | 4 +++ drivers/gpu/drm/xe/xe_vm_types.h | 4 +++ 4 files changed, 52 insertions(+), 9 deletions(-)
--- a/drivers/gpu/drm/xe/xe_hmm.c +++ b/drivers/gpu/drm/xe/xe_hmm.c @@ -150,6 +150,45 @@ static int xe_build_sg(struct xe_device DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_NO_KERNEL_MAPPING); }
+static void xe_hmm_userptr_set_mapped(struct xe_userptr_vma *uvma) +{ + struct xe_userptr *userptr = &uvma->userptr; + struct xe_vm *vm = xe_vma_vm(&uvma->vma); + + lockdep_assert_held_write(&vm->lock); + lockdep_assert_held(&vm->userptr.notifier_lock); + + mutex_lock(&userptr->unmap_mutex); + xe_assert(vm->xe, !userptr->mapped); + userptr->mapped = true; + mutex_unlock(&userptr->unmap_mutex); +} + +void xe_hmm_userptr_unmap(struct xe_userptr_vma *uvma) +{ + struct xe_userptr *userptr = &uvma->userptr; + struct xe_vma *vma = &uvma->vma; + bool write = !xe_vma_read_only(vma); + struct xe_vm *vm = xe_vma_vm(vma); + struct xe_device *xe = vm->xe; + + if (!lockdep_is_held_type(&vm->userptr.notifier_lock, 0) && + !lockdep_is_held_type(&vm->lock, 0) && + !(vma->gpuva.flags & XE_VMA_DESTROYED)) { + /* Don't unmap in exec critical section. */ + xe_vm_assert_held(vm); + /* Don't unmap while mapping the sg. */ + lockdep_assert_held(&vm->lock); + } + + mutex_lock(&userptr->unmap_mutex); + if (userptr->sg && userptr->mapped) + dma_unmap_sgtable(xe->drm.dev, userptr->sg, + write ? DMA_BIDIRECTIONAL : DMA_TO_DEVICE, 0); + userptr->mapped = false; + mutex_unlock(&userptr->unmap_mutex); +} + /** * xe_hmm_userptr_free_sg() - Free the scatter gather table of userptr * @uvma: the userptr vma which hold the scatter gather table @@ -161,16 +200,9 @@ static int xe_build_sg(struct xe_device void xe_hmm_userptr_free_sg(struct xe_userptr_vma *uvma) { struct xe_userptr *userptr = &uvma->userptr; - struct xe_vma *vma = &uvma->vma; - bool write = !xe_vma_read_only(vma); - struct xe_vm *vm = xe_vma_vm(vma); - struct xe_device *xe = vm->xe; - struct device *dev = xe->drm.dev; - - xe_assert(xe, userptr->sg); - dma_unmap_sgtable(dev, userptr->sg, - write ? DMA_BIDIRECTIONAL : DMA_TO_DEVICE, 0);
+ xe_assert(xe_vma_vm(&uvma->vma)->xe, userptr->sg); + xe_hmm_userptr_unmap(uvma); sg_free_table(userptr->sg); userptr->sg = NULL; } @@ -297,6 +329,7 @@ int xe_hmm_userptr_populate_range(struct
xe_mark_range_accessed(&hmm_range, write); userptr->sg = &userptr->sgt; + xe_hmm_userptr_set_mapped(uvma); userptr->notifier_seq = hmm_range.notifier_seq; up_read(&vm->userptr.notifier_lock); kvfree(pfns); --- a/drivers/gpu/drm/xe/xe_hmm.h +++ b/drivers/gpu/drm/xe/xe_hmm.h @@ -13,4 +13,6 @@ struct xe_userptr_vma; int xe_hmm_userptr_populate_range(struct xe_userptr_vma *uvma, bool is_mm_mmap_locked);
void xe_hmm_userptr_free_sg(struct xe_userptr_vma *uvma); + +void xe_hmm_userptr_unmap(struct xe_userptr_vma *uvma); #endif --- a/drivers/gpu/drm/xe/xe_vm.c +++ b/drivers/gpu/drm/xe/xe_vm.c @@ -621,6 +621,8 @@ static void __vma_userptr_invalidate(str err = xe_vm_invalidate_vma(vma); XE_WARN_ON(err); } + + xe_hmm_userptr_unmap(uvma); }
static bool vma_userptr_invalidate(struct mmu_interval_notifier *mni, @@ -1039,6 +1041,7 @@ static struct xe_vma *xe_vma_create(stru INIT_LIST_HEAD(&userptr->invalidate_link); INIT_LIST_HEAD(&userptr->repin_link); vma->gpuva.gem.offset = bo_offset_or_userptr; + mutex_init(&userptr->unmap_mutex);
err = mmu_interval_notifier_insert(&userptr->notifier, current->mm, @@ -1080,6 +1083,7 @@ static void xe_vma_destroy_late(struct x * them anymore */ mmu_interval_notifier_remove(&userptr->notifier); + mutex_destroy(&userptr->unmap_mutex); xe_vm_put(vm); } else if (xe_vma_is_null(vma)) { xe_vm_put(vm); --- a/drivers/gpu/drm/xe/xe_vm_types.h +++ b/drivers/gpu/drm/xe/xe_vm_types.h @@ -59,12 +59,16 @@ struct xe_userptr { struct sg_table *sg; /** @notifier_seq: notifier sequence number */ unsigned long notifier_seq; + /** @unmap_mutex: Mutex protecting dma-unmapping */ + struct mutex unmap_mutex; /** * @initial_bind: user pointer has been bound at least once. * write: vm->userptr.notifier_lock in read mode and vm->resv held. * read: vm->userptr.notifier_lock in write mode or vm->resv held. */ bool initial_bind; + /** @mapped: Whether the @sgt sg-table is dma-mapped. Protected by @unmap_mutex. */ + bool mapped; #if IS_ENABLED(CONFIG_DRM_XE_USERPTR_INVAL_INJECT) u32 divisor; #endif
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Haoxiang Li haoxiang_li2024@163.com
commit f2176a07e7b19f73e05c805cf3d130a2999154cb upstream.
Add check for the return value of mgmt_alloc_skb() in mgmt_remote_name() to prevent null pointer dereference.
Fixes: ba17bb62ce41 ("Bluetooth: Fix skb allocation in mgmt_remote_name() & mgmt_device_connected()") Cc: stable@vger.kernel.org Signed-off-by: Haoxiang Li haoxiang_li2024@163.com Signed-off-by: Luiz Augusto von Dentz luiz.von.dentz@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/bluetooth/mgmt.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/net/bluetooth/mgmt.c +++ b/net/bluetooth/mgmt.c @@ -10484,6 +10484,8 @@ void mgmt_remote_name(struct hci_dev *hd
skb = mgmt_alloc_skb(hdev, MGMT_EV_DEVICE_FOUND, sizeof(*ev) + (name ? eir_precalc_len(name_len) : 0)); + if (!skb) + return;
ev = skb_put(skb, sizeof(*ev)); bacpy(&ev->addr.bdaddr, bdaddr);
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Haoxiang Li haoxiang_li2024@163.com
commit d8df010f72b8a32aaea393e36121738bb53ed905 upstream.
Add check for the return value of mgmt_alloc_skb() in mgmt_device_connected() to prevent null pointer dereference.
Fixes: e96741437ef0 ("Bluetooth: mgmt: Make use of mgmt_send_event_skb in MGMT_EV_DEVICE_CONNECTED") Cc: stable@vger.kernel.org Signed-off-by: Haoxiang Li haoxiang_li2024@163.com Signed-off-by: Luiz Augusto von Dentz luiz.von.dentz@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/bluetooth/mgmt.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/net/bluetooth/mgmt.c +++ b/net/bluetooth/mgmt.c @@ -9731,6 +9731,9 @@ void mgmt_device_connected(struct hci_de sizeof(*ev) + (name ? eir_precalc_len(name_len) : 0) + eir_precalc_len(sizeof(conn->dev_class)));
+ if (!skb) + return; + ev = skb_put(skb, sizeof(*ev)); bacpy(&ev->addr.bdaddr, &conn->dst); ev->addr.type = link_to_bdaddr(conn->type, conn->dst_type);
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nikita Zhandarovich n.zhandarovich@fintech.ru
commit 59b348be7597c4a9903cb003c69e37df20c04a30 upstream.
Syzbot keeps reporting an issue [1] that occurs when erroneous symbols sent from userspace get through into user_alpha2[] via regulatory_hint_user() call. Such invalid regulatory hints should be rejected.
While a sanity check from commit 47caf685a685 ("cfg80211: regulatory: reject invalid hints") looks to be enough to deter these very cases, there is a way to get around it, for two reasons.
1) The way isalpha() works, symbols other than latin lower and upper letters may be used to determine a country/domain. For instance, greek letters will also be considered upper/lower letters and for such characters isalpha() will return true as well. However, ISO-3166-1 alpha2 codes should only hold latin characters.
2) While processing a user regulatory request, between reg_process_hint_user() and regulatory_hint_user() there happens to be a call to queue_regulatory_request() which modifies letters in request->alpha2[] with toupper(). This works fine for latin symbols, less so for weird letter characters from the second part of _ctype[].
Syzbot triggers a warning in is_user_regdom_saved() by first sending over an unexpected non-latin letter that gets malformed by toupper() into a character that ends up failing isalpha() check.
Prevent this by enhancing is_an_alpha2() to ensure that incoming symbols are latin letters and nothing else.
[1] Syzbot report: ------------[ cut here ]------------ Unexpected user alpha2: A� WARNING: CPU: 1 PID: 964 at net/wireless/reg.c:442 is_user_regdom_saved net/wireless/reg.c:440 [inline] WARNING: CPU: 1 PID: 964 at net/wireless/reg.c:442 restore_alpha2 net/wireless/reg.c:3424 [inline] WARNING: CPU: 1 PID: 964 at net/wireless/reg.c:442 restore_regulatory_settings+0x3c0/0x1e50 net/wireless/reg.c:3516 Modules linked in: CPU: 1 UID: 0 PID: 964 Comm: kworker/1:2 Not tainted 6.12.0-rc5-syzkaller-00044-gc1e939a21eb1 #0 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024 Workqueue: events_power_efficient crda_timeout_work RIP: 0010:is_user_regdom_saved net/wireless/reg.c:440 [inline] RIP: 0010:restore_alpha2 net/wireless/reg.c:3424 [inline] RIP: 0010:restore_regulatory_settings+0x3c0/0x1e50 net/wireless/reg.c:3516 ... Call Trace: <TASK> crda_timeout_work+0x27/0x50 net/wireless/reg.c:542 process_one_work kernel/workqueue.c:3229 [inline] process_scheduled_works+0xa65/0x1850 kernel/workqueue.c:3310 worker_thread+0x870/0xd30 kernel/workqueue.c:3391 kthread+0x2f2/0x390 kernel/kthread.c:389 ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244 </TASK>
Reported-by: syzbot+e10709ac3c44f3d4e800@syzkaller.appspotmail.com Closes: https://syzkaller.appspot.com/bug?extid=e10709ac3c44f3d4e800 Fixes: 09d989d179d0 ("cfg80211: add regulatory hint disconnect support") Cc: stable@kernel.org Signed-off-by: Nikita Zhandarovich n.zhandarovich@fintech.ru Link: https://patch.msgid.link/20250228134659.1577656-1-n.zhandarovich@fintech.ru Signed-off-by: Johannes Berg johannes.berg@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/wireless/reg.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/net/wireless/reg.c +++ b/net/wireless/reg.c @@ -407,7 +407,8 @@ static bool is_an_alpha2(const char *alp { if (!alpha2) return false; - return isalpha(alpha2[0]) && isalpha(alpha2[1]); + return isascii(alpha2[0]) && isalpha(alpha2[0]) && + isascii(alpha2[1]) && isalpha(alpha2[1]); }
static bool alpha2_equal(const char *alpha2_x, const char *alpha2_y)
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Vitaliy Shevtsov v.shevtsov@mt-integration.ru
commit 49f27f29446a5bfe633dd2cc0cfebd48a1a5e77f upstream.
It is possible to set both the MONITOR_FLAG_COOK_FRAMES and MONITOR_FLAG_ACTIVE flags simultaneously on the same monitor interface from userspace. This causes a sub-interface to be created with no IEEE80211_SDATA_IN_DRIVER bit set, because the monitor interface is in the cooked state and that state takes precedence over all other states. When the interface is then deleted, the kernel calls WARN_ONCE() from check_sdata_in_driver() because that bit is missing.
Fix this by rejecting MONITOR_FLAG_COOK_FRAMES if it is set along with other flags.
Found by Linux Verification Center (linuxtesting.org) with Syzkaller.
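For illustration, a minimal userspace sketch of the "flag is present but is not the only flag" test the fix relies on; the bit values below are placeholders chosen for the demo, not the real nl80211 enum values.

#include <stdbool.h>
#include <stdio.h>

/* Placeholder bit values, for the demo only. */
#define DEMO_FLAG_COOK_FRAMES (1U << 0)
#define DEMO_FLAG_ACTIVE      (1U << 1)

/* True when cooked mode is requested together with any other flag. */
static bool cooked_combined(unsigned int flags)
{
        return (flags & DEMO_FLAG_COOK_FRAMES) &&
               flags != DEMO_FLAG_COOK_FRAMES;
}

int main(void)
{
        printf("%d\n", cooked_combined(DEMO_FLAG_COOK_FRAMES));                     /* 0: allowed */
        printf("%d\n", cooked_combined(DEMO_FLAG_COOK_FRAMES | DEMO_FLAG_ACTIVE));  /* 1: rejected */
        return 0;
}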
Fixes: 66f7ac50ed7c ("nl80211: Add monitor interface configuration flags") Cc: stable@vger.kernel.org Reported-by: syzbot+2e5c1e55b9e5c28a3da7@syzkaller.appspotmail.com Closes: https://syzkaller.appspot.com/bug?extid=2e5c1e55b9e5c28a3da7 Signed-off-by: Vitaliy Shevtsov v.shevtsov@mt-integration.ru Link: https://patch.msgid.link/20250131152657.5606-1-v.shevtsov@mt-integration.ru Signed-off-by: Johannes Berg johannes.berg@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/wireless/nl80211.c | 5 +++++ 1 file changed, 5 insertions(+)
--- a/net/wireless/nl80211.c +++ b/net/wireless/nl80211.c @@ -4218,6 +4218,11 @@ static int parse_monitor_flags(struct nl if (flags[flag]) *mntrflags |= (1<<flag);
+ /* cooked monitor mode is incompatible with other modes */ + if (*mntrflags & MONITOR_FLAG_COOK_FRAMES && + *mntrflags != MONITOR_FLAG_COOK_FRAMES) + return -EOPNOTSUPP; + *mntrflags |= MONITOR_FLAG_CHANGED;
return 0;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: SeongJae Park sj@kernel.org
commit 349db086a66051bc6114b64b4446787c20ac3f00 upstream.
The damos_quota_goal.py selftest checks whether the DAMOS quota goals tuning feature increases or reduces the effective size quota for a given score as expected. The tuning feature sets the minimum quota size to one byte, so if the effective size quota is already one, we cannot expect it to be reduced further. However, the test is not aware of this edge case and fails since no expected change of the effective quota is seen. Handle the case by updating the no-change failure logic to check whether this was the case and, if so, simply skip to the next test input.
Link: https://lkml.kernel.org/r/20250217182304.45215-1-sj@kernel.org Fixes: f1c07c0a1662 ("selftests/damon: add a test for DAMOS quota goal") Signed-off-by: SeongJae Park sj@kernel.org Reported-by: kernel test robot oliver.sang@intel.com Closes: https://lore.kernel.org/oe-lkp/202502171423.b28a918d-lkp@intel.com Cc: Shuah Khan shuah@kernel.org Cc: stable@vger.kernel.org [6.10.x] Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- tools/testing/selftests/damon/damos_quota_goal.py | 3 +++ 1 file changed, 3 insertions(+)
--- a/tools/testing/selftests/damon/damos_quota_goal.py +++ b/tools/testing/selftests/damon/damos_quota_goal.py @@ -63,6 +63,9 @@ def main(): if last_effective_bytes != 0 else -1.0))
if last_effective_bytes == goal.effective_bytes: + # effective quota was already minimum that cannot be more reduced + if expect_increase is False and last_effective_bytes == 1: + continue print('efective bytes not changed: %d' % goal.effective_bytes) exit(1)
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: SeongJae Park sj@kernel.org
commit 1c684d77dfbcf926e0dd28f6d260e8fdd8a58e85 upstream.
Patch series "selftests/damon: three fixes for false results".
Fix three DAMON selftest bugs that cause two false positive failures and one false positive success.
This patch (of 3):
damos_quota.py assumes the quota will always be exceeded. But whether the quota will be exceeded or not depends on the monitoring results. Actually, the monitored workload has a changing access pattern, and hence sometimes the quota may not really be exceeded. As a result, false positive test failures happen. Estimate how many times the quota will be exceeded by checking the monitoring results, and use that instead of the naive assumption.
Link: https://lkml.kernel.org/r/20250225222333.505646-1-sj@kernel.org Link: https://lkml.kernel.org/r/20250225222333.505646-2-sj@kernel.org Fixes: 51f58c9da14b ("selftests/damon: add a test for DAMOS quota") Signed-off-by: SeongJae Park sj@kernel.org Cc: Shuah Khan shuah@kernel.org Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- tools/testing/selftests/damon/damos_quota.py | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-)
--- a/tools/testing/selftests/damon/damos_quota.py +++ b/tools/testing/selftests/damon/damos_quota.py @@ -51,16 +51,19 @@ def main(): nr_quota_exceeds = scheme.stats.qt_exceeds
wss_collected.sort() + nr_expected_quota_exceeds = 0 for wss in wss_collected: if wss > sz_quota: print('quota is not kept: %s > %s' % (wss, sz_quota)) print('collected samples are as below') print('\n'.join(['%d' % wss for wss in wss_collected])) exit(1) + if wss == sz_quota: + nr_expected_quota_exceeds += 1
- if nr_quota_exceeds < len(wss_collected): - print('quota is not always exceeded: %d > %d' % - (len(wss_collected), nr_quota_exceeds)) + if nr_quota_exceeds < nr_expected_quota_exceeds: + print('quota is exceeded less than expected: %d < %d' % + (nr_quota_exceeds, nr_expected_quota_exceeds)) exit(1)
if __name__ == '__main__':
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: SeongJae Park sj@kernel.org
commit 695469c07a65547acb6e229b3fdf6aaa881817e3 upstream.
damon_nr_regions.py updates max_nr_regions to a number smaller than the expected number of real regions and confirms that DAMON respects the harsh limit. To give DAMON time to make changes to the regions, 3 aggregation intervals (300 milliseconds) are given.
The internal mechanism works not only with max_nr_regions, but also with sz_limit, though. It avoids merging regions if that can make a region of a size larger than sz_limit. In the test, sz_limit is set too small to achieve the new max_nr_regions, unless it is updated for the new min_nr_regions. But the update is done only once per operations set update interval, which is one second by default.
Hence, the test randomly incurs false positive failures. Fix it by setting the ops update interval to the same value as the aggregation interval, to make sure sz_limit is updated by the time of the check.
Link: https://lkml.kernel.org/r/20250225222333.505646-3-sj@kernel.org Fixes: 8bf890c81612 ("selftests/damon/damon_nr_regions: test online-tuned max_nr_regions") Signed-off-by: SeongJae Park sj@kernel.org Cc: Shuah Khan shuah@kernel.org Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- tools/testing/selftests/damon/damon_nr_regions.py | 1 + 1 file changed, 1 insertion(+)
--- a/tools/testing/selftests/damon/damon_nr_regions.py +++ b/tools/testing/selftests/damon/damon_nr_regions.py @@ -109,6 +109,7 @@ def main(): attrs = kdamonds.kdamonds[0].contexts[0].monitoring_attrs attrs.min_nr_regions = 3 attrs.max_nr_regions = 7 + attrs.update_us = 100000 err = kdamonds.kdamonds[0].commit() if err is not None: proc.terminate()
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: SeongJae Park sj@kernel.org
commit 582ccf78f6090d88b1c7066b1e90b3d9ec952d08 upstream.
damon_nr_regions.py starts DAMON, periodically collects the number of regions in snapshots, and sees if it is in the requested range. The check code assumes the numbers in the collected list are sorted, but there is no such guarantee. Hence this can result in false positive test success. Sort the list before doing the check.
Link: https://lkml.kernel.org/r/20250225222333.505646-4-sj@kernel.org Fixes: 781497347d1b ("selftests/damon: implement test for min/max_nr_regions") Signed-off-by: SeongJae Park sj@kernel.org Cc: Shuah Khan shuah@kernel.org Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- tools/testing/selftests/damon/damon_nr_regions.py | 1 + 1 file changed, 1 insertion(+)
--- a/tools/testing/selftests/damon/damon_nr_regions.py +++ b/tools/testing/selftests/damon/damon_nr_regions.py @@ -65,6 +65,7 @@ def test_nr_regions(real_nr_regions, min
test_name = 'nr_regions test with %d/%d/%d real/min/max nr_regions' % ( real_nr_regions, min_nr_regions, max_nr_regions) + collected_nr_regions.sort() if (collected_nr_regions[0] < min_nr_regions or collected_nr_regions[-1] > max_nr_regions): print('fail %s' % test_name)
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Haoxiang Li haoxiang_li2024@163.com
commit e842f9a1edf306bf36fe2a4d847a0b0d458770de upstream.
The return value of rio_add_net() should be checked. If it fails, put_device() should be called to free the memory and give up the reference initialized in rio_add_net().
Link: https://lkml.kernel.org/r/20250227041131.3680761-1-haoxiang_li2024@163.com Fixes: e6b585ca6e81 ("rapidio: move net allocation into core code") Signed-off-by: Yang Yingliang yangyingliang@huawei.com Signed-off-by: Haoxiang Li haoxiang_li2024@163.com Cc: Alexandre Bounine alex.bou9@gmail.com Cc: Matt Porter mporter@kernel.crashing.org Cc: Dan Carpenter dan.carpenter@linaro.org Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/rapidio/rio-scan.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
--- a/drivers/rapidio/rio-scan.c +++ b/drivers/rapidio/rio-scan.c @@ -871,7 +871,10 @@ static struct rio_net *rio_scan_alloc_ne dev_set_name(&net->dev, "rnet_%d", net->id); net->dev.parent = &mport->dev; net->dev.release = rio_scan_release_dev; - rio_add_net(net); + if (rio_add_net(net)) { + put_device(&net->dev); + net = NULL; + } }
return net;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Haoxiang Li haoxiang_li2024@163.com
commit b2ef51c74b0171fde7eb69b6152d3d2f743ef269 upstream.
rio_add_net() calls device_register() and fails when device_register() fails. Thus, put_device() should be used rather than kfree(). Add "mport->net = NULL;" to avoid a use after free issue.
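As a kernel-style sketch of the general rule this fix relies on (illustrative only, not the rio_mport_cdev code): once device_register() has been called, the struct device is reference counted, so a failure must be handled with put_device(), which ends up invoking the ->release() callback, rather than with a direct kfree().

#include <linux/device.h>

/* Illustrative error path after a failed device_register(). */
static int demo_register(struct device *dev)
{
        int err;

        err = device_register(dev);     /* initializes and adds the device */
        if (err) {
                /*
                 * kfree(dev) here would bypass the reference count and the
                 * ->release() callback; drop the reference instead and let
                 * ->release() free the memory.
                 */
                put_device(dev);
                return err;
        }
        return 0;
}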
Link: https://lkml.kernel.org/r/20250227073409.3696854-1-haoxiang_li2024@163.com Fixes: e8de370188d0 ("rapidio: add mport char device driver") Signed-off-by: Haoxiang Li haoxiang_li2024@163.com Reviewed-by: Dan Carpenter dan.carpenter@linaro.org Cc: Alexandre Bounine alex.bou9@gmail.com Cc: Matt Porter mporter@kernel.crashing.org Cc: Yang Yingliang yangyingliang@huawei.com Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/rapidio/devices/rio_mport_cdev.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/drivers/rapidio/devices/rio_mport_cdev.c +++ b/drivers/rapidio/devices/rio_mport_cdev.c @@ -1742,7 +1742,8 @@ static int rio_mport_add_riodev(struct m err = rio_add_net(net); if (err) { rmcd_debug(RDEV, "failed to register net, err=%d", err); - kfree(net); + put_device(&net->dev); + mport->net = NULL; goto cleanup; } }
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Sebastian Andrzej Siewior bigeasy@linutronix.de
commit 19fac3c93991502a22c5132824c40b6a2e64b136 upstream.
kmsan_handle_dma() is used by virtio_ring, which can be built as a module. kmsan_handle_dma() needs to be exported, otherwise building virtio_ring fails.
Export kmsan_handle_dma for modules.
Link: https://lkml.kernel.org/r/20250218091411.MMS3wBN9@linutronix.de Reported-by: kernel test robot lkp@intel.com Closes: https://lore.kernel.org/oe-kbuild-all/202502150634.qjxwSeJR-lkp@intel.com/ Fixes: 7ade4f10779c ("dma: kmsan: unpoison DMA mappings") Signed-off-by: Sebastian Andrzej Siewior bigeasy@linutronix.de Cc: Alexander Potapenko glider@google.com Cc: Dmitriy Vyukov dvyukov@google.com Cc: Macro Elver elver@google.com Cc: Peter Zijlstra (Intel) peterz@infradead.org Cc: Thomas Gleixner tglx@linutronix.de Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/kmsan/hooks.c | 1 + 1 file changed, 1 insertion(+)
--- a/mm/kmsan/hooks.c +++ b/mm/kmsan/hooks.c @@ -357,6 +357,7 @@ void kmsan_handle_dma(struct page *page, size -= to_go; } } +EXPORT_SYMBOL_GPL(kmsan_handle_dma);
void kmsan_handle_dma_sg(struct scatterlist *sg, int nents, enum dma_data_direction dir)
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Heiko Carstens hca@linux.ibm.com
commit 5623bc23a1cb9f9a9470fa73b3a20321dc4c4870 upstream.
The test_monitor_call() inline assembly uses the xgr instruction, which also modifies the condition code, to clear a register. However the clobber list of the inline assembly does not specify that the condition code is modified, which may lead to incorrect code generation.
Use the lhi instruction instead to clear the register without modifying the condition code. Furthermore, this limits clearing to the lower 32 bits of val, since its type is int.
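As an illustrative sketch (assuming GCC extended inline asm on s390; neither helper below is the actual kernel code), there are two ways to keep the compiler informed: avoid condition-code-changing instructions altogether, as the fix does, or declare the clobber explicitly.

/* Clear the lower 32 bits of val without touching the condition code. */
static inline int clear_with_lhi(int val)
{
        asm volatile("  lhi     %[val],0\n" : [val] "+d" (val));
        return val;
}

/* xgr does change the condition code, so "cc" must be in the clobber list. */
static inline long clear_with_xgr(long val)
{
        asm volatile("  xgr     %[val],%[val]\n" : [val] "+d" (val) : : "cc");
        return val;
}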
Fixes: 17248ea03674 ("s390: fix __EMIT_BUG() macro") Cc: stable@vger.kernel.org Reviewed-by: Juergen Christ jchrist@linux.ibm.com Signed-off-by: Heiko Carstens hca@linux.ibm.com Signed-off-by: Vasily Gorbik gor@linux.ibm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/s390/kernel/traps.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
--- a/arch/s390/kernel/traps.c +++ b/arch/s390/kernel/traps.c @@ -284,10 +284,10 @@ static void __init test_monitor_call(voi return; asm volatile( " mc 0,0\n" - "0: xgr %0,%0\n" + "0: lhi %[val],0\n" "1:\n" - EX_TABLE(0b,1b) - : "+d" (val)); + EX_TABLE(0b, 1b) + : [val] "+d" (val)); if (!val) panic("Monitor call doesn't work!\n"); }
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mike Snitzer snitzer@kernel.org
commit ce6d9c1c2b5cc785016faa11b48b6cd317eb367e upstream.
Add PF_KCOMPACTD flag and current_is_kcompactd() helper to check for it so nfs_release_folio() can skip calling nfs_wb_folio() from kcompactd.
Otherwise NFS can deadlock waiting for kcompactd-induced writeback which recurses back to NFS (which triggers writeback to NFSD via an NFS loopback mount on the same host; NFSD blocks waiting for XFS's call to __filemap_get_folio):
[ 6070.550357] INFO: task kcompactd0:58 blocked for more than 4435 seconds.
{--- [58] "kcompactd0" [<0>] folio_wait_bit+0xe8/0x200 [<0>] folio_wait_writeback+0x2b/0x80 [<0>] nfs_wb_folio+0x80/0x1b0 [nfs] [<0>] nfs_release_folio+0x68/0x130 [nfs] [<0>] split_huge_page_to_list_to_order+0x362/0x840 [<0>] migrate_pages_batch+0x43d/0xb90 [<0>] migrate_pages_sync+0x9a/0x240 [<0>] migrate_pages+0x93c/0x9f0 [<0>] compact_zone+0x8e2/0x1030 [<0>] compact_node+0xdb/0x120 [<0>] kcompactd+0x121/0x2e0 [<0>] kthread+0xcf/0x100 [<0>] ret_from_fork+0x31/0x40 [<0>] ret_from_fork_asm+0x1a/0x30 ---}
[akpm@linux-foundation.org: fix build] Link: https://lkml.kernel.org/r/20250225022002.26141-1-snitzer@kernel.org Fixes: 96780ca55e3c ("NFS: fix up nfs_release_folio() to try to release the page") Signed-off-by: Mike Snitzer snitzer@kernel.org Cc: Anna Schumaker anna.schumaker@oracle.com Cc: Trond Myklebust trond.myklebust@hammerspace.com Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/nfs/file.c | 3 ++- include/linux/compaction.h | 5 +++++ include/linux/sched.h | 2 +- mm/compaction.c | 3 +++ 4 files changed, 11 insertions(+), 2 deletions(-)
--- a/fs/nfs/file.c +++ b/fs/nfs/file.c @@ -29,6 +29,7 @@ #include <linux/pagemap.h> #include <linux/gfp.h> #include <linux/swap.h> +#include <linux/compaction.h>
#include <linux/uaccess.h> #include <linux/filelock.h> @@ -451,7 +452,7 @@ static bool nfs_release_folio(struct fol /* If the private flag is set, then the folio is not freeable */ if (folio_test_private(folio)) { if ((current_gfp_context(gfp) & GFP_KERNEL) != GFP_KERNEL || - current_is_kswapd()) + current_is_kswapd() || current_is_kcompactd()) return false; if (nfs_wb_folio(folio->mapping->host, folio) < 0) return false; --- a/include/linux/compaction.h +++ b/include/linux/compaction.h @@ -80,6 +80,11 @@ static inline unsigned long compact_gap( return 2UL << order; }
+static inline int current_is_kcompactd(void) +{ + return current->flags & PF_KCOMPACTD; +} + #ifdef CONFIG_COMPACTION
extern unsigned int extfrag_for_order(struct zone *zone, unsigned int order); --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -1682,7 +1682,7 @@ extern struct pid *cad_pid; #define PF_USED_MATH 0x00002000 /* If unset the fpu must be initialized before use */ #define PF_USER_WORKER 0x00004000 /* Kernel thread cloned from userspace thread */ #define PF_NOFREEZE 0x00008000 /* This thread should not be frozen */ -#define PF__HOLE__00010000 0x00010000 +#define PF_KCOMPACTD 0x00010000 /* I am kcompactd */ #define PF_KSWAPD 0x00020000 /* I am kswapd */ #define PF_MEMALLOC_NOFS 0x00040000 /* All allocations inherit GFP_NOFS. See memalloc_nfs_save() */ #define PF_MEMALLOC_NOIO 0x00080000 /* All allocations inherit GFP_NOIO. See memalloc_noio_save() */ --- a/mm/compaction.c +++ b/mm/compaction.c @@ -3164,6 +3164,7 @@ static int kcompactd(void *p) if (!cpumask_empty(cpumask)) set_cpus_allowed_ptr(tsk, cpumask);
+ current->flags |= PF_KCOMPACTD; set_freezable();
pgdat->kcompactd_max_order = 0; @@ -3220,6 +3221,8 @@ static int kcompactd(void *p) pgdat->proactive_compact_trigger = false; }
+ current->flags &= ~PF_KCOMPACTD; + return 0; }
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Suren Baghdasaryan surenb@google.com
commit 37b338eed10581784e854d4262da05c8d960c748 upstream.
Lokesh recently raised an issue about UFFDIO_MOVE getting into a deadlock state when it goes into split_folio() with raised folio refcount. split_folio() expects the reference count to be exactly mapcount + num_pages_in_folio + 1 (see can_split_folio()) and fails with EAGAIN otherwise.
If multiple processes are trying to move the same large folio, they raise the refcount (all tasks succeed in that) then one of them succeeds in locking the folio, while others will block in folio_lock() while keeping the refcount raised. The winner of this race will proceed with calling split_folio() and will fail returning EAGAIN to the caller and unlocking the folio. The next competing process will get the folio locked and will go through the same flow. In the meantime the original winner will be retried and will block in folio_lock(), getting into the queue of waiting processes only to repeat the same path. All this results in a livelock.
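As a worked illustration (numbers assumed, not taken from the report): if can_split_folio() expects a reference count of N for a given folio and K tasks race on the same UFFDIO_MOVE, the K - 1 losers blocked in folio_lock() each hold one extra reference, so whichever task currently owns the lock observes at least N + K - 1 and split_folio() keeps returning EAGAIN for as long as the others keep retrying.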
An easy fix would be to avoid waiting for the folio lock while holding folio refcount, similar to madvise_free_huge_pmd() where folio lock is acquired before raising the folio refcount. Since we lock and take a refcount of the folio while holding the PTE lock, changing the order of these operations should not break anything.
Modify move_pages_pte() to try locking the folio first and if that fails and the folio is large then return EAGAIN without touching the folio refcount. If the folio is single-page then split_folio() is not called, so we don't have this issue. Lokesh has a reproducer [1] and I verified that this change fixes the issue.
[1] https://github.com/lokeshgidra/uffd_move_ioctl_deadlock
[akpm@linux-foundation.org: reflow comment to 80 cols, s/end/end up/] Link: https://lkml.kernel.org/r/20250226185510.2732648-2-surenb@google.com Fixes: adef440691ba ("userfaultfd: UFFDIO_MOVE uABI") Signed-off-by: Suren Baghdasaryan surenb@google.com Reported-by: Lokesh Gidra lokeshgidra@google.com Reviewed-by: Peter Xu peterx@redhat.com Acked-by: Liam R. Howlett Liam.Howlett@Oracle.com Cc: Andrea Arcangeli aarcange@redhat.com Cc: Barry Song 21cnbao@gmail.com Cc: Barry Song v-songbaohua@oppo.com Cc: David Hildenbrand david@redhat.com Cc: Hugh Dickins hughd@google.com Cc: Jann Horn jannh@google.com Cc: Kalesh Singh kaleshsingh@google.com Cc: Lorenzo Stoakes lorenzo.stoakes@oracle.com Cc: Matthew Wilcow (Oracle) willy@infradead.org Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/userfaultfd.c | 17 ++++++++++++++++- 1 file changed, 16 insertions(+), 1 deletion(-)
--- a/mm/userfaultfd.c +++ b/mm/userfaultfd.c @@ -1215,6 +1215,7 @@ retry: */ if (!src_folio) { struct folio *folio; + bool locked;
/* * Pin the page while holding the lock to be sure the @@ -1234,12 +1235,26 @@ retry: goto out; }
+ locked = folio_trylock(folio); + /* + * We avoid waiting for folio lock with a raised + * refcount for large folios because extra refcounts + * will result in split_folio() failing later and + * retrying. If multiple tasks are trying to move a + * large folio we can end up livelocking. + */ + if (!locked && folio_test_large(folio)) { + spin_unlock(src_ptl); + err = -EAGAIN; + goto out; + } + folio_get(folio); src_folio = folio; src_folio_pte = orig_src_pte; spin_unlock(src_ptl);
- if (!folio_trylock(src_folio)) { + if (!locked) { pte_unmap(&orig_src_pte); pte_unmap(&orig_dst_pte); src_pte = dst_pte = NULL;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Olivier Gayot olivier.gayot@canonical.com
commit e06472bab2a5393430cc2fbc3211cd3602422c1e upstream.
The utf16_le_to_7bit function claims to, naively, convert a UTF-16 string to a 7-bit ASCII string. By naively, we mean that it:
* drops the first byte of every character in the original UTF-16 string
* checks if all characters are printable, and otherwise replaces them by exclamation mark "!".
This means that theoretically, all characters outside the 7-bit ASCII range should be replaced by another character. Examples:
* lower-case alpha (ɒ) 0x0252 becomes 0x52 (R)
* ligature OE (œ) 0x0153 becomes 0x53 (S)
* hangul letter pieup (ㅂ) 0x3142 becomes 0x42 (B)
* upper-case gamma (Ɣ) 0x0194 becomes 0x94 (not printable) so gets replaced by "!"
The result of this conversion for the GPT partition name is passed to user-space as PARTNAME via udev, which is confusing and feels questionable.
However, there is a flaw in the conversion function itself. By dropping one byte of each character and using isprint() to check if the remaining byte corresponds to a printable character, we do not actually guarantee that the resulting character is 7-bit ASCII.
This happens because we pass 8-bit characters to isprint(), which in the kernel returns 1 for many values > 0x7f - as defined in ctype.c.
This results in many values which should be replaced by "!" to be kept as-is, despite not being valid 7-bit ASCII. Examples:
* e with acute accent (é) 0x00E9 becomes 0xE9 - kept as-is because isprint(0xE9) returns 1.
* euro sign (€) 0x20AC becomes 0xAC - kept as-is because isprint(0xAC) returns 1.
This has broken the pyudev utility [1]. Fix it by using a mask of 7 bits instead of 8 bits before calling isprint().
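A self-contained userspace sketch of the fixed conversion follows (illustrative only; it uses a strict 7-bit printable check and skips the le16_to_cpu() byte-order handling, whereas the in-kernel isprint() from lib/ctype.c also accepts many bytes above 0x7f, which is exactly why the old "& 0xff" masking misbehaved):

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Strict 7-bit ASCII printable check, for demonstration. */
static int ascii_isprint(uint8_t c)
{
        return c >= 0x20 && c < 0x7f;
}

static void utf16_to_7bit_demo(const uint16_t *in, size_t size, char *out)
{
        size_t i;

        for (i = 0; i < size; i++) {
                uint8_t c = in[i] & 0x7f;       /* the fix: was "& 0xff" */

                if (c && !ascii_isprint(c))
                        c = '!';
                out[i] = (char)c;
        }
        out[size] = 0;
}

int main(void)
{
        /* 0x00E9 is e with acute accent, 0x20AC is the euro sign. */
        const uint16_t name[] = { 0x00E9, 0x20AC, 'o', 'k' };
        char out[sizeof(name) / sizeof(name[0]) + 1];

        utf16_to_7bit_demo(name, sizeof(name) / sizeof(name[0]), out);
        printf("%s\n", out);    /* every byte of the result is 7-bit ASCII */
        return 0;
}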
Link: https://github.com/pyudev/pyudev/issues/490#issuecomment-2685794648 [1] Link: https://lore.kernel.org/linux-block/4cac90c2-e414-4ebb-ae62-2a4589d9dc6e@can... Cc: Mulhern amulhern@redhat.com Cc: Davidlohr Bueso dave@stgolabs.net Cc: stable@vger.kernel.org Signed-off-by: Olivier Gayot olivier.gayot@canonical.com Signed-off-by: Ming Lei ming.lei@redhat.com Link: https://lore.kernel.org/r/20250305022154.3903128-1-ming.lei@redhat.com Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- block/partitions/efi.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/block/partitions/efi.c +++ b/block/partitions/efi.c @@ -682,7 +682,7 @@ static void utf16_le_to_7bit(const __le1 out[size] = 0;
while (i < size) { - u8 c = le16_to_cpu(in[i]) & 0xff; + u8 c = le16_to_cpu(in[i]) & 0x7f;
if (c && !isprint(c)) c = '!';
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hao Zhang zhanghao1@kylinos.cn
commit 8fe9ed44dc29fba0786b7e956d2e87179e407582 upstream.
The variable "compact_result" is not initialized in function __alloc_pages_slowpath(). It causes should_compact_retry() to use an uninitialized value.
Initialize variable "compact_result" with the value COMPACT_SKIPPED.
BUG: KMSAN: uninit-value in __alloc_pages_slowpath+0xee8/0x16c0 mm/page_alloc.c:4416 __alloc_pages_slowpath+0xee8/0x16c0 mm/page_alloc.c:4416 __alloc_frozen_pages_noprof+0xa4c/0xe00 mm/page_alloc.c:4752 alloc_pages_mpol+0x4cd/0x890 mm/mempolicy.c:2270 alloc_frozen_pages_noprof mm/mempolicy.c:2341 [inline] alloc_pages_noprof mm/mempolicy.c:2361 [inline] folio_alloc_noprof+0x1dc/0x350 mm/mempolicy.c:2371 filemap_alloc_folio_noprof+0xa6/0x440 mm/filemap.c:1019 __filemap_get_folio+0xb9a/0x1840 mm/filemap.c:1970 grow_dev_folio fs/buffer.c:1039 [inline] grow_buffers fs/buffer.c:1105 [inline] __getblk_slow fs/buffer.c:1131 [inline] bdev_getblk+0x2c9/0xab0 fs/buffer.c:1431 getblk_unmovable include/linux/buffer_head.h:369 [inline] ext4_getblk+0x3b7/0xe50 fs/ext4/inode.c:864 ext4_bread_batch+0x9f/0x7d0 fs/ext4/inode.c:933 __ext4_find_entry+0x1ebb/0x36c0 fs/ext4/namei.c:1627 ext4_lookup_entry fs/ext4/namei.c:1729 [inline] ext4_lookup+0x189/0xb40 fs/ext4/namei.c:1797 __lookup_slow+0x538/0x710 fs/namei.c:1793 lookup_slow+0x6a/0xd0 fs/namei.c:1810 walk_component fs/namei.c:2114 [inline] link_path_walk+0xf29/0x1420 fs/namei.c:2479 path_openat+0x30f/0x6250 fs/namei.c:3985 do_filp_open+0x268/0x600 fs/namei.c:4016 do_sys_openat2+0x1bf/0x2f0 fs/open.c:1428 do_sys_open fs/open.c:1443 [inline] __do_sys_openat fs/open.c:1459 [inline] __se_sys_openat fs/open.c:1454 [inline] __x64_sys_openat+0x2a1/0x310 fs/open.c:1454 x64_sys_call+0x36f5/0x3c30 arch/x86/include/generated/asm/syscalls_64.h:258 do_syscall_x64 arch/x86/entry/common.c:52 [inline] do_syscall_64+0xcd/0x1e0 arch/x86/entry/common.c:83 entry_SYSCALL_64_after_hwframe+0x77/0x7f
Local variable compact_result created at: __alloc_pages_slowpath+0x66/0x16c0 mm/page_alloc.c:4218 __alloc_frozen_pages_noprof+0xa4c/0xe00 mm/page_alloc.c:4752
Link: https://lkml.kernel.org/r/tencent_ED1032321D6510B145CDBA8CBA0093178E09@qq.co... Reported-by: syzbot+0cfd5e38e96a5596f2b6@syzkaller.appspotmail.com Closes: https://syzkaller.appspot.com/bug?extid=0cfd5e38e96a5596f2b6 Signed-off-by: Hao Zhang zhanghao1@kylinos.cn Reviewed-by: Vlastimil Babka vbabka@suse.cz Cc: Michal Hocko mhocko@kernel.org Cc: Mel Gorman mgorman@techsingularity.net Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/page_alloc.c | 1 + 1 file changed, 1 insertion(+)
--- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -4243,6 +4243,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, u restart: compaction_retries = 0; no_progress_loops = 0; + compact_result = COMPACT_SKIPPED; compact_priority = DEF_COMPACT_PRIORITY; cpuset_mems_cookie = read_mems_allowed_begin(); zonelist_iter_cookie = zonelist_iter_begin();
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Lorenzo Stoakes lorenzo.stoakes@oracle.com
commit 47b16d0462a460000b8f05dfb1292377ac48f3ca upstream.
The remainder of vma_modify() relies upon the vmg state remaining pristine after a merge attempt.
Usually this is the case. However, in the one edge-case scenario where a merge attempt fails not because the specified range is unmergeable, but because an out of memory error arises when attempting to commit the merge, this assumption becomes untrue.
This results in vmg->start, end being modified, and thus the proceeding attempts to split the VMA will be done with invalid start/end values.
Thankfully, it is likely practically impossible for us to hit this in reality, as it would require a maple tree node pre-allocation failure that would likely never happen due to it being 'too small to fail', i.e. the kernel would simply keep retrying reclaim until it succeeded.
However, this scenario remains theoretically possible, and what we are doing here is wrong so we must correct it.
The safest option is, when this scenario occurs, to simply give up the operation. If we cannot allocate memory to merge, then we cannot allocate memory to split either (perhaps moreso!).
Any scenario where this would be happening would be under very extreme (likely fatal) memory pressure, so it's best we give up early.
So there is no doubt it is appropriate to simply bail out in this scenario.
However, in general we must if at all possible never assume VMG state is stable after a merge attempt, since merge operations update VMG fields. As a result, additionally also make this clear by storing start, end in local variables.
The issue was reported originally by syzkaller, and by Brad Spengler (via an off-list discussion), and in both instances it manifested as a triggering of the assert:
VM_WARN_ON_VMG(start >= end, vmg);
In vma_merge_existing_range().
It seems at least one scenario in which this is occurring is one in which the merge being attempted is due to an madvise() across multiple VMAs which looks like this:
start end |<------>| |----------|------| | vma | next | |----------|------|
When madvise_walk_vmas() is invoked, we first find vma in the above (determining prev to be equal to vma as we are offset into vma), and then enter the loop.
We determine the end of vma that forms part of the range we are madvise()'ing by setting 'tmp' to this value:
/* Here vma->vm_start <= start < (end|vma->vm_end) */ tmp = vma->vm_end;
We then invoke the madvise() operation via visit(), letting prev get updated to point to vma as part of the operation:
/* Here vma->vm_start <= start < tmp <= (end|vma->vm_end). */ error = visit(vma, &prev, start, tmp, arg);
Where the visit() function pointer in this instance is madvise_vma_behavior().
As observed in syzkaller reports, it is ultimately madvise_update_vma() that is invoked, calling vma_modify_flags_name() and vma_modify() in turn.
Then, in vma_modify(), we attempt the merge:
merged = vma_merge_existing_range(vmg); if (merged) return merged;
We invoke this with vmg->start, end set to start, tmp as such:
start tmp |<--->| |----------|------| | vma | next | |----------|------|
We find ourselves in the merge right scenario, but the one in which we cannot remove the middle (we are offset into vma).
Here we have a special case where vmg->start, end get set to perhaps unintuitive values - we intended to shrink the middle VMA and expand the next.
This means vmg->start, end are set to... vma->vm_start, start.
Now the commit_merge() fails, and vmg->start, end are left like this. This means we return to the rest of vma_modify() with vmg->start, end (here denoted as start', end') set as:
start' end' |<-->| |----------|------| | vma | next | |----------|------|
So we now erroneously try to split accordingly. This is where the unfortunate stuff begins.
We start with:
/* Split any preceding portion of the VMA. */ if (vma->vm_start < vmg->start) { ... }
This doesn't trigger as we are no longer offset into vma at the start.
But then we invoke:
/* Split any trailing portion of the VMA. */ if (vma->vm_end > vmg->end) { ... }
Which does get invoked. This leaves us with:
start' end' |<-->| |----|-----|------| | vma| new | next | |----|-----|------|
We then return ultimately to madvise_walk_vmas(). Here 'new' is unknown, and putting back the values known in this function we are faced with:
start tmp end | | | |----|-----|------| | vma| new | next | |----|-----|------| prev
Then:
start = tmp;
So:
start end | | |----|-----|------| | vma| new | next | |----|-----|------| prev
The following code does not cause anything to happen:
if (prev && start < prev->vm_end) start = prev->vm_end; if (start >= end) break;
And then we invoke:
if (prev) vma = find_vma(mm, prev->vm_end);
Which is where a problem occurs - we don't know about 'new' so we essentially look for the vma after prev, which is new, whereas we actually intended to discover next!
So we end up with:
start end | | |----|-----|------| |prev| vma | next | |----|-----|------|
And we have successfully bypassed all of the checks madvise_walk_vmas() has to ensure early exit should we end up moving out of range.
We loop around, and hit:
/* Here vma->vm_start <= start < (end|vma->vm_end) */ tmp = vma->vm_end;
Oh dear. Now we have:
          tmp
         start end
           |   |
|----|-----|------|
|prev| vma | next |
|----|-----|------|
We then invoke:
	/* Here vma->vm_start <= start < tmp <= (end|vma->vm_end). */
	error = visit(vma, &prev, start, tmp, arg);
Where start == tmp. That is, a zero range. This is not good.
We invoke visit() which is madvise_vma_behavior() which does not check the range (for good reason, it assumes all checks have been done before it was called), which in turn finally calls madvise_update_vma().
The madvise_update_vma() function calls vma_modify_flags_name() in turn, which ultimately invokes vma_modify() with... start == end.
vma_modify() calls vma_merge_existing_range() and finally we hit:
VM_WARN_ON_VMG(start >= end, vmg);
Which triggers, as start == end.
While it might be useful to add some CONFIG_DEBUG_VM asserts in these instances to catch this kind of error, since we have just eliminated any possibility of it happening, we will add such asserts separately so as to reduce churn and aid backporting.
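For illustration only, such an assert might look something like the following (a sketch of the idea, not part of this patch):

	/* hypothetical debug check: a failed merge must leave vmg intact */
	VM_WARN_ON_VMG(vmg->start != start || vmg->end != end, vmg);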
Link: https://lkml.kernel.org/r/20250222161952.41957-1-lorenzo.stoakes@oracle.com Fixes: 2f1c6611b0a8 ("mm: introduce vma_merge_struct and abstract vma_merge(),vma_modify()") Signed-off-by: Lorenzo Stoakes lorenzo.stoakes@oracle.com Tested-by: Brad Spengler brad.spengler@opensrcsec.com Reported-by: Brad Spengler brad.spengler@opensrcsec.com Reported-by: syzbot+46423ed8fa1f1148c6e4@syzkaller.appspotmail.com Closes: https://lore.kernel.org/linux-mm/6774c98f.050a0220.25abdd.0991.GAE@google.co... Cc: Jann Horn jannh@google.com Cc: Liam Howlett liam.howlett@oracle.com Cc: Vlastimil Babka vbabka@suse.cz Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/vma.c | 12 ++++++++---- 1 file changed, 8 insertions(+), 4 deletions(-)
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -1417,24 +1417,28 @@ int do_vmi_munmap(struct vma_iterator *v
 static struct vm_area_struct *vma_modify(struct vma_merge_struct *vmg)
 {
 	struct vm_area_struct *vma = vmg->vma;
+	unsigned long start = vmg->start;
+	unsigned long end = vmg->end;
 	struct vm_area_struct *merged;
 
 	/* First, try to merge. */
 	merged = vma_merge_existing_range(vmg);
 	if (merged)
 		return merged;
+	if (vmg_nomem(vmg))
+		return ERR_PTR(-ENOMEM);
 
 	/* Split any preceding portion of the VMA. */
-	if (vma->vm_start < vmg->start) {
-		int err = split_vma(vmg->vmi, vma, vmg->start, 1);
+	if (vma->vm_start < start) {
+		int err = split_vma(vmg->vmi, vma, start, 1);
 
 		if (err)
 			return ERR_PTR(err);
 	}
 
 	/* Split any trailing portion of the VMA. */
-	if (vma->vm_end > vmg->end) {
-		int err = split_vma(vmg->vmi, vma, vmg->end, 0);
+	if (vma->vm_end > end) {
+		int err = split_vma(vmg->vmi, vma, end, 0);
 
 		if (err)
 			return ERR_PTR(err);
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ma Wupeng mawupeng1@huawei.com
commit b81679b1633aa43c0d973adfa816d78c1ed0d032 upstream.
Patch series "mm: memory_failure: unmap poisoned folio during migrate properly", v3.
Fix two bugs during folio migration if the folio is poisoned.
This patch (of 3):
Commit 6da6b1d4a7df ("mm/hwpoison: convert TTU_IGNORE_HWPOISON to TTU_HWPOISON") introduced TTU_HWPOISON to replace TTU_IGNORE_HWPOISON in order to stop sending a SIGBUS signal when accessing an error page after a memory error on a clean folio. However, during page migration, anon folios must be unmapped with TTU_HWPOISON set in unmap_*(). For pagecache folios we need a policy, just like the one in hwpoison_user_mappings(), to decide whether to set this flag. So move this policy from hwpoison_user_mappings() to unmap_poisoned_folio() to handle this warning properly.
A warning is produced when unmapping a poisoned folio, with the following log:
------------[ cut here ]------------
WARNING: CPU: 1 PID: 365 at mm/rmap.c:1847 try_to_unmap_one+0x8fc/0xd3c
Modules linked in:
CPU: 1 UID: 0 PID: 365 Comm: bash Tainted: G W 6.13.0-rc1-00018-gacdb4bbda7ab #42
Tainted: [W]=WARN
Hardware name: QEMU QEMU Virtual Machine, BIOS 0.0.0 02/06/2015
pstate: 20400005 (nzCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
pc : try_to_unmap_one+0x8fc/0xd3c
lr : try_to_unmap_one+0x3dc/0xd3c
Call trace:
 try_to_unmap_one+0x8fc/0xd3c (P)
 try_to_unmap_one+0x3dc/0xd3c (L)
 rmap_walk_anon+0xdc/0x1f8
 rmap_walk+0x3c/0x58
 try_to_unmap+0x88/0x90
 unmap_poisoned_folio+0x30/0xa8
 do_migrate_range+0x4a0/0x568
 offline_pages+0x5a4/0x670
 memory_block_action+0x17c/0x374
 memory_subsys_offline+0x3c/0x78
 device_offline+0xa4/0xd0
 state_store+0x8c/0xf0
 dev_attr_store+0x18/0x2c
 sysfs_kf_write+0x44/0x54
 kernfs_fop_write_iter+0x118/0x1a8
 vfs_write+0x3a8/0x4bc
 ksys_write+0x6c/0xf8
 __arm64_sys_write+0x1c/0x28
 invoke_syscall+0x44/0x100
 el0_svc_common.constprop.0+0x40/0xe0
 do_el0_svc+0x1c/0x28
 el0_svc+0x30/0xd0
 el0t_64_sync_handler+0xc8/0xcc
 el0t_64_sync+0x198/0x19c
---[ end trace 0000000000000000 ]---
[mawupeng1@huawei.com: unmap_poisoned_folio(): remove shadowed local `mapping', per Miaohe] Link: https://lkml.kernel.org/r/20250219060653.3849083-1-mawupeng1@huawei.com Link: https://lkml.kernel.org/r/20250217014329.3610326-1-mawupeng1@huawei.com Link: https://lkml.kernel.org/r/20250217014329.3610326-2-mawupeng1@huawei.com Fixes: 6da6b1d4a7df ("mm/hwpoison: convert TTU_IGNORE_HWPOISON to TTU_HWPOISON") Signed-off-by: Ma Wupeng mawupeng1@huawei.com Suggested-by: David Hildenbrand david@redhat.com Acked-by: David Hildenbrand david@redhat.com Acked-by: Miaohe Lin linmiaohe@huawei.com Cc: Ma Wupeng mawupeng1@huawei.com Cc: Michal Hocko mhocko@suse.com Cc: Naoya Horiguchi nao.horiguchi@gmail.com Cc: Oscar Salvador osalvador@suse.de Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/internal.h | 5 ++-- mm/memory-failure.c | 63 +++++++++++++++++++++++++--------------------------- mm/memory_hotplug.c | 3 +- 3 files changed, 36 insertions(+), 35 deletions(-)
--- a/mm/internal.h +++ b/mm/internal.h @@ -1101,7 +1101,7 @@ static inline int find_next_best_node(in * mm/memory-failure.c */ #ifdef CONFIG_MEMORY_FAILURE -void unmap_poisoned_folio(struct folio *folio, enum ttu_flags ttu); +int unmap_poisoned_folio(struct folio *folio, unsigned long pfn, bool must_kill); void shake_folio(struct folio *folio); extern int hwpoison_filter(struct page *p);
@@ -1123,8 +1123,9 @@ void add_to_kill_ksm(struct task_struct unsigned long page_mapped_in_vma(struct page *page, struct vm_area_struct *vma);
#else -static inline void unmap_poisoned_folio(struct folio *folio, enum ttu_flags ttu) +static inline int unmap_poisoned_folio(struct folio *folio, unsigned long pfn, bool must_kill) { + return -EBUSY; } #endif
--- a/mm/memory-failure.c +++ b/mm/memory-failure.c @@ -1554,11 +1554,35 @@ static int get_hwpoison_page(struct page return ret; }
-void unmap_poisoned_folio(struct folio *folio, enum ttu_flags ttu) +int unmap_poisoned_folio(struct folio *folio, unsigned long pfn, bool must_kill) { - if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) { - struct address_space *mapping; + enum ttu_flags ttu = TTU_IGNORE_MLOCK | TTU_SYNC | TTU_HWPOISON; + struct address_space *mapping; + + if (folio_test_swapcache(folio)) { + pr_err("%#lx: keeping poisoned page in swap cache\n", pfn); + ttu &= ~TTU_HWPOISON; + } + + /* + * Propagate the dirty bit from PTEs to struct page first, because we + * need this to decide if we should kill or just drop the page. + * XXX: the dirty test could be racy: set_page_dirty() may not always + * be called inside page lock (it's recommended but not enforced). + */ + mapping = folio_mapping(folio); + if (!must_kill && !folio_test_dirty(folio) && mapping && + mapping_can_writeback(mapping)) { + if (folio_mkclean(folio)) { + folio_set_dirty(folio); + } else { + ttu &= ~TTU_HWPOISON; + pr_info("%#lx: corrupted page was clean: dropped without side effects\n", + pfn); + } + }
+ if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) { /* * For hugetlb folios in shared mappings, try_to_unmap * could potentially call huge_pmd_unshare. Because of @@ -1570,7 +1594,7 @@ void unmap_poisoned_folio(struct folio * if (!mapping) { pr_info("%#lx: could not lock mapping for mapped hugetlb folio\n", folio_pfn(folio)); - return; + return -EBUSY; }
try_to_unmap(folio, ttu|TTU_RMAP_LOCKED); @@ -1578,6 +1602,8 @@ void unmap_poisoned_folio(struct folio * } else { try_to_unmap(folio, ttu); } + + return folio_mapped(folio) ? -EBUSY : 0; }
/* @@ -1587,8 +1613,6 @@ void unmap_poisoned_folio(struct folio * static bool hwpoison_user_mappings(struct folio *folio, struct page *p, unsigned long pfn, int flags) { - enum ttu_flags ttu = TTU_IGNORE_MLOCK | TTU_SYNC | TTU_HWPOISON; - struct address_space *mapping; LIST_HEAD(tokill); bool unmap_success; int forcekill; @@ -1611,29 +1635,6 @@ static bool hwpoison_user_mappings(struc if (!folio_mapped(folio)) return true;
- if (folio_test_swapcache(folio)) { - pr_err("%#lx: keeping poisoned page in swap cache\n", pfn); - ttu &= ~TTU_HWPOISON; - } - - /* - * Propagate the dirty bit from PTEs to struct page first, because we - * need this to decide if we should kill or just drop the page. - * XXX: the dirty test could be racy: set_page_dirty() may not always - * be called inside page lock (it's recommended but not enforced). - */ - mapping = folio_mapping(folio); - if (!(flags & MF_MUST_KILL) && !folio_test_dirty(folio) && mapping && - mapping_can_writeback(mapping)) { - if (folio_mkclean(folio)) { - folio_set_dirty(folio); - } else { - ttu &= ~TTU_HWPOISON; - pr_info("%#lx: corrupted page was clean: dropped without side effects\n", - pfn); - } - } - /* * First collect all the processes that have the page * mapped in dirty form. This has to be done before try_to_unmap, @@ -1641,9 +1642,7 @@ static bool hwpoison_user_mappings(struc */ collect_procs(folio, p, &tokill, flags & MF_ACTION_REQUIRED);
- unmap_poisoned_folio(folio, ttu); - - unmap_success = !folio_mapped(folio); + unmap_success = !unmap_poisoned_folio(folio, pfn, flags & MF_MUST_KILL); if (!unmap_success) pr_err("%#lx: failed to unmap page (folio mapcount=%d)\n", pfn, folio_mapcount(folio)); --- a/mm/memory_hotplug.c +++ b/mm/memory_hotplug.c @@ -1806,7 +1806,8 @@ static void do_migrate_range(unsigned lo if (WARN_ON(folio_test_lru(folio))) folio_isolate_lru(folio); if (folio_mapped(folio)) - unmap_poisoned_folio(folio, TTU_IGNORE_MLOCK); + unmap_poisoned_folio(folio, pfn, false); + continue; }
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ryan Roberts ryan.roberts@arm.com
commit 3685024edd270f7c791f993157d65d3c928f3d6e upstream.
Fix callers that previously skipped calling arch_sync_kernel_mappings() if an error occurred during a pgtable update. The call is still required to sync any pgtable updates that may have occurred prior to hitting the error condition.
These are theoretical bugs discovered during code review.
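The resulting shape of the vmalloc path is roughly the following (a sketch assembled from the diff below, with the mask update elided; not a literal quote of the source):

	do {
		err = vmap_pages_p4d_range(pgd, addr, next, prot, pages, &nr, &mask);
		if (err)
			break;		/* note the error, but don't return yet */
	} while (pgd++, addr = next, addr != end);

	/* still sync any page-table levels modified before the failure */
	if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
		arch_sync_kernel_mappings(start, end);

	return err;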
Link: https://lkml.kernel.org/r/20250226121610.2401743-1-ryan.roberts@arm.com Fixes: 2ba3e6947aed ("mm/vmalloc: track which page-table levels were modified") Fixes: 0c95cba49255 ("mm: apply_to_pte_range warn and fail if a large pte is encountered") Signed-off-by: Ryan Roberts ryan.roberts@arm.com Reviewed-by: Anshuman Khandual anshuman.khandual@arm.com Reviewed-by: Catalin Marinas catalin.marinas@arm.com Cc: Christop Hellwig hch@infradead.org Cc: "Uladzislau Rezki (Sony)" urezki@gmail.com Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/memory.c | 6 ++++-- mm/vmalloc.c | 4 ++-- 2 files changed, 6 insertions(+), 4 deletions(-)
--- a/mm/memory.c +++ b/mm/memory.c @@ -2957,8 +2957,10 @@ static int __apply_to_page_range(struct next = pgd_addr_end(addr, end); if (pgd_none(*pgd) && !create) continue; - if (WARN_ON_ONCE(pgd_leaf(*pgd))) - return -EINVAL; + if (WARN_ON_ONCE(pgd_leaf(*pgd))) { + err = -EINVAL; + break; + } if (!pgd_none(*pgd) && WARN_ON_ONCE(pgd_bad(*pgd))) { if (!create) continue; --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -586,13 +586,13 @@ static int vmap_small_pages_range_noflus mask |= PGTBL_PGD_MODIFIED; err = vmap_pages_p4d_range(pgd, addr, next, prot, pages, &nr, &mask); if (err) - return err; + break; } while (pgd++, addr = next, addr != end);
if (mask & ARCH_PAGE_TABLE_SYNC_MASK) arch_sync_kernel_mappings(start, end);
- return 0; + return err; }
/*
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Brian Geffon bgeffon@google.com
commit 34b82f33cf3f03bc39e9a205a913d790e1520ade upstream.
When handling faults for anon shmem finish_fault() will attempt to install ptes for the entire folio. Unfortunately if it encounters a single non-pte_none entry in that range it will bail, even if the pte that triggered the fault is still pte_none. When this situation happens the fault will be retried endlessly never making forward progress.
This patch fixes this behavior and if it detects that a pte in the range is not pte_none it will fall back to setting a single pte.
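In other words, the fault path now does roughly the following (a sketch of the idea; see the diff below for the actual change):

	/* if any pte in the folio range is already populated, retry the
	 * fault mapping just the single faulting pte instead of bailing */
	if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) {
		needs_fallback = true;
		pte_unmap_unlock(vmf->pte, vmf->ptl);
		goto fallback;	/* restart with nr_pages forced to 1 */
	}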
[bgeffon@google.com: tweak whitespace] Link: https://lkml.kernel.org/r/20250227133236.1296853-1-bgeffon@google.com Link: https://lkml.kernel.org/r/20250226162341.915535-1-bgeffon@google.com Fixes: 43e027e41423 ("mm: memory: extend finish_fault() to support large folio") Signed-off-by: Brian Geffon bgeffon@google.com Suggested-by: Baolin Wang baolin.wang@linux.alibaba.com Reported-by: Marek Maslanka mmaslanka@google.com Cc: Hugh Dickins hughd@google.com Cc: David Hildenbrand david@redhat.com Cc: Hugh Dickens hughd@google.com Cc: Kefeng Wang wangkefeng.wang@huawei.com Cc: Matthew Wilcow (Oracle) willy@infradead.org Cc: Suren Baghdasaryan surenb@google.com Cc: Zi Yan ziy@nvidia.com Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/memory.c | 15 ++++++++++----- 1 file changed, 10 insertions(+), 5 deletions(-)
--- a/mm/memory.c +++ b/mm/memory.c @@ -5079,7 +5079,11 @@ vm_fault_t finish_fault(struct vm_fault bool is_cow = (vmf->flags & FAULT_FLAG_WRITE) && !(vma->vm_flags & VM_SHARED); int type, nr_pages; - unsigned long addr = vmf->address; + unsigned long addr; + bool needs_fallback = false; + +fallback: + addr = vmf->address;
/* Did we COW the page? */ if (is_cow) @@ -5118,7 +5122,8 @@ vm_fault_t finish_fault(struct vm_fault * approach also applies to non-anonymous-shmem faults to avoid * inflating the RSS of the process. */ - if (!vma_is_anon_shmem(vma) || unlikely(userfaultfd_armed(vma))) { + if (!vma_is_anon_shmem(vma) || unlikely(userfaultfd_armed(vma)) || + unlikely(needs_fallback)) { nr_pages = 1; } else if (nr_pages > 1) { pgoff_t idx = folio_page_idx(folio, page); @@ -5154,9 +5159,9 @@ vm_fault_t finish_fault(struct vm_fault ret = VM_FAULT_NOPAGE; goto unlock; } else if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) { - update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages); - ret = VM_FAULT_NOPAGE; - goto unlock; + needs_fallback = true; + pte_unmap_unlock(vmf->pte, vmf->ptl); + goto fallback; }
folio_ref_add(folio, nr_pages - 1);
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ma Wupeng mawupeng1@huawei.com
commit af288a426c3e3552b62595c6138ec6371a17dbba upstream.
Commit b15c87263a69 ("hwpoison, memory_hotplug: allow hwpoisoned pages to be offlined") added page poison checks in do_migrate_range() in order to make it possible to offline hwpoisoned pages, by introducing isolate_lru_page() and try_to_unmap() for such pages. However, the folio lock must be held before calling try_to_unmap(). Add it to fix this problem.
A warning is produced if the folio is not locked during unmap:
------------[ cut here ]------------
kernel BUG at ./include/linux/swapops.h:400!
Internal error: Oops - BUG: 00000000f2000800 [#1] PREEMPT SMP
Modules linked in:
CPU: 4 UID: 0 PID: 411 Comm: bash Tainted: G W 6.13.0-rc1-00016-g3c434c7ee82a-dirty #41
Tainted: [W]=WARN
Hardware name: QEMU QEMU Virtual Machine, BIOS 0.0.0 02/06/2015
pstate: 40400005 (nZcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
pc : try_to_unmap_one+0xb08/0xd3c
lr : try_to_unmap_one+0x3dc/0xd3c
Call trace:
 try_to_unmap_one+0xb08/0xd3c (P)
 try_to_unmap_one+0x3dc/0xd3c (L)
 rmap_walk_anon+0xdc/0x1f8
 rmap_walk+0x3c/0x58
 try_to_unmap+0x88/0x90
 unmap_poisoned_folio+0x30/0xa8
 do_migrate_range+0x4a0/0x568
 offline_pages+0x5a4/0x670
 memory_block_action+0x17c/0x374
 memory_subsys_offline+0x3c/0x78
 device_offline+0xa4/0xd0
 state_store+0x8c/0xf0
 dev_attr_store+0x18/0x2c
 sysfs_kf_write+0x44/0x54
 kernfs_fop_write_iter+0x118/0x1a8
 vfs_write+0x3a8/0x4bc
 ksys_write+0x6c/0xf8
 __arm64_sys_write+0x1c/0x28
 invoke_syscall+0x44/0x100
 el0_svc_common.constprop.0+0x40/0xe0
 do_el0_svc+0x1c/0x28
 el0_svc+0x30/0xd0
 el0t_64_sync_handler+0xc8/0xcc
 el0t_64_sync+0x198/0x19c
Code: f9407be0 b5fff320 d4210000 17ffff97 (d4210000)
---[ end trace 0000000000000000 ]---
Link: https://lkml.kernel.org/r/20250217014329.3610326-4-mawupeng1@huawei.com Fixes: b15c87263a69 ("hwpoison, memory_hotplug: allow hwpoisoned pages to be offlined") Signed-off-by: Ma Wupeng mawupeng1@huawei.com Acked-by: David Hildenbrand david@redhat.com Acked-by: Miaohe Lin linmiaohe@huawei.com Cc: Michal Hocko mhocko@suse.com Cc: Naoya Horiguchi nao.horiguchi@gmail.com Cc: Oscar Salvador osalvador@suse.de Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/memory_hotplug.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
--- a/mm/memory_hotplug.c +++ b/mm/memory_hotplug.c @@ -1805,8 +1805,11 @@ static void do_migrate_range(unsigned lo (folio_test_large(folio) && folio_test_has_hwpoisoned(folio))) { if (WARN_ON(folio_test_lru(folio))) folio_isolate_lru(folio); - if (folio_mapped(folio)) + if (folio_mapped(folio)) { + folio_lock(folio); unmap_poisoned_folio(folio, pfn, false); + folio_unlock(folio); + }
continue; }
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ma Wupeng mawupeng1@huawei.com
commit 773b9a6aa6d38894b95088e3ed6f8a701d9f50fd upstream.
If a folio's reference count is elevated, folio_try_get() will take a reference, the necessary operations are performed, and the reference is then dropped. For a poisoned folio whose reference count is not elevated (which is unlikely for memory-failure), folio_try_get() simply skips it.
Therefore, move the folio_try_get() call, which checks and takes this reference, to the beginning.
Link: https://lkml.kernel.org/r/20250217014329.3610326-3-mawupeng1@huawei.com Signed-off-by: Ma Wupeng mawupeng1@huawei.com Acked-by: David Hildenbrand david@redhat.com Acked-by: Miaohe Lin linmiaohe@huawei.com Cc: Michal Hocko mhocko@suse.com Cc: Naoya Horiguchi nao.horiguchi@gmail.com Cc: Oscar Salvador osalvador@suse.de Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/memory_hotplug.c | 20 +++++++------------- 1 file changed, 7 insertions(+), 13 deletions(-)
--- a/mm/memory_hotplug.c +++ b/mm/memory_hotplug.c @@ -1795,12 +1795,12 @@ static void do_migrate_range(unsigned lo if (folio_test_large(folio)) pfn = folio_pfn(folio) + folio_nr_pages(folio) - 1;
- /* - * HWPoison pages have elevated reference counts so the migration would - * fail on them. It also doesn't make any sense to migrate them in the - * first place. Still try to unmap such a page in case it is still mapped - * (keep the unmap as the catch all safety net). - */ + if (!folio_try_get(folio)) + continue; + + if (unlikely(page_folio(page) != folio)) + goto put_folio; + if (folio_test_hwpoison(folio) || (folio_test_large(folio) && folio_test_has_hwpoisoned(folio))) { if (WARN_ON(folio_test_lru(folio))) @@ -1811,14 +1811,8 @@ static void do_migrate_range(unsigned lo folio_unlock(folio); }
- continue; - } - - if (!folio_try_get(folio)) - continue; - - if (unlikely(page_folio(page) != folio)) goto put_folio; + }
if (!isolate_folio_to_list(folio, &source)) { if (__ratelimit(&migrate_rs)) {
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Johannes Berg johannes.berg@intel.com
[ Upstream commit f9751163bffd3fe60794929829f810968c6de73d ]
If the firmware fails to start the session protection, then we do call iwl_mvm_roc_finished() here, but that won't do anything at all because IWL_MVM_STATUS_ROC_P2P_RUNNING was never set. Set IWL_MVM_STATUS_ROC_P2P_RUNNING in the failure/stop path. If it started successfully before, it's already set, so that doesn't matter, and if it didn't start it needs to be set to clean up.
Not doing so will lead to a WARN_ON() later on a fresh remain-on-channel, since the link is still active when it is activated again, as it was never deactivated.
Fixes: 35c1bbd93c4e ("wifi: iwlwifi: mvm: remove IWL_MVM_STATUS_NEED_FLUSH_P2P") Signed-off-by: Johannes Berg johannes.berg@intel.com Reviewed-by: Emmanuel Grumbach emmanuel.grumbach@intel.com Signed-off-by: Miri Korenblit miriam.rachel.korenblit@intel.com Link: https://patch.msgid.link/20250209143303.0fe36c291068.I67f5dac742170dd937f11e... Signed-off-by: Johannes Berg johannes.berg@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/intel/iwlwifi/mvm/time-event.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c index 72fa7ac86516c..17b8ccc275693 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c @@ -1030,6 +1030,8 @@ void iwl_mvm_rx_session_protect_notif(struct iwl_mvm *mvm, /* End TE, notify mac80211 */ mvmvif->time_event_data.id = SESSION_PROTECT_CONF_MAX_ID; mvmvif->time_event_data.link_id = -1; + /* set the bit so the ROC cleanup will actually clean up */ + set_bit(IWL_MVM_STATUS_ROC_P2P_RUNNING, &mvm->status); iwl_mvm_roc_finished(mvm); ieee80211_remain_on_channel_expired(mvm->hw); } else if (le32_to_cpu(notif->start)) {
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Emmanuel Grumbach emmanuel.grumbach@intel.com
[ Upstream commit d73d2c6e3313f0ba60711ab4f4b9044eddca9ca5 ]
This fixes:
bad state = 0
WARNING: CPU: 10 PID: 702 at drivers/net/wireless/intel/iwlwifi/iwl-trans.c:178 iwl_trans_send_cmd+0xba/0xe0 [iwlwifi]
Call Trace:
 <TASK>
 ? __warn+0xca/0x1c0
 ? iwl_trans_send_cmd+0xba/0xe0 [iwlwifi 64fa9ad799a0e0d2ba53d4af93a53ad9a531f8d4]
 iwl_fw_dbg_clear_monitor_buf+0xd7/0x110 [iwlwifi 64fa9ad799a0e0d2ba53d4af93a53ad9a531f8d4]
 _iwl_dbgfs_fw_dbg_clear_write+0xe2/0x120 [iwlmvm 0e8adb18cea92d2c341766bcc10b18699290068a]
Ask whether the firmware is alive before sending a command.
Fixes: 268712dc3b34 ("wifi: iwlwifi: mvm: add a debugfs hook to clear the monitor data") Signed-off-by: Emmanuel Grumbach emmanuel.grumbach@intel.com Signed-off-by: Miri Korenblit miriam.rachel.korenblit@intel.com Link: https://patch.msgid.link/20250209143303.8e1597b62c70.I12ea71dd9b805b095c9fc1... Signed-off-by: Johannes Berg johannes.berg@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c | 7 +++++++ 1 file changed, 7 insertions(+)
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c index 91ca830a7b603..f4276fdee6bea 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c @@ -1518,6 +1518,13 @@ static ssize_t iwl_dbgfs_fw_dbg_clear_write(struct iwl_mvm *mvm, if (mvm->trans->trans_cfg->device_family < IWL_DEVICE_FAMILY_9000) return -EOPNOTSUPP;
+ /* + * If the firmware is not running, silently succeed since there is + * no data to clear. + */ + if (!iwl_mvm_firmware_running(mvm)) + return count; + mutex_lock(&mvm->mutex); iwl_fw_dbg_clear_monitor_buf(&mvm->fwrt); mutex_unlock(&mvm->mutex);
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Johannes Berg johannes.berg@intel.com
[ Upstream commit e0dc2c1bef722cbf16ae557690861e5f91208129 ]
There's no guarantee here that the data in the file is NUL-terminated, so reading the string may read beyond the end of the TLV. If that's the last TLV in the file, it can perhaps even read beyond the end of the file buffer.
Fix that by limiting the print format to the size of the buffer we have.
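As a minimal userspace illustration of the technique (a standalone example, not driver code): a printf() precision bounds how many bytes "%s" may read, so a missing NUL cannot cause an out-of-bounds read:

	#include <stdio.h>

	int main(void)
	{
		/* 32 bytes, deliberately not NUL-terminated */
		char version[32] = "0123456789abcdef0123456789abcdef";

		/* "%.32s" reads at most 32 bytes, so it stops at the end
		 * of the buffer even without a terminating NUL */
		printf("TLV_FW_FSEQ_VERSION: %.32s\n", version);
		return 0;
	}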
Fixes: aee1b6385e29 ("iwlwifi: support fseq tlv and print fseq version") Signed-off-by: Johannes Berg johannes.berg@intel.com Signed-off-by: Miri Korenblit miriam.rachel.korenblit@intel.com Link: https://patch.msgid.link/20250209143303.cb5f9d0c2f5d.Idec695d53c6c2234aade30... Signed-off-by: Johannes Berg johannes.berg@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/intel/iwlwifi/iwl-drv.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c index c620911a11933..754e01688900d 100644 --- a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c +++ b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c @@ -1197,7 +1197,7 @@ static int iwl_parse_tlv_firmware(struct iwl_drv *drv,
if (tlv_len != sizeof(*fseq_ver)) goto invalid_tlv_len; - IWL_INFO(drv, "TLV_FW_FSEQ_VERSION: %s\n", + IWL_INFO(drv, "TLV_FW_FSEQ_VERSION: %.32s\n", fseq_ver->version); } break;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ilan Peer ilan.peer@intel.com
[ Upstream commit 3b08e608d50c44ca1135beed179f266aa0461da7 ]
When preparation of the data needed for A-MSDU transmission fails, the memory allocated for TSO management is not freed. Fix it.
Fixes: 7f5e3038f029 ("wifi: iwlwifi: map entire SKB when sending AMSDUs") Signed-off-by: Ilan Peer ilan.peer@intel.com Reviewed-by: Emmanuel Grumbach emmanuel.grumbach@intel.com Signed-off-by: Miri Korenblit miriam.rachel.korenblit@intel.com Link: https://patch.msgid.link/20250209143303.bc27fad9b3d5.Ibf43dd18fb652b1a590612... Signed-off-by: Johannes Berg johannes.berg@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c b/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c index b1846abb99b78..7bb74a480d7f1 100644 --- a/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c +++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c @@ -347,6 +347,7 @@ iwl_tfh_tfd *iwl_txq_gen2_build_tx_amsdu(struct iwl_trans *trans, return tfd;
out_err: + iwl_pcie_free_tso_pages(trans, skb, out_meta); iwl_txq_gen2_tfd_unmap(trans, out_meta, tfd); return NULL; }
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ilan Peer ilan.peer@intel.com
[ Upstream commit 3640dbc1f75ce15d128ea4af44226960d894f3fd ]
The TSO preparation assumed that the skb head contained the headers while the rest of the data was in the fragments. Since this is not always true, e.g., it is possible that the data was linearised, modify the TSO preparation to start the data processing after the network headers.
Fixes: 7f5e3038f029 ("wifi: iwlwifi: map entire SKB when sending AMSDUs") Signed-off-by: Ilan Peer ilan.peer@intel.com Reviewed-by: Benjamin Berg benjamin.berg@intel.com Signed-off-by: Miri Korenblit miriam.rachel.korenblit@intel.com Link: https://patch.msgid.link/20250209143303.75769a4769bf.Iaf79e8538093cdf8c446c2... Signed-off-by: Johannes Berg johannes.berg@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- .../wireless/intel/iwlwifi/pcie/internal.h | 5 +++-- .../net/wireless/intel/iwlwifi/pcie/tx-gen2.c | 5 +++-- drivers/net/wireless/intel/iwlwifi/pcie/tx.c | 20 +++++++++++-------- 3 files changed, 18 insertions(+), 12 deletions(-)
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h index 27a7e0b5b3d51..ebe9b25cc53a9 100644 --- a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h +++ b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h @@ -1,6 +1,6 @@ /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */ /* - * Copyright (C) 2003-2015, 2018-2024 Intel Corporation + * Copyright (C) 2003-2015, 2018-2025 Intel Corporation * Copyright (C) 2013-2015 Intel Mobile Communications GmbH * Copyright (C) 2016-2017 Intel Deutschland GmbH */ @@ -643,7 +643,8 @@ dma_addr_t iwl_pcie_get_sgt_tb_phys(struct sg_table *sgt, unsigned int offset, unsigned int len); struct sg_table *iwl_pcie_prep_tso(struct iwl_trans *trans, struct sk_buff *skb, struct iwl_cmd_meta *cmd_meta, - u8 **hdr, unsigned int hdr_room); + u8 **hdr, unsigned int hdr_room, + unsigned int offset);
void iwl_pcie_free_tso_pages(struct iwl_trans *trans, struct sk_buff *skb, struct iwl_cmd_meta *cmd_meta); diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c b/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c index 7bb74a480d7f1..477a05cd1288b 100644 --- a/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c +++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c @@ -1,7 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause /* * Copyright (C) 2017 Intel Deutschland GmbH - * Copyright (C) 2018-2020, 2023-2024 Intel Corporation + * Copyright (C) 2018-2020, 2023-2025 Intel Corporation */ #include <net/tso.h> #include <linux/tcp.h> @@ -188,7 +188,8 @@ static int iwl_txq_gen2_build_amsdu(struct iwl_trans *trans, (3 + snap_ip_tcp_hdrlen + sizeof(struct ethhdr));
/* Our device supports 9 segments at most, it will fit in 1 page */ - sgt = iwl_pcie_prep_tso(trans, skb, out_meta, &start_hdr, hdr_room); + sgt = iwl_pcie_prep_tso(trans, skb, out_meta, &start_hdr, hdr_room, + snap_ip_tcp_hdrlen + hdr_len); if (!sgt) return -ENOMEM;
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c index 9fe050f0ddc16..a74ce5ccf59bd 100644 --- a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c +++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c @@ -1,6 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause /* - * Copyright (C) 2003-2014, 2018-2021, 2023-2024 Intel Corporation + * Copyright (C) 2003-2014, 2018-2021, 2023-2025 Intel Corporation * Copyright (C) 2013-2015 Intel Mobile Communications GmbH * Copyright (C) 2016-2017 Intel Deutschland GmbH */ @@ -1853,6 +1853,7 @@ dma_addr_t iwl_pcie_get_sgt_tb_phys(struct sg_table *sgt, unsigned int offset, * @cmd_meta: command meta to store the scatter list information for unmapping * @hdr: output argument for TSO headers * @hdr_room: requested length for TSO headers + * @offset: offset into the data from which mapping should start * * Allocate space for a scatter gather list and TSO headers and map the SKB * using the scatter gather list. The SKB is unmapped again when the page is @@ -1862,18 +1863,20 @@ dma_addr_t iwl_pcie_get_sgt_tb_phys(struct sg_table *sgt, unsigned int offset, */ struct sg_table *iwl_pcie_prep_tso(struct iwl_trans *trans, struct sk_buff *skb, struct iwl_cmd_meta *cmd_meta, - u8 **hdr, unsigned int hdr_room) + u8 **hdr, unsigned int hdr_room, + unsigned int offset) { struct sg_table *sgt; + unsigned int n_segments;
if (WARN_ON_ONCE(skb_has_frag_list(skb))) return NULL;
+ n_segments = DIV_ROUND_UP(skb->len - offset, skb_shinfo(skb)->gso_size); *hdr = iwl_pcie_get_page_hdr(trans, hdr_room + __alignof__(struct sg_table) + sizeof(struct sg_table) + - (skb_shinfo(skb)->nr_frags + 1) * - sizeof(struct scatterlist), + n_segments * sizeof(struct scatterlist), skb); if (!*hdr) return NULL; @@ -1881,11 +1884,11 @@ struct sg_table *iwl_pcie_prep_tso(struct iwl_trans *trans, struct sk_buff *skb, sgt = (void *)PTR_ALIGN(*hdr + hdr_room, __alignof__(struct sg_table)); sgt->sgl = (void *)(sgt + 1);
- sg_init_table(sgt->sgl, skb_shinfo(skb)->nr_frags + 1); + sg_init_table(sgt->sgl, n_segments);
/* Only map the data, not the header (it is copied to the TSO page) */ - sgt->orig_nents = skb_to_sgvec(skb, sgt->sgl, skb_headlen(skb), - skb->data_len); + sgt->orig_nents = skb_to_sgvec(skb, sgt->sgl, offset, + skb->len - offset); if (WARN_ON_ONCE(sgt->orig_nents <= 0)) return NULL;
@@ -1937,7 +1940,8 @@ static int iwl_fill_data_tbs_amsdu(struct iwl_trans *trans, struct sk_buff *skb, (3 + snap_ip_tcp_hdrlen + sizeof(struct ethhdr)) + iv_len;
/* Our device supports 9 segments at most, it will fit in 1 page */ - sgt = iwl_pcie_prep_tso(trans, skb, out_meta, &start_hdr, hdr_room); + sgt = iwl_pcie_prep_tso(trans, skb, out_meta, &start_hdr, hdr_room, + snap_ip_tcp_hdrlen + hdr_len + iv_len); if (!sgt) return -ENOMEM;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Yu-Chun Lin eleanor15x@gmail.com
[ Upstream commit 4bd0725c09f377ffaf22b834241f6c050742e4fc ]
As reported by the kernel test robot, the following warning occurs:
drivers/hid/hid-google-hammer.c:261:36: warning: 'cbas_ec_acpi_ids' defined but not used [-Wunused-const-variable=]
  261 | static const struct acpi_device_id cbas_ec_acpi_ids[] = {
      |                                     ^~~~~~~~~~~~~~~~
The 'cbas_ec_acpi_ids' array is only used when CONFIG_ACPI is enabled. Wrapping its definition and 'MODULE_DEVICE_TABLE' in '#ifdef CONFIG_ACPI' prevents a compiler warning when ACPI is disabled.
Fixes: eb1aac4c8744f75 ("HID: google: add support tablet mode switch for Whiskers") Reported-by: kernel test robot lkp@intel.com Closes: https://lore.kernel.org/oe-kbuild-all/202501201141.jctFH5eB-lkp@intel.com/ Signed-off-by: Yu-Chun Lin eleanor15x@gmail.com Signed-off-by: Jiri Kosina jkosina@suse.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/hid/hid-google-hammer.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/hid/hid-google-hammer.c b/drivers/hid/hid-google-hammer.c index 22683ec819aac..646ba5b92e0b2 100644 --- a/drivers/hid/hid-google-hammer.c +++ b/drivers/hid/hid-google-hammer.c @@ -268,11 +268,13 @@ static void cbas_ec_remove(struct platform_device *pdev) mutex_unlock(&cbas_ec_reglock); }
+#ifdef CONFIG_ACPI static const struct acpi_device_id cbas_ec_acpi_ids[] = { { "GOOG000B", 0 }, { } }; MODULE_DEVICE_TABLE(acpi, cbas_ec_acpi_ids); +#endif
#ifdef CONFIG_OF static const struct of_device_id cbas_ec_of_match[] = {
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Zhang Lixu lixu.zhang@intel.com
[ Upstream commit 823987841424289339fdb4ba90e6d2c3792836db ]
During the `rmmod` operation for the `intel_ishtp_hid` driver, a use-after-free issue can occur in the hid_ishtp_cl_remove() function. The function hid_ishtp_cl_deinit() is called before ishtp_hid_remove(), which can lead to accessing freed memory or resources during the removal process.
Call Trace:
 ? ishtp_cl_send+0x168/0x220 [intel_ishtp]
 ? hid_output_report+0xe3/0x150 [hid]
 hid_ishtp_set_feature+0xb5/0x120 [intel_ishtp_hid]
 ishtp_hid_request+0x7b/0xb0 [intel_ishtp_hid]
 hid_hw_request+0x1f/0x40 [hid]
 sensor_hub_set_feature+0x11f/0x190 [hid_sensor_hub]
 _hid_sensor_power_state+0x147/0x1e0 [hid_sensor_trigger]
 hid_sensor_runtime_resume+0x22/0x30 [hid_sensor_trigger]
 sensor_hub_remove+0xa8/0xe0 [hid_sensor_hub]
 hid_device_remove+0x49/0xb0 [hid]
 hid_destroy_device+0x6f/0x90 [hid]
 ishtp_hid_remove+0x42/0x70 [intel_ishtp_hid]
 hid_ishtp_cl_remove+0x6b/0xb0 [intel_ishtp_hid]
 ishtp_cl_device_remove+0x4a/0x60 [intel_ishtp]
 ...
Additionally, ishtp_hid_remove() is a HID level power off, which should occur before the ISHTP level disconnect.
This patch resolves the issue by reordering the calls in hid_ishtp_cl_remove(). The function ishtp_hid_remove() is now called before hid_ishtp_cl_deinit().
Fixes: f645a90e8ff7 ("HID: intel-ish-hid: ishtp-hid-client: use helper functions for connection") Signed-off-by: Zhang Lixu lixu.zhang@intel.com Acked-by: Srinivas Pandruvada srinivas.pandruvada@linux.intel.com Signed-off-by: Jiri Kosina jkosina@suse.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/hid/intel-ish-hid/ishtp-hid-client.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/hid/intel-ish-hid/ishtp-hid-client.c b/drivers/hid/intel-ish-hid/ishtp-hid-client.c index fbd4f8ea1951b..af6a5afc1a93e 100644 --- a/drivers/hid/intel-ish-hid/ishtp-hid-client.c +++ b/drivers/hid/intel-ish-hid/ishtp-hid-client.c @@ -833,9 +833,9 @@ static void hid_ishtp_cl_remove(struct ishtp_cl_device *cl_device) hid_ishtp_cl);
dev_dbg(ishtp_device(cl_device), "%s\n", __func__); - hid_ishtp_cl_deinit(hid_ishtp_cl); ishtp_put_device(cl_device); ishtp_hid_remove(client_data); + hid_ishtp_cl_deinit(hid_ishtp_cl);
hid_ishtp_cl = NULL;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Zhang Lixu lixu.zhang@intel.com
[ Upstream commit 07583a0010696a17fb0942e0b499a62785c5fc9f ]
The system can experience a random crash a few minutes after the driver is removed. This issue occurs due to improper handling of memory freeing in the ishtp_hid_remove() function.
The function currently frees the `driver_data` directly within the loop that destroys the HID devices, which can lead to accessing freed memory. Specifically, `hid_destroy_device()` uses `driver_data` when it calls `hid_ishtp_set_feature()` to power off the sensor, so freeing `driver_data` beforehand can result in accessing invalid memory.
This patch resolves the issue by storing the `driver_data` in a temporary variable before calling `hid_destroy_device()`, and then freeing the `driver_data` after the device is destroyed.
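The general pattern here (a generic sketch with hypothetical names, not the driver code itself) is to save the pointer, destroy the object that may still use it, and only then free it:

	void *data = dev->driver_data;	/* hypothetical device/field names */

	destroy_device(dev);	/* teardown may still dereference driver_data */
	kfree(data);		/* only safe to free once teardown is done */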
Fixes: 0b28cb4bcb17 ("HID: intel-ish-hid: ISH HID client driver") Signed-off-by: Zhang Lixu lixu.zhang@intel.com Acked-by: Srinivas Pandruvada srinivas.pandruvada@linux.intel.com Signed-off-by: Jiri Kosina jkosina@suse.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/hid/intel-ish-hid/ishtp-hid.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/hid/intel-ish-hid/ishtp-hid.c b/drivers/hid/intel-ish-hid/ishtp-hid.c index 00c6f0ebf3563..be2c62fc8251d 100644 --- a/drivers/hid/intel-ish-hid/ishtp-hid.c +++ b/drivers/hid/intel-ish-hid/ishtp-hid.c @@ -261,12 +261,14 @@ int ishtp_hid_probe(unsigned int cur_hid_dev, */ void ishtp_hid_remove(struct ishtp_cl_data *client_data) { + void *data; int i;
for (i = 0; i < client_data->num_hid_devices; ++i) { if (client_data->hid_sensor_hubs[i]) { - kfree(client_data->hid_sensor_hubs[i]->driver_data); + data = client_data->hid_sensor_hubs[i]->driver_data; hid_destroy_device(client_data->hid_sensor_hubs[i]); + kfree(data); client_data->hid_sensor_hubs[i] = NULL; } }
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kees Cook kees@kernel.org
[ Upstream commit 39ec9eaaa165d297d008d1fa385748430bd18e4d ]
The sorting of VMAs by size in commit 7d442a33bfe8 ("binfmt_elf: Dump smaller VMAs first in ELF cores") breaks elfutils[1]. Instead, sort based on the setting of the new sysctl, core_sort_vma, which defaults to 0, no sorting.
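The new behaviour is off by default and can be enabled through the kernel.core_sort_vma sysctl, for example (illustrative snippet only, equivalent to `sysctl -w kernel.core_sort_vma=1`):

	#include <stdio.h>

	int main(void)
	{
		FILE *f = fopen("/proc/sys/kernel/core_sort_vma", "w");

		if (!f) {
			perror("core_sort_vma");
			return 1;
		}
		fputs("1\n", f);
		fclose(f);
		return 0;
	}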
Reported-by: Michael Stapelberg michael@stapelberg.ch Closes: https://lore.kernel.org/all/20250218085407.61126-1-michael@stapelberg.de/ [1] Fixes: 7d442a33bfe8 ("binfmt_elf: Dump smaller VMAs first in ELF cores") Signed-off-by: Kees Cook kees@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- Documentation/admin-guide/sysctl/kernel.rst | 11 +++++++++++ fs/coredump.c | 15 +++++++++++++-- 2 files changed, 24 insertions(+), 2 deletions(-)
diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst index f8bc1630eba05..fa21cdd610b21 100644 --- a/Documentation/admin-guide/sysctl/kernel.rst +++ b/Documentation/admin-guide/sysctl/kernel.rst @@ -212,6 +212,17 @@ pid>/``). This value defaults to 0.
+core_sort_vma +============= + +The default coredump writes VMAs in address order. By setting +``core_sort_vma`` to 1, VMAs will be written from smallest size +to largest size. This is known to break at least elfutils, but +can be handy when dealing with very large (and truncated) +coredumps where the more useful debugging details are included +in the smaller VMAs. + + core_uses_pid =============
diff --git a/fs/coredump.c b/fs/coredump.c index 45737b43dda5c..2b8c36c9660c5 100644 --- a/fs/coredump.c +++ b/fs/coredump.c @@ -63,6 +63,7 @@ static void free_vma_snapshot(struct coredump_params *cprm);
static int core_uses_pid; static unsigned int core_pipe_limit; +static unsigned int core_sort_vma; static char core_pattern[CORENAME_MAX_SIZE] = "core"; static int core_name_size = CORENAME_MAX_SIZE; unsigned int core_file_note_size_limit = CORE_FILE_NOTE_SIZE_DEFAULT; @@ -1025,6 +1026,15 @@ static struct ctl_table coredump_sysctls[] = { .extra1 = (unsigned int *)&core_file_note_size_min, .extra2 = (unsigned int *)&core_file_note_size_max, }, + { + .procname = "core_sort_vma", + .data = &core_sort_vma, + .maxlen = sizeof(int), + .mode = 0644, + .proc_handler = proc_douintvec_minmax, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, + }, };
static int __init init_fs_coredump_sysctls(void) @@ -1255,8 +1265,9 @@ static bool dump_vma_snapshot(struct coredump_params *cprm) cprm->vma_data_size += m->dump_size; }
- sort(cprm->vma_meta, cprm->vma_count, sizeof(*cprm->vma_meta), - cmp_vma_size, NULL); + if (core_sort_vma) + sort(cprm->vma_meta, cprm->vma_count, sizeof(*cprm->vma_meta), + cmp_vma_size, NULL);
return true; }
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Keith Busch kbusch@kernel.org
[ Upstream commit 979c6342f9c0a48696a6420f14f9dd409591657f ]
Supporting this mode allows creating and merging multi-segment metadata requests that wouldn't be possible otherwise. It also allows directly using user space requests that straddle physically discontiguous pages.
Reviewed-by: Christoph Hellwig hch@lst.de Signed-off-by: Keith Busch kbusch@kernel.org Stable-dep-of: 00817f0f1c45 ("nvme-ioctl: fix leaked requests on mapping error") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/nvme/host/nvme.h | 7 ++ drivers/nvme/host/pci.c | 144 +++++++++++++++++++++++++++++++++++---- include/linux/nvme.h | 1 + 3 files changed, 137 insertions(+), 15 deletions(-)
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h index 61bba5513de05..dcdce7d12e441 100644 --- a/drivers/nvme/host/nvme.h +++ b/drivers/nvme/host/nvme.h @@ -1130,6 +1130,13 @@ static inline bool nvme_ctrl_sgl_supported(struct nvme_ctrl *ctrl) return ctrl->sgls & ((1 << 0) | (1 << 1)); }
+static inline bool nvme_ctrl_meta_sgl_supported(struct nvme_ctrl *ctrl) +{ + if (ctrl->ops->flags & NVME_F_FABRICS) + return true; + return ctrl->sgls & NVME_CTRL_SGLS_MSDS; +} + #ifdef CONFIG_NVME_HOST_AUTH int __init nvme_init_auth(void); void __exit nvme_exit_auth(void); diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c index cc74682dc0d4e..58bdd0da6b658 100644 --- a/drivers/nvme/host/pci.c +++ b/drivers/nvme/host/pci.c @@ -43,6 +43,7 @@ */ #define NVME_MAX_KB_SZ 8192 #define NVME_MAX_SEGS 128 +#define NVME_MAX_META_SEGS 15 #define NVME_MAX_NR_ALLOCATIONS 5
static int use_threaded_interrupts; @@ -143,6 +144,7 @@ struct nvme_dev { bool hmb;
mempool_t *iod_mempool; + mempool_t *iod_meta_mempool;
/* shadow doorbell buffer support: */ __le32 *dbbuf_dbs; @@ -238,6 +240,8 @@ struct nvme_iod { dma_addr_t first_dma; dma_addr_t meta_dma; struct sg_table sgt; + struct sg_table meta_sgt; + union nvme_descriptor meta_list; union nvme_descriptor list[NVME_MAX_NR_ALLOCATIONS]; };
@@ -505,6 +509,14 @@ static void nvme_commit_rqs(struct blk_mq_hw_ctx *hctx) spin_unlock(&nvmeq->sq_lock); }
+static inline bool nvme_pci_metadata_use_sgls(struct nvme_dev *dev, + struct request *req) +{ + if (!nvme_ctrl_meta_sgl_supported(&dev->ctrl)) + return false; + return req->nr_integrity_segments > 1; +} + static inline bool nvme_pci_use_sgls(struct nvme_dev *dev, struct request *req, int nseg) { @@ -517,6 +529,8 @@ static inline bool nvme_pci_use_sgls(struct nvme_dev *dev, struct request *req, return false; if (!nvmeq->qid) return false; + if (nvme_pci_metadata_use_sgls(dev, req)) + return true; if (!sgl_threshold || avg_seg_size < sgl_threshold) return false; return true; @@ -779,7 +793,8 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req, struct bio_vec bv = req_bvec(req);
if (!is_pci_p2pdma_page(bv.bv_page)) { - if ((bv.bv_offset & (NVME_CTRL_PAGE_SIZE - 1)) + + if (!nvme_pci_metadata_use_sgls(dev, req) && + (bv.bv_offset & (NVME_CTRL_PAGE_SIZE - 1)) + bv.bv_len <= NVME_CTRL_PAGE_SIZE * 2) return nvme_setup_prp_simple(dev, req, &cmnd->rw, &bv); @@ -823,11 +838,69 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req, return ret; }
-static blk_status_t nvme_map_metadata(struct nvme_dev *dev, struct request *req, - struct nvme_command *cmnd) +static blk_status_t nvme_pci_setup_meta_sgls(struct nvme_dev *dev, + struct request *req) +{ + struct nvme_iod *iod = blk_mq_rq_to_pdu(req); + struct nvme_rw_command *cmnd = &iod->cmd.rw; + struct nvme_sgl_desc *sg_list; + struct scatterlist *sgl, *sg; + unsigned int entries; + dma_addr_t sgl_dma; + int rc, i; + + iod->meta_sgt.sgl = mempool_alloc(dev->iod_meta_mempool, GFP_ATOMIC); + if (!iod->meta_sgt.sgl) + return BLK_STS_RESOURCE; + + sg_init_table(iod->meta_sgt.sgl, req->nr_integrity_segments); + iod->meta_sgt.orig_nents = blk_rq_map_integrity_sg(req, + iod->meta_sgt.sgl); + if (!iod->meta_sgt.orig_nents) + goto out_free_sg; + + rc = dma_map_sgtable(dev->dev, &iod->meta_sgt, rq_dma_dir(req), + DMA_ATTR_NO_WARN); + if (rc) + goto out_free_sg; + + sg_list = dma_pool_alloc(dev->prp_small_pool, GFP_ATOMIC, &sgl_dma); + if (!sg_list) + goto out_unmap_sg; + + entries = iod->meta_sgt.nents; + iod->meta_list.sg_list = sg_list; + iod->meta_dma = sgl_dma; + + cmnd->flags = NVME_CMD_SGL_METASEG; + cmnd->metadata = cpu_to_le64(sgl_dma); + + sgl = iod->meta_sgt.sgl; + if (entries == 1) { + nvme_pci_sgl_set_data(sg_list, sgl); + return BLK_STS_OK; + } + + sgl_dma += sizeof(*sg_list); + nvme_pci_sgl_set_seg(sg_list, sgl_dma, entries); + for_each_sg(sgl, sg, entries, i) + nvme_pci_sgl_set_data(&sg_list[i + 1], sg); + + return BLK_STS_OK; + +out_unmap_sg: + dma_unmap_sgtable(dev->dev, &iod->meta_sgt, rq_dma_dir(req), 0); +out_free_sg: + mempool_free(iod->meta_sgt.sgl, dev->iod_meta_mempool); + return BLK_STS_RESOURCE; +} + +static blk_status_t nvme_pci_setup_meta_mptr(struct nvme_dev *dev, + struct request *req) { struct nvme_iod *iod = blk_mq_rq_to_pdu(req); struct bio_vec bv = rq_integrity_vec(req); + struct nvme_command *cmnd = &iod->cmd;
iod->meta_dma = dma_map_bvec(dev->dev, &bv, rq_dma_dir(req), 0); if (dma_mapping_error(dev->dev, iod->meta_dma)) @@ -836,6 +909,13 @@ static blk_status_t nvme_map_metadata(struct nvme_dev *dev, struct request *req, return BLK_STS_OK; }
+static blk_status_t nvme_map_metadata(struct nvme_dev *dev, struct request *req) +{ + if (nvme_pci_metadata_use_sgls(dev, req)) + return nvme_pci_setup_meta_sgls(dev, req); + return nvme_pci_setup_meta_mptr(dev, req); +} + static blk_status_t nvme_prep_rq(struct nvme_dev *dev, struct request *req) { struct nvme_iod *iod = blk_mq_rq_to_pdu(req); @@ -844,6 +924,7 @@ static blk_status_t nvme_prep_rq(struct nvme_dev *dev, struct request *req) iod->aborted = false; iod->nr_allocations = -1; iod->sgt.nents = 0; + iod->meta_sgt.nents = 0;
ret = nvme_setup_cmd(req->q->queuedata, req); if (ret) @@ -856,7 +937,7 @@ static blk_status_t nvme_prep_rq(struct nvme_dev *dev, struct request *req) }
if (blk_integrity_rq(req)) { - ret = nvme_map_metadata(dev, req, &iod->cmd); + ret = nvme_map_metadata(dev, req); if (ret) goto out_unmap_data; } @@ -955,17 +1036,31 @@ static void nvme_queue_rqs(struct request **rqlist) *rqlist = requeue_list; }
+static __always_inline void nvme_unmap_metadata(struct nvme_dev *dev, + struct request *req) +{ + struct nvme_iod *iod = blk_mq_rq_to_pdu(req); + + if (!iod->meta_sgt.nents) { + dma_unmap_page(dev->dev, iod->meta_dma, + rq_integrity_vec(req).bv_len, + rq_dma_dir(req)); + return; + } + + dma_pool_free(dev->prp_small_pool, iod->meta_list.sg_list, + iod->meta_dma); + dma_unmap_sgtable(dev->dev, &iod->meta_sgt, rq_dma_dir(req), 0); + mempool_free(iod->meta_sgt.sgl, dev->iod_meta_mempool); +} + static __always_inline void nvme_pci_unmap_rq(struct request *req) { struct nvme_queue *nvmeq = req->mq_hctx->driver_data; struct nvme_dev *dev = nvmeq->dev;
- if (blk_integrity_rq(req)) { - struct nvme_iod *iod = blk_mq_rq_to_pdu(req); - - dma_unmap_page(dev->dev, iod->meta_dma, - rq_integrity_vec(req).bv_len, rq_dma_dir(req)); - } + if (blk_integrity_rq(req)) + nvme_unmap_metadata(dev, req);
if (blk_rq_nr_phys_segments(req)) nvme_unmap_data(dev, req); @@ -2719,6 +2814,7 @@ static void nvme_release_prp_pools(struct nvme_dev *dev)
static int nvme_pci_alloc_iod_mempool(struct nvme_dev *dev) { + size_t meta_size = sizeof(struct scatterlist) * (NVME_MAX_META_SEGS + 1); size_t alloc_size = sizeof(struct scatterlist) * NVME_MAX_SEGS;
dev->iod_mempool = mempool_create_node(1, @@ -2727,7 +2823,18 @@ static int nvme_pci_alloc_iod_mempool(struct nvme_dev *dev) dev_to_node(dev->dev)); if (!dev->iod_mempool) return -ENOMEM; + + dev->iod_meta_mempool = mempool_create_node(1, + mempool_kmalloc, mempool_kfree, + (void *)meta_size, GFP_KERNEL, + dev_to_node(dev->dev)); + if (!dev->iod_meta_mempool) + goto free; + return 0; +free: + mempool_destroy(dev->iod_mempool); + return -ENOMEM; }
static void nvme_free_tagset(struct nvme_dev *dev) @@ -2792,6 +2899,11 @@ static void nvme_reset_work(struct work_struct *work) if (result) goto out;
+ if (nvme_ctrl_meta_sgl_supported(&dev->ctrl)) + dev->ctrl.max_integrity_segments = NVME_MAX_META_SEGS; + else + dev->ctrl.max_integrity_segments = 1; + nvme_dbbuf_dma_alloc(dev);
result = nvme_setup_host_mem(dev); @@ -3061,11 +3173,6 @@ static struct nvme_dev *nvme_pci_alloc_dev(struct pci_dev *pdev, dev->ctrl.max_hw_sectors = min_t(u32, NVME_MAX_KB_SZ << 1, dma_opt_mapping_size(&pdev->dev) >> 9); dev->ctrl.max_segments = NVME_MAX_SEGS; - - /* - * There is no support for SGLs for metadata (yet), so we are limited to - * a single integrity segment for the separate metadata pointer. - */ dev->ctrl.max_integrity_segments = 1; return dev;
@@ -3128,6 +3235,11 @@ static int nvme_probe(struct pci_dev *pdev, const struct pci_device_id *id) if (result) goto out_disable;
+ if (nvme_ctrl_meta_sgl_supported(&dev->ctrl)) + dev->ctrl.max_integrity_segments = NVME_MAX_META_SEGS; + else + dev->ctrl.max_integrity_segments = 1; + nvme_dbbuf_dma_alloc(dev);
result = nvme_setup_host_mem(dev); @@ -3170,6 +3282,7 @@ static int nvme_probe(struct pci_dev *pdev, const struct pci_device_id *id) nvme_free_queues(dev, 0); out_release_iod_mempool: mempool_destroy(dev->iod_mempool); + mempool_destroy(dev->iod_meta_mempool); out_release_prp_pools: nvme_release_prp_pools(dev); out_dev_unmap: @@ -3235,6 +3348,7 @@ static void nvme_remove(struct pci_dev *pdev) nvme_dbbuf_dma_free(dev); nvme_free_queues(dev, 0); mempool_destroy(dev->iod_mempool); + mempool_destroy(dev->iod_meta_mempool); nvme_release_prp_pools(dev); nvme_dev_unmap(dev); nvme_uninit_ctrl(&dev->ctrl); diff --git a/include/linux/nvme.h b/include/linux/nvme.h index b58d9405d65e0..1c101f6fad2f3 100644 --- a/include/linux/nvme.h +++ b/include/linux/nvme.h @@ -388,6 +388,7 @@ enum { NVME_CTRL_CTRATT_PREDICTABLE_LAT = 1 << 5, NVME_CTRL_CTRATT_NAMESPACE_GRANULARITY = 1 << 7, NVME_CTRL_CTRATT_UUID_LIST = 1 << 9, + NVME_CTRL_SGLS_MSDS = 1 << 19, };
struct nvme_lbaf {
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Keith Busch kbusch@kernel.org
[ Upstream commit 6fad84a4d624c300d03ebba457cc641765050c43 ]
If the device supports SGLs, use these for all user requests. This format encodes the expected transfer length so it can catch short buffer errors in a user command, whether they occurred accidentally or maliciously.
For controllers that support SGL data mode, this is a viable mitigation to CVE-2023-6238. For controllers that don't support SGLs, log a warning in the passthrough path since not having the capability can corrupt data if the interface is not used correctly.
Reviewed-by: Christoph Hellwig hch@lst.de Signed-off-by: Keith Busch kbusch@kernel.org Stable-dep-of: 00817f0f1c45 ("nvme-ioctl: fix leaked requests on mapping error") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/nvme/host/ioctl.c | 12 ++++++++++-- drivers/nvme/host/pci.c | 5 +++-- 2 files changed, 13 insertions(+), 4 deletions(-)
diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c index 61af1583356c2..d4b80938de09c 100644 --- a/drivers/nvme/host/ioctl.c +++ b/drivers/nvme/host/ioctl.c @@ -120,12 +120,20 @@ static int nvme_map_user_request(struct request *req, u64 ubuffer, struct nvme_ns *ns = q->queuedata; struct block_device *bdev = ns ? ns->disk->part0 : NULL; bool supports_metadata = bdev && blk_get_integrity(bdev->bd_disk); + struct nvme_ctrl *ctrl = nvme_req(req)->ctrl; bool has_metadata = meta_buffer && meta_len; struct bio *bio = NULL; int ret;
- if (has_metadata && !supports_metadata) - return -EINVAL; + if (!nvme_ctrl_sgl_supported(ctrl)) + dev_warn_once(ctrl->device, "using unchecked data buffer\n"); + if (has_metadata) { + if (!supports_metadata) + return -EINVAL; + if (!nvme_ctrl_meta_sgl_supported(ctrl)) + dev_warn_once(ctrl->device, + "using unchecked metadata buffer\n"); + }
if (ioucmd && (ioucmd->flags & IORING_URING_CMD_FIXED)) { struct iov_iter iter; diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c index 58bdd0da6b658..e1329d4974fd6 100644 --- a/drivers/nvme/host/pci.c +++ b/drivers/nvme/host/pci.c @@ -514,7 +514,8 @@ static inline bool nvme_pci_metadata_use_sgls(struct nvme_dev *dev, { if (!nvme_ctrl_meta_sgl_supported(&dev->ctrl)) return false; - return req->nr_integrity_segments > 1; + return req->nr_integrity_segments > 1 || + nvme_req(req)->flags & NVME_REQ_USERCMD; }
static inline bool nvme_pci_use_sgls(struct nvme_dev *dev, struct request *req, @@ -532,7 +533,7 @@ static inline bool nvme_pci_use_sgls(struct nvme_dev *dev, struct request *req, if (nvme_pci_metadata_use_sgls(dev, req)) return true; if (!sgl_threshold || avg_seg_size < sgl_threshold) - return false; + return nvme_req(req)->flags & NVME_REQ_USERCMD; return true; }
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Keith Busch kbusch@kernel.org
[ Upstream commit 00817f0f1c45b007965f5676b9a2013bb39c7228 ]
All the callers assume nvme_map_user_request() frees the request on a failure. This wasn't happening on invalid metadata or io_uring command flags, so we've been leaking those requests.
Fixes: 23fd22e55b767b ("nvme: wire up fixed buffer support for nvme passthrough") Fixes: 7c2fd76048e95d ("nvme: fix metadata handling in nvme-passthrough") Reviewed-by: Damien Le Moal dlemoal@kernel.org Reviewed-by: Kanchan Joshi joshi.k@samsung.com Reviewed-by: Christoph Hellwig hch@lst.de Signed-off-by: Keith Busch kbusch@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/nvme/host/ioctl.c | 12 ++++++++---- 1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c index d4b80938de09c..e4daac9c24401 100644 --- a/drivers/nvme/host/ioctl.c +++ b/drivers/nvme/host/ioctl.c @@ -128,8 +128,10 @@ static int nvme_map_user_request(struct request *req, u64 ubuffer, if (!nvme_ctrl_sgl_supported(ctrl)) dev_warn_once(ctrl->device, "using unchecked data buffer\n"); if (has_metadata) { - if (!supports_metadata) - return -EINVAL; + if (!supports_metadata) { + ret = -EINVAL; + goto out; + } if (!nvme_ctrl_meta_sgl_supported(ctrl)) dev_warn_once(ctrl->device, "using unchecked metadata buffer\n"); @@ -139,8 +141,10 @@ static int nvme_map_user_request(struct request *req, u64 ubuffer, struct iov_iter iter;
/* fixedbufs is only for non-vectored io */ - if (WARN_ON_ONCE(flags & NVME_IOCTL_VEC)) - return -EINVAL; + if (WARN_ON_ONCE(flags & NVME_IOCTL_VEC)) { + ret = -EINVAL; + goto out; + } ret = io_uring_cmd_import_fixed(ubuffer, bufflen, rq_data_dir(req), &iter, ioucmd); if (ret < 0)
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ilan Peer ilan.peer@intel.com
[ Upstream commit 24711d60f8492a30622e419cee643d59264ea939 ]
Add support for parsing an ML element of type EPCS priority access, which can optionally be included in EHT protected action frames used to configure EPCS.
Signed-off-by: Ilan Peer ilan.peer@intel.com Reviewed-by: Johannes Berg johannes.berg@intel.com Signed-off-by: Miri Korenblit miriam.rachel.korenblit@intel.com Link: https://patch.msgid.link/20250102161730.5afdf65cff46.I0ffa30b40fbad47bc5b608... Signed-off-by: Johannes Berg johannes.berg@intel.com Stable-dep-of: 99ca2c28e6b6 ("wifi: mac80211: fix MLE non-inheritance parsing") Signed-off-by: Sasha Levin sashal@kernel.org --- net/mac80211/ieee80211_i.h | 2 ++ net/mac80211/parse.c | 29 +++++++++++++++++++++++++++++ 2 files changed, 31 insertions(+)
diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h index 7a0242e937d36..bfe0514efca37 100644 --- a/net/mac80211/ieee80211_i.h +++ b/net/mac80211/ieee80211_i.h @@ -1751,6 +1751,7 @@ struct ieee802_11_elems { const struct ieee80211_eht_operation *eht_operation; const struct ieee80211_multi_link_elem *ml_basic; const struct ieee80211_multi_link_elem *ml_reconf; + const struct ieee80211_multi_link_elem *ml_epcs; const struct ieee80211_bandwidth_indication *bandwidth_indication; const struct ieee80211_ttlm_elem *ttlm[IEEE80211_TTLM_MAX_CNT];
@@ -1781,6 +1782,7 @@ struct ieee802_11_elems { /* mult-link element can be de-fragmented and thus u8 is not sufficient */ size_t ml_basic_len; size_t ml_reconf_len; + size_t ml_epcs_len;
u8 ttlm_num;
diff --git a/net/mac80211/parse.c b/net/mac80211/parse.c index 279c5143b3356..cd318c1c67bec 100644 --- a/net/mac80211/parse.c +++ b/net/mac80211/parse.c @@ -44,6 +44,9 @@ struct ieee80211_elems_parse { /* The reconfiguration Multi-Link element in the original elements */ const struct element *ml_reconf_elem;
+ /* The EPCS Multi-Link element in the original elements */ + const struct element *ml_epcs_elem; + /* * scratch buffer that can be used for various element parsing related * tasks, e.g., element de-fragmentation etc. @@ -159,6 +162,9 @@ ieee80211_parse_extension_element(u32 *crc, case IEEE80211_ML_CONTROL_TYPE_RECONF: elems_parse->ml_reconf_elem = elem; break; + case IEEE80211_ML_CONTROL_TYPE_PRIO_ACCESS: + elems_parse->ml_epcs_elem = elem; + break; default: break; } @@ -943,6 +949,27 @@ ieee80211_mle_defrag_reconf(struct ieee80211_elems_parse *elems_parse) elems_parse->scratch_pos += ml_len; }
+static void +ieee80211_mle_defrag_epcs(struct ieee80211_elems_parse *elems_parse) +{ + struct ieee802_11_elems *elems = &elems_parse->elems; + ssize_t ml_len; + + ml_len = cfg80211_defragment_element(elems_parse->ml_epcs_elem, + elems->ie_start, + elems->total_len, + elems_parse->scratch_pos, + elems_parse->scratch + + elems_parse->scratch_len - + elems_parse->scratch_pos, + WLAN_EID_FRAGMENT); + if (ml_len < 0) + return; + elems->ml_epcs = (void *)elems_parse->scratch_pos; + elems->ml_epcs_len = ml_len; + elems_parse->scratch_pos += ml_len; +} + struct ieee802_11_elems * ieee802_11_parse_elems_full(struct ieee80211_elems_parse_params *params) { @@ -1001,6 +1028,8 @@ ieee802_11_parse_elems_full(struct ieee80211_elems_parse_params *params)
ieee80211_mle_defrag_reconf(elems_parse);
+ ieee80211_mle_defrag_epcs(elems_parse); + if (elems->tim && !elems->parse_error) { const struct ieee80211_tim_ie *tim_ie = elems->tim;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Johannes Berg johannes.berg@intel.com
[ Upstream commit 99ca2c28e6b68084a0fb65585df09b9e28c3ec16 ]
The code is erroneously applying the non-inheritance element to the inner elements rather than the outer, which is clearly completely wrong. Fix it by finding the MLE basic element at the beginning, and then applying the non-inheritance for the outer parsing.
While at it, do some general cleanups such as not allowing callers to try looking for a specific non-transmitted BSS and link at the same time.
Fixes: 45ebac4f059b ("wifi: mac80211: Parse station profile from association response") Reviewed-by: Ilan Peer ilan.peer@intel.com Reviewed-by: Miriam Rachel Korenblit miriam.rachel.korenblit@intel.com Link: https://patch.msgid.link/20250221112451.b46d42f45b66.If5b95dc3c80208e0c62d88... Signed-off-by: Johannes Berg johannes.berg@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/mac80211/mlme.c | 1 + net/mac80211/parse.c | 127 ++++++++++++++++++++++++++++--------------- 2 files changed, 83 insertions(+), 45 deletions(-)
diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c index 111066928b963..88751b0eb317a 100644 --- a/net/mac80211/mlme.c +++ b/net/mac80211/mlme.c @@ -4733,6 +4733,7 @@ static bool ieee80211_assoc_config_link(struct ieee80211_link_data *link, parse_params.start = bss_ies->data; parse_params.len = bss_ies->len; parse_params.bss = cbss; + parse_params.link_id = -1; bss_elems = ieee802_11_parse_elems_full(&parse_params); if (!bss_elems) { ret = false; diff --git a/net/mac80211/parse.c b/net/mac80211/parse.c index cd318c1c67bec..3d5d6658fe8d5 100644 --- a/net/mac80211/parse.c +++ b/net/mac80211/parse.c @@ -47,6 +47,8 @@ struct ieee80211_elems_parse { /* The EPCS Multi-Link element in the original elements */ const struct element *ml_epcs_elem;
+ bool multi_link_inner; + /* * scratch buffer that can be used for various element parsing related * tasks, e.g., element de-fragmentation etc. @@ -152,12 +154,11 @@ ieee80211_parse_extension_element(u32 *crc, switch (le16_get_bits(mle->control, IEEE80211_ML_CONTROL_TYPE)) { case IEEE80211_ML_CONTROL_TYPE_BASIC: - if (elems_parse->ml_basic_elem) { + if (elems_parse->multi_link_inner) { elems->parse_error |= IEEE80211_PARSE_ERR_DUP_NEST_ML_BASIC; break; } - elems_parse->ml_basic_elem = elem; break; case IEEE80211_ML_CONTROL_TYPE_RECONF: elems_parse->ml_reconf_elem = elem; @@ -866,21 +867,36 @@ ieee80211_mle_get_sta_prof(struct ieee80211_elems_parse *elems_parse, } }
-static void ieee80211_mle_parse_link(struct ieee80211_elems_parse *elems_parse, - struct ieee80211_elems_parse_params *params) +static const struct element * +ieee80211_prep_mle_link_parse(struct ieee80211_elems_parse *elems_parse, + struct ieee80211_elems_parse_params *params, + struct ieee80211_elems_parse_params *sub) { struct ieee802_11_elems *elems = &elems_parse->elems; struct ieee80211_mle_per_sta_profile *prof; - struct ieee80211_elems_parse_params sub = { - .mode = params->mode, - .action = params->action, - .from_ap = params->from_ap, - .link_id = -1, - }; - ssize_t ml_len = elems->ml_basic_len; - const struct element *non_inherit = NULL; + const struct element *tmp; + ssize_t ml_len; const u8 *end;
+ if (params->mode < IEEE80211_CONN_MODE_EHT) + return NULL; + + for_each_element_extid(tmp, WLAN_EID_EXT_EHT_MULTI_LINK, + elems->ie_start, elems->total_len) { + const struct ieee80211_multi_link_elem *mle = + (void *)tmp->data + 1; + + if (!ieee80211_mle_size_ok(tmp->data + 1, tmp->datalen - 1)) + continue; + + if (le16_get_bits(mle->control, IEEE80211_ML_CONTROL_TYPE) != + IEEE80211_ML_CONTROL_TYPE_BASIC) + continue; + + elems_parse->ml_basic_elem = tmp; + break; + } + ml_len = cfg80211_defragment_element(elems_parse->ml_basic_elem, elems->ie_start, elems->total_len, @@ -891,26 +907,26 @@ static void ieee80211_mle_parse_link(struct ieee80211_elems_parse *elems_parse, WLAN_EID_FRAGMENT);
if (ml_len < 0) - return; + return NULL;
elems->ml_basic = (const void *)elems_parse->scratch_pos; elems->ml_basic_len = ml_len; elems_parse->scratch_pos += ml_len;
if (params->link_id == -1) - return; + return NULL;
ieee80211_mle_get_sta_prof(elems_parse, params->link_id); prof = elems->prof;
if (!prof) - return; + return NULL;
/* check if we have the 4 bytes for the fixed part in assoc response */ if (elems->sta_prof_len < sizeof(*prof) + prof->sta_info_len - 1 + 4) { elems->prof = NULL; elems->sta_prof_len = 0; - return; + return NULL; }
/* @@ -919,13 +935,17 @@ static void ieee80211_mle_parse_link(struct ieee80211_elems_parse *elems_parse, * the -1 is because the 'sta_info_len' is accounted to as part of the * per-STA profile, but not part of the 'u8 variable[]' portion. */ - sub.start = prof->variable + prof->sta_info_len - 1 + 4; + sub->start = prof->variable + prof->sta_info_len - 1 + 4; end = (const u8 *)prof + elems->sta_prof_len; - sub.len = end - sub.start; + sub->len = end - sub->start;
- non_inherit = cfg80211_find_ext_elem(WLAN_EID_EXT_NON_INHERITANCE, - sub.start, sub.len); - _ieee802_11_parse_elems_full(&sub, elems_parse, non_inherit); + sub->mode = params->mode; + sub->action = params->action; + sub->from_ap = params->from_ap; + sub->link_id = -1; + + return cfg80211_find_ext_elem(WLAN_EID_EXT_NON_INHERITANCE, + sub->start, sub->len); }
static void @@ -973,15 +993,19 @@ ieee80211_mle_defrag_epcs(struct ieee80211_elems_parse *elems_parse) struct ieee802_11_elems * ieee802_11_parse_elems_full(struct ieee80211_elems_parse_params *params) { + struct ieee80211_elems_parse_params sub = {}; struct ieee80211_elems_parse *elems_parse; - struct ieee802_11_elems *elems; const struct element *non_inherit = NULL; - u8 *nontransmitted_profile; - int nontransmitted_profile_len = 0; + struct ieee802_11_elems *elems; size_t scratch_len = 3 * params->len; + bool multi_link_inner = false;
BUILD_BUG_ON(offsetof(typeof(*elems_parse), elems) != 0);
+ /* cannot parse for both a specific link and non-transmitted BSS */ + if (WARN_ON(params->link_id >= 0 && params->bss)) + return NULL; + elems_parse = kzalloc(struct_size(elems_parse, scratch, scratch_len), GFP_ATOMIC); if (!elems_parse) @@ -998,34 +1022,47 @@ ieee802_11_parse_elems_full(struct ieee80211_elems_parse_params *params) ieee80211_clear_tpe(&elems->tpe); ieee80211_clear_tpe(&elems->csa_tpe);
- nontransmitted_profile = elems_parse->scratch_pos; - nontransmitted_profile_len = - ieee802_11_find_bssid_profile(params->start, params->len, - elems, params->bss, - nontransmitted_profile); - elems_parse->scratch_pos += nontransmitted_profile_len; - non_inherit = cfg80211_find_ext_elem(WLAN_EID_EXT_NON_INHERITANCE, - nontransmitted_profile, - nontransmitted_profile_len); + /* + * If we're looking for a non-transmitted BSS then we cannot at + * the same time be looking for a second link as the two can only + * appear in the same frame carrying info for different BSSes. + * + * In any case, we only look for one at a time, as encoded by + * the WARN_ON above. + */ + if (params->bss) { + int nontx_len = + ieee802_11_find_bssid_profile(params->start, + params->len, + elems, params->bss, + elems_parse->scratch_pos); + sub.start = elems_parse->scratch_pos; + sub.mode = params->mode; + sub.len = nontx_len; + sub.action = params->action; + sub.link_id = params->link_id; + + /* consume the space used for non-transmitted profile */ + elems_parse->scratch_pos += nontx_len; + + non_inherit = cfg80211_find_ext_elem(WLAN_EID_EXT_NON_INHERITANCE, + sub.start, nontx_len); + } else { + /* must always parse to get elems_parse->ml_basic_elem */ + non_inherit = ieee80211_prep_mle_link_parse(elems_parse, params, + &sub); + multi_link_inner = true; + }
elems->crc = _ieee802_11_parse_elems_full(params, elems_parse, non_inherit);
- /* Override with nontransmitted profile, if found */ - if (nontransmitted_profile_len) { - struct ieee80211_elems_parse_params sub = { - .mode = params->mode, - .start = nontransmitted_profile, - .len = nontransmitted_profile_len, - .action = params->action, - .link_id = params->link_id, - }; - + /* Override with nontransmitted/per-STA profile if found */ + if (sub.len) { + elems_parse->multi_link_inner = multi_link_inner; _ieee802_11_parse_elems_full(&sub, elems_parse, NULL); }
- ieee80211_mle_parse_link(elems_parse, params); - ieee80211_mle_defrag_reconf(elems_parse);
ieee80211_mle_defrag_epcs(elems_parse);
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Johannes Berg johannes.berg@intel.com
[ Upstream commit 130067e9c13bdc4820748ef16076a6972364745f ]
If there's any vendor-specific element in the subelements, then the outer element parsing must not parse any vendor element at all. This isn't implemented correctly now due to parsing into the pointers and then overriding them, so explicitly skip vendor elements if any exist in the sub-elements (non-transmitted profile or per-STA profile).
Fixes: 671042a4fb77 ("mac80211: support non-inheritance element") Reviewed-by: Ilan Peer ilan.peer@intel.com Reviewed-by: Miriam Rachel Korenblit miriam.rachel.korenblit@intel.com Link: https://patch.msgid.link/20250221112451.fd71e5268840.I9db3e6a3367e6ff38d052d... Signed-off-by: Johannes Berg johannes.berg@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/mac80211/parse.c | 8 ++++++++ 1 file changed, 8 insertions(+)
diff --git a/net/mac80211/parse.c b/net/mac80211/parse.c index 3d5d6658fe8d5..6da39c864f45b 100644 --- a/net/mac80211/parse.c +++ b/net/mac80211/parse.c @@ -48,6 +48,7 @@ struct ieee80211_elems_parse { const struct element *ml_epcs_elem;
bool multi_link_inner; + bool skip_vendor;
/* * scratch buffer that can be used for various element parsing related @@ -400,6 +401,9 @@ _ieee802_11_parse_elems_full(struct ieee80211_elems_parse_params *params, IEEE80211_PARSE_ERR_BAD_ELEM_SIZE; break; case WLAN_EID_VENDOR_SPECIFIC: + if (elems_parse->skip_vendor) + break; + if (elen >= 4 && pos[0] == 0x00 && pos[1] == 0x50 && pos[2] == 0xf2) { /* Microsoft OUI (00:50:F2) */ @@ -1054,12 +1058,16 @@ ieee802_11_parse_elems_full(struct ieee80211_elems_parse_params *params) multi_link_inner = true; }
+ elems_parse->skip_vendor = + cfg80211_find_elem(WLAN_EID_VENDOR_SPECIFIC, + sub.start, sub.len); elems->crc = _ieee802_11_parse_elems_full(params, elems_parse, non_inherit);
/* Override with nontransmitted/per-STA profile if found */ if (sub.len) { elems_parse->multi_link_inner = multi_link_inner; + elems_parse->skip_vendor = false; _ieee802_11_parse_elems_full(&sub, elems_parse, NULL); }
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Zimmermann tzimmermann@suse.de
[ Upstream commit eb1f4adf9101573fc2347978a60d71c4f1176cca ]
The color mode as specified on the kernel command line gives the user's preferred color depth and number of bits per pixel. Move the color-mode-to-format conversion from fbdev helpers into a 4CC helper, so that it can be shared among DRM clients.
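For reference, a minimal sketch of the mapping the new helper performs (mirroring the switch in the hunk below; not part of the patch):

    format = drm_driver_color_mode_format(dev, 15); /* bpp 16, depth 15 */
    format = drm_driver_color_mode_format(dev, 32); /* bpp 32, depth 24 */
    format = drm_driver_color_mode_format(dev, 16); /* bpp == depth == 16 */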
v2: - fix grammar in commit message (Laurent)
Signed-off-by: Thomas Zimmermann tzimmermann@suse.de Reviewed-by: Laurent Pinchart laurent.pinchart+renesas@ideasonboard.com Link: https://patchwork.freedesktop.org/patch/msgid/20240924071734.98201-2-tzimmer... Stable-dep-of: 6b481ab0e685 ("drm/nouveau: select FW caching") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/drm_fb_helper.c | 70 +++++++-------------------------- drivers/gpu/drm/drm_fourcc.c | 30 +++++++++++++- include/drm/drm_fourcc.h | 1 + 3 files changed, 45 insertions(+), 56 deletions(-)
diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c index eaac2e5726e75..29b0cf3a3fb19 100644 --- a/drivers/gpu/drm/drm_fb_helper.c +++ b/drivers/gpu/drm/drm_fb_helper.c @@ -1443,67 +1443,27 @@ int drm_fb_helper_pan_display(struct fb_var_screeninfo *var, EXPORT_SYMBOL(drm_fb_helper_pan_display);
static uint32_t drm_fb_helper_find_format(struct drm_fb_helper *fb_helper, const uint32_t *formats, - size_t format_count, uint32_t bpp, uint32_t depth) + size_t format_count, unsigned int color_mode) { struct drm_device *dev = fb_helper->dev; uint32_t format; size_t i;
- /* - * Do not consider YUV or other complicated formats - * for framebuffers. This means only legacy formats - * are supported (fmt->depth is a legacy field), but - * the framebuffer emulation can only deal with such - * formats, specifically RGB/BGA formats. - */ - format = drm_mode_legacy_fb_format(bpp, depth); - if (!format) - goto err; + format = drm_driver_color_mode_format(dev, color_mode); + if (!format) { + drm_info(dev, "unsupported color mode of %d\n", color_mode); + return DRM_FORMAT_INVALID; + }
for (i = 0; i < format_count; ++i) { if (formats[i] == format) return format; } - -err: - /* We found nothing. */ - drm_warn(dev, "bpp/depth value of %u/%u not supported\n", bpp, depth); + drm_warn(dev, "format %p4cc not supported\n", &format);
return DRM_FORMAT_INVALID; }
-static uint32_t drm_fb_helper_find_color_mode_format(struct drm_fb_helper *fb_helper, - const uint32_t *formats, size_t format_count, - unsigned int color_mode) -{ - struct drm_device *dev = fb_helper->dev; - uint32_t bpp, depth; - - switch (color_mode) { - case 1: - case 2: - case 4: - case 8: - case 16: - case 24: - bpp = depth = color_mode; - break; - case 15: - bpp = 16; - depth = 15; - break; - case 32: - bpp = 32; - depth = 24; - break; - default: - drm_info(dev, "unsupported color mode of %d\n", color_mode); - return DRM_FORMAT_INVALID; - } - - return drm_fb_helper_find_format(fb_helper, formats, format_count, bpp, depth); -} - static int __drm_fb_helper_find_sizes(struct drm_fb_helper *fb_helper, struct drm_fb_helper_surface_size *sizes) { @@ -1533,10 +1493,10 @@ static int __drm_fb_helper_find_sizes(struct drm_fb_helper *fb_helper, if (!cmdline_mode->bpp_specified) continue;
- surface_format = drm_fb_helper_find_color_mode_format(fb_helper, - plane->format_types, - plane->format_count, - cmdline_mode->bpp); + surface_format = drm_fb_helper_find_format(fb_helper, + plane->format_types, + plane->format_count, + cmdline_mode->bpp); if (surface_format != DRM_FORMAT_INVALID) break; /* found supported format */ } @@ -1546,10 +1506,10 @@ static int __drm_fb_helper_find_sizes(struct drm_fb_helper *fb_helper, break; /* found supported format */
/* try preferred color mode */ - surface_format = drm_fb_helper_find_color_mode_format(fb_helper, - plane->format_types, - plane->format_count, - fb_helper->preferred_bpp); + surface_format = drm_fb_helper_find_format(fb_helper, + plane->format_types, + plane->format_count, + fb_helper->preferred_bpp); if (surface_format != DRM_FORMAT_INVALID) break; /* found supported format */ } diff --git a/drivers/gpu/drm/drm_fourcc.c b/drivers/gpu/drm/drm_fourcc.c index 193cf8ed79128..3a94ca211f9ce 100644 --- a/drivers/gpu/drm/drm_fourcc.c +++ b/drivers/gpu/drm/drm_fourcc.c @@ -36,7 +36,6 @@ * @depth: bit depth per pixel * * Computes a drm fourcc pixel format code for the given @bpp/@depth values. - * Useful in fbdev emulation code, since that deals in those values. */ uint32_t drm_mode_legacy_fb_format(uint32_t bpp, uint32_t depth) { @@ -140,6 +139,35 @@ uint32_t drm_driver_legacy_fb_format(struct drm_device *dev, } EXPORT_SYMBOL(drm_driver_legacy_fb_format);
+/** + * drm_driver_color_mode_format - Compute DRM 4CC code from color mode + * @dev: DRM device + * @color_mode: command-line color mode + * + * Computes a DRM 4CC pixel format code for the given color mode using + * drm_driver_color_mode(). The color mode is in the format used and the + * kernel command line. It specifies the number of bits per pixel + * and color depth in a single value. + * + * Useful in fbdev emulation code, since that deals in those values. The + * helper does not consider YUV or other complicated formats. This means + * only legacy formats are supported (fmt->depth is a legacy field), but + * the framebuffer emulation can only deal with such formats, specifically + * RGB/BGA formats. + */ +uint32_t drm_driver_color_mode_format(struct drm_device *dev, unsigned int color_mode) +{ + switch (color_mode) { + case 15: + return drm_driver_legacy_fb_format(dev, 16, 15); + case 32: + return drm_driver_legacy_fb_format(dev, 32, 24); + default: + return drm_driver_legacy_fb_format(dev, color_mode, color_mode); + } +} +EXPORT_SYMBOL(drm_driver_color_mode_format); + /* * Internal function to query information for a given format. See * drm_format_info() for the public API. diff --git a/include/drm/drm_fourcc.h b/include/drm/drm_fourcc.h index ccf91daa43070..c3f4405d66629 100644 --- a/include/drm/drm_fourcc.h +++ b/include/drm/drm_fourcc.h @@ -313,6 +313,7 @@ drm_get_format_info(struct drm_device *dev, uint32_t drm_mode_legacy_fb_format(uint32_t bpp, uint32_t depth); uint32_t drm_driver_legacy_fb_format(struct drm_device *dev, uint32_t bpp, uint32_t depth); +uint32_t drm_driver_color_mode_format(struct drm_device *dev, unsigned int color_mode); unsigned int drm_format_info_block_width(const struct drm_format_info *info, int plane); unsigned int drm_format_info_block_height(const struct drm_format_info *info,
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Zimmermann tzimmermann@suse.de
[ Upstream commit 5d08c44e47b9d41366714552bdd374ac4b595591 ]
Add an fbdev client that can work with any memory manager. The client implementation is the same as existing code in fbdev-dma or fbdev-shmem.
Provide struct drm_driver.fbdev_probe for the new client to allocate the surface GEM buffer. The new callback replaces fb_probe of struct drm_fb_helper_funcs, which does the same.
To use the new client, DRM drivers set fbdev_probe in their struct drm_driver instance and call drm_fbdev_client_setup(). Probing and creating the fbdev surface buffer is now independent of the other operations in struct drm_fb_helper. For the pixel format, the fbdev client uses either a specified format, the value in preferred_depth, or 32-bit RGB.
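A rough usage sketch (hypothetical foo_* names, not taken from the patch):

    static const struct drm_driver foo_driver = {
            /* ... */
            .fbdev_probe = foo_fbdev_probe, /* allocates the surface GEM buffer */
    };

    /* after drm_dev_register(): */
    drm_fbdev_client_setup(ddev, NULL); /* NULL: fall back to preferred_depth or 32-bit RGB */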
v2: - test for struct drm_fb_helper.funcs for NULL (Sui) - respect struct drm_mode_config.preferred_depth for default format
Signed-off-by: Thomas Zimmermann tzimmermann@suse.de Reviewed-by: Javier Martinez Canillas javierm@redhat.com Link: https://patchwork.freedesktop.org/patch/msgid/20240924071734.98201-4-tzimmer... Stable-dep-of: 6b481ab0e685 ("drm/nouveau: select FW caching") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/Makefile | 4 +- drivers/gpu/drm/drm_fb_helper.c | 15 +-- drivers/gpu/drm/drm_fbdev_client.c | 141 +++++++++++++++++++++++++++++ include/drm/drm_drv.h | 18 ++++ include/drm/drm_fbdev_client.h | 19 ++++ 5 files changed, 190 insertions(+), 7 deletions(-) create mode 100644 drivers/gpu/drm/drm_fbdev_client.c create mode 100644 include/drm/drm_fbdev_client.h
diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile index 84746054c721a..a1dd0909fa38b 100644 --- a/drivers/gpu/drm/Makefile +++ b/drivers/gpu/drm/Makefile @@ -145,7 +145,9 @@ drm_kms_helper-y := \ drm_self_refresh_helper.o \ drm_simple_kms_helper.o drm_kms_helper-$(CONFIG_DRM_PANEL_BRIDGE) += bridge/panel.o -drm_kms_helper-$(CONFIG_DRM_FBDEV_EMULATION) += drm_fb_helper.o +drm_kms_helper-$(CONFIG_DRM_FBDEV_EMULATION) += \ + drm_fbdev_client.o \ + drm_fb_helper.o obj-$(CONFIG_DRM_KMS_HELPER) += drm_kms_helper.o
# diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c index 29b0cf3a3fb19..b15ddbd65e7b5 100644 --- a/drivers/gpu/drm/drm_fb_helper.c +++ b/drivers/gpu/drm/drm_fb_helper.c @@ -492,8 +492,8 @@ EXPORT_SYMBOL(drm_fb_helper_init); * @fb_helper: driver-allocated fbdev helper * * A helper to alloc fb_info and the member cmap. Called by the driver - * within the fb_probe fb_helper callback function. Drivers do not - * need to release the allocated fb_info structure themselves, this is + * within the struct &drm_driver.fbdev_probe callback function. Drivers do + * not need to release the allocated fb_info structure themselves, this is * automatically done when calling drm_fb_helper_fini(). * * RETURNS: @@ -1610,7 +1610,7 @@ static int drm_fb_helper_find_sizes(struct drm_fb_helper *fb_helper,
/* * Allocates the backing storage and sets up the fbdev info structure through - * the ->fb_probe callback. + * the ->fbdev_probe callback. */ static int drm_fb_helper_single_fb_probe(struct drm_fb_helper *fb_helper) { @@ -1628,7 +1628,10 @@ static int drm_fb_helper_single_fb_probe(struct drm_fb_helper *fb_helper) }
/* push down into drivers */ - ret = (*fb_helper->funcs->fb_probe)(fb_helper, &sizes); + if (dev->driver->fbdev_probe) + ret = dev->driver->fbdev_probe(fb_helper, &sizes); + else if (fb_helper->funcs) + ret = fb_helper->funcs->fb_probe(fb_helper, &sizes); if (ret < 0) return ret;
@@ -1700,7 +1703,7 @@ static void drm_fb_helper_fill_var(struct fb_info *info, * instance and the drm framebuffer allocated in &drm_fb_helper.fb. * * Drivers should call this (or their equivalent setup code) from their - * &drm_fb_helper_funcs.fb_probe callback after having allocated the fbdev + * &drm_driver.fbdev_probe callback after having allocated the fbdev * backing storage framebuffer. */ void drm_fb_helper_fill_info(struct fb_info *info, @@ -1856,7 +1859,7 @@ __drm_fb_helper_initial_config_and_unlock(struct drm_fb_helper *fb_helper) * Note that this also registers the fbdev and so allows userspace to call into * the driver through the fbdev interfaces. * - * This function will call down into the &drm_fb_helper_funcs.fb_probe callback + * This function will call down into the &drm_driver.fbdev_probe callback * to let the driver allocate and initialize the fbdev info structure and the * drm framebuffer used to back the fbdev. drm_fb_helper_fill_info() is provided * as a helper to setup simple default values for the fbdev info structure. diff --git a/drivers/gpu/drm/drm_fbdev_client.c b/drivers/gpu/drm/drm_fbdev_client.c new file mode 100644 index 0000000000000..a09382afe2fb6 --- /dev/null +++ b/drivers/gpu/drm/drm_fbdev_client.c @@ -0,0 +1,141 @@ +// SPDX-License-Identifier: MIT + +#include <drm/drm_client.h> +#include <drm/drm_crtc_helper.h> +#include <drm/drm_drv.h> +#include <drm/drm_fbdev_client.h> +#include <drm/drm_fb_helper.h> +#include <drm/drm_fourcc.h> +#include <drm/drm_print.h> + +/* + * struct drm_client_funcs + */ + +static void drm_fbdev_client_unregister(struct drm_client_dev *client) +{ + struct drm_fb_helper *fb_helper = drm_fb_helper_from_client(client); + + if (fb_helper->info) { + drm_fb_helper_unregister_info(fb_helper); + } else { + drm_client_release(&fb_helper->client); + drm_fb_helper_unprepare(fb_helper); + kfree(fb_helper); + } +} + +static int drm_fbdev_client_restore(struct drm_client_dev *client) +{ + drm_fb_helper_lastclose(client->dev); + + return 0; +} + +static int drm_fbdev_client_hotplug(struct drm_client_dev *client) +{ + struct drm_fb_helper *fb_helper = drm_fb_helper_from_client(client); + struct drm_device *dev = client->dev; + int ret; + + if (dev->fb_helper) + return drm_fb_helper_hotplug_event(dev->fb_helper); + + ret = drm_fb_helper_init(dev, fb_helper); + if (ret) + goto err_drm_err; + + if (!drm_drv_uses_atomic_modeset(dev)) + drm_helper_disable_unused_functions(dev); + + ret = drm_fb_helper_initial_config(fb_helper); + if (ret) + goto err_drm_fb_helper_fini; + + return 0; + +err_drm_fb_helper_fini: + drm_fb_helper_fini(fb_helper); +err_drm_err: + drm_err(dev, "fbdev: Failed to setup emulation (ret=%d)\n", ret); + return ret; +} + +static const struct drm_client_funcs drm_fbdev_client_funcs = { + .owner = THIS_MODULE, + .unregister = drm_fbdev_client_unregister, + .restore = drm_fbdev_client_restore, + .hotplug = drm_fbdev_client_hotplug, +}; + +/** + * drm_fbdev_client_setup() - Setup fbdev emulation + * @dev: DRM device + * @format: Preferred color format for the device. DRM_FORMAT_XRGB8888 + * is used if this is zero. + * + * This function sets up fbdev emulation. Restore, hotplug events and + * teardown are all taken care of. Drivers that do suspend/resume need + * to call drm_fb_helper_set_suspend_unlocked() themselves. Simple + * drivers might use drm_mode_config_helper_suspend(). + * + * This function is safe to call even when there are no connectors present. + * Setup will be retried on the next hotplug event. 
+ * + * The fbdev client is destroyed by drm_dev_unregister(). + * + * Returns: + * 0 on success, or a negative errno code otherwise. + */ +int drm_fbdev_client_setup(struct drm_device *dev, const struct drm_format_info *format) +{ + struct drm_fb_helper *fb_helper; + unsigned int color_mode; + int ret; + + /* TODO: Use format info throughout DRM */ + if (format) { + unsigned int bpp = drm_format_info_bpp(format, 0); + + switch (bpp) { + case 16: + color_mode = format->depth; // could also be 15 + break; + default: + color_mode = bpp; + } + } else { + switch (dev->mode_config.preferred_depth) { + case 0: + case 24: + color_mode = 32; + break; + default: + color_mode = dev->mode_config.preferred_depth; + } + } + + drm_WARN(dev, !dev->registered, "Device has not been registered.\n"); + drm_WARN(dev, dev->fb_helper, "fb_helper is already set!\n"); + + fb_helper = kzalloc(sizeof(*fb_helper), GFP_KERNEL); + if (!fb_helper) + return -ENOMEM; + drm_fb_helper_prepare(dev, fb_helper, color_mode, NULL); + + ret = drm_client_init(dev, &fb_helper->client, "fbdev", &drm_fbdev_client_funcs); + if (ret) { + drm_err(dev, "Failed to register client: %d\n", ret); + goto err_drm_client_init; + } + + drm_client_register(&fb_helper->client); + + return 0; + +err_drm_client_init: + drm_fb_helper_unprepare(fb_helper); + kfree(fb_helper); + return ret; +} +EXPORT_SYMBOL(drm_fbdev_client_setup); diff --git a/include/drm/drm_drv.h b/include/drm/drm_drv.h index 02ea4e3248fdf..36a606af4ba1d 100644 --- a/include/drm/drm_drv.h +++ b/include/drm/drm_drv.h @@ -34,6 +34,8 @@
#include <drm/drm_device.h>
+struct drm_fb_helper; +struct drm_fb_helper_surface_size; struct drm_file; struct drm_gem_object; struct drm_master; @@ -366,6 +368,22 @@ struct drm_driver { struct drm_device *dev, uint32_t handle, uint64_t *offset);
+ /** + * @fbdev_probe + * + * Allocates and initialize the fb_info structure for fbdev emulation. + * Furthermore it also needs to allocate the DRM framebuffer used to + * back the fbdev. + * + * This callback is mandatory for fbdev support. + * + * Returns: + * + * 0 on success ot a negative error code otherwise. + */ + int (*fbdev_probe)(struct drm_fb_helper *fbdev_helper, + struct drm_fb_helper_surface_size *sizes); + /** * @show_fdinfo: * diff --git a/include/drm/drm_fbdev_client.h b/include/drm/drm_fbdev_client.h new file mode 100644 index 0000000000000..e11a5614f127c --- /dev/null +++ b/include/drm/drm_fbdev_client.h @@ -0,0 +1,19 @@ +/* SPDX-License-Identifier: MIT */ + +#ifndef DRM_FBDEV_CLIENT_H +#define DRM_FBDEV_CLIENT_H + +struct drm_device; +struct drm_format_info; + +#ifdef CONFIG_DRM_FBDEV_EMULATION +int drm_fbdev_client_setup(struct drm_device *dev, const struct drm_format_info *format); +#else +static inline int drm_fbdev_client_setup(struct drm_device *dev, + const struct drm_format_info *format) +{ + return 0; +} +#endif + +#endif
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Zimmermann tzimmermann@suse.de
[ Upstream commit d07fdf9225922d3e36ebd13ccab3df62b1ccdab3 ]
DRM may support multiple in-kernel clients that run as soon as a DRM driver has been registered. To select the client(s) in a single place, introduce drm_client_setup().
Drivers that call the new helper automatically instantiate the kernel's configured default clients. Only fbdev emulation is currently supported. Later versions can add support for DRM-based logging, a boot logo or even a console.
Some drivers handle the color mode for clients internally. Provide the helper drm_client_setup_with_color_mode() for them.
Using the new interface requires the driver to select DRM_CLIENT_SELECTION in its Kconfig. For now this only enables the client-setup helpers if the fbdev client has been configured by the user. A future patchset will further modularize client support and rework DRM_CLIENT_SELECTION to select the correct dependencies for all its clients.
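A short sketch of the resulting call sites (ddev is a placeholder for the driver's drm_device):

    /* in the driver's Kconfig entry: */
    select DRM_CLIENT_SELECTION

    /* after drm_dev_register(): */
    drm_client_setup(ddev, NULL);                          /* no driver-preferred format */
    drm_client_setup_with_fourcc(ddev, DRM_FORMAT_RGB565); /* driver-preferred 4CC */
    drm_client_setup_with_color_mode(ddev, 16);            /* legacy color-mode value */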
v5: - add CONFIG_DRM_CLIENT_SELECTION und DRM_CLIENT_SETUP v4: - fix docs for drm_client_setup_with_fourcc() (Geert) v3: - fix build error v2: - add drm_client_setup_with_fourcc() (Laurent) - push default-format handling into actual clients
Signed-off-by: Thomas Zimmermann tzimmermann@suse.de Reviewed-by: Laurent Pinchart laurent.pinchart+renesas@ideasonboard.com Link: https://patchwork.freedesktop.org/patch/msgid/20240924071734.98201-5-tzimmer... Stable-dep-of: 6b481ab0e685 ("drm/nouveau: select FW caching") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/Kconfig | 12 ++++++ drivers/gpu/drm/Makefile | 2 + drivers/gpu/drm/drm_client_setup.c | 66 ++++++++++++++++++++++++++++++ include/drm/drm_client_setup.h | 26 ++++++++++++ 4 files changed, 106 insertions(+) create mode 100644 drivers/gpu/drm/drm_client_setup.c create mode 100644 include/drm/drm_client_setup.h
diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig index 7408ea8caacc3..ae53f26da945f 100644 --- a/drivers/gpu/drm/Kconfig +++ b/drivers/gpu/drm/Kconfig @@ -211,6 +211,18 @@ config DRM_DEBUG_MODESET_LOCK
If in doubt, say "N".
+config DRM_CLIENT_SELECTION + bool + depends on DRM + select DRM_CLIENT_SETUP if DRM_FBDEV_EMULATION + help + Drivers that support in-kernel DRM clients have to select this + option. + +config DRM_CLIENT_SETUP + bool + depends on DRM_CLIENT_SELECTION + config DRM_FBDEV_EMULATION bool "Enable legacy fbdev support for your modesetting driver" depends on DRM diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile index a1dd0909fa38b..1ec44529447a7 100644 --- a/drivers/gpu/drm/Makefile +++ b/drivers/gpu/drm/Makefile @@ -144,6 +144,8 @@ drm_kms_helper-y := \ drm_rect.o \ drm_self_refresh_helper.o \ drm_simple_kms_helper.o +drm_kms_helper-$(CONFIG_DRM_CLIENT_SETUP) += \ + drm_client_setup.o drm_kms_helper-$(CONFIG_DRM_PANEL_BRIDGE) += bridge/panel.o drm_kms_helper-$(CONFIG_DRM_FBDEV_EMULATION) += \ drm_fbdev_client.o \ diff --git a/drivers/gpu/drm/drm_client_setup.c b/drivers/gpu/drm/drm_client_setup.c new file mode 100644 index 0000000000000..5969c4ffe31ba --- /dev/null +++ b/drivers/gpu/drm/drm_client_setup.c @@ -0,0 +1,66 @@ +// SPDX-License-Identifier: MIT + +#include <drm/drm_client_setup.h> +#include <drm/drm_device.h> +#include <drm/drm_fbdev_client.h> +#include <drm/drm_fourcc.h> +#include <drm/drm_print.h> + +/** + * drm_client_setup() - Setup in-kernel DRM clients + * @dev: DRM device + * @format: Preferred pixel format for the device. Use NULL, unless + * there is clearly a driver-preferred format. + * + * This function sets up the in-kernel DRM clients. Restore, hotplug + * events and teardown are all taken care of. + * + * Drivers should call drm_client_setup() after registering the new + * DRM device with drm_dev_register(). This function is safe to call + * even when there are no connectors present. Setup will be retried + * on the next hotplug event. + * + * The clients are destroyed by drm_dev_unregister(). + */ +void drm_client_setup(struct drm_device *dev, const struct drm_format_info *format) +{ + int ret; + + ret = drm_fbdev_client_setup(dev, format); + if (ret) + drm_warn(dev, "Failed to set up DRM client; error %d\n", ret); +} +EXPORT_SYMBOL(drm_client_setup); + +/** + * drm_client_setup_with_fourcc() - Setup in-kernel DRM clients for color mode + * @dev: DRM device + * @fourcc: Preferred pixel format as 4CC code for the device + * + * This function sets up the in-kernel DRM clients. It is equivalent + * to drm_client_setup(), but expects a 4CC code as second argument. + */ +void drm_client_setup_with_fourcc(struct drm_device *dev, u32 fourcc) +{ + drm_client_setup(dev, drm_format_info(fourcc)); +} +EXPORT_SYMBOL(drm_client_setup_with_fourcc); + +/** + * drm_client_setup_with_color_mode() - Setup in-kernel DRM clients for color mode + * @dev: DRM device + * @color_mode: Preferred color mode for the device + * + * This function sets up the in-kernel DRM clients. It is equivalent + * to drm_client_setup(), but expects a color mode as second argument. + * + * Do not use this function in new drivers. Prefer drm_client_setup() with a + * format of NULL. 
+ */ +void drm_client_setup_with_color_mode(struct drm_device *dev, unsigned int color_mode) +{ + u32 fourcc = drm_driver_color_mode_format(dev, color_mode); + + drm_client_setup_with_fourcc(dev, fourcc); +} +EXPORT_SYMBOL(drm_client_setup_with_color_mode); diff --git a/include/drm/drm_client_setup.h b/include/drm/drm_client_setup.h new file mode 100644 index 0000000000000..46aab3fb46be5 --- /dev/null +++ b/include/drm/drm_client_setup.h @@ -0,0 +1,26 @@ +/* SPDX-License-Identifier: MIT */ + +#ifndef DRM_CLIENT_SETUP_H +#define DRM_CLIENT_SETUP_H + +#include <linux/types.h> + +struct drm_device; +struct drm_format_info; + +#if defined(CONFIG_DRM_CLIENT_SETUP) +void drm_client_setup(struct drm_device *dev, const struct drm_format_info *format); +void drm_client_setup_with_fourcc(struct drm_device *dev, u32 fourcc); +void drm_client_setup_with_color_mode(struct drm_device *dev, unsigned int color_mode); +#else +static inline void drm_client_setup(struct drm_device *dev, + const struct drm_format_info *format) +{ } +static inline void drm_client_setup_with_fourcc(struct drm_device *dev, u32 fourcc) +{ } +static inline void drm_client_setup_with_color_mode(struct drm_device *dev, + unsigned int color_mode) +{ } +#endif + +#endif
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Zimmermann tzimmermann@suse.de
[ Upstream commit c7c1b9e1d52b0a0dbb0ee552efdc3360c0f5363c ]
Rework fbdev probing to support fbdev_probe in struct drm_driver and reimplement the old fb_probe callback on top of it. Provide an initializer macro for struct drm_driver that sets the callback according to the kernel configuration.
This change allows the common fbdev client to run on top of TTM-based DRM drivers.
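A minimal sketch of how a TTM-based driver wires this up (placeholder struct name; the nouveau patch later in this series does the same):

    static const struct drm_driver foo_ttm_driver = {
            /* ... other callbacks ... */
            DRM_FBDEV_TTM_DRIVER_OPS, /* .fbdev_probe = drm_fbdev_ttm_driver_fbdev_probe */
    };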
Signed-off-by: Thomas Zimmermann tzimmermann@suse.de Acked-by: Javier Martinez Canillas javierm@redhat.com Link: https://patchwork.freedesktop.org/patch/msgid/20240924071734.98201-65-tzimme... Stable-dep-of: 6b481ab0e685 ("drm/nouveau: select FW caching") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/drm_fbdev_ttm.c | 142 +++++++++++++++++--------------- include/drm/drm_fbdev_ttm.h | 13 +++ 2 files changed, 90 insertions(+), 65 deletions(-)
diff --git a/drivers/gpu/drm/drm_fbdev_ttm.c b/drivers/gpu/drm/drm_fbdev_ttm.c index 119ffb28aaf95..d799cbe944cd3 100644 --- a/drivers/gpu/drm/drm_fbdev_ttm.c +++ b/drivers/gpu/drm/drm_fbdev_ttm.c @@ -71,71 +71,7 @@ static const struct fb_ops drm_fbdev_ttm_fb_ops = { static int drm_fbdev_ttm_helper_fb_probe(struct drm_fb_helper *fb_helper, struct drm_fb_helper_surface_size *sizes) { - struct drm_client_dev *client = &fb_helper->client; - struct drm_device *dev = fb_helper->dev; - struct drm_client_buffer *buffer; - struct fb_info *info; - size_t screen_size; - void *screen_buffer; - u32 format; - int ret; - - drm_dbg_kms(dev, "surface width(%d), height(%d) and bpp(%d)\n", - sizes->surface_width, sizes->surface_height, - sizes->surface_bpp); - - format = drm_driver_legacy_fb_format(dev, sizes->surface_bpp, - sizes->surface_depth); - buffer = drm_client_framebuffer_create(client, sizes->surface_width, - sizes->surface_height, format); - if (IS_ERR(buffer)) - return PTR_ERR(buffer); - - fb_helper->buffer = buffer; - fb_helper->fb = buffer->fb; - - screen_size = buffer->gem->size; - screen_buffer = vzalloc(screen_size); - if (!screen_buffer) { - ret = -ENOMEM; - goto err_drm_client_framebuffer_delete; - } - - info = drm_fb_helper_alloc_info(fb_helper); - if (IS_ERR(info)) { - ret = PTR_ERR(info); - goto err_vfree; - } - - drm_fb_helper_fill_info(info, fb_helper, sizes); - - info->fbops = &drm_fbdev_ttm_fb_ops; - - /* screen */ - info->flags |= FBINFO_VIRTFB | FBINFO_READS_FAST; - info->screen_buffer = screen_buffer; - info->fix.smem_len = screen_size; - - /* deferred I/O */ - fb_helper->fbdefio.delay = HZ / 20; - fb_helper->fbdefio.deferred_io = drm_fb_helper_deferred_io; - - info->fbdefio = &fb_helper->fbdefio; - ret = fb_deferred_io_init(info); - if (ret) - goto err_drm_fb_helper_release_info; - - return 0; - -err_drm_fb_helper_release_info: - drm_fb_helper_release_info(fb_helper); -err_vfree: - vfree(screen_buffer); -err_drm_client_framebuffer_delete: - fb_helper->fb = NULL; - fb_helper->buffer = NULL; - drm_client_framebuffer_delete(buffer); - return ret; + return drm_fbdev_ttm_driver_fbdev_probe(fb_helper, sizes); }
static void drm_fbdev_ttm_damage_blit_real(struct drm_fb_helper *fb_helper, @@ -240,6 +176,82 @@ static const struct drm_fb_helper_funcs drm_fbdev_ttm_helper_funcs = { .fb_dirty = drm_fbdev_ttm_helper_fb_dirty, };
+/* + * struct drm_driver + */ + +int drm_fbdev_ttm_driver_fbdev_probe(struct drm_fb_helper *fb_helper, + struct drm_fb_helper_surface_size *sizes) +{ + struct drm_client_dev *client = &fb_helper->client; + struct drm_device *dev = fb_helper->dev; + struct drm_client_buffer *buffer; + struct fb_info *info; + size_t screen_size; + void *screen_buffer; + u32 format; + int ret; + + drm_dbg_kms(dev, "surface width(%d), height(%d) and bpp(%d)\n", + sizes->surface_width, sizes->surface_height, + sizes->surface_bpp); + + format = drm_driver_legacy_fb_format(dev, sizes->surface_bpp, + sizes->surface_depth); + buffer = drm_client_framebuffer_create(client, sizes->surface_width, + sizes->surface_height, format); + if (IS_ERR(buffer)) + return PTR_ERR(buffer); + + fb_helper->funcs = &drm_fbdev_ttm_helper_funcs; + fb_helper->buffer = buffer; + fb_helper->fb = buffer->fb; + + screen_size = buffer->gem->size; + screen_buffer = vzalloc(screen_size); + if (!screen_buffer) { + ret = -ENOMEM; + goto err_drm_client_framebuffer_delete; + } + + info = drm_fb_helper_alloc_info(fb_helper); + if (IS_ERR(info)) { + ret = PTR_ERR(info); + goto err_vfree; + } + + drm_fb_helper_fill_info(info, fb_helper, sizes); + + info->fbops = &drm_fbdev_ttm_fb_ops; + + /* screen */ + info->flags |= FBINFO_VIRTFB | FBINFO_READS_FAST; + info->screen_buffer = screen_buffer; + info->fix.smem_len = screen_size; + + /* deferred I/O */ + fb_helper->fbdefio.delay = HZ / 20; + fb_helper->fbdefio.deferred_io = drm_fb_helper_deferred_io; + + info->fbdefio = &fb_helper->fbdefio; + ret = fb_deferred_io_init(info); + if (ret) + goto err_drm_fb_helper_release_info; + + return 0; + +err_drm_fb_helper_release_info: + drm_fb_helper_release_info(fb_helper); +err_vfree: + vfree(screen_buffer); +err_drm_client_framebuffer_delete: + fb_helper->fb = NULL; + fb_helper->buffer = NULL; + drm_client_framebuffer_delete(buffer); + return ret; +} +EXPORT_SYMBOL(drm_fbdev_ttm_driver_fbdev_probe); + static void drm_fbdev_ttm_client_unregister(struct drm_client_dev *client) { struct drm_fb_helper *fb_helper = drm_fb_helper_from_client(client); diff --git a/include/drm/drm_fbdev_ttm.h b/include/drm/drm_fbdev_ttm.h index 9e6c3bdf35376..243685d02eb13 100644 --- a/include/drm/drm_fbdev_ttm.h +++ b/include/drm/drm_fbdev_ttm.h @@ -3,11 +3,24 @@ #ifndef DRM_FBDEV_TTM_H #define DRM_FBDEV_TTM_H
+#include <linux/stddef.h> + struct drm_device; +struct drm_fb_helper; +struct drm_fb_helper_surface_size;
#ifdef CONFIG_DRM_FBDEV_EMULATION +int drm_fbdev_ttm_driver_fbdev_probe(struct drm_fb_helper *fb_helper, + struct drm_fb_helper_surface_size *sizes); + +#define DRM_FBDEV_TTM_DRIVER_OPS \ + .fbdev_probe = drm_fbdev_ttm_driver_fbdev_probe + void drm_fbdev_ttm_setup(struct drm_device *dev, unsigned int preferred_bpp); #else +#define DRM_FBDEV_TTM_DRIVER_OPS \ + .fbdev_probe = NULL + static inline void drm_fbdev_ttm_setup(struct drm_device *dev, unsigned int preferred_bpp) { } #endif
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Zimmermann tzimmermann@suse.de
[ Upstream commit ef350898ae22db832ada972476fa2999f8ea978c ]
Call drm_client_setup() to run the kernel's default client setup for DRM. Set fbdev_probe in struct drm_driver, so that the client setup can start the common fbdev client.
The nouveau driver specifies a preferred color mode depending on the available video memory, with a default of 32. Adapt this for the new client interface.
v5: - select DRM_CLIENT_SELECTION v2: - style changes
Signed-off-by: Thomas Zimmermann tzimmermann@suse.de Cc: Karol Herbst kherbst@redhat.com Cc: Lyude Paul lyude@redhat.com Cc: Danilo Krummrich dakr@redhat.com Reviewed-by: Lyude Paul lyude@redhat.com Acked-by: Danilo Krummrich dakr@kernel.org Link: https://patchwork.freedesktop.org/patch/msgid/20240924071734.98201-69-tzimme... Stable-dep-of: 6b481ab0e685 ("drm/nouveau: select FW caching") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/nouveau/Kconfig | 1 + drivers/gpu/drm/nouveau/nouveau_drm.c | 10 ++++++++-- 2 files changed, 9 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/Kconfig b/drivers/gpu/drm/nouveau/Kconfig index ceef470c9fbfc..ce840300578d8 100644 --- a/drivers/gpu/drm/nouveau/Kconfig +++ b/drivers/gpu/drm/nouveau/Kconfig @@ -4,6 +4,7 @@ config DRM_NOUVEAU depends on DRM && PCI && MMU select IOMMU_API select FW_LOADER + select DRM_CLIENT_SELECTION select DRM_DISPLAY_DP_HELPER select DRM_DISPLAY_HDMI_HELPER select DRM_DISPLAY_HELPER diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c index 34985771b2a28..6e5adab034713 100644 --- a/drivers/gpu/drm/nouveau/nouveau_drm.c +++ b/drivers/gpu/drm/nouveau/nouveau_drm.c @@ -31,6 +31,7 @@ #include <linux/dynamic_debug.h>
#include <drm/drm_aperture.h> +#include <drm/drm_client_setup.h> #include <drm/drm_drv.h> #include <drm/drm_fbdev_ttm.h> #include <drm/drm_gem_ttm_helper.h> @@ -836,6 +837,7 @@ static int nouveau_drm_probe(struct pci_dev *pdev, { struct nvkm_device *device; struct nouveau_drm *drm; + const struct drm_format_info *format; int ret;
if (vga_switcheroo_client_probe_defer(pdev)) @@ -873,9 +875,11 @@ static int nouveau_drm_probe(struct pci_dev *pdev, goto fail_pci;
if (drm->client.device.info.ram_size <= 32 * 1024 * 1024) - drm_fbdev_ttm_setup(drm->dev, 8); + format = drm_format_info(DRM_FORMAT_C8); else - drm_fbdev_ttm_setup(drm->dev, 32); + format = NULL; + + drm_client_setup(drm->dev, format);
quirk_broken_nv_runpm(pdev); return 0; @@ -1318,6 +1322,8 @@ driver_stub = { .dumb_create = nouveau_display_dumb_create, .dumb_map_offset = drm_gem_ttm_dumb_map_offset,
+ DRM_FBDEV_TTM_DRIVER_OPS, + .name = DRIVER_NAME, .desc = DRIVER_DESC, #ifdef GIT_REVISION
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dave Airlie airlied@redhat.com
[ Upstream commit 6b481ab0e6855fb30e2923c51f62f1662d1cda7e ]
nouveau tries to load some firmware during suspend that it already loaded earlier, but with firmware caching disabled this hangs suspend. Just rely on enabling the FW cache instead of working around it in the driver.
Fixes: 176fdcbddfd2 ("drm/nouveau/gsp/r535: add support for booting GSP-RM") Signed-off-by: Dave Airlie airlied@redhat.com Signed-off-by: Danilo Krummrich dakr@kernel.org Link: https://patchwork.freedesktop.org/patch/msgid/20250207012531.621369-1-airlie... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/nouveau/Kconfig | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/gpu/drm/nouveau/Kconfig b/drivers/gpu/drm/nouveau/Kconfig index ce840300578d8..1050a4617fc15 100644 --- a/drivers/gpu/drm/nouveau/Kconfig +++ b/drivers/gpu/drm/nouveau/Kconfig @@ -4,6 +4,7 @@ config DRM_NOUVEAU depends on DRM && PCI && MMU select IOMMU_API select FW_LOADER + select FW_CACHE if PM_SLEEP select DRM_CLIENT_SELECTION select DRM_DISPLAY_DP_HELPER select DRM_DISPLAY_HDMI_HELPER
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Salah Triki salah.triki@gmail.com
[ Upstream commit cbf85b9cb80bec6345ffe0368dfff98386f4714f ]
Initialize .owner field of force_poll_sync_fops to THIS_MODULE in order to prevent btusb from being unloaded while its operations are in use.
Fixes: 800fe5ec302e ("Bluetooth: btusb: Add support for queuing during polling interval") Signed-off-by: Salah Triki salah.triki@gmail.com Signed-off-by: Luiz Augusto von Dentz luiz.von.dentz@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/bluetooth/btusb.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c index 6bc6dd417adf6..3a0b9dc98707f 100644 --- a/drivers/bluetooth/btusb.c +++ b/drivers/bluetooth/btusb.c @@ -3644,6 +3644,7 @@ static ssize_t force_poll_sync_write(struct file *file, }
static const struct file_operations force_poll_sync_fops = { + .owner = THIS_MODULE, .open = simple_open, .read = force_poll_sync_read, .write = force_poll_sync_write,
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Maurizio Lombardi mlombard@redhat.com
[ Upstream commit 84e009042d0f3dfe91bec60bcd208ee3f866cbcd ]
Previously, the NVMe/TCP host driver did not handle the C2HTermReq PDU, instead printing "unsupported pdu type (3)" when received. This patch adds support for processing the C2HTermReq PDU, allowing the driver to print the Fatal Error Status field.
Example of output: nvme nvme4: Received C2HTermReq (FES = Invalid PDU Header Field)
Signed-off-by: Maurizio Lombardi mlombard@redhat.com Reviewed-by: Sagi Grimberg sagi@grimberg.me Signed-off-by: Keith Busch kbusch@kernel.org Stable-dep-of: ad95bab0cd28 ("nvme-tcp: fix potential memory corruption in nvme_tcp_recv_pdu()") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/nvme/host/tcp.c | 43 ++++++++++++++++++++++++++++++++++++++++ include/linux/nvme-tcp.h | 2 ++ 2 files changed, 45 insertions(+)
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c index 840ae475074d0..749d464bef9f7 100644 --- a/drivers/nvme/host/tcp.c +++ b/drivers/nvme/host/tcp.c @@ -763,6 +763,40 @@ static int nvme_tcp_handle_r2t(struct nvme_tcp_queue *queue, return 0; }
+static void nvme_tcp_handle_c2h_term(struct nvme_tcp_queue *queue, + struct nvme_tcp_term_pdu *pdu) +{ + u16 fes; + const char *msg; + u32 plen = le32_to_cpu(pdu->hdr.plen); + + static const char * const msg_table[] = { + [NVME_TCP_FES_INVALID_PDU_HDR] = "Invalid PDU Header Field", + [NVME_TCP_FES_PDU_SEQ_ERR] = "PDU Sequence Error", + [NVME_TCP_FES_HDR_DIGEST_ERR] = "Header Digest Error", + [NVME_TCP_FES_DATA_OUT_OF_RANGE] = "Data Transfer Out Of Range", + [NVME_TCP_FES_R2T_LIMIT_EXCEEDED] = "R2T Limit Exceeded", + [NVME_TCP_FES_UNSUPPORTED_PARAM] = "Unsupported Parameter", + }; + + if (plen < NVME_TCP_MIN_C2HTERM_PLEN || + plen > NVME_TCP_MAX_C2HTERM_PLEN) { + dev_err(queue->ctrl->ctrl.device, + "Received a malformed C2HTermReq PDU (plen = %u)\n", + plen); + return; + } + + fes = le16_to_cpu(pdu->fes); + if (fes && fes < ARRAY_SIZE(msg_table)) + msg = msg_table[fes]; + else + msg = "Unknown"; + + dev_err(queue->ctrl->ctrl.device, + "Received C2HTermReq (FES = %s)\n", msg); +} + static int nvme_tcp_recv_pdu(struct nvme_tcp_queue *queue, struct sk_buff *skb, unsigned int *offset, size_t *len) { @@ -784,6 +818,15 @@ static int nvme_tcp_recv_pdu(struct nvme_tcp_queue *queue, struct sk_buff *skb, return 0;
hdr = queue->pdu; + if (unlikely(hdr->type == nvme_tcp_c2h_term)) { + /* + * C2HTermReq never includes Header or Data digests. + * Skip the checks. + */ + nvme_tcp_handle_c2h_term(queue, (void *)queue->pdu); + return -EINVAL; + } + if (queue->hdr_digest) { ret = nvme_tcp_verify_hdgst(queue, queue->pdu, hdr->hlen); if (unlikely(ret)) diff --git a/include/linux/nvme-tcp.h b/include/linux/nvme-tcp.h index e07e8978d691b..e435250fcb4d0 100644 --- a/include/linux/nvme-tcp.h +++ b/include/linux/nvme-tcp.h @@ -13,6 +13,8 @@ #define NVME_TCP_ADMIN_CCSZ SZ_8K #define NVME_TCP_DIGEST_LENGTH 4 #define NVME_TCP_MIN_MAXH2CDATA 4096 +#define NVME_TCP_MIN_C2HTERM_PLEN 24 +#define NVME_TCP_MAX_C2HTERM_PLEN 152
enum nvme_tcp_pfv { NVME_TCP_PFV_1_0 = 0x0,
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Maurizio Lombardi mlombard@redhat.com
[ Upstream commit ad95bab0cd28ed77c2c0d0b6e76e03e031391064 ]
nvme_tcp_recv_pdu() doesn't check the validity of the header length. When header digests are enabled, a target might send a packet with an invalid header length (e.g. 255), causing nvme_tcp_verify_hdgst() to access memory outside the allocated area and cause memory corruptions by overwriting it with the calculated digest.
Fix this by rejecting packets with an unexpected header length.
Fixes: 3f2304f8c6d6 ("nvme-tcp: add NVMe over TCP host driver") Signed-off-by: Maurizio Lombardi mlombard@redhat.com Reviewed-by: Sagi Grimberg sagi@grimberg.me Signed-off-by: Keith Busch kbusch@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/nvme/host/tcp.c | 32 +++++++++++++++++++++++++++++--- 1 file changed, 29 insertions(+), 3 deletions(-)
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c index 749d464bef9f7..0bcc9bf57d1d0 100644 --- a/drivers/nvme/host/tcp.c +++ b/drivers/nvme/host/tcp.c @@ -217,6 +217,19 @@ static inline int nvme_tcp_queue_id(struct nvme_tcp_queue *queue) return queue - queue->ctrl->queues; }
+static inline bool nvme_tcp_recv_pdu_supported(enum nvme_tcp_pdu_type type) +{ + switch (type) { + case nvme_tcp_c2h_term: + case nvme_tcp_c2h_data: + case nvme_tcp_r2t: + case nvme_tcp_rsp: + return true; + default: + return false; + } +} + /* * Check if the queue is TLS encrypted */ @@ -818,6 +831,16 @@ static int nvme_tcp_recv_pdu(struct nvme_tcp_queue *queue, struct sk_buff *skb, return 0;
hdr = queue->pdu; + if (unlikely(hdr->hlen != sizeof(struct nvme_tcp_rsp_pdu))) { + if (!nvme_tcp_recv_pdu_supported(hdr->type)) + goto unsupported_pdu; + + dev_err(queue->ctrl->ctrl.device, + "pdu type %d has unexpected header length (%d)\n", + hdr->type, hdr->hlen); + return -EPROTO; + } + if (unlikely(hdr->type == nvme_tcp_c2h_term)) { /* * C2HTermReq never includes Header or Data digests. @@ -850,10 +873,13 @@ static int nvme_tcp_recv_pdu(struct nvme_tcp_queue *queue, struct sk_buff *skb, nvme_tcp_init_recv_ctx(queue); return nvme_tcp_handle_r2t(queue, (void *)queue->pdu); default: - dev_err(queue->ctrl->ctrl.device, - "unsupported pdu type (%d)\n", hdr->type); - return -EINVAL; + goto unsupported_pdu; } + +unsupported_pdu: + dev_err(queue->ctrl->ctrl.device, + "unsupported pdu type (%d)\n", hdr->type); + return -EINVAL; }
static inline void nvme_tcp_end_request(struct request *rq, u16 status)
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Meir Elisha meir.elisha@volumez.com
[ Upstream commit a16f88964c647103dad7743a484b216d488a6352 ]
The order in which queue->cmd and rcv_state are updated is crucial. If these assignments are reordered by the compiler, the worker might not get queued in nvmet_tcp_queue_response(), hanging the IO. To enforce the correct ordering, set rcv_state using smp_store_release().
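Sketched outside the driver for clarity, the required acquire/release pairing looks like this (the actual code is in the hunk below):

    /* writer: nvmet_prepare_receive_pdu() */
    WRITE_ONCE(queue->cmd, NULL);
    smp_store_release(&queue->rcv_state, NVMET_TCP_RECV_PDU);

    /* reader: nvmet_tcp_queue_response() */
    queue_state = smp_load_acquire(&queue->rcv_state);
    queue_cmd = READ_ONCE(queue->cmd); /* observes the cmd value published before the release */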
Fixes: bdaf13279192 ("nvmet-tcp: fix a segmentation fault during io parsing error")
Signed-off-by: Meir Elisha meir.elisha@volumez.com Reviewed-by: Sagi Grimberg sagi@grimberg.me Signed-off-by: Keith Busch kbusch@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/nvme/target/tcp.c | 15 +++++++++++---- 1 file changed, 11 insertions(+), 4 deletions(-)
diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c index 7c51c2a8c109a..4f9cac8a5abe0 100644 --- a/drivers/nvme/target/tcp.c +++ b/drivers/nvme/target/tcp.c @@ -571,10 +571,16 @@ static void nvmet_tcp_queue_response(struct nvmet_req *req) struct nvmet_tcp_cmd *cmd = container_of(req, struct nvmet_tcp_cmd, req); struct nvmet_tcp_queue *queue = cmd->queue; + enum nvmet_tcp_recv_state queue_state; + struct nvmet_tcp_cmd *queue_cmd; struct nvme_sgl_desc *sgl; u32 len;
- if (unlikely(cmd == queue->cmd)) { + /* Pairs with store_release in nvmet_prepare_receive_pdu() */ + queue_state = smp_load_acquire(&queue->rcv_state); + queue_cmd = READ_ONCE(queue->cmd); + + if (unlikely(cmd == queue_cmd)) { sgl = &cmd->req.cmd->common.dptr.sgl; len = le32_to_cpu(sgl->length);
@@ -583,7 +589,7 @@ static void nvmet_tcp_queue_response(struct nvmet_req *req) * Avoid using helpers, this might happen before * nvmet_req_init is completed. */ - if (queue->rcv_state == NVMET_TCP_RECV_PDU && + if (queue_state == NVMET_TCP_RECV_PDU && len && len <= cmd->req.port->inline_data_size && nvme_is_write(cmd->req.cmd)) return; @@ -847,8 +853,9 @@ static void nvmet_prepare_receive_pdu(struct nvmet_tcp_queue *queue) { queue->offset = 0; queue->left = sizeof(struct nvme_tcp_hdr); - queue->cmd = NULL; - queue->rcv_state = NVMET_TCP_RECV_PDU; + WRITE_ONCE(queue->cmd, NULL); + /* Ensure rcv_state is visible only after queue->cmd is set */ + smp_store_release(&queue->rcv_state, NVMET_TCP_RECV_PDU); }
static void nvmet_tcp_free_crypto(struct nvmet_tcp_queue *queue)
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Antheas Kapenekakis lkml@antheas.dev
[ Upstream commit 3414cda9d41f41703832d0abd01063dd8de82b89 ]
In commit 1e9c708dc3ae ("ALSA: hda/tas2781: Add new quirk for Lenovo, ASUS, Dell projects") Baojun adds a bunch of projects to the file, including one for the Ally X. It turns out the initial Ally X quirk was not sorted properly, so the kernel had 2 quirks for it.
The previous quirk overrode the new one by being earlier in the table, and the two are different. When A/B testing, the normal pin fixup seems to work OK but causes a bit of minor popping. Given the other config is more complicated and may cause undefined behavior, revert it.
Fixes: 1e9c708dc3ae ("ALSA: hda/tas2781: Add new quirk for Lenovo, ASUS, Dell projects") Signed-off-by: Antheas Kapenekakis lkml@antheas.dev Link: https://patch.msgid.link/20250227175107.33432-2-lkml@antheas.dev Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Sasha Levin sashal@kernel.org --- sound/pci/hda/patch_realtek.c | 8 -------- 1 file changed, 8 deletions(-)
diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c index 9997bf0150576..d3a81d3638872 100644 --- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -7741,7 +7741,6 @@ enum { ALC285_FIXUP_THINKPAD_X1_GEN7, ALC285_FIXUP_THINKPAD_HEADSET_JACK, ALC294_FIXUP_ASUS_ALLY, - ALC294_FIXUP_ASUS_ALLY_X, ALC294_FIXUP_ASUS_ALLY_PINS, ALC294_FIXUP_ASUS_ALLY_VERBS, ALC294_FIXUP_ASUS_ALLY_SPEAKER, @@ -9184,12 +9183,6 @@ static const struct hda_fixup alc269_fixups[] = { .chained = true, .chain_id = ALC294_FIXUP_ASUS_ALLY_PINS }, - [ALC294_FIXUP_ASUS_ALLY_X] = { - .type = HDA_FIXUP_FUNC, - .v.func = tas2781_fixup_i2c, - .chained = true, - .chain_id = ALC294_FIXUP_ASUS_ALLY_PINS - }, [ALC294_FIXUP_ASUS_ALLY_PINS] = { .type = HDA_FIXUP_PINS, .v.pins = (const struct hda_pintbl[]) { @@ -10674,7 +10667,6 @@ static const struct hda_quirk alc269_fixup_tbl[] = { SND_PCI_QUIRK(0x1043, 0x1740, "ASUS UX430UA", ALC295_FIXUP_ASUS_DACS), SND_PCI_QUIRK(0x1043, 0x17d1, "ASUS UX431FL", ALC294_FIXUP_ASUS_DUAL_SPK), SND_PCI_QUIRK(0x1043, 0x17f3, "ROG Ally NR2301L/X", ALC294_FIXUP_ASUS_ALLY), - SND_PCI_QUIRK(0x1043, 0x1eb3, "ROG Ally X RC72LA", ALC294_FIXUP_ASUS_ALLY_X), SND_PCI_QUIRK(0x1043, 0x1863, "ASUS UX6404VI/VV", ALC245_FIXUP_CS35L41_SPI_2), SND_PCI_QUIRK(0x1043, 0x1881, "ASUS Zephyrus S/M", ALC294_FIXUP_ASUS_GX502_PINS), SND_PCI_QUIRK(0x1043, 0x18b1, "Asus MJ401TA", ALC256_FIXUP_ASUS_HEADSET_MIC),
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Antoine Tenart atenart@kernel.org
[ Upstream commit ee01b2f2d7d0010787c2343463965bbc283a497f ]
In __udp_gso_segment the skb destructor is removed before segmenting the skb but the socket reference is kept as-is. This is an issue if the original skb is later orphaned as we can hit the following bug:
kernel BUG at ./include/linux/skbuff.h:3312! (skb_orphan)
RIP: 0010:ip_rcv_core+0x8b2/0xca0
Call Trace:
 ip_rcv+0xab/0x6e0
 __netif_receive_skb_one_core+0x168/0x1b0
 process_backlog+0x384/0x1100
 __napi_poll.constprop.0+0xa1/0x370
 net_rx_action+0x925/0xe50
The above can happen following a sequence of events when using OpenVSwitch, when an OVS_ACTION_ATTR_USERSPACE action precedes an OVS_ACTION_ATTR_OUTPUT action:
1. OVS_ACTION_ATTR_USERSPACE is handled (in do_execute_actions): the skb goes through queue_gso_packets and then __udp_gso_segment, where its destructor is removed.
2. The segments' data are copied and sent to userspace.
3. OVS_ACTION_ATTR_OUTPUT is handled (in do_execute_actions) and the same original skb is sent to its path.
4. If it later hits skb_orphan, we hit the bug.
Fix this by also removing the reference to the socket in __udp_gso_segment.
Fixes: ad405857b174 ("udp: better wmem accounting on gso") Signed-off-by: Antoine Tenart atenart@kernel.org Link: https://patch.msgid.link/20250226171352.258045-1-atenart@kernel.org Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/udp_offload.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c index a5be6e4ed326f..ecfca59f31f13 100644 --- a/net/ipv4/udp_offload.c +++ b/net/ipv4/udp_offload.c @@ -321,13 +321,17 @@ struct sk_buff *__udp_gso_segment(struct sk_buff *gso_skb,
/* clear destructor to avoid skb_segment assigning it to tail */ copy_dtor = gso_skb->destructor == sock_wfree; - if (copy_dtor) + if (copy_dtor) { gso_skb->destructor = NULL; + gso_skb->sk = NULL; + }
segs = skb_segment(gso_skb, features); if (IS_ERR_OR_NULL(segs)) { - if (copy_dtor) + if (copy_dtor) { gso_skb->destructor = sock_wfree; + gso_skb->sk = sk; + } return segs; }
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Vitaliy Shevtsov v.shevtsov@mt-integration.ru
[ Upstream commit a466fd7e9fafd975949e5945e2f70c33a94b1a70 ]
del_vqs() frees the virtqueues, therefore the cfv->vq_tx pointer should be checked for NULL before calling it, not cfv->vdev. The current check is also redundant because the cfv->vdev pointer is dereferenced before it is checked for NULL.
Fix this by checking cfv->vq_tx for NULL instead of cfv->vdev before calling del_vqs().
Fixes: 0d2e1a2926b1 ("caif_virtio: Introduce caif over virtio") Signed-off-by: Vitaliy Shevtsov v.shevtsov@mt-integration.ru Reviewed-by: Gerhard Engleder gerhard@engleder-embedded.com Link: https://patch.msgid.link/20250227184716.4715-1-v.shevtsov@mt-integration.ru Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/caif/caif_virtio.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/caif/caif_virtio.c b/drivers/net/caif/caif_virtio.c index 7fea00c7ca8a6..c60386bf2d1a4 100644 --- a/drivers/net/caif/caif_virtio.c +++ b/drivers/net/caif/caif_virtio.c @@ -745,7 +745,7 @@ static int cfv_probe(struct virtio_device *vdev)
if (cfv->vr_rx) vdev->vringh_config->del_vrhs(cfv->vdev); - if (cfv->vdev) + if (cfv->vq_tx) vdev->config->del_vqs(cfv->vdev); free_netdev(netdev); return err;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Peter Zijlstra peterz@infradead.org
[ Upstream commit 2565e42539b120b81a68a58da961ce5d1e34eac8 ]
Commit a63fbed776c7 ("perf/tracing/cpuhotplug: Fix locking order") placed pmus_lock inside pmus_srcu, this makes perf_pmu_unregister() trip lockdep.
Move the locking about such that only pmu_idr and pmus (list) are modified while holding pmus_lock. This avoids doing synchronize_srcu() while holding pmus_lock and all is well again.
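As a general illustration of the pattern (not the perf code itself), a userspace sketch where only the shared state is modified under the lock and the long, potentially blocking synchronization happens after the unlock; pmu_unregister(), registered and the sleep() stand-in are all hypothetical:

#include <pthread.h>
#include <unistd.h>
#include <stdio.h>

static pthread_mutex_t pmus_lock = PTHREAD_MUTEX_INITIALIZER;
static int registered = 1;      /* stands in for pmu_idr / the pmus list */

static void pmu_unregister(void)
{
        /* Only the shared state is modified while the lock is held ... */
        pthread_mutex_lock(&pmus_lock);
        registered = 0;
        pthread_mutex_unlock(&pmus_lock);

        /* ... and the long wait happens with the lock already dropped,
         * so no other lock ends up ordered inside it. */
        sleep(1);               /* stand-in for synchronize_srcu() */
}

int main(void)
{
        pmu_unregister();
        printf("registered=%d\n", registered);
        return 0;
}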
Fixes: a63fbed776c7 ("perf/tracing/cpuhotplug: Fix locking order") Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Signed-off-by: Ingo Molnar mingo@kernel.org Link: https://lore.kernel.org/r/20241104135517.679556858@infradead.org Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/events/core.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c index a0e1d2124727e..5fff74c736063 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -11846,6 +11846,8 @@ void perf_pmu_unregister(struct pmu *pmu) { mutex_lock(&pmus_lock); list_del_rcu(&pmu->entry); + idr_remove(&pmu_idr, pmu->type); + mutex_unlock(&pmus_lock);
/* * We dereference the pmu list under both SRCU and regular RCU, so @@ -11855,7 +11857,6 @@ void perf_pmu_unregister(struct pmu *pmu) synchronize_rcu();
free_percpu(pmu->pmu_disable_count); - idr_remove(&pmu_idr, pmu->type); if (pmu_bus_running && pmu->dev && pmu->dev != PMU_NULL_DEV) { if (pmu->nr_addr_filters) device_remove_file(pmu->dev, &dev_attr_nr_addr_filters); @@ -11863,7 +11864,6 @@ void perf_pmu_unregister(struct pmu *pmu) put_device(pmu->dev); } free_pmu_context(pmu); - mutex_unlock(&pmus_lock); } EXPORT_SYMBOL_GPL(perf_pmu_unregister);
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Titus Rwantare titusr@google.com
[ Upstream commit 6b6e2e8fd0de3fa7c6f4f8fe6841b01770b2e7bc ]
The `pmbus_identify()` function fails to correctly determine the number of supported pages on PMBus devices. This occurs because `info->pages` is implicitly zero-initialised, and `pmbus_set_page()` does not perform writes to the page register if `info->pages` is not yet initialised. Without this patch, `info->pages` is always set to the maximum after scanning.
This patch initialises `info->pages` to `PMBUS_PAGES` before the probing loop, so the `pmbus_set_page()` writes actually make it out onto the bus and the number of pages is identified correctly. `PMBUS_PAGES` seemed like a reasonable non-zero value because that is the current result of the identification process.
Testing was done with a PMBus device in QEMU.
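A rough userspace sketch of the probing idea, assuming a hypothetical set_page() that only succeeds for pages the device implements; the point is simply that the assumed maximum must be set before the loop so the page writes are not suppressed:

#include <stdio.h>

#define MAX_PAGES 32    /* stands in for PMBUS_PAGES; value is made up */

/* Hypothetical bus write: pretend the device implements only 4 pages. */
static int set_page(int page)
{
        return page < 4 ? 0 : -1;
}

int main(void)
{
        int pages = MAX_PAGES;  /* assume the maximum up front ... */
        int page;

        for (page = 1; page < MAX_PAGES; page++) {
                if (set_page(page) < 0)
                        break;  /* ... and trim down to what actually responds */
        }
        pages = page;

        printf("device supports %d page(s)\n", pages);
        return 0;
}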
Signed-off-by: Titus Rwantare titusr@google.com Fixes: 442aba78728e7 ("hwmon: PMBus device driver") Link: https://lore.kernel.org/r/20250227222455.2583468-1-titusr@google.com Signed-off-by: Guenter Roeck linux@roeck-us.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/hwmon/pmbus/pmbus.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/hwmon/pmbus/pmbus.c b/drivers/hwmon/pmbus/pmbus.c index ec40c5c599543..59424dc518c8f 100644 --- a/drivers/hwmon/pmbus/pmbus.c +++ b/drivers/hwmon/pmbus/pmbus.c @@ -103,6 +103,8 @@ static int pmbus_identify(struct i2c_client *client, if (pmbus_check_byte_register(client, 0, PMBUS_PAGE)) { int page;
+ info->pages = PMBUS_PAGES; + for (page = 1; page < PMBUS_PAGES; page++) { if (pmbus_set_page(client, page, 0xff) < 0) break;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Maud Spierings maudspierings@gocontroll.com
[ Upstream commit 1c7932d5ae0f5c22fa52ac811b4c427bbca5aff5 ]
I could not find a single datasheet table that has the values currently present in the table. Change them to the actual values, which can be found in [1]/[2] and [3] (page 15, column 2).
[1]: https://www.murata.com/products/productdetail?partno=NCP15XH103F03RC [2]: https://www.murata.com/products/productdata/8796836626462/NTHCG83.txt?143796... [3]: https://nl.mouser.com/datasheet/2/281/r44e-522712.pdf
Fixes: 54ce3a0d8011 ("hwmon: (ntc_thermistor) Add support for ncpXXxh103") Signed-off-by: Maud Spierings maudspierings@gocontroll.com Link: https://lore.kernel.org/r/20250227-ntc_thermistor_fixes-v1-3-70fa73200b52@go... Reviewed-by: Linus Walleij linus.walleij@linaro.org Signed-off-by: Guenter Roeck linux@roeck-us.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/hwmon/ntc_thermistor.c | 66 +++++++++++++++++----------------- 1 file changed, 33 insertions(+), 33 deletions(-)
diff --git a/drivers/hwmon/ntc_thermistor.c b/drivers/hwmon/ntc_thermistor.c index b5352900463fb..0d29c8f97ba7c 100644 --- a/drivers/hwmon/ntc_thermistor.c +++ b/drivers/hwmon/ntc_thermistor.c @@ -181,40 +181,40 @@ static const struct ntc_compensation ncpXXwf104[] = { };
static const struct ntc_compensation ncpXXxh103[] = { - { .temp_c = -40, .ohm = 247565 }, - { .temp_c = -35, .ohm = 181742 }, - { .temp_c = -30, .ohm = 135128 }, - { .temp_c = -25, .ohm = 101678 }, - { .temp_c = -20, .ohm = 77373 }, - { .temp_c = -15, .ohm = 59504 }, - { .temp_c = -10, .ohm = 46222 }, - { .temp_c = -5, .ohm = 36244 }, - { .temp_c = 0, .ohm = 28674 }, - { .temp_c = 5, .ohm = 22878 }, - { .temp_c = 10, .ohm = 18399 }, - { .temp_c = 15, .ohm = 14910 }, - { .temp_c = 20, .ohm = 12169 }, + { .temp_c = -40, .ohm = 195652 }, + { .temp_c = -35, .ohm = 148171 }, + { .temp_c = -30, .ohm = 113347 }, + { .temp_c = -25, .ohm = 87559 }, + { .temp_c = -20, .ohm = 68237 }, + { .temp_c = -15, .ohm = 53650 }, + { .temp_c = -10, .ohm = 42506 }, + { .temp_c = -5, .ohm = 33892 }, + { .temp_c = 0, .ohm = 27219 }, + { .temp_c = 5, .ohm = 22021 }, + { .temp_c = 10, .ohm = 17926 }, + { .temp_c = 15, .ohm = 14674 }, + { .temp_c = 20, .ohm = 12081 }, { .temp_c = 25, .ohm = 10000 }, - { .temp_c = 30, .ohm = 8271 }, - { .temp_c = 35, .ohm = 6883 }, - { .temp_c = 40, .ohm = 5762 }, - { .temp_c = 45, .ohm = 4851 }, - { .temp_c = 50, .ohm = 4105 }, - { .temp_c = 55, .ohm = 3492 }, - { .temp_c = 60, .ohm = 2985 }, - { .temp_c = 65, .ohm = 2563 }, - { .temp_c = 70, .ohm = 2211 }, - { .temp_c = 75, .ohm = 1915 }, - { .temp_c = 80, .ohm = 1666 }, - { .temp_c = 85, .ohm = 1454 }, - { .temp_c = 90, .ohm = 1275 }, - { .temp_c = 95, .ohm = 1121 }, - { .temp_c = 100, .ohm = 990 }, - { .temp_c = 105, .ohm = 876 }, - { .temp_c = 110, .ohm = 779 }, - { .temp_c = 115, .ohm = 694 }, - { .temp_c = 120, .ohm = 620 }, - { .temp_c = 125, .ohm = 556 }, + { .temp_c = 30, .ohm = 8315 }, + { .temp_c = 35, .ohm = 6948 }, + { .temp_c = 40, .ohm = 5834 }, + { .temp_c = 45, .ohm = 4917 }, + { .temp_c = 50, .ohm = 4161 }, + { .temp_c = 55, .ohm = 3535 }, + { .temp_c = 60, .ohm = 3014 }, + { .temp_c = 65, .ohm = 2586 }, + { .temp_c = 70, .ohm = 2228 }, + { .temp_c = 75, .ohm = 1925 }, + { .temp_c = 80, .ohm = 1669 }, + { .temp_c = 85, .ohm = 1452 }, + { .temp_c = 90, .ohm = 1268 }, + { .temp_c = 95, .ohm = 1110 }, + { .temp_c = 100, .ohm = 974 }, + { .temp_c = 105, .ohm = 858 }, + { .temp_c = 110, .ohm = 758 }, + { .temp_c = 115, .ohm = 672 }, + { .temp_c = 120, .ohm = 596 }, + { .temp_c = 125, .ohm = 531 }, };
/*
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Erik Schumacher erik.schumacher@iris-sensing.com
[ Upstream commit e278d5e8aef4c0a1d9a9fa8b8910d713a89aa800 ]
Leading zero bits are sent on the bus before the temperature value is transmitted. If any of these bits are high, the connection might be unstable or there could be no AD7314 / ADT730x (or compatible) at all. Return -EIO in that case.
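A minimal userspace sketch of the validity check on the raw 16-bit word, mirroring the AD7314 case; the mask values here are illustrative and the scaling done by the real driver is omitted:

#include <stdint.h>
#include <stdio.h>

#define LEADING_ZEROS_MASK      0x8000u  /* bits that must read back as zero */
#define TEMP_MASK               0x7fe0u
#define TEMP_SHIFT              5

/* Return the raw temperature field, or -1 if the read-out looks invalid. */
static int decode_temp(uint16_t raw)
{
        if (raw & LEADING_ZEROS_MASK)
                return -1;      /* leading zeros missing -> treat as -EIO */
        return (raw & TEMP_MASK) >> TEMP_SHIFT;
}

int main(void)
{
        printf("%d\n", decode_temp(0x0320));    /* plausible sample       */
        printf("%d\n", decode_temp(0xffff));    /* floating bus / no chip */
        return 0;
}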
Signed-off-by: Erik Schumacher erik.schumacher@iris-sensing.com Fixes: 4f3a659581cab ("hwmon: AD7314 driver (ported from IIO)") Link: https://lore.kernel.org/r/24a50c2981a318580aca8f50d23be7987b69ea00.camel@iri... Signed-off-by: Guenter Roeck linux@roeck-us.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/hwmon/ad7314.c | 10 ++++++++++ 1 file changed, 10 insertions(+)
diff --git a/drivers/hwmon/ad7314.c b/drivers/hwmon/ad7314.c index 7802bbf5f9587..59424103f6348 100644 --- a/drivers/hwmon/ad7314.c +++ b/drivers/hwmon/ad7314.c @@ -22,11 +22,13 @@ */ #define AD7314_TEMP_MASK 0x7FE0 #define AD7314_TEMP_SHIFT 5 +#define AD7314_LEADING_ZEROS_MASK BIT(15)
/* * ADT7301 and ADT7302 temperature masks */ #define ADT7301_TEMP_MASK 0x3FFF +#define ADT7301_LEADING_ZEROS_MASK (BIT(15) | BIT(14))
enum ad7314_variant { adt7301, @@ -65,12 +67,20 @@ static ssize_t ad7314_temperature_show(struct device *dev, return ret; switch (spi_get_device_id(chip->spi_dev)->driver_data) { case ad7314: + if (ret & AD7314_LEADING_ZEROS_MASK) { + /* Invalid read-out, leading zero part is missing */ + return -EIO; + } data = (ret & AD7314_TEMP_MASK) >> AD7314_TEMP_SHIFT; data = sign_extend32(data, 9);
return sprintf(buf, "%d\n", 250 * data); case adt7301: case adt7302: + if (ret & ADT7301_LEADING_ZEROS_MASK) { + /* Invalid read-out, leading zero part is missing */ + return -EIO; + } /* * Documented as a 13 bit twos complement register * with a sign bit - which is a 14 bit 2's complement
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Masami Hiramatsu (Google) mhiramat@kernel.org
[ Upstream commit fd5ba38390c59e1c147480ae49b6133c4ac24001 ]
Commit 18b1e870a496 ("tracing/probes: Add $arg* meta argument for all function args") introduced MAX_ARG_BUF_LEN but it is not used. Remove it.
Link: https://lore.kernel.org/all/174055075876.4079315.8805416872155957588.stgit@m...
Fixes: 18b1e870a496 ("tracing/probes: Add $arg* meta argument for all function args") Signed-off-by: Masami Hiramatsu (Google) mhiramat@kernel.org Reviewed-by: Steven Rostedt (Google) rostedt@goodmis.org Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/trace/trace_probe.h | 1 - 1 file changed, 1 deletion(-)
diff --git a/kernel/trace/trace_probe.h b/kernel/trace/trace_probe.h index fba3ede870541..8a6797c2278d9 100644 --- a/kernel/trace/trace_probe.h +++ b/kernel/trace/trace_probe.h @@ -36,7 +36,6 @@ #define MAX_BTF_ARGS_LEN 128 #define MAX_DENTRY_ARGS_LEN 256 #define MAX_STRING_SIZE PATH_MAX -#define MAX_ARG_BUF_LEN (MAX_TRACE_ARGS * MAX_ARG_NAME_LEN)
/* Reserved field names */ #define FIELD_STRING_IP "__probe_ip"
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alessio Belle alessio.belle@imgtec.com
[ Upstream commit 1d2eabb6616433ccaa13927811bdfa205e91ba60 ]
When firmware traces are enabled, the firmware dumps 48-bit timestamps for each trace as two 32-bit values, highest 32 bits (of which only 16 are useful) first.
The driver was reassembling them the other way round, i.e. interpreting the first value in memory as the lowest 32 bits, and the second value as the highest 32 bits (then truncated to 16 bits).
Due to this, firmware trace dumps showed very large timestamps even for traces recorded shortly after GPU boot. The timestamps in these dumps would also sometimes jump backwards because of the truncation.
Example trace dumped after loading the powervr module and enabling firmware traces, where each line is commented with the timestamp value in hexadecimal to better show both issues:
[93540092739584] : Host Sync Partition marker: 1 // 0x551300000000
[28419798597632] : GPU units deinit // 0x19d900000000
[28548647616512] : GPU deinit // 0x19f700000000
Update the logic to reassemble the timestamp halves in the correct order.
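To see the effect, a small standalone sketch that reassembles the two dumped words in both orders; the sample values are made up:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint32_t first  = 0x000019d9;   /* high half, dumped first (16 useful bits) */
        uint32_t second = 0x12345678;   /* low 32 bits, dumped second               */

        uint64_t wrong = (uint64_t)first | ((uint64_t)second << 32);
        uint64_t right = ((uint64_t)first << 32) | second;

        printf("wrong order: 0x%llx\n", (unsigned long long)wrong);
        printf("right order: 0x%llx\n", (unsigned long long)right);
        return 0;
}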
Fixes: cb56cd610866 ("drm/imagination: Add firmware trace to debugfs") Signed-off-by: Alessio Belle alessio.belle@imgtec.com Reviewed-by: Matt Coster matt.coster@imgtec.com Link: https://patchwork.freedesktop.org/patch/msgid/20250221-fix-fw-trace-timestam... Signed-off-by: Matt Coster matt.coster@imgtec.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/imagination/pvr_fw_trace.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/imagination/pvr_fw_trace.c b/drivers/gpu/drm/imagination/pvr_fw_trace.c index 73707daa4e52d..5dbb636d7d4ff 100644 --- a/drivers/gpu/drm/imagination/pvr_fw_trace.c +++ b/drivers/gpu/drm/imagination/pvr_fw_trace.c @@ -333,8 +333,8 @@ static int fw_trace_seq_show(struct seq_file *s, void *v) if (sf_id == ROGUE_FW_SF_LAST) return -EINVAL;
- timestamp = read_fw_trace(trace_seq_data, 1) | - ((u64)read_fw_trace(trace_seq_data, 2) << 32); + timestamp = ((u64)read_fw_trace(trace_seq_data, 1) << 32) | + read_fw_trace(trace_seq_data, 2); timestamp = (timestamp & ~ROGUE_FWT_TIMESTAMP_TIME_CLRMSK) >> ROGUE_FWT_TIMESTAMP_TIME_SHIFT;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Murad Masimov m.masimov@mt-integration.ru
[ Upstream commit 172a0f509723fe4741d4b8e9190cf434b18320d8 ]
The module parameter defines the number of iso packets per URB. The user is allowed to set any value for this parameter of type int, which can lead to various kinds of weird and incorrect behavior such as integer overflows and truncations. The number of packets should be a small non-negative number.
Since this parameter is read-only, its value can be validated on driver probe.
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Murad Masimov m.masimov@mt-integration.ru Link: https://patch.msgid.link/20250303100413.835-1-m.masimov@mt-integration.ru Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Sasha Levin sashal@kernel.org --- sound/usb/usx2y/usbusx2y.c | 11 +++++++++++ sound/usb/usx2y/usbusx2y.h | 26 ++++++++++++++++++++++++++ sound/usb/usx2y/usbusx2yaudio.c | 27 --------------------------- 3 files changed, 37 insertions(+), 27 deletions(-)
diff --git a/sound/usb/usx2y/usbusx2y.c b/sound/usb/usx2y/usbusx2y.c index 5f81c68fd42b6..5756ff3528a2d 100644 --- a/sound/usb/usx2y/usbusx2y.c +++ b/sound/usb/usx2y/usbusx2y.c @@ -151,6 +151,12 @@ static int snd_usx2y_card_used[SNDRV_CARDS]; static void snd_usx2y_card_private_free(struct snd_card *card); static void usx2y_unlinkseq(struct snd_usx2y_async_seq *s);
+#ifdef USX2Y_NRPACKS_VARIABLE +int nrpacks = USX2Y_NRPACKS; /* number of packets per urb */ +module_param(nrpacks, int, 0444); +MODULE_PARM_DESC(nrpacks, "Number of packets per URB."); +#endif + /* * pipe 4 is used for switching the lamps, setting samplerate, volumes .... */ @@ -432,6 +438,11 @@ static int snd_usx2y_probe(struct usb_interface *intf, struct snd_card *card; int err;
+#ifdef USX2Y_NRPACKS_VARIABLE + if (nrpacks < 0 || nrpacks > USX2Y_NRPACKS_MAX) + return -EINVAL; +#endif + if (le16_to_cpu(device->descriptor.idVendor) != 0x1604 || (le16_to_cpu(device->descriptor.idProduct) != USB_ID_US122 && le16_to_cpu(device->descriptor.idProduct) != USB_ID_US224 && diff --git a/sound/usb/usx2y/usbusx2y.h b/sound/usb/usx2y/usbusx2y.h index 391fd7b4ed5ef..6a76d04bf1c7d 100644 --- a/sound/usb/usx2y/usbusx2y.h +++ b/sound/usb/usx2y/usbusx2y.h @@ -7,6 +7,32 @@
#define NRURBS 2
+/* Default value used for nr of packs per urb. + * 1 to 4 have been tested ok on uhci. + * To use 3 on ohci, you'd need a patch: + * look for "0000425-linux-2.6.9-rc4-mm1_ohci-hcd.patch.gz" on + * "https://bugtrack.alsa-project.org/alsa-bug/bug_view_page.php?bug_id=0000425" + * + * 1, 2 and 4 work out of the box on ohci, if I recall correctly. + * Bigger is safer operation, smaller gives lower latencies. + */ +#define USX2Y_NRPACKS 4 + +#define USX2Y_NRPACKS_MAX 1024 + +/* If your system works ok with this module's parameter + * nrpacks set to 1, you might as well comment + * this define out, and thereby produce smaller, faster code. + * You'd also set USX2Y_NRPACKS to 1 then. + */ +#define USX2Y_NRPACKS_VARIABLE 1 + +#ifdef USX2Y_NRPACKS_VARIABLE +extern int nrpacks; +#define nr_of_packs() nrpacks +#else +#define nr_of_packs() USX2Y_NRPACKS +#endif
#define URBS_ASYNC_SEQ 10 #define URB_DATA_LEN_ASYNC_SEQ 32 diff --git a/sound/usb/usx2y/usbusx2yaudio.c b/sound/usb/usx2y/usbusx2yaudio.c index f540f46a0b143..acca8bead82e5 100644 --- a/sound/usb/usx2y/usbusx2yaudio.c +++ b/sound/usb/usx2y/usbusx2yaudio.c @@ -28,33 +28,6 @@ #include "usx2y.h" #include "usbusx2y.h"
-/* Default value used for nr of packs per urb. - * 1 to 4 have been tested ok on uhci. - * To use 3 on ohci, you'd need a patch: - * look for "0000425-linux-2.6.9-rc4-mm1_ohci-hcd.patch.gz" on - * "https://bugtrack.alsa-project.org/alsa-bug/bug_view_page.php?bug_id=0000425" - * - * 1, 2 and 4 work out of the box on ohci, if I recall correctly. - * Bigger is safer operation, smaller gives lower latencies. - */ -#define USX2Y_NRPACKS 4 - -/* If your system works ok with this module's parameter - * nrpacks set to 1, you might as well comment - * this define out, and thereby produce smaller, faster code. - * You'd also set USX2Y_NRPACKS to 1 then. - */ -#define USX2Y_NRPACKS_VARIABLE 1 - -#ifdef USX2Y_NRPACKS_VARIABLE -static int nrpacks = USX2Y_NRPACKS; /* number of packets per urb */ -#define nr_of_packs() nrpacks -module_param(nrpacks, int, 0444); -MODULE_PARM_DESC(nrpacks, "Number of packets per URB."); -#else -#define nr_of_packs() USX2Y_NRPACKS -#endif - static int usx2y_urb_capt_retire(struct snd_usx2y_substream *subs) { struct urb *urb = subs->completed_urb;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Dumazet edumazet@google.com
[ Upstream commit 64e6a754d33d31aa844b3ee66fb93ac84ca1565e ]
syzbot is able to crash hosts [1], using llc and devices not supporting IFF_TX_SKB_SHARING.
In this case, the e1000 driver calls eth_skb_pad() while the skb is shared.
Simply replace skb_get() by skb_clone() in net/llc/llc_s_ac.c
Note that the e1000 driver might have an issue with pktgen because it does not clear IFF_TX_SKB_SHARING; that is an orthogonal change.
We need to audit other skb_get() uses in net/llc.
[1]
kernel BUG at net/core/skbuff.c:2178 !
Oops: invalid opcode: 0000 [#1] PREEMPT SMP KASAN NOPTI
CPU: 0 UID: 0 PID: 16371 Comm: syz.2.2764 Not tainted 6.14.0-rc4-syzkaller-00052-gac9c34d1e45a #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
RIP: 0010:pskb_expand_head+0x6ce/0x1240 net/core/skbuff.c:2178
Call Trace:
 <TASK>
 __skb_pad+0x18a/0x610 net/core/skbuff.c:2466
 __skb_put_padto include/linux/skbuff.h:3843 [inline]
 skb_put_padto include/linux/skbuff.h:3862 [inline]
 eth_skb_pad include/linux/etherdevice.h:656 [inline]
 e1000_xmit_frame+0x2d99/0x5800 drivers/net/ethernet/intel/e1000/e1000_main.c:3128
 __netdev_start_xmit include/linux/netdevice.h:5151 [inline]
 netdev_start_xmit include/linux/netdevice.h:5160 [inline]
 xmit_one net/core/dev.c:3806 [inline]
 dev_hard_start_xmit+0x9a/0x7b0 net/core/dev.c:3822
 sch_direct_xmit+0x1ae/0xc30 net/sched/sch_generic.c:343
 __dev_xmit_skb net/core/dev.c:4045 [inline]
 __dev_queue_xmit+0x13d4/0x43e0 net/core/dev.c:4621
 dev_queue_xmit include/linux/netdevice.h:3313 [inline]
 llc_sap_action_send_test_c+0x268/0x320 net/llc/llc_s_ac.c:144
 llc_exec_sap_trans_actions net/llc/llc_sap.c:153 [inline]
 llc_sap_next_state net/llc/llc_sap.c:182 [inline]
 llc_sap_state_process+0x239/0x510 net/llc/llc_sap.c:209
 llc_ui_sendmsg+0xd0d/0x14e0 net/llc/af_llc.c:993
 sock_sendmsg_nosec net/socket.c:718 [inline]
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Reported-by: syzbot+da65c993ae113742a25f@syzkaller.appspotmail.com Closes: https://lore.kernel.org/netdev/67c020c0.050a0220.222324.0011.GAE@google.com/... Signed-off-by: Eric Dumazet edumazet@google.com Reviewed-by: Simon Horman horms@kernel.org Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/llc/llc_s_ac.c | 49 +++++++++++++++++++++++++--------------------- 1 file changed, 27 insertions(+), 22 deletions(-)
diff --git a/net/llc/llc_s_ac.c b/net/llc/llc_s_ac.c index 06fb8e6944b06..7a0cae9a81114 100644 --- a/net/llc/llc_s_ac.c +++ b/net/llc/llc_s_ac.c @@ -24,7 +24,7 @@ #include <net/llc_s_ac.h> #include <net/llc_s_ev.h> #include <net/llc_sap.h> - +#include <net/sock.h>
/** * llc_sap_action_unitdata_ind - forward UI PDU to network layer @@ -40,6 +40,26 @@ int llc_sap_action_unitdata_ind(struct llc_sap *sap, struct sk_buff *skb) return 0; }
+static int llc_prepare_and_xmit(struct sk_buff *skb) +{ + struct llc_sap_state_ev *ev = llc_sap_ev(skb); + struct sk_buff *nskb; + int rc; + + rc = llc_mac_hdr_init(skb, ev->saddr.mac, ev->daddr.mac); + if (rc) + return rc; + + nskb = skb_clone(skb, GFP_ATOMIC); + if (!nskb) + return -ENOMEM; + + if (skb->sk) + skb_set_owner_w(nskb, skb->sk); + + return dev_queue_xmit(nskb); +} + /** * llc_sap_action_send_ui - sends UI PDU resp to UNITDATA REQ to MAC layer * @sap: SAP @@ -52,17 +72,12 @@ int llc_sap_action_unitdata_ind(struct llc_sap *sap, struct sk_buff *skb) int llc_sap_action_send_ui(struct llc_sap *sap, struct sk_buff *skb) { struct llc_sap_state_ev *ev = llc_sap_ev(skb); - int rc;
llc_pdu_header_init(skb, LLC_PDU_TYPE_U, ev->saddr.lsap, ev->daddr.lsap, LLC_PDU_CMD); llc_pdu_init_as_ui_cmd(skb); - rc = llc_mac_hdr_init(skb, ev->saddr.mac, ev->daddr.mac); - if (likely(!rc)) { - skb_get(skb); - rc = dev_queue_xmit(skb); - } - return rc; + + return llc_prepare_and_xmit(skb); }
/** @@ -77,17 +92,12 @@ int llc_sap_action_send_ui(struct llc_sap *sap, struct sk_buff *skb) int llc_sap_action_send_xid_c(struct llc_sap *sap, struct sk_buff *skb) { struct llc_sap_state_ev *ev = llc_sap_ev(skb); - int rc;
llc_pdu_header_init(skb, LLC_PDU_TYPE_U_XID, ev->saddr.lsap, ev->daddr.lsap, LLC_PDU_CMD); llc_pdu_init_as_xid_cmd(skb, LLC_XID_NULL_CLASS_2, 0); - rc = llc_mac_hdr_init(skb, ev->saddr.mac, ev->daddr.mac); - if (likely(!rc)) { - skb_get(skb); - rc = dev_queue_xmit(skb); - } - return rc; + + return llc_prepare_and_xmit(skb); }
/** @@ -133,17 +143,12 @@ int llc_sap_action_send_xid_r(struct llc_sap *sap, struct sk_buff *skb) int llc_sap_action_send_test_c(struct llc_sap *sap, struct sk_buff *skb) { struct llc_sap_state_ev *ev = llc_sap_ev(skb); - int rc;
llc_pdu_header_init(skb, LLC_PDU_TYPE_U, ev->saddr.lsap, ev->daddr.lsap, LLC_PDU_CMD); llc_pdu_init_as_test_cmd(skb); - rc = llc_mac_hdr_init(skb, ev->saddr.mac, ev->daddr.mac); - if (likely(!rc)) { - skb_get(skb); - rc = dev_queue_xmit(skb); - } - return rc; + + return llc_prepare_and_xmit(skb); }
int llc_sap_action_send_test_r(struct llc_sap *sap, struct sk_buff *skb)
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Xinghuo Chen xinghuo.chen@foxmail.com
[ Upstream commit 10fce7ebe888fa8c97eee7e317a47e7603e5e78d ]
The devm_memremap() function returns error pointers on error; it doesn't return NULL.
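The kernel encodes the error in the pointer value itself (IS_ERR()/PTR_ERR() from include/linux/err.h); here is a toy userspace re-implementation of that idea, purely to show why a plain NULL check misses the failure:

#include <stdio.h>

#define MAX_ERRNO 4095

static void *ERR_PTR(long err)          { return (void *)err; }
static long  PTR_ERR(const void *p)     { return (long)p; }
static int   IS_ERR(const void *p)
{
        return (unsigned long)p >= (unsigned long)-MAX_ERRNO;
}

/* Hypothetical mapper that fails the way devm_memremap() can. */
static void *fake_memremap(void)
{
        return ERR_PTR(-12);            /* -ENOMEM */
}

int main(void)
{
        void *addr = fake_memremap();

        if (!addr)
                printf("NULL check: looks fine (it is not!)\n");
        if (IS_ERR(addr))
                printf("IS_ERR check: failed with %ld\n", PTR_ERR(addr));
        return 0;
}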
Fixes: c7cefce03e69 ("hwmon: (xgene) access mailbox as RAM") Signed-off-by: Xinghuo Chen xinghuo.chen@foxmail.com Link: https://lore.kernel.org/r/tencent_9AD8E7683EC29CAC97496B44F3F865BA070A@qq.co... Signed-off-by: Guenter Roeck linux@roeck-us.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/hwmon/xgene-hwmon.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/hwmon/xgene-hwmon.c b/drivers/hwmon/xgene-hwmon.c index 5e0759a70f6d5..92d82faf237fc 100644 --- a/drivers/hwmon/xgene-hwmon.c +++ b/drivers/hwmon/xgene-hwmon.c @@ -706,7 +706,7 @@ static int xgene_hwmon_probe(struct platform_device *pdev) goto out; }
- if (!ctx->pcc_comm_addr) { + if (IS_ERR_OR_NULL(ctx->pcc_comm_addr)) { dev_err(&pdev->dev, "Failed to ioremap PCC comm region\n"); rc = -ENOMEM;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Philipp Stanner phasta@kernel.org
[ Upstream commit 23e0832d6d7be2d3c713f9390c060b6f1c48bf36 ]
When the header guard for gpu_scheduler_trace.h was written, a typo apparently crept in.
Fix the typo and document the scope of the guard.
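For reference, the intended shape of such a tracepoint header guard, as a sketch with a made-up macro name rather than the actual file contents: the tested macro and the defined macro must match, and TRACE_HEADER_MULTI_READ deliberately bypasses the guard:

/* Illustrative guard only; _EXAMPLE_TRACE_H_ is a made-up name. */
#if !defined(_EXAMPLE_TRACE_H_) || defined(TRACE_HEADER_MULTI_READ)
#define _EXAMPLE_TRACE_H_

/* ... tracepoint definitions would live here ... */

#endif /* _EXAMPLE_TRACE_H_ */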
Fixes: 353da3c520b4 ("drm/amdgpu: add tracepoint for scheduler (v2)") Reviewed-by: Tvrtko Ursulin tvrtko.ursulin@igalia.com Signed-off-by: Philipp Stanner phasta@kernel.org Link: https://patchwork.freedesktop.org/patch/msgid/20250218124149.118002-2-phasta... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/scheduler/gpu_scheduler_trace.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler_trace.h b/drivers/gpu/drm/scheduler/gpu_scheduler_trace.h index c75302ca3427c..f56e77e7f6d02 100644 --- a/drivers/gpu/drm/scheduler/gpu_scheduler_trace.h +++ b/drivers/gpu/drm/scheduler/gpu_scheduler_trace.h @@ -21,7 +21,7 @@ * */
-#if !defined(_GPU_SCHED_TRACE_H) || defined(TRACE_HEADER_MULTI_READ) +#if !defined(_GPU_SCHED_TRACE_H_) || defined(TRACE_HEADER_MULTI_READ) #define _GPU_SCHED_TRACE_H_
#include <linux/stringify.h> @@ -106,7 +106,7 @@ TRACE_EVENT(drm_sched_job_wait_dep, __entry->seqno) );
-#endif +#endif /* _GPU_SCHED_TRACE_H_ */
/* This part must be outside protection */ #undef TRACE_INCLUDE_PATH
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nikolay Aleksandrov razor@blackwall.org
[ Upstream commit 1a82d19ca2d6835904ee71e2d40fd331098f94a0 ]
Partially revert commit b71724147e73 ("be2net: replace polling with sleeping in the FW completion path") w.r.t. the mcc mutex it introduces and the use of usleep_range(). The be2net be_ndo_bridge_getlink() callback is called under rcu_read_lock, so this code has been broken for a long time. Both the mutex_lock and the usleep_range can cause the issue Ian Kumlien reported[1]. The call path is: be_ndo_bridge_getlink -> be_cmd_get_hsw_config -> be_mcc_notify_wait -> be_mcc_wait_compl -> usleep_range()
[1] https://lore.kernel.org/netdev/CAA85sZveppNgEVa_FD+qhOMtG_AavK9_mFiU+jWrMtXm...
Tested-by: Ian Kumlien ian.kumlien@gmail.com Fixes: b71724147e73 ("be2net: replace polling with sleeping in the FW completion path") Signed-off-by: Nikolay Aleksandrov razor@blackwall.org Link: https://patch.msgid.link/20250227164129.1201164-1-razor@blackwall.org Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/emulex/benet/be.h | 2 +- drivers/net/ethernet/emulex/benet/be_cmds.c | 197 ++++++++++---------- drivers/net/ethernet/emulex/benet/be_main.c | 2 +- 3 files changed, 100 insertions(+), 101 deletions(-)
diff --git a/drivers/net/ethernet/emulex/benet/be.h b/drivers/net/ethernet/emulex/benet/be.h index e48b861e4ce15..270ff9aab3352 100644 --- a/drivers/net/ethernet/emulex/benet/be.h +++ b/drivers/net/ethernet/emulex/benet/be.h @@ -562,7 +562,7 @@ struct be_adapter { struct be_dma_mem mbox_mem_alloced;
struct be_mcc_obj mcc_obj; - struct mutex mcc_lock; /* For serializing mcc cmds to BE card */ + spinlock_t mcc_lock; /* For serializing mcc cmds to BE card */ spinlock_t mcc_cq_lock;
u16 cfg_num_rx_irqs; /* configured via set-channels */ diff --git a/drivers/net/ethernet/emulex/benet/be_cmds.c b/drivers/net/ethernet/emulex/benet/be_cmds.c index 61adcebeef010..51b8377edd1d0 100644 --- a/drivers/net/ethernet/emulex/benet/be_cmds.c +++ b/drivers/net/ethernet/emulex/benet/be_cmds.c @@ -575,7 +575,7 @@ int be_process_mcc(struct be_adapter *adapter) /* Wait till no more pending mcc requests are present */ static int be_mcc_wait_compl(struct be_adapter *adapter) { -#define mcc_timeout 12000 /* 12s timeout */ +#define mcc_timeout 120000 /* 12s timeout */ int i, status = 0; struct be_mcc_obj *mcc_obj = &adapter->mcc_obj;
@@ -589,7 +589,7 @@ static int be_mcc_wait_compl(struct be_adapter *adapter)
if (atomic_read(&mcc_obj->q.used) == 0) break; - usleep_range(500, 1000); + udelay(100); } if (i == mcc_timeout) { dev_err(&adapter->pdev->dev, "FW not responding\n"); @@ -866,7 +866,7 @@ static bool use_mcc(struct be_adapter *adapter) static int be_cmd_lock(struct be_adapter *adapter) { if (use_mcc(adapter)) { - mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock); return 0; } else { return mutex_lock_interruptible(&adapter->mbox_lock); @@ -877,7 +877,7 @@ static int be_cmd_lock(struct be_adapter *adapter) static void be_cmd_unlock(struct be_adapter *adapter) { if (use_mcc(adapter)) - return mutex_unlock(&adapter->mcc_lock); + return spin_unlock_bh(&adapter->mcc_lock); else return mutex_unlock(&adapter->mbox_lock); } @@ -1047,7 +1047,7 @@ int be_cmd_mac_addr_query(struct be_adapter *adapter, u8 *mac_addr, struct be_cmd_req_mac_query *req; int status;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -1076,7 +1076,7 @@ int be_cmd_mac_addr_query(struct be_adapter *adapter, u8 *mac_addr, }
err: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -1088,7 +1088,7 @@ int be_cmd_pmac_add(struct be_adapter *adapter, const u8 *mac_addr, struct be_cmd_req_pmac_add *req; int status;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -1113,7 +1113,7 @@ int be_cmd_pmac_add(struct be_adapter *adapter, const u8 *mac_addr, }
err: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock);
if (base_status(status) == MCC_STATUS_UNAUTHORIZED_REQUEST) status = -EPERM; @@ -1131,7 +1131,7 @@ int be_cmd_pmac_del(struct be_adapter *adapter, u32 if_id, int pmac_id, u32 dom) if (pmac_id == -1) return 0;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -1151,7 +1151,7 @@ int be_cmd_pmac_del(struct be_adapter *adapter, u32 if_id, int pmac_id, u32 dom) status = be_mcc_notify_wait(adapter);
err: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -1414,7 +1414,7 @@ int be_cmd_rxq_create(struct be_adapter *adapter, struct be_dma_mem *q_mem = &rxq->dma_mem; int status;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -1444,7 +1444,7 @@ int be_cmd_rxq_create(struct be_adapter *adapter, }
err: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -1508,7 +1508,7 @@ int be_cmd_rxq_destroy(struct be_adapter *adapter, struct be_queue_info *q) struct be_cmd_req_q_destroy *req; int status;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -1525,7 +1525,7 @@ int be_cmd_rxq_destroy(struct be_adapter *adapter, struct be_queue_info *q) q->created = false;
err: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -1593,7 +1593,7 @@ int be_cmd_get_stats(struct be_adapter *adapter, struct be_dma_mem *nonemb_cmd) struct be_cmd_req_hdr *hdr; int status = 0;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -1621,7 +1621,7 @@ int be_cmd_get_stats(struct be_adapter *adapter, struct be_dma_mem *nonemb_cmd) adapter->stats_cmd_sent = true;
err: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -1637,7 +1637,7 @@ int lancer_cmd_get_pport_stats(struct be_adapter *adapter, CMD_SUBSYSTEM_ETH)) return -EPERM;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -1660,7 +1660,7 @@ int lancer_cmd_get_pport_stats(struct be_adapter *adapter, adapter->stats_cmd_sent = true;
err: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -1697,7 +1697,7 @@ int be_cmd_link_status_query(struct be_adapter *adapter, u16 *link_speed, struct be_cmd_req_link_status *req; int status;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
if (link_status) *link_status = LINK_DOWN; @@ -1736,7 +1736,7 @@ int be_cmd_link_status_query(struct be_adapter *adapter, u16 *link_speed, }
err: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -1747,7 +1747,7 @@ int be_cmd_get_die_temperature(struct be_adapter *adapter) struct be_cmd_req_get_cntl_addnl_attribs *req; int status = 0;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -1762,7 +1762,7 @@ int be_cmd_get_die_temperature(struct be_adapter *adapter)
status = be_mcc_notify(adapter); err: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -1811,7 +1811,7 @@ int be_cmd_get_fat_dump(struct be_adapter *adapter, u32 buf_len, void *buf) if (!get_fat_cmd.va) return -ENOMEM;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
while (total_size) { buf_size = min(total_size, (u32)60 * 1024); @@ -1849,9 +1849,9 @@ int be_cmd_get_fat_dump(struct be_adapter *adapter, u32 buf_len, void *buf) log_offset += buf_size; } err: + spin_unlock_bh(&adapter->mcc_lock); dma_free_coherent(&adapter->pdev->dev, get_fat_cmd.size, get_fat_cmd.va, get_fat_cmd.dma); - mutex_unlock(&adapter->mcc_lock); return status; }
@@ -1862,7 +1862,7 @@ int be_cmd_get_fw_ver(struct be_adapter *adapter) struct be_cmd_req_get_fw_version *req; int status;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -1885,7 +1885,7 @@ int be_cmd_get_fw_ver(struct be_adapter *adapter) sizeof(adapter->fw_on_flash)); } err: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -1899,7 +1899,7 @@ static int __be_cmd_modify_eqd(struct be_adapter *adapter, struct be_cmd_req_modify_eq_delay *req; int status = 0, i;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -1922,7 +1922,7 @@ static int __be_cmd_modify_eqd(struct be_adapter *adapter,
status = be_mcc_notify(adapter); err: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -1949,7 +1949,7 @@ int be_cmd_vlan_config(struct be_adapter *adapter, u32 if_id, u16 *vtag_array, struct be_cmd_req_vlan_config *req; int status;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -1971,7 +1971,7 @@ int be_cmd_vlan_config(struct be_adapter *adapter, u32 if_id, u16 *vtag_array,
status = be_mcc_notify_wait(adapter); err: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -1982,7 +1982,7 @@ static int __be_cmd_rx_filter(struct be_adapter *adapter, u32 flags, u32 value) struct be_cmd_req_rx_filter *req = mem->va; int status;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -2015,7 +2015,7 @@ static int __be_cmd_rx_filter(struct be_adapter *adapter, u32 flags, u32 value)
status = be_mcc_notify_wait(adapter); err: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -2046,7 +2046,7 @@ int be_cmd_set_flow_control(struct be_adapter *adapter, u32 tx_fc, u32 rx_fc) CMD_SUBSYSTEM_COMMON)) return -EPERM;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -2066,7 +2066,7 @@ int be_cmd_set_flow_control(struct be_adapter *adapter, u32 tx_fc, u32 rx_fc) status = be_mcc_notify_wait(adapter);
err: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock);
if (base_status(status) == MCC_STATUS_FEATURE_NOT_SUPPORTED) return -EOPNOTSUPP; @@ -2085,7 +2085,7 @@ int be_cmd_get_flow_control(struct be_adapter *adapter, u32 *tx_fc, u32 *rx_fc) CMD_SUBSYSTEM_COMMON)) return -EPERM;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -2108,7 +2108,7 @@ int be_cmd_get_flow_control(struct be_adapter *adapter, u32 *tx_fc, u32 *rx_fc) }
err: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -2189,7 +2189,7 @@ int be_cmd_rss_config(struct be_adapter *adapter, u8 *rsstable, if (!(be_if_cap_flags(adapter) & BE_IF_FLAGS_RSS)) return 0;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -2214,7 +2214,7 @@ int be_cmd_rss_config(struct be_adapter *adapter, u8 *rsstable,
status = be_mcc_notify_wait(adapter); err: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -2226,7 +2226,7 @@ int be_cmd_set_beacon_state(struct be_adapter *adapter, u8 port_num, struct be_cmd_req_enable_disable_beacon *req; int status;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -2247,7 +2247,7 @@ int be_cmd_set_beacon_state(struct be_adapter *adapter, u8 port_num, status = be_mcc_notify_wait(adapter);
err: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -2258,7 +2258,7 @@ int be_cmd_get_beacon_state(struct be_adapter *adapter, u8 port_num, u32 *state) struct be_cmd_req_get_beacon_state *req; int status;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -2282,7 +2282,7 @@ int be_cmd_get_beacon_state(struct be_adapter *adapter, u8 port_num, u32 *state) }
err: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -2306,7 +2306,7 @@ int be_cmd_read_port_transceiver_data(struct be_adapter *adapter, return -ENOMEM; }
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -2328,7 +2328,7 @@ int be_cmd_read_port_transceiver_data(struct be_adapter *adapter, memcpy(data, resp->page_data + off, len); } err: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); dma_free_coherent(&adapter->pdev->dev, cmd.size, cmd.va, cmd.dma); return status; } @@ -2345,7 +2345,7 @@ static int lancer_cmd_write_object(struct be_adapter *adapter, void *ctxt = NULL; int status;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock); adapter->flash_status = 0;
wrb = wrb_from_mccq(adapter); @@ -2387,7 +2387,7 @@ static int lancer_cmd_write_object(struct be_adapter *adapter, if (status) goto err_unlock;
- mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock);
if (!wait_for_completion_timeout(&adapter->et_cmd_compl, msecs_to_jiffies(60000))) @@ -2406,7 +2406,7 @@ static int lancer_cmd_write_object(struct be_adapter *adapter, return status;
err_unlock: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -2460,7 +2460,7 @@ static int lancer_cmd_delete_object(struct be_adapter *adapter, struct be_mcc_wrb *wrb; int status;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -2478,7 +2478,7 @@ static int lancer_cmd_delete_object(struct be_adapter *adapter,
status = be_mcc_notify_wait(adapter); err: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -2491,7 +2491,7 @@ int lancer_cmd_read_object(struct be_adapter *adapter, struct be_dma_mem *cmd, struct lancer_cmd_resp_read_object *resp; int status;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -2525,7 +2525,7 @@ int lancer_cmd_read_object(struct be_adapter *adapter, struct be_dma_mem *cmd, }
err_unlock: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -2537,7 +2537,7 @@ static int be_cmd_write_flashrom(struct be_adapter *adapter, struct be_cmd_write_flashrom *req; int status;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock); adapter->flash_status = 0;
wrb = wrb_from_mccq(adapter); @@ -2562,7 +2562,7 @@ static int be_cmd_write_flashrom(struct be_adapter *adapter, if (status) goto err_unlock;
- mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock);
if (!wait_for_completion_timeout(&adapter->et_cmd_compl, msecs_to_jiffies(40000))) @@ -2573,7 +2573,7 @@ static int be_cmd_write_flashrom(struct be_adapter *adapter, return status;
err_unlock: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -2584,7 +2584,7 @@ static int be_cmd_get_flash_crc(struct be_adapter *adapter, u8 *flashed_crc, struct be_mcc_wrb *wrb; int status;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -2611,7 +2611,7 @@ static int be_cmd_get_flash_crc(struct be_adapter *adapter, u8 *flashed_crc, memcpy(flashed_crc, req->crc, 4);
err: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -3217,7 +3217,7 @@ int be_cmd_enable_magic_wol(struct be_adapter *adapter, u8 *mac, struct be_cmd_req_acpi_wol_magic_config *req; int status;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -3234,7 +3234,7 @@ int be_cmd_enable_magic_wol(struct be_adapter *adapter, u8 *mac, status = be_mcc_notify_wait(adapter);
err: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -3249,7 +3249,7 @@ int be_cmd_set_loopback(struct be_adapter *adapter, u8 port_num, CMD_SUBSYSTEM_LOWLEVEL)) return -EPERM;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -3272,7 +3272,7 @@ int be_cmd_set_loopback(struct be_adapter *adapter, u8 port_num, if (status) goto err_unlock;
- mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock);
if (!wait_for_completion_timeout(&adapter->et_cmd_compl, msecs_to_jiffies(SET_LB_MODE_TIMEOUT))) @@ -3281,7 +3281,7 @@ int be_cmd_set_loopback(struct be_adapter *adapter, u8 port_num, return status;
err_unlock: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -3298,7 +3298,7 @@ int be_cmd_loopback_test(struct be_adapter *adapter, u32 port_num, CMD_SUBSYSTEM_LOWLEVEL)) return -EPERM;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -3324,7 +3324,7 @@ int be_cmd_loopback_test(struct be_adapter *adapter, u32 port_num, if (status) goto err;
- mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock);
wait_for_completion(&adapter->et_cmd_compl); resp = embedded_payload(wrb); @@ -3332,7 +3332,7 @@ int be_cmd_loopback_test(struct be_adapter *adapter, u32 port_num,
return status; err: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -3348,7 +3348,7 @@ int be_cmd_ddr_dma_test(struct be_adapter *adapter, u64 pattern, CMD_SUBSYSTEM_LOWLEVEL)) return -EPERM;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -3382,7 +3382,7 @@ int be_cmd_ddr_dma_test(struct be_adapter *adapter, u64 pattern, }
err: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -3393,7 +3393,7 @@ int be_cmd_get_seeprom_data(struct be_adapter *adapter, struct be_cmd_req_seeprom_read *req; int status;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -3409,7 +3409,7 @@ int be_cmd_get_seeprom_data(struct be_adapter *adapter, status = be_mcc_notify_wait(adapter);
err: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -3424,7 +3424,7 @@ int be_cmd_get_phy_info(struct be_adapter *adapter) CMD_SUBSYSTEM_COMMON)) return -EPERM;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -3469,7 +3469,7 @@ int be_cmd_get_phy_info(struct be_adapter *adapter) } dma_free_coherent(&adapter->pdev->dev, cmd.size, cmd.va, cmd.dma); err: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -3479,7 +3479,7 @@ static int be_cmd_set_qos(struct be_adapter *adapter, u32 bps, u32 domain) struct be_cmd_req_set_qos *req; int status;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -3499,7 +3499,7 @@ static int be_cmd_set_qos(struct be_adapter *adapter, u32 bps, u32 domain) status = be_mcc_notify_wait(adapter);
err: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -3611,7 +3611,7 @@ int be_cmd_get_fn_privileges(struct be_adapter *adapter, u32 *privilege, struct be_cmd_req_get_fn_privileges *req; int status;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -3643,7 +3643,7 @@ int be_cmd_get_fn_privileges(struct be_adapter *adapter, u32 *privilege, }
err: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -3655,7 +3655,7 @@ int be_cmd_set_fn_privileges(struct be_adapter *adapter, u32 privileges, struct be_cmd_req_set_fn_privileges *req; int status;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -3675,7 +3675,7 @@ int be_cmd_set_fn_privileges(struct be_adapter *adapter, u32 privileges,
status = be_mcc_notify_wait(adapter); err: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -3707,7 +3707,7 @@ int be_cmd_get_mac_from_list(struct be_adapter *adapter, u8 *mac, return -ENOMEM; }
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -3771,7 +3771,7 @@ int be_cmd_get_mac_from_list(struct be_adapter *adapter, u8 *mac, }
out: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); dma_free_coherent(&adapter->pdev->dev, get_mac_list_cmd.size, get_mac_list_cmd.va, get_mac_list_cmd.dma); return status; @@ -3831,7 +3831,7 @@ int be_cmd_set_mac_list(struct be_adapter *adapter, u8 *mac_array, if (!cmd.va) return -ENOMEM;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -3853,7 +3853,7 @@ int be_cmd_set_mac_list(struct be_adapter *adapter, u8 *mac_array,
err: dma_free_coherent(&adapter->pdev->dev, cmd.size, cmd.va, cmd.dma); - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -3889,7 +3889,7 @@ int be_cmd_set_hsw_config(struct be_adapter *adapter, u16 pvid, CMD_SUBSYSTEM_COMMON)) return -EPERM;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -3930,7 +3930,7 @@ int be_cmd_set_hsw_config(struct be_adapter *adapter, u16 pvid, status = be_mcc_notify_wait(adapter);
err: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -3944,7 +3944,7 @@ int be_cmd_get_hsw_config(struct be_adapter *adapter, u16 *pvid, int status; u16 vid;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -3991,7 +3991,7 @@ int be_cmd_get_hsw_config(struct be_adapter *adapter, u16 *pvid, }
err: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -4190,7 +4190,7 @@ int be_cmd_set_ext_fat_capabilites(struct be_adapter *adapter, struct be_cmd_req_set_ext_fat_caps *req; int status;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -4206,7 +4206,7 @@ int be_cmd_set_ext_fat_capabilites(struct be_adapter *adapter,
status = be_mcc_notify_wait(adapter); err: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -4684,7 +4684,7 @@ int be_cmd_manage_iface(struct be_adapter *adapter, u32 iface, u8 op) if (iface == 0xFFFFFFFF) return -1;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -4701,7 +4701,7 @@ int be_cmd_manage_iface(struct be_adapter *adapter, u32 iface, u8 op)
status = be_mcc_notify_wait(adapter); err: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -4735,7 +4735,7 @@ int be_cmd_get_if_id(struct be_adapter *adapter, struct be_vf_cfg *vf_cfg, struct be_cmd_resp_get_iface_list *resp; int status;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -4756,7 +4756,7 @@ int be_cmd_get_if_id(struct be_adapter *adapter, struct be_vf_cfg *vf_cfg, }
err: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -4850,7 +4850,7 @@ int be_cmd_enable_vf(struct be_adapter *adapter, u8 domain) if (BEx_chip(adapter)) return 0;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -4868,7 +4868,7 @@ int be_cmd_enable_vf(struct be_adapter *adapter, u8 domain) req->enable = 1; status = be_mcc_notify_wait(adapter); err: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -4941,7 +4941,7 @@ __be_cmd_set_logical_link_config(struct be_adapter *adapter, u32 link_config = 0; int status;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -4969,7 +4969,7 @@ __be_cmd_set_logical_link_config(struct be_adapter *adapter,
status = be_mcc_notify_wait(adapter); err: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -5000,8 +5000,7 @@ int be_cmd_set_features(struct be_adapter *adapter) struct be_mcc_wrb *wrb; int status;
- if (mutex_lock_interruptible(&adapter->mcc_lock)) - return -1; + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -5039,7 +5038,7 @@ int be_cmd_set_features(struct be_adapter *adapter) dev_info(&adapter->pdev->dev, "Adapter does not support HW error recovery\n");
- mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; }
@@ -5053,7 +5052,7 @@ int be_roce_mcc_cmd(void *netdev_handle, void *wrb_payload, struct be_cmd_resp_hdr *resp; int status;
- mutex_lock(&adapter->mcc_lock); + spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter); if (!wrb) { @@ -5076,7 +5075,7 @@ int be_roce_mcc_cmd(void *netdev_handle, void *wrb_payload, memcpy(wrb_payload, resp, sizeof(*resp) + resp->response_length); be_dws_le_to_cpu(wrb_payload, sizeof(*resp) + resp->response_length); err: - mutex_unlock(&adapter->mcc_lock); + spin_unlock_bh(&adapter->mcc_lock); return status; } EXPORT_SYMBOL(be_roce_mcc_cmd); diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c index 875fe379eea21..3d2e215921191 100644 --- a/drivers/net/ethernet/emulex/benet/be_main.c +++ b/drivers/net/ethernet/emulex/benet/be_main.c @@ -5667,8 +5667,8 @@ static int be_drv_init(struct be_adapter *adapter) }
mutex_init(&adapter->mbox_lock); - mutex_init(&adapter->mcc_lock); mutex_init(&adapter->rx_filter_lock); + spin_lock_init(&adapter->mcc_lock); spin_lock_init(&adapter->mcc_cq_lock); init_completion(&adapter->et_cmd_compl);
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Peiyang Wang wangpeiyang1@huawei.com
[ Upstream commit b7365eab39831487a84e63a9638209b68dc54008 ]
During ptp initialization, hclge_ptp_get_cycle() might return an error, and the code returned directly without unregistering and freeing the clock. To avoid that, call hclge_ptp_destroy_clock() to unregister and free the clock if hclge_ptp_get_cycle() fails.
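The fix follows the usual goto-based unwind pattern for init functions; a generic userspace sketch with hypothetical helpers rather than the hclge ones:

#include <stdio.h>

static int  register_clock(void) { return 0; }
static void destroy_clock(void)  { puts("clock destroyed"); }
static int  get_cycle(void)      { return -1; }   /* simulate a failure */

static int ptp_init(void)
{
        int ret;

        ret = register_clock();
        if (ret)
                return ret;

        ret = get_cycle();
        if (ret)
                goto out;       /* undo the registration instead of leaking it */

        return 0;

out:
        destroy_clock();
        return ret;
}

int main(void)
{
        return ptp_init() ? 1 : 0;
}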
Fixes: 8373cd38a888 ("net: hns3: change the method of obtaining default ptp cycle") Signed-off-by: Peiyang Wang wangpeiyang1@huawei.com Signed-off-by: Jijie Shao shaojijie@huawei.com Reviewed-by: Simon Horman horms@kernel.org Link: https://patch.msgid.link/20250228105258.1243461-1-shaojijie@huawei.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c index bab16c2191b2f..181af419b878d 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c @@ -483,7 +483,7 @@ int hclge_ptp_init(struct hclge_dev *hdev)
ret = hclge_ptp_get_cycle(hdev); if (ret) - return ret; + goto out; }
ret = hclge_ptp_int_en(hdev, true);
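The hunk above only shows the return-to-goto change; per the changelog, the (unquoted) out: label is expected to unwind through hclge_ptp_destroy_clock(). A minimal sketch of the resulting error path in hclge_ptp_init(), with the label body assumed from the changelog rather than taken from the diff:

	ret = hclge_ptp_get_cycle(hdev);
	if (ret)
		goto out;	/* was: return ret; -- leaked the ptp clock */
	...
	return 0;

out:
	/* assumed from the changelog: unregister the ptp clock and free it */
	hclge_ptp_destroy_clock(hdev);
	return ret;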
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ville Syrjälä ville.syrjala@linux.intel.com
[ Upstream commit 84d2d0430f0833cdf52a3d051906add051f20ef0 ]
We always perform the same steps to program color management stuff during a full modeset. Extract that code to a helper to avoid duplication.
Signed-off-by: Ville Syrjälä ville.syrjala@linux.intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20240916152958.17332-2-ville.s... Reviewed-by: Luca Coelho luciano.coelho@intel.com Stable-dep-of: 30bfc151f0c1 ("drm/xe: Remove double pageflip") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/i915/display/intel_color.c | 17 ++++++++++ drivers/gpu/drm/i915/display/intel_color.h | 1 + drivers/gpu/drm/i915/display/intel_display.c | 33 +++----------------- 3 files changed, 22 insertions(+), 29 deletions(-)
diff --git a/drivers/gpu/drm/i915/display/intel_color.c b/drivers/gpu/drm/i915/display/intel_color.c index ec55cb651d449..808bece71c247 100644 --- a/drivers/gpu/drm/i915/display/intel_color.c +++ b/drivers/gpu/drm/i915/display/intel_color.c @@ -1912,6 +1912,23 @@ void intel_color_post_update(const struct intel_crtc_state *crtc_state) i915->display.funcs.color->color_post_update(crtc_state); }
+void intel_color_modeset(const struct intel_crtc_state *crtc_state) +{ + struct intel_display *display = to_intel_display(crtc_state); + + intel_color_load_luts(crtc_state); + intel_color_commit_noarm(crtc_state); + intel_color_commit_arm(crtc_state); + + if (DISPLAY_VER(display) < 9) { + struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); + struct intel_plane *plane = to_intel_plane(crtc->base.primary); + + /* update DSPCNTR to configure gamma/csc for pipe bottom color */ + plane->disable_arm(plane, crtc_state); + } +} + void intel_color_prepare_commit(struct intel_atomic_state *state, struct intel_crtc *crtc) { diff --git a/drivers/gpu/drm/i915/display/intel_color.h b/drivers/gpu/drm/i915/display/intel_color.h index 79f230a1709ad..ab3aaec06a2ac 100644 --- a/drivers/gpu/drm/i915/display/intel_color.h +++ b/drivers/gpu/drm/i915/display/intel_color.h @@ -28,6 +28,7 @@ void intel_color_commit_noarm(const struct intel_crtc_state *crtc_state); void intel_color_commit_arm(const struct intel_crtc_state *crtc_state); void intel_color_post_update(const struct intel_crtc_state *crtc_state); void intel_color_load_luts(const struct intel_crtc_state *crtc_state); +void intel_color_modeset(const struct intel_crtc_state *crtc_state); void intel_color_get_config(struct intel_crtc_state *crtc_state); bool intel_color_lut_equal(const struct intel_crtc_state *crtc_state, const struct drm_property_blob *blob1, diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c index 2c6d0da8a16f8..ac5febd076e10 100644 --- a/drivers/gpu/drm/i915/display/intel_display.c +++ b/drivers/gpu/drm/i915/display/intel_display.c @@ -1502,14 +1502,6 @@ static void intel_encoders_update_pipe(struct intel_atomic_state *state, } }
-static void intel_disable_primary_plane(const struct intel_crtc_state *crtc_state) -{ - struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); - struct intel_plane *plane = to_intel_plane(crtc->base.primary); - - plane->disable_arm(plane, crtc_state); -} - static void ilk_configure_cpu_transcoder(const struct intel_crtc_state *crtc_state) { struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); @@ -1575,11 +1567,7 @@ static void ilk_crtc_enable(struct intel_atomic_state *state, * On ILK+ LUT must be loaded before the pipe is running but with * clocks enabled */ - intel_color_load_luts(new_crtc_state); - intel_color_commit_noarm(new_crtc_state); - intel_color_commit_arm(new_crtc_state); - /* update DSPCNTR to configure gamma for pipe bottom color */ - intel_disable_primary_plane(new_crtc_state); + intel_color_modeset(new_crtc_state);
intel_initial_watermarks(state, crtc); intel_enable_transcoder(new_crtc_state); @@ -1741,12 +1729,7 @@ static void hsw_crtc_enable(struct intel_atomic_state *state, * On ILK+ LUT must be loaded before the pipe is running but with * clocks enabled */ - intel_color_load_luts(pipe_crtc_state); - intel_color_commit_noarm(pipe_crtc_state); - intel_color_commit_arm(pipe_crtc_state); - /* update DSPCNTR to configure gamma/csc for pipe bottom color */ - if (DISPLAY_VER(dev_priv) < 9) - intel_disable_primary_plane(pipe_crtc_state); + intel_color_modeset(pipe_crtc_state);
hsw_set_linetime_wm(pipe_crtc_state);
@@ -2147,11 +2130,7 @@ static void valleyview_crtc_enable(struct intel_atomic_state *state,
i9xx_pfit_enable(new_crtc_state);
- intel_color_load_luts(new_crtc_state); - intel_color_commit_noarm(new_crtc_state); - intel_color_commit_arm(new_crtc_state); - /* update DSPCNTR to configure gamma for pipe bottom color */ - intel_disable_primary_plane(new_crtc_state); + intel_color_modeset(new_crtc_state);
intel_initial_watermarks(state, crtc); intel_enable_transcoder(new_crtc_state); @@ -2187,11 +2166,7 @@ static void i9xx_crtc_enable(struct intel_atomic_state *state,
i9xx_pfit_enable(new_crtc_state);
- intel_color_load_luts(new_crtc_state); - intel_color_commit_noarm(new_crtc_state); - intel_color_commit_arm(new_crtc_state); - /* update DSPCNTR to configure gamma for pipe bottom color */ - intel_disable_primary_plane(new_crtc_state); + intel_color_modeset(new_crtc_state);
if (!intel_initial_watermarks(state, crtc)) intel_update_watermarks(dev_priv);
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ville Syrjälä ville.syrjala@linux.intel.com
[ Upstream commit 01389846f7d61d262cc92d42ad4d1a25730e3eff ]
We need to be able to do both MMIO and DSB based pipe/plane programming. To that end, plumb the 'dsb' all the way from the top into the plane commit hooks.
The compiler appears smart enough to combine the branches from all the back-to-back register writes into a single branch. So the generated asm ends up looking more or less like this:

	plane_hook()
	{
		if (dsb) {
			intel_dsb_reg_write();
			intel_dsb_reg_write();
			...
		} else {
			intel_de_write_fw();
			intel_de_write_fw();
			...
		}
	}

which seems like a reasonably efficient way to do this.
An alternative I was also considering is some kind of closure (a register write function + a display vs. dsb pointer passed to it). That does result in smaller code as there are no branches anymore, but having each register access go via a function pointer sounds less efficient.
Not that I have actually measured the overhead of either approach yet. Also, the reg_rw tracepoint seems to be making a huge mess of the generated code for the mmio path. Additionally, there's some kind of IS_GSI_REG() hack in __raw_uncore_read() which ends up generating a pointless branch for every mmio register access. So it looks like there might be quite a bit of room for improvement in the mmio path still.
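Purely for illustration, the closure-style alternative described above could look roughly like the following. This is a hypothetical sketch, not code from the series; only the intel_io_reg_write typedef below actually appears in the patch (in intel_display_types.h):

	typedef void (*intel_io_reg_write)(void *ctx, i915_reg_t reg, u32 val);

	static void mmio_reg_write(void *ctx, i915_reg_t reg, u32 val)
	{
		intel_de_write_fw((struct intel_display *)ctx, reg, val);
	}

	static void dsb_reg_write(void *ctx, i915_reg_t reg, u32 val)
	{
		intel_dsb_reg_write((struct intel_dsb *)ctx, reg, val);
	}

	/* plane hooks would then take (write, ctx) and do write(ctx, reg, val)
	 * for every register access, trading the per-write branch for an
	 * indirect call.
	 */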
Reviewed-by: Animesh Manna animesh.manna@intel.com Signed-off-by: Ville Syrjälä ville.syrjala@linux.intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20240930170415.23841-12-ville.... Stable-dep-of: 30bfc151f0c1 ("drm/xe: Remove double pageflip") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/i915/display/i9xx_plane.c | 22 +- .../gpu/drm/i915/display/intel_atomic_plane.c | 49 +-- .../gpu/drm/i915/display/intel_atomic_plane.h | 19 +- drivers/gpu/drm/i915/display/intel_color.c | 2 +- drivers/gpu/drm/i915/display/intel_cursor.c | 101 +++--- drivers/gpu/drm/i915/display/intel_de.h | 11 + drivers/gpu/drm/i915/display/intel_display.c | 25 +- .../drm/i915/display/intel_display_types.h | 16 +- drivers/gpu/drm/i915/display/intel_sprite.c | 27 +- .../drm/i915/display/skl_universal_plane.c | 307 ++++++++++-------- drivers/gpu/drm/xe/display/xe_plane_initial.c | 2 +- 11 files changed, 334 insertions(+), 247 deletions(-)
diff --git a/drivers/gpu/drm/i915/display/i9xx_plane.c b/drivers/gpu/drm/i915/display/i9xx_plane.c index 9447f7229b608..17a1e3801a85c 100644 --- a/drivers/gpu/drm/i915/display/i9xx_plane.c +++ b/drivers/gpu/drm/i915/display/i9xx_plane.c @@ -416,7 +416,8 @@ static int i9xx_plane_min_cdclk(const struct intel_crtc_state *crtc_state, return DIV_ROUND_UP(pixel_rate * num, den); }
-static void i9xx_plane_update_noarm(struct intel_plane *plane, +static void i9xx_plane_update_noarm(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state, const struct intel_plane_state *plane_state) { @@ -444,7 +445,8 @@ static void i9xx_plane_update_noarm(struct intel_plane *plane, } }
-static void i9xx_plane_update_arm(struct intel_plane *plane, +static void i9xx_plane_update_arm(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state, const struct intel_plane_state *plane_state) { @@ -507,7 +509,8 @@ static void i9xx_plane_update_arm(struct intel_plane *plane, intel_plane_ggtt_offset(plane_state) + dspaddr_offset); }
-static void i830_plane_update_arm(struct intel_plane *plane, +static void i830_plane_update_arm(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state, const struct intel_plane_state *plane_state) { @@ -517,11 +520,12 @@ static void i830_plane_update_arm(struct intel_plane *plane, * Additional breakage on i830 causes register reads to return * the last latched value instead of the last written value [ALM026]. */ - i9xx_plane_update_noarm(plane, crtc_state, plane_state); - i9xx_plane_update_arm(plane, crtc_state, plane_state); + i9xx_plane_update_noarm(dsb, plane, crtc_state, plane_state); + i9xx_plane_update_arm(dsb, plane, crtc_state, plane_state); }
-static void i9xx_plane_disable_arm(struct intel_plane *plane, +static void i9xx_plane_disable_arm(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state) { struct drm_i915_private *dev_priv = to_i915(plane->base.dev); @@ -549,7 +553,8 @@ static void i9xx_plane_disable_arm(struct intel_plane *plane, }
static void -g4x_primary_async_flip(struct intel_plane *plane, +g4x_primary_async_flip(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state, const struct intel_plane_state *plane_state, bool async_flip) @@ -569,7 +574,8 @@ g4x_primary_async_flip(struct intel_plane *plane, }
static void -vlv_primary_async_flip(struct intel_plane *plane, +vlv_primary_async_flip(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state, const struct intel_plane_state *plane_state, bool async_flip) diff --git a/drivers/gpu/drm/i915/display/intel_atomic_plane.c b/drivers/gpu/drm/i915/display/intel_atomic_plane.c index e979786aa5cf3..5c2a7987cccb4 100644 --- a/drivers/gpu/drm/i915/display/intel_atomic_plane.c +++ b/drivers/gpu/drm/i915/display/intel_atomic_plane.c @@ -790,7 +790,8 @@ skl_next_plane_to_commit(struct intel_atomic_state *state, return NULL; }
-void intel_plane_update_noarm(struct intel_plane *plane, +void intel_plane_update_noarm(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state, const struct intel_plane_state *plane_state) { @@ -799,10 +800,11 @@ void intel_plane_update_noarm(struct intel_plane *plane, trace_intel_plane_update_noarm(plane, crtc);
if (plane->update_noarm) - plane->update_noarm(plane, crtc_state, plane_state); + plane->update_noarm(dsb, plane, crtc_state, plane_state); }
-void intel_plane_async_flip(struct intel_plane *plane, +void intel_plane_async_flip(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state, const struct intel_plane_state *plane_state, bool async_flip) @@ -810,34 +812,37 @@ void intel_plane_async_flip(struct intel_plane *plane, struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
trace_intel_plane_async_flip(plane, crtc, async_flip); - plane->async_flip(plane, crtc_state, plane_state, async_flip); + plane->async_flip(dsb, plane, crtc_state, plane_state, async_flip); }
-void intel_plane_update_arm(struct intel_plane *plane, +void intel_plane_update_arm(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state, const struct intel_plane_state *plane_state) { struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
if (crtc_state->do_async_flip && plane->async_flip) { - intel_plane_async_flip(plane, crtc_state, plane_state, true); + intel_plane_async_flip(dsb, plane, crtc_state, plane_state, true); return; }
trace_intel_plane_update_arm(plane, crtc); - plane->update_arm(plane, crtc_state, plane_state); + plane->update_arm(dsb, plane, crtc_state, plane_state); }
-void intel_plane_disable_arm(struct intel_plane *plane, +void intel_plane_disable_arm(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state) { struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
trace_intel_plane_disable_arm(plane, crtc); - plane->disable_arm(plane, crtc_state); + plane->disable_arm(dsb, plane, crtc_state); }
-void intel_crtc_planes_update_noarm(struct intel_atomic_state *state, +void intel_crtc_planes_update_noarm(struct intel_dsb *dsb, + struct intel_atomic_state *state, struct intel_crtc *crtc) { struct intel_crtc_state *new_crtc_state = @@ -862,11 +867,13 @@ void intel_crtc_planes_update_noarm(struct intel_atomic_state *state, /* TODO: for mailbox updates this should be skipped */ if (new_plane_state->uapi.visible || new_plane_state->planar_slave) - intel_plane_update_noarm(plane, new_crtc_state, new_plane_state); + intel_plane_update_noarm(dsb, plane, + new_crtc_state, new_plane_state); } }
-static void skl_crtc_planes_update_arm(struct intel_atomic_state *state, +static void skl_crtc_planes_update_arm(struct intel_dsb *dsb, + struct intel_atomic_state *state, struct intel_crtc *crtc) { struct intel_crtc_state *old_crtc_state = @@ -893,13 +900,14 @@ static void skl_crtc_planes_update_arm(struct intel_atomic_state *state, */ if (new_plane_state->uapi.visible || new_plane_state->planar_slave) - intel_plane_update_arm(plane, new_crtc_state, new_plane_state); + intel_plane_update_arm(dsb, plane, new_crtc_state, new_plane_state); else - intel_plane_disable_arm(plane, new_crtc_state); + intel_plane_disable_arm(dsb, plane, new_crtc_state); } }
-static void i9xx_crtc_planes_update_arm(struct intel_atomic_state *state, +static void i9xx_crtc_planes_update_arm(struct intel_dsb *dsb, + struct intel_atomic_state *state, struct intel_crtc *crtc) { struct intel_crtc_state *new_crtc_state = @@ -919,21 +927,22 @@ static void i9xx_crtc_planes_update_arm(struct intel_atomic_state *state, * would have to be called here as well. */ if (new_plane_state->uapi.visible) - intel_plane_update_arm(plane, new_crtc_state, new_plane_state); + intel_plane_update_arm(dsb, plane, new_crtc_state, new_plane_state); else - intel_plane_disable_arm(plane, new_crtc_state); + intel_plane_disable_arm(dsb, plane, new_crtc_state); } }
-void intel_crtc_planes_update_arm(struct intel_atomic_state *state, +void intel_crtc_planes_update_arm(struct intel_dsb *dsb, + struct intel_atomic_state *state, struct intel_crtc *crtc) { struct drm_i915_private *i915 = to_i915(state->base.dev);
if (DISPLAY_VER(i915) >= 9) - skl_crtc_planes_update_arm(state, crtc); + skl_crtc_planes_update_arm(dsb, state, crtc); else - i9xx_crtc_planes_update_arm(state, crtc); + i9xx_crtc_planes_update_arm(dsb, state, crtc); }
int intel_atomic_plane_check_clipping(struct intel_plane_state *plane_state, diff --git a/drivers/gpu/drm/i915/display/intel_atomic_plane.h b/drivers/gpu/drm/i915/display/intel_atomic_plane.h index 6c4fe35964650..0f982f452ff39 100644 --- a/drivers/gpu/drm/i915/display/intel_atomic_plane.h +++ b/drivers/gpu/drm/i915/display/intel_atomic_plane.h @@ -14,6 +14,7 @@ struct drm_rect; struct intel_atomic_state; struct intel_crtc; struct intel_crtc_state; +struct intel_dsb; struct intel_plane; struct intel_plane_state; enum plane_id; @@ -32,26 +33,32 @@ void intel_plane_copy_uapi_to_hw_state(struct intel_plane_state *plane_state, struct intel_crtc *crtc); void intel_plane_copy_hw_state(struct intel_plane_state *plane_state, const struct intel_plane_state *from_plane_state); -void intel_plane_async_flip(struct intel_plane *plane, +void intel_plane_async_flip(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state, const struct intel_plane_state *plane_state, bool async_flip); -void intel_plane_update_noarm(struct intel_plane *plane, +void intel_plane_update_noarm(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state, const struct intel_plane_state *plane_state); -void intel_plane_update_arm(struct intel_plane *plane, +void intel_plane_update_arm(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state, const struct intel_plane_state *plane_state); -void intel_plane_disable_arm(struct intel_plane *plane, +void intel_plane_disable_arm(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state); struct intel_plane *intel_plane_alloc(void); void intel_plane_free(struct intel_plane *plane); struct drm_plane_state *intel_plane_duplicate_state(struct drm_plane *plane); void intel_plane_destroy_state(struct drm_plane *plane, struct drm_plane_state *state); -void intel_crtc_planes_update_noarm(struct intel_atomic_state *state, +void intel_crtc_planes_update_noarm(struct intel_dsb *dsb, + struct intel_atomic_state *state, struct intel_crtc *crtc); -void intel_crtc_planes_update_arm(struct intel_atomic_state *state, +void intel_crtc_planes_update_arm(struct intel_dsb *dsbx, + struct intel_atomic_state *state, struct intel_crtc *crtc); int intel_plane_atomic_check_with_state(const struct intel_crtc_state *old_crtc_state, struct intel_crtc_state *crtc_state, diff --git a/drivers/gpu/drm/i915/display/intel_color.c b/drivers/gpu/drm/i915/display/intel_color.c index 808bece71c247..1fbe3cd452c13 100644 --- a/drivers/gpu/drm/i915/display/intel_color.c +++ b/drivers/gpu/drm/i915/display/intel_color.c @@ -1925,7 +1925,7 @@ void intel_color_modeset(const struct intel_crtc_state *crtc_state) struct intel_plane *plane = to_intel_plane(crtc->base.primary);
/* update DSPCNTR to configure gamma/csc for pipe bottom color */ - plane->disable_arm(plane, crtc_state); + plane->disable_arm(NULL, plane, crtc_state); } }
diff --git a/drivers/gpu/drm/i915/display/intel_cursor.c b/drivers/gpu/drm/i915/display/intel_cursor.c index 9ad53e1cbbd06..aeadb834d3328 100644 --- a/drivers/gpu/drm/i915/display/intel_cursor.c +++ b/drivers/gpu/drm/i915/display/intel_cursor.c @@ -275,7 +275,8 @@ static int i845_check_cursor(struct intel_crtc_state *crtc_state, }
/* TODO: split into noarm+arm pair */ -static void i845_cursor_update_arm(struct intel_plane *plane, +static void i845_cursor_update_arm(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state, const struct intel_plane_state *plane_state) { @@ -315,10 +316,11 @@ static void i845_cursor_update_arm(struct intel_plane *plane, } }
-static void i845_cursor_disable_arm(struct intel_plane *plane, +static void i845_cursor_disable_arm(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state) { - i845_cursor_update_arm(plane, crtc_state, NULL); + i845_cursor_update_arm(dsb, plane, crtc_state, NULL); }
static bool i845_cursor_get_hw_state(struct intel_plane *plane, @@ -527,22 +529,25 @@ static int i9xx_check_cursor(struct intel_crtc_state *crtc_state, return 0; }
-static void i9xx_cursor_disable_sel_fetch_arm(struct intel_plane *plane, +static void i9xx_cursor_disable_sel_fetch_arm(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state) { - struct drm_i915_private *dev_priv = to_i915(plane->base.dev); + struct intel_display *display = to_intel_display(plane->base.dev); enum pipe pipe = plane->pipe;
if (!crtc_state->enable_psr2_sel_fetch) return;
- intel_de_write_fw(dev_priv, SEL_FETCH_CUR_CTL(pipe), 0); + intel_de_write_dsb(display, dsb, SEL_FETCH_CUR_CTL(pipe), 0); }
-static void wa_16021440873(struct intel_plane *plane, +static void wa_16021440873(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state, const struct intel_plane_state *plane_state) { + struct intel_display *display = to_intel_display(plane->base.dev); struct drm_i915_private *dev_priv = to_i915(plane->base.dev); u32 ctl = plane_state->ctl; int et_y_position = drm_rect_height(&crtc_state->pipe_src) + 1; @@ -551,16 +556,18 @@ static void wa_16021440873(struct intel_plane *plane, ctl &= ~MCURSOR_MODE_MASK; ctl |= MCURSOR_MODE_64_2B;
- intel_de_write_fw(dev_priv, SEL_FETCH_CUR_CTL(pipe), ctl); + intel_de_write_dsb(display, dsb, SEL_FETCH_CUR_CTL(pipe), ctl);
- intel_de_write(dev_priv, CURPOS_ERLY_TPT(dev_priv, pipe), - CURSOR_POS_Y(et_y_position)); + intel_de_write_dsb(display, dsb, CURPOS_ERLY_TPT(dev_priv, pipe), + CURSOR_POS_Y(et_y_position)); }
-static void i9xx_cursor_update_sel_fetch_arm(struct intel_plane *plane, +static void i9xx_cursor_update_sel_fetch_arm(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state, const struct intel_plane_state *plane_state) { + struct intel_display *display = to_intel_display(plane->base.dev); struct drm_i915_private *dev_priv = to_i915(plane->base.dev); enum pipe pipe = plane->pipe;
@@ -571,19 +578,17 @@ static void i9xx_cursor_update_sel_fetch_arm(struct intel_plane *plane, if (crtc_state->enable_psr2_su_region_et) { u32 val = intel_cursor_position(crtc_state, plane_state, true); - intel_de_write_fw(dev_priv, - CURPOS_ERLY_TPT(dev_priv, pipe), - val); + + intel_de_write_dsb(display, dsb, CURPOS_ERLY_TPT(dev_priv, pipe), val); }
- intel_de_write_fw(dev_priv, SEL_FETCH_CUR_CTL(pipe), - plane_state->ctl); + intel_de_write_dsb(display, dsb, SEL_FETCH_CUR_CTL(pipe), plane_state->ctl); } else { /* Wa_16021440873 */ if (crtc_state->enable_psr2_su_region_et) - wa_16021440873(plane, crtc_state, plane_state); + wa_16021440873(dsb, plane, crtc_state, plane_state); else - i9xx_cursor_disable_sel_fetch_arm(plane, crtc_state); + i9xx_cursor_disable_sel_fetch_arm(dsb, plane, crtc_state); } }
@@ -610,9 +615,11 @@ static u32 skl_cursor_wm_reg_val(const struct skl_wm_level *level) return val; }
-static void skl_write_cursor_wm(struct intel_plane *plane, +static void skl_write_cursor_wm(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state) { + struct intel_display *display = to_intel_display(plane->base.dev); struct drm_i915_private *i915 = to_i915(plane->base.dev); enum plane_id plane_id = plane->id; enum pipe pipe = plane->pipe; @@ -622,30 +629,32 @@ static void skl_write_cursor_wm(struct intel_plane *plane, int level;
for (level = 0; level < i915->display.wm.num_levels; level++) - intel_de_write_fw(i915, CUR_WM(pipe, level), - skl_cursor_wm_reg_val(skl_plane_wm_level(pipe_wm, plane_id, level))); + intel_de_write_dsb(display, dsb, CUR_WM(pipe, level), + skl_cursor_wm_reg_val(skl_plane_wm_level(pipe_wm, plane_id, level)));
- intel_de_write_fw(i915, CUR_WM_TRANS(pipe), - skl_cursor_wm_reg_val(skl_plane_trans_wm(pipe_wm, plane_id))); + intel_de_write_dsb(display, dsb, CUR_WM_TRANS(pipe), + skl_cursor_wm_reg_val(skl_plane_trans_wm(pipe_wm, plane_id)));
if (HAS_HW_SAGV_WM(i915)) { const struct skl_plane_wm *wm = &pipe_wm->planes[plane_id];
- intel_de_write_fw(i915, CUR_WM_SAGV(pipe), - skl_cursor_wm_reg_val(&wm->sagv.wm0)); - intel_de_write_fw(i915, CUR_WM_SAGV_TRANS(pipe), - skl_cursor_wm_reg_val(&wm->sagv.trans_wm)); + intel_de_write_dsb(display, dsb, CUR_WM_SAGV(pipe), + skl_cursor_wm_reg_val(&wm->sagv.wm0)); + intel_de_write_dsb(display, dsb, CUR_WM_SAGV_TRANS(pipe), + skl_cursor_wm_reg_val(&wm->sagv.trans_wm)); }
- intel_de_write_fw(i915, CUR_BUF_CFG(pipe), - skl_cursor_ddb_reg_val(ddb)); + intel_de_write_dsb(display, dsb, CUR_BUF_CFG(pipe), + skl_cursor_ddb_reg_val(ddb)); }
/* TODO: split into noarm+arm pair */ -static void i9xx_cursor_update_arm(struct intel_plane *plane, +static void i9xx_cursor_update_arm(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state, const struct intel_plane_state *plane_state) { + struct intel_display *display = to_intel_display(plane->base.dev); struct drm_i915_private *dev_priv = to_i915(plane->base.dev); enum pipe pipe = plane->pipe; u32 cntl = 0, base = 0, pos = 0, fbc_ctl = 0; @@ -685,38 +694,36 @@ static void i9xx_cursor_update_arm(struct intel_plane *plane, */
if (DISPLAY_VER(dev_priv) >= 9) - skl_write_cursor_wm(plane, crtc_state); + skl_write_cursor_wm(dsb, plane, crtc_state);
if (plane_state) - i9xx_cursor_update_sel_fetch_arm(plane, crtc_state, - plane_state); + i9xx_cursor_update_sel_fetch_arm(dsb, plane, crtc_state, plane_state); else - i9xx_cursor_disable_sel_fetch_arm(plane, crtc_state); + i9xx_cursor_disable_sel_fetch_arm(dsb, plane, crtc_state);
if (plane->cursor.base != base || plane->cursor.size != fbc_ctl || plane->cursor.cntl != cntl) { if (HAS_CUR_FBC(dev_priv)) - intel_de_write_fw(dev_priv, - CUR_FBC_CTL(dev_priv, pipe), - fbc_ctl); - intel_de_write_fw(dev_priv, CURCNTR(dev_priv, pipe), cntl); - intel_de_write_fw(dev_priv, CURPOS(dev_priv, pipe), pos); - intel_de_write_fw(dev_priv, CURBASE(dev_priv, pipe), base); + intel_de_write_dsb(display, dsb, CUR_FBC_CTL(dev_priv, pipe), fbc_ctl); + intel_de_write_dsb(display, dsb, CURCNTR(dev_priv, pipe), cntl); + intel_de_write_dsb(display, dsb, CURPOS(dev_priv, pipe), pos); + intel_de_write_dsb(display, dsb, CURBASE(dev_priv, pipe), base);
plane->cursor.base = base; plane->cursor.size = fbc_ctl; plane->cursor.cntl = cntl; } else { - intel_de_write_fw(dev_priv, CURPOS(dev_priv, pipe), pos); - intel_de_write_fw(dev_priv, CURBASE(dev_priv, pipe), base); + intel_de_write_dsb(display, dsb, CURPOS(dev_priv, pipe), pos); + intel_de_write_dsb(display, dsb, CURBASE(dev_priv, pipe), base); } }
-static void i9xx_cursor_disable_arm(struct intel_plane *plane, +static void i9xx_cursor_disable_arm(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state) { - i9xx_cursor_update_arm(plane, crtc_state, NULL); + i9xx_cursor_update_arm(dsb, plane, crtc_state, NULL); }
static bool i9xx_cursor_get_hw_state(struct intel_plane *plane, @@ -905,10 +912,10 @@ intel_legacy_cursor_update(struct drm_plane *_plane, }
if (new_plane_state->uapi.visible) { - intel_plane_update_noarm(plane, crtc_state, new_plane_state); - intel_plane_update_arm(plane, crtc_state, new_plane_state); + intel_plane_update_noarm(NULL, plane, crtc_state, new_plane_state); + intel_plane_update_arm(NULL, plane, crtc_state, new_plane_state); } else { - intel_plane_disable_arm(plane, crtc_state); + intel_plane_disable_arm(NULL, plane, crtc_state); }
local_irq_enable(); diff --git a/drivers/gpu/drm/i915/display/intel_de.h b/drivers/gpu/drm/i915/display/intel_de.h index e881bfeafb47d..e017cd4a81685 100644 --- a/drivers/gpu/drm/i915/display/intel_de.h +++ b/drivers/gpu/drm/i915/display/intel_de.h @@ -8,6 +8,7 @@
#include "i915_drv.h" #include "i915_trace.h" +#include "intel_dsb.h" #include "intel_uncore.h"
static inline struct intel_uncore *__to_uncore(struct intel_display *display) @@ -233,4 +234,14 @@ __intel_de_write_notrace(struct intel_display *display, i915_reg_t reg, } #define intel_de_write_notrace(p,...) __intel_de_write_notrace(__to_intel_display(p), __VA_ARGS__)
+static __always_inline void +intel_de_write_dsb(struct intel_display *display, struct intel_dsb *dsb, + i915_reg_t reg, u32 val) +{ + if (dsb) + intel_dsb_reg_write(dsb, reg, val); + else + intel_de_write_fw(display, reg, val); +} + #endif /* __INTEL_DE_H__ */ diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c index ac5febd076e10..3039ee03e1c7a 100644 --- a/drivers/gpu/drm/i915/display/intel_display.c +++ b/drivers/gpu/drm/i915/display/intel_display.c @@ -135,7 +135,8 @@ static void intel_set_transcoder_timings(const struct intel_crtc_state *crtc_state); static void intel_set_pipe_src_size(const struct intel_crtc_state *crtc_state); static void hsw_set_transconf(const struct intel_crtc_state *crtc_state); -static void bdw_set_pipe_misc(const struct intel_crtc_state *crtc_state); +static void bdw_set_pipe_misc(struct intel_dsb *dsb, + const struct intel_crtc_state *crtc_state);
/* returns HPLL frequency in kHz */ int vlv_get_hpll_vco(struct drm_i915_private *dev_priv) @@ -715,7 +716,7 @@ void intel_plane_disable_noatomic(struct intel_crtc *crtc, if (DISPLAY_VER(dev_priv) == 2 && !crtc_state->active_planes) intel_set_cpu_fifo_underrun_reporting(dev_priv, crtc->pipe, false);
- intel_plane_disable_arm(plane, crtc_state); + intel_plane_disable_arm(NULL, plane, crtc_state); intel_crtc_wait_for_next_vblank(crtc); }
@@ -1172,8 +1173,8 @@ static void intel_crtc_async_flip_disable_wa(struct intel_atomic_state *state, * Apart from the async flip bit we want to * preserve the old state for the plane. */ - intel_plane_async_flip(plane, old_crtc_state, - old_plane_state, false); + intel_plane_async_flip(NULL, plane, + old_crtc_state, old_plane_state, false); need_vbl_wait = true; } } @@ -1315,7 +1316,7 @@ static void intel_crtc_disable_planes(struct intel_atomic_state *state, !(update_mask & BIT(plane->id))) continue;
- intel_plane_disable_arm(plane, new_crtc_state); + intel_plane_disable_arm(NULL, plane, new_crtc_state);
if (old_plane_state->uapi.visible) fb_bits |= plane->frontbuffer_bit; @@ -1704,7 +1705,7 @@ static void hsw_crtc_enable(struct intel_atomic_state *state, intel_set_pipe_src_size(pipe_crtc_state);
if (DISPLAY_VER(dev_priv) >= 9 || IS_BROADWELL(dev_priv)) - bdw_set_pipe_misc(pipe_crtc_state); + bdw_set_pipe_misc(NULL, pipe_crtc_state); }
if (!transcoder_is_dsi(cpu_transcoder)) @@ -3221,9 +3222,11 @@ static void hsw_set_transconf(const struct intel_crtc_state *crtc_state) intel_de_posting_read(dev_priv, TRANSCONF(dev_priv, cpu_transcoder)); }
-static void bdw_set_pipe_misc(const struct intel_crtc_state *crtc_state) +static void bdw_set_pipe_misc(struct intel_dsb *dsb, + const struct intel_crtc_state *crtc_state) { struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); + struct intel_display *display = to_intel_display(crtc->base.dev); struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); u32 val = 0;
@@ -3268,7 +3271,7 @@ static void bdw_set_pipe_misc(const struct intel_crtc_state *crtc_state) if (IS_BROADWELL(dev_priv)) val |= PIPE_MISC_PSR_MASK_SPRITE_ENABLE;
- intel_de_write(dev_priv, PIPE_MISC(crtc->pipe), val); + intel_de_write_dsb(display, dsb, PIPE_MISC(crtc->pipe), val); }
int bdw_get_pipe_misc_bpp(struct intel_crtc *crtc) @@ -6821,7 +6824,7 @@ static void commit_pipe_pre_planes(struct intel_atomic_state *state, intel_color_commit_arm(new_crtc_state);
if (DISPLAY_VER(dev_priv) >= 9 || IS_BROADWELL(dev_priv)) - bdw_set_pipe_misc(new_crtc_state); + bdw_set_pipe_misc(NULL, new_crtc_state);
if (intel_crtc_needs_fastset(new_crtc_state)) intel_pipe_fastset(old_crtc_state, new_crtc_state); @@ -6921,7 +6924,7 @@ static void intel_pre_update_crtc(struct intel_atomic_state *state, intel_crtc_needs_color_update(new_crtc_state)) intel_color_commit_noarm(new_crtc_state);
- intel_crtc_planes_update_noarm(state, crtc); + intel_crtc_planes_update_noarm(NULL, state, crtc); }
static void intel_update_crtc(struct intel_atomic_state *state, @@ -6937,7 +6940,7 @@ static void intel_update_crtc(struct intel_atomic_state *state,
commit_pipe_pre_planes(state, crtc);
- intel_crtc_planes_update_arm(state, crtc); + intel_crtc_planes_update_arm(NULL, state, crtc);
commit_pipe_post_planes(state, crtc);
diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h b/drivers/gpu/drm/i915/display/intel_display_types.h index f29e5dc3db910..3e24d2e90d3cf 100644 --- a/drivers/gpu/drm/i915/display/intel_display_types.h +++ b/drivers/gpu/drm/i915/display/intel_display_types.h @@ -1036,6 +1036,10 @@ struct intel_csc_matrix { u16 postoff[3]; };
+void intel_io_mmio_fw_write(void *ctx, i915_reg_t reg, u32 val); + +typedef void (*intel_io_reg_write)(void *ctx, i915_reg_t reg, u32 val); + struct intel_crtc_state { /* * uapi (drm) state. This is the software state shown to userspace. @@ -1578,22 +1582,26 @@ struct intel_plane { u32 pixel_format, u64 modifier, unsigned int rotation); /* Write all non-self arming plane registers */ - void (*update_noarm)(struct intel_plane *plane, + void (*update_noarm)(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state, const struct intel_plane_state *plane_state); /* Write all self-arming plane registers */ - void (*update_arm)(struct intel_plane *plane, + void (*update_arm)(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state, const struct intel_plane_state *plane_state); /* Disable the plane, must arm */ - void (*disable_arm)(struct intel_plane *plane, + void (*disable_arm)(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state); bool (*get_hw_state)(struct intel_plane *plane, enum pipe *pipe); int (*check_plane)(struct intel_crtc_state *crtc_state, struct intel_plane_state *plane_state); int (*min_cdclk)(const struct intel_crtc_state *crtc_state, const struct intel_plane_state *plane_state); - void (*async_flip)(struct intel_plane *plane, + void (*async_flip)(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state, const struct intel_plane_state *plane_state, bool async_flip); diff --git a/drivers/gpu/drm/i915/display/intel_sprite.c b/drivers/gpu/drm/i915/display/intel_sprite.c index e657b09ede999..e6fadcef58e06 100644 --- a/drivers/gpu/drm/i915/display/intel_sprite.c +++ b/drivers/gpu/drm/i915/display/intel_sprite.c @@ -378,7 +378,8 @@ static void vlv_sprite_update_gamma(const struct intel_plane_state *plane_state) }
static void -vlv_sprite_update_noarm(struct intel_plane *plane, +vlv_sprite_update_noarm(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state, const struct intel_plane_state *plane_state) { @@ -399,7 +400,8 @@ vlv_sprite_update_noarm(struct intel_plane *plane, }
static void -vlv_sprite_update_arm(struct intel_plane *plane, +vlv_sprite_update_arm(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state, const struct intel_plane_state *plane_state) { @@ -449,7 +451,8 @@ vlv_sprite_update_arm(struct intel_plane *plane, }
static void -vlv_sprite_disable_arm(struct intel_plane *plane, +vlv_sprite_disable_arm(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state) { struct intel_display *display = to_intel_display(plane->base.dev); @@ -795,7 +798,8 @@ static void ivb_sprite_update_gamma(const struct intel_plane_state *plane_state) }
static void -ivb_sprite_update_noarm(struct intel_plane *plane, +ivb_sprite_update_noarm(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state, const struct intel_plane_state *plane_state) { @@ -826,7 +830,8 @@ ivb_sprite_update_noarm(struct intel_plane *plane, }
static void -ivb_sprite_update_arm(struct intel_plane *plane, +ivb_sprite_update_arm(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state, const struct intel_plane_state *plane_state) { @@ -874,7 +879,8 @@ ivb_sprite_update_arm(struct intel_plane *plane, }
static void -ivb_sprite_disable_arm(struct intel_plane *plane, +ivb_sprite_disable_arm(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state) { struct intel_display *display = to_intel_display(plane->base.dev); @@ -1133,7 +1139,8 @@ static void ilk_sprite_update_gamma(const struct intel_plane_state *plane_state) }
static void -g4x_sprite_update_noarm(struct intel_plane *plane, +g4x_sprite_update_noarm(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state, const struct intel_plane_state *plane_state) { @@ -1162,7 +1169,8 @@ g4x_sprite_update_noarm(struct intel_plane *plane, }
static void -g4x_sprite_update_arm(struct intel_plane *plane, +g4x_sprite_update_arm(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state, const struct intel_plane_state *plane_state) { @@ -1206,7 +1214,8 @@ g4x_sprite_update_arm(struct intel_plane *plane, }
static void -g4x_sprite_disable_arm(struct intel_plane *plane, +g4x_sprite_disable_arm(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state) { struct intel_display *display = to_intel_display(plane->base.dev); diff --git a/drivers/gpu/drm/i915/display/skl_universal_plane.c b/drivers/gpu/drm/i915/display/skl_universal_plane.c index 62a5287ea1d9c..7f77a76309bd5 100644 --- a/drivers/gpu/drm/i915/display/skl_universal_plane.c +++ b/drivers/gpu/drm/i915/display/skl_universal_plane.c @@ -589,11 +589,11 @@ static u32 skl_plane_min_alignment(struct intel_plane *plane, * in full-range YCbCr. */ static void -icl_program_input_csc(struct intel_plane *plane, - const struct intel_crtc_state *crtc_state, +icl_program_input_csc(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_plane_state *plane_state) { - struct drm_i915_private *dev_priv = to_i915(plane->base.dev); + struct intel_display *display = to_intel_display(plane->base.dev); enum pipe pipe = plane->pipe; enum plane_id plane_id = plane->id;
@@ -637,31 +637,31 @@ icl_program_input_csc(struct intel_plane *plane, }; const u16 *csc = input_csc_matrix[plane_state->hw.color_encoding];
- intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 0), - ROFF(csc[0]) | GOFF(csc[1])); - intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 1), - BOFF(csc[2])); - intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 2), - ROFF(csc[3]) | GOFF(csc[4])); - intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 3), - BOFF(csc[5])); - intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 4), - ROFF(csc[6]) | GOFF(csc[7])); - intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 5), - BOFF(csc[8])); - - intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 0), - PREOFF_YUV_TO_RGB_HI); - intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 1), - PREOFF_YUV_TO_RGB_ME); - intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 2), - PREOFF_YUV_TO_RGB_LO); - intel_de_write_fw(dev_priv, - PLANE_INPUT_CSC_POSTOFF(pipe, plane_id, 0), 0x0); - intel_de_write_fw(dev_priv, - PLANE_INPUT_CSC_POSTOFF(pipe, plane_id, 1), 0x0); - intel_de_write_fw(dev_priv, - PLANE_INPUT_CSC_POSTOFF(pipe, plane_id, 2), 0x0); + intel_de_write_dsb(display, dsb, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 0), + ROFF(csc[0]) | GOFF(csc[1])); + intel_de_write_dsb(display, dsb, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 1), + BOFF(csc[2])); + intel_de_write_dsb(display, dsb, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 2), + ROFF(csc[3]) | GOFF(csc[4])); + intel_de_write_dsb(display, dsb, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 3), + BOFF(csc[5])); + intel_de_write_dsb(display, dsb, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 4), + ROFF(csc[6]) | GOFF(csc[7])); + intel_de_write_dsb(display, dsb, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 5), + BOFF(csc[8])); + + intel_de_write_dsb(display, dsb, PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 0), + PREOFF_YUV_TO_RGB_HI); + intel_de_write_dsb(display, dsb, PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 1), + PREOFF_YUV_TO_RGB_ME); + intel_de_write_dsb(display, dsb, PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 2), + PREOFF_YUV_TO_RGB_LO); + intel_de_write_dsb(display, dsb, + PLANE_INPUT_CSC_POSTOFF(pipe, plane_id, 0), 0x0); + intel_de_write_dsb(display, dsb, + PLANE_INPUT_CSC_POSTOFF(pipe, plane_id, 1), 0x0); + intel_de_write_dsb(display, dsb, + PLANE_INPUT_CSC_POSTOFF(pipe, plane_id, 2), 0x0); }
static unsigned int skl_plane_stride_mult(const struct drm_framebuffer *fb, @@ -715,9 +715,11 @@ static u32 skl_plane_wm_reg_val(const struct skl_wm_level *level) return val; }
-static void skl_write_plane_wm(struct intel_plane *plane, +static void skl_write_plane_wm(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state) { + struct intel_display *display = to_intel_display(plane->base.dev); struct drm_i915_private *i915 = to_i915(plane->base.dev); enum plane_id plane_id = plane->id; enum pipe pipe = plane->pipe; @@ -729,71 +731,75 @@ static void skl_write_plane_wm(struct intel_plane *plane, int level;
for (level = 0; level < i915->display.wm.num_levels; level++) - intel_de_write_fw(i915, PLANE_WM(pipe, plane_id, level), - skl_plane_wm_reg_val(skl_plane_wm_level(pipe_wm, plane_id, level))); + intel_de_write_dsb(display, dsb, PLANE_WM(pipe, plane_id, level), + skl_plane_wm_reg_val(skl_plane_wm_level(pipe_wm, plane_id, level)));
- intel_de_write_fw(i915, PLANE_WM_TRANS(pipe, plane_id), - skl_plane_wm_reg_val(skl_plane_trans_wm(pipe_wm, plane_id))); + intel_de_write_dsb(display, dsb, PLANE_WM_TRANS(pipe, plane_id), + skl_plane_wm_reg_val(skl_plane_trans_wm(pipe_wm, plane_id)));
if (HAS_HW_SAGV_WM(i915)) { const struct skl_plane_wm *wm = &pipe_wm->planes[plane_id];
- intel_de_write_fw(i915, PLANE_WM_SAGV(pipe, plane_id), - skl_plane_wm_reg_val(&wm->sagv.wm0)); - intel_de_write_fw(i915, PLANE_WM_SAGV_TRANS(pipe, plane_id), - skl_plane_wm_reg_val(&wm->sagv.trans_wm)); + intel_de_write_dsb(display, dsb, PLANE_WM_SAGV(pipe, plane_id), + skl_plane_wm_reg_val(&wm->sagv.wm0)); + intel_de_write_dsb(display, dsb, PLANE_WM_SAGV_TRANS(pipe, plane_id), + skl_plane_wm_reg_val(&wm->sagv.trans_wm)); }
- intel_de_write_fw(i915, PLANE_BUF_CFG(pipe, plane_id), - skl_plane_ddb_reg_val(ddb)); + intel_de_write_dsb(display, dsb, PLANE_BUF_CFG(pipe, plane_id), + skl_plane_ddb_reg_val(ddb));
if (DISPLAY_VER(i915) < 11) - intel_de_write_fw(i915, PLANE_NV12_BUF_CFG(pipe, plane_id), - skl_plane_ddb_reg_val(ddb_y)); + intel_de_write_dsb(display, dsb, PLANE_NV12_BUF_CFG(pipe, plane_id), + skl_plane_ddb_reg_val(ddb_y)); }
static void -skl_plane_disable_arm(struct intel_plane *plane, +skl_plane_disable_arm(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state) { - struct drm_i915_private *dev_priv = to_i915(plane->base.dev); + struct intel_display *display = to_intel_display(plane->base.dev); enum plane_id plane_id = plane->id; enum pipe pipe = plane->pipe;
- skl_write_plane_wm(plane, crtc_state); + skl_write_plane_wm(dsb, plane, crtc_state);
- intel_de_write_fw(dev_priv, PLANE_CTL(pipe, plane_id), 0); - intel_de_write_fw(dev_priv, PLANE_SURF(pipe, plane_id), 0); + intel_de_write_dsb(display, dsb, PLANE_CTL(pipe, plane_id), 0); + intel_de_write_dsb(display, dsb, PLANE_SURF(pipe, plane_id), 0); }
-static void icl_plane_disable_sel_fetch_arm(struct intel_plane *plane, +static void icl_plane_disable_sel_fetch_arm(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state) { - struct drm_i915_private *i915 = to_i915(plane->base.dev); + struct intel_display *display = to_intel_display(plane->base.dev); enum pipe pipe = plane->pipe;
if (!crtc_state->enable_psr2_sel_fetch) return;
- intel_de_write_fw(i915, SEL_FETCH_PLANE_CTL(pipe, plane->id), 0); + intel_de_write_dsb(display, dsb, SEL_FETCH_PLANE_CTL(pipe, plane->id), 0); }
static void -icl_plane_disable_arm(struct intel_plane *plane, +icl_plane_disable_arm(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state) { + struct intel_display *display = to_intel_display(plane->base.dev); struct drm_i915_private *dev_priv = to_i915(plane->base.dev); enum plane_id plane_id = plane->id; enum pipe pipe = plane->pipe;
if (icl_is_hdr_plane(dev_priv, plane_id)) - intel_de_write_fw(dev_priv, PLANE_CUS_CTL(pipe, plane_id), 0); + intel_de_write_dsb(display, dsb, PLANE_CUS_CTL(pipe, plane_id), 0);
- skl_write_plane_wm(plane, crtc_state); + skl_write_plane_wm(dsb, plane, crtc_state);
- icl_plane_disable_sel_fetch_arm(plane, crtc_state); - intel_de_write_fw(dev_priv, PLANE_CTL(pipe, plane_id), 0); - intel_de_write_fw(dev_priv, PLANE_SURF(pipe, plane_id), 0); + icl_plane_disable_sel_fetch_arm(dsb, plane, crtc_state); + intel_de_write_dsb(display, dsb, PLANE_CTL(pipe, plane_id), 0); + intel_de_write_dsb(display, dsb, PLANE_SURF(pipe, plane_id), 0); }
static bool @@ -1230,28 +1236,30 @@ static u32 skl_plane_keymsk(const struct intel_plane_state *plane_state) return keymsk; }
-static void icl_plane_csc_load_black(struct intel_plane *plane) +static void icl_plane_csc_load_black(struct intel_dsb *dsb, + struct intel_plane *plane, + const struct intel_crtc_state *crtc_state) { - struct drm_i915_private *i915 = to_i915(plane->base.dev); + struct intel_display *display = to_intel_display(plane->base.dev); enum plane_id plane_id = plane->id; enum pipe pipe = plane->pipe;
- intel_de_write_fw(i915, PLANE_CSC_COEFF(pipe, plane_id, 0), 0); - intel_de_write_fw(i915, PLANE_CSC_COEFF(pipe, plane_id, 1), 0); + intel_de_write_dsb(display, dsb, PLANE_CSC_COEFF(pipe, plane_id, 0), 0); + intel_de_write_dsb(display, dsb, PLANE_CSC_COEFF(pipe, plane_id, 1), 0);
- intel_de_write_fw(i915, PLANE_CSC_COEFF(pipe, plane_id, 2), 0); - intel_de_write_fw(i915, PLANE_CSC_COEFF(pipe, plane_id, 3), 0); + intel_de_write_dsb(display, dsb, PLANE_CSC_COEFF(pipe, plane_id, 2), 0); + intel_de_write_dsb(display, dsb, PLANE_CSC_COEFF(pipe, plane_id, 3), 0);
- intel_de_write_fw(i915, PLANE_CSC_COEFF(pipe, plane_id, 4), 0); - intel_de_write_fw(i915, PLANE_CSC_COEFF(pipe, plane_id, 5), 0); + intel_de_write_dsb(display, dsb, PLANE_CSC_COEFF(pipe, plane_id, 4), 0); + intel_de_write_dsb(display, dsb, PLANE_CSC_COEFF(pipe, plane_id, 5), 0);
- intel_de_write_fw(i915, PLANE_CSC_PREOFF(pipe, plane_id, 0), 0); - intel_de_write_fw(i915, PLANE_CSC_PREOFF(pipe, plane_id, 1), 0); - intel_de_write_fw(i915, PLANE_CSC_PREOFF(pipe, plane_id, 2), 0); + intel_de_write_dsb(display, dsb, PLANE_CSC_PREOFF(pipe, plane_id, 0), 0); + intel_de_write_dsb(display, dsb, PLANE_CSC_PREOFF(pipe, plane_id, 1), 0); + intel_de_write_dsb(display, dsb, PLANE_CSC_PREOFF(pipe, plane_id, 2), 0);
- intel_de_write_fw(i915, PLANE_CSC_POSTOFF(pipe, plane_id, 0), 0); - intel_de_write_fw(i915, PLANE_CSC_POSTOFF(pipe, plane_id, 1), 0); - intel_de_write_fw(i915, PLANE_CSC_POSTOFF(pipe, plane_id, 2), 0); + intel_de_write_dsb(display, dsb, PLANE_CSC_POSTOFF(pipe, plane_id, 0), 0); + intel_de_write_dsb(display, dsb, PLANE_CSC_POSTOFF(pipe, plane_id, 1), 0); + intel_de_write_dsb(display, dsb, PLANE_CSC_POSTOFF(pipe, plane_id, 2), 0); }
static int icl_plane_color_plane(const struct intel_plane_state *plane_state) @@ -1264,11 +1272,12 @@ static int icl_plane_color_plane(const struct intel_plane_state *plane_state) }
static void -skl_plane_update_noarm(struct intel_plane *plane, +skl_plane_update_noarm(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state, const struct intel_plane_state *plane_state) { - struct drm_i915_private *dev_priv = to_i915(plane->base.dev); + struct intel_display *display = to_intel_display(plane->base.dev); enum plane_id plane_id = plane->id; enum pipe pipe = plane->pipe; u32 stride = skl_plane_stride(plane_state, 0); @@ -1283,21 +1292,23 @@ skl_plane_update_noarm(struct intel_plane *plane, crtc_y = 0; }
- intel_de_write_fw(dev_priv, PLANE_STRIDE(pipe, plane_id), - PLANE_STRIDE_(stride)); - intel_de_write_fw(dev_priv, PLANE_POS(pipe, plane_id), - PLANE_POS_Y(crtc_y) | PLANE_POS_X(crtc_x)); - intel_de_write_fw(dev_priv, PLANE_SIZE(pipe, plane_id), - PLANE_HEIGHT(src_h - 1) | PLANE_WIDTH(src_w - 1)); + intel_de_write_dsb(display, dsb, PLANE_STRIDE(pipe, plane_id), + PLANE_STRIDE_(stride)); + intel_de_write_dsb(display, dsb, PLANE_POS(pipe, plane_id), + PLANE_POS_Y(crtc_y) | PLANE_POS_X(crtc_x)); + intel_de_write_dsb(display, dsb, PLANE_SIZE(pipe, plane_id), + PLANE_HEIGHT(src_h - 1) | PLANE_WIDTH(src_w - 1));
- skl_write_plane_wm(plane, crtc_state); + skl_write_plane_wm(dsb, plane, crtc_state); }
static void -skl_plane_update_arm(struct intel_plane *plane, +skl_plane_update_arm(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state, const struct intel_plane_state *plane_state) { + struct intel_display *display = to_intel_display(plane->base.dev); struct drm_i915_private *dev_priv = to_i915(plane->base.dev); enum plane_id plane_id = plane->id; enum pipe pipe = plane->pipe; @@ -1317,22 +1328,26 @@ skl_plane_update_arm(struct intel_plane *plane, plane_color_ctl = plane_state->color_ctl | glk_plane_color_ctl_crtc(crtc_state);
- intel_de_write_fw(dev_priv, PLANE_KEYVAL(pipe, plane_id), skl_plane_keyval(plane_state)); - intel_de_write_fw(dev_priv, PLANE_KEYMSK(pipe, plane_id), skl_plane_keymsk(plane_state)); - intel_de_write_fw(dev_priv, PLANE_KEYMAX(pipe, plane_id), skl_plane_keymax(plane_state)); + intel_de_write_dsb(display, dsb, PLANE_KEYVAL(pipe, plane_id), + skl_plane_keyval(plane_state)); + intel_de_write_dsb(display, dsb, PLANE_KEYMSK(pipe, plane_id), + skl_plane_keymsk(plane_state)); + intel_de_write_dsb(display, dsb, PLANE_KEYMAX(pipe, plane_id), + skl_plane_keymax(plane_state));
- intel_de_write_fw(dev_priv, PLANE_OFFSET(pipe, plane_id), - PLANE_OFFSET_Y(y) | PLANE_OFFSET_X(x)); + intel_de_write_dsb(display, dsb, PLANE_OFFSET(pipe, plane_id), + PLANE_OFFSET_Y(y) | PLANE_OFFSET_X(x));
- intel_de_write_fw(dev_priv, PLANE_AUX_DIST(pipe, plane_id), - skl_plane_aux_dist(plane_state, 0)); + intel_de_write_dsb(display, dsb, PLANE_AUX_DIST(pipe, plane_id), + skl_plane_aux_dist(plane_state, 0));
- intel_de_write_fw(dev_priv, PLANE_AUX_OFFSET(pipe, plane_id), - PLANE_OFFSET_Y(plane_state->view.color_plane[1].y) | - PLANE_OFFSET_X(plane_state->view.color_plane[1].x)); + intel_de_write_dsb(display, dsb, PLANE_AUX_OFFSET(pipe, plane_id), + PLANE_OFFSET_Y(plane_state->view.color_plane[1].y) | + PLANE_OFFSET_X(plane_state->view.color_plane[1].x));
if (DISPLAY_VER(dev_priv) >= 10) - intel_de_write_fw(dev_priv, PLANE_COLOR_CTL(pipe, plane_id), plane_color_ctl); + intel_de_write_dsb(display, dsb, PLANE_COLOR_CTL(pipe, plane_id), + plane_color_ctl);
/* * Enable the scaler before the plane so that we don't @@ -1349,17 +1364,19 @@ skl_plane_update_arm(struct intel_plane *plane, * disabled. Try to make the plane enable atomic by writing * the control register just before the surface register. */ - intel_de_write_fw(dev_priv, PLANE_CTL(pipe, plane_id), plane_ctl); - intel_de_write_fw(dev_priv, PLANE_SURF(pipe, plane_id), - skl_plane_surf(plane_state, 0)); + intel_de_write_dsb(display, dsb, PLANE_CTL(pipe, plane_id), + plane_ctl); + intel_de_write_dsb(display, dsb, PLANE_SURF(pipe, plane_id), + skl_plane_surf(plane_state, 0)); }
-static void icl_plane_update_sel_fetch_noarm(struct intel_plane *plane, +static void icl_plane_update_sel_fetch_noarm(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state, const struct intel_plane_state *plane_state, int color_plane) { - struct drm_i915_private *i915 = to_i915(plane->base.dev); + struct intel_display *display = to_intel_display(plane->base.dev); enum pipe pipe = plane->pipe; const struct drm_rect *clip; u32 val; @@ -1376,7 +1393,7 @@ static void icl_plane_update_sel_fetch_noarm(struct intel_plane *plane, y = (clip->y1 + plane_state->uapi.dst.y1); val = y << 16; val |= plane_state->uapi.dst.x1; - intel_de_write_fw(i915, SEL_FETCH_PLANE_POS(pipe, plane->id), val); + intel_de_write_dsb(display, dsb, SEL_FETCH_PLANE_POS(pipe, plane->id), val);
x = plane_state->view.color_plane[color_plane].x;
@@ -1391,20 +1408,21 @@ static void icl_plane_update_sel_fetch_noarm(struct intel_plane *plane,
val = y << 16 | x;
- intel_de_write_fw(i915, SEL_FETCH_PLANE_OFFSET(pipe, plane->id), - val); + intel_de_write_dsb(display, dsb, SEL_FETCH_PLANE_OFFSET(pipe, plane->id), val);
/* Sizes are 0 based */ val = (drm_rect_height(clip) - 1) << 16; val |= (drm_rect_width(&plane_state->uapi.src) >> 16) - 1; - intel_de_write_fw(i915, SEL_FETCH_PLANE_SIZE(pipe, plane->id), val); + intel_de_write_dsb(display, dsb, SEL_FETCH_PLANE_SIZE(pipe, plane->id), val); }
static void -icl_plane_update_noarm(struct intel_plane *plane, +icl_plane_update_noarm(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state, const struct intel_plane_state *plane_state) { + struct intel_display *display = to_intel_display(plane->base.dev); struct drm_i915_private *dev_priv = to_i915(plane->base.dev); enum plane_id plane_id = plane->id; enum pipe pipe = plane->pipe; @@ -1428,76 +1446,82 @@ icl_plane_update_noarm(struct intel_plane *plane, crtc_y = 0; }
- intel_de_write_fw(dev_priv, PLANE_STRIDE(pipe, plane_id), - PLANE_STRIDE_(stride)); - intel_de_write_fw(dev_priv, PLANE_POS(pipe, plane_id), - PLANE_POS_Y(crtc_y) | PLANE_POS_X(crtc_x)); - intel_de_write_fw(dev_priv, PLANE_SIZE(pipe, plane_id), - PLANE_HEIGHT(src_h - 1) | PLANE_WIDTH(src_w - 1)); + intel_de_write_dsb(display, dsb, PLANE_STRIDE(pipe, plane_id), + PLANE_STRIDE_(stride)); + intel_de_write_dsb(display, dsb, PLANE_POS(pipe, plane_id), + PLANE_POS_Y(crtc_y) | PLANE_POS_X(crtc_x)); + intel_de_write_dsb(display, dsb, PLANE_SIZE(pipe, plane_id), + PLANE_HEIGHT(src_h - 1) | PLANE_WIDTH(src_w - 1));
- intel_de_write_fw(dev_priv, PLANE_KEYVAL(pipe, plane_id), skl_plane_keyval(plane_state)); - intel_de_write_fw(dev_priv, PLANE_KEYMSK(pipe, plane_id), skl_plane_keymsk(plane_state)); - intel_de_write_fw(dev_priv, PLANE_KEYMAX(pipe, plane_id), skl_plane_keymax(plane_state)); + intel_de_write_dsb(display, dsb, PLANE_KEYVAL(pipe, plane_id), + skl_plane_keyval(plane_state)); + intel_de_write_dsb(display, dsb, PLANE_KEYMSK(pipe, plane_id), + skl_plane_keymsk(plane_state)); + intel_de_write_dsb(display, dsb, PLANE_KEYMAX(pipe, plane_id), + skl_plane_keymax(plane_state));
- intel_de_write_fw(dev_priv, PLANE_OFFSET(pipe, plane_id), - PLANE_OFFSET_Y(y) | PLANE_OFFSET_X(x)); + intel_de_write_dsb(display, dsb, PLANE_OFFSET(pipe, plane_id), + PLANE_OFFSET_Y(y) | PLANE_OFFSET_X(x));
if (intel_fb_is_rc_ccs_cc_modifier(fb->modifier)) { - intel_de_write_fw(dev_priv, PLANE_CC_VAL(pipe, plane_id, 0), - lower_32_bits(plane_state->ccval)); - intel_de_write_fw(dev_priv, PLANE_CC_VAL(pipe, plane_id, 1), - upper_32_bits(plane_state->ccval)); + intel_de_write_dsb(display, dsb, PLANE_CC_VAL(pipe, plane_id, 0), + lower_32_bits(plane_state->ccval)); + intel_de_write_dsb(display, dsb, PLANE_CC_VAL(pipe, plane_id, 1), + upper_32_bits(plane_state->ccval)); }
/* FLAT CCS doesn't need to program AUX_DIST */ if (!HAS_FLAT_CCS(dev_priv) && DISPLAY_VER(dev_priv) < 20) - intel_de_write_fw(dev_priv, PLANE_AUX_DIST(pipe, plane_id), - skl_plane_aux_dist(plane_state, color_plane)); + intel_de_write_dsb(display, dsb, PLANE_AUX_DIST(pipe, plane_id), + skl_plane_aux_dist(plane_state, color_plane));
if (icl_is_hdr_plane(dev_priv, plane_id)) - intel_de_write_fw(dev_priv, PLANE_CUS_CTL(pipe, plane_id), - plane_state->cus_ctl); + intel_de_write_dsb(display, dsb, PLANE_CUS_CTL(pipe, plane_id), + plane_state->cus_ctl);
- intel_de_write_fw(dev_priv, PLANE_COLOR_CTL(pipe, plane_id), plane_color_ctl); + intel_de_write_dsb(display, dsb, PLANE_COLOR_CTL(pipe, plane_id), + plane_color_ctl);
if (fb->format->is_yuv && icl_is_hdr_plane(dev_priv, plane_id)) - icl_program_input_csc(plane, crtc_state, plane_state); + icl_program_input_csc(dsb, plane, plane_state);
- skl_write_plane_wm(plane, crtc_state); + skl_write_plane_wm(dsb, plane, crtc_state);
/* * FIXME: pxp session invalidation can hit any time even at time of commit * or after the commit, display content will be garbage. */ if (plane_state->force_black) - icl_plane_csc_load_black(plane); + icl_plane_csc_load_black(dsb, plane, crtc_state);
- icl_plane_update_sel_fetch_noarm(plane, crtc_state, plane_state, color_plane); + icl_plane_update_sel_fetch_noarm(dsb, plane, crtc_state, plane_state, color_plane); }
-static void icl_plane_update_sel_fetch_arm(struct intel_plane *plane, +static void icl_plane_update_sel_fetch_arm(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state, const struct intel_plane_state *plane_state) { - struct drm_i915_private *i915 = to_i915(plane->base.dev); + struct intel_display *display = to_intel_display(plane->base.dev); enum pipe pipe = plane->pipe;
if (!crtc_state->enable_psr2_sel_fetch) return;
if (drm_rect_height(&plane_state->psr2_sel_fetch_area) > 0) - intel_de_write_fw(i915, SEL_FETCH_PLANE_CTL(pipe, plane->id), - SEL_FETCH_PLANE_CTL_ENABLE); + intel_de_write_dsb(display, dsb, SEL_FETCH_PLANE_CTL(pipe, plane->id), + SEL_FETCH_PLANE_CTL_ENABLE); else - icl_plane_disable_sel_fetch_arm(plane, crtc_state); + icl_plane_disable_sel_fetch_arm(dsb, plane, crtc_state); }
static void -icl_plane_update_arm(struct intel_plane *plane, +icl_plane_update_arm(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state, const struct intel_plane_state *plane_state) { - struct drm_i915_private *dev_priv = to_i915(plane->base.dev); + struct intel_display *display = to_intel_display(plane->base.dev); enum plane_id plane_id = plane->id; enum pipe pipe = plane->pipe; int color_plane = icl_plane_color_plane(plane_state); @@ -1516,25 +1540,27 @@ icl_plane_update_arm(struct intel_plane *plane, if (plane_state->scaler_id >= 0) skl_program_plane_scaler(plane, crtc_state, plane_state);
- icl_plane_update_sel_fetch_arm(plane, crtc_state, plane_state); + icl_plane_update_sel_fetch_arm(dsb, plane, crtc_state, plane_state);
/* * The control register self-arms if the plane was previously * disabled. Try to make the plane enable atomic by writing * the control register just before the surface register. */ - intel_de_write_fw(dev_priv, PLANE_CTL(pipe, plane_id), plane_ctl); - intel_de_write_fw(dev_priv, PLANE_SURF(pipe, plane_id), - skl_plane_surf(plane_state, color_plane)); + intel_de_write_dsb(display, dsb, PLANE_CTL(pipe, plane_id), + plane_ctl); + intel_de_write_dsb(display, dsb, PLANE_SURF(pipe, plane_id), + skl_plane_surf(plane_state, color_plane)); }
static void -skl_plane_async_flip(struct intel_plane *plane, +skl_plane_async_flip(struct intel_dsb *dsb, + struct intel_plane *plane, const struct intel_crtc_state *crtc_state, const struct intel_plane_state *plane_state, bool async_flip) { - struct drm_i915_private *dev_priv = to_i915(plane->base.dev); + struct intel_display *display = to_intel_display(plane->base.dev); enum plane_id plane_id = plane->id; enum pipe pipe = plane->pipe; u32 plane_ctl = plane_state->ctl; @@ -1544,9 +1570,10 @@ skl_plane_async_flip(struct intel_plane *plane, if (async_flip) plane_ctl |= PLANE_CTL_ASYNC_FLIP;
- intel_de_write_fw(dev_priv, PLANE_CTL(pipe, plane_id), plane_ctl); - intel_de_write_fw(dev_priv, PLANE_SURF(pipe, plane_id), - skl_plane_surf(plane_state, 0)); + intel_de_write_dsb(display, dsb, PLANE_CTL(pipe, plane_id), + plane_ctl); + intel_de_write_dsb(display, dsb, PLANE_SURF(pipe, plane_id), + skl_plane_surf(plane_state, 0)); }
static bool intel_format_is_p01x(u32 format) diff --git a/drivers/gpu/drm/xe/display/xe_plane_initial.c b/drivers/gpu/drm/xe/display/xe_plane_initial.c index a50ab9eae40ae..0ae3f02b9261b 100644 --- a/drivers/gpu/drm/xe/display/xe_plane_initial.c +++ b/drivers/gpu/drm/xe/display/xe_plane_initial.c @@ -248,7 +248,7 @@ intel_find_initial_plane_obj(struct intel_crtc *crtc, * the lookup of sysmem scratch pages. */ plane->check_plane(crtc_state, plane_state); - plane->async_flip(plane, crtc_state, plane_state, true); + plane->async_flip(NULL, plane, crtc_state, plane_state, true); return;
nofb:
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Maarten Lankhorst dev@lankhorst.se
[ Upstream commit 30bfc151f0c1ec80c27a80a7651b2c15c648ad16 ]
This is already handled below in the code by fixup_initial_plane_config.
Fixes: a8153627520a ("drm/i915: Try to relocate the BIOS fb to the start of ggtt") Cc: Ville Syrjälä ville.syrjala@linux.intel.com Reviewed-by: Vinod Govindapillai vinod.govindapillai@intel.com Reviewed-by: Lucas De Marchi lucas.demarchi@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20241210083111.230484-3-dev@la... Signed-off-by: Maarten Lankhorst dev@lankhorst.se (cherry picked from commit 2218704997979fbf11765281ef752f07c5cf25bb) Signed-off-by: Rodrigo Vivi rodrigo.vivi@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/xe/display/xe_plane_initial.c | 10 ---------- 1 file changed, 10 deletions(-)
diff --git a/drivers/gpu/drm/xe/display/xe_plane_initial.c b/drivers/gpu/drm/xe/display/xe_plane_initial.c index 0ae3f02b9261b..f99d38cc5d8e9 100644 --- a/drivers/gpu/drm/xe/display/xe_plane_initial.c +++ b/drivers/gpu/drm/xe/display/xe_plane_initial.c @@ -194,8 +194,6 @@ intel_find_initial_plane_obj(struct intel_crtc *crtc, to_intel_plane(crtc->base.primary); struct intel_plane_state *plane_state = to_intel_plane_state(plane->base.state); - struct intel_crtc_state *crtc_state = - to_intel_crtc_state(crtc->base.state); struct drm_framebuffer *fb; struct i915_vma *vma;
@@ -241,14 +239,6 @@ intel_find_initial_plane_obj(struct intel_crtc *crtc, atomic_or(plane->frontbuffer_bit, &to_intel_frontbuffer(fb)->bits);
plane_config->vma = vma; - - /* - * Flip to the newly created mapping ASAP, so we can re-use the - * first part of GGTT for WOPCM, prevent flickering, and prevent - * the lookup of sysmem scratch pages. - */ - plane->check_plane(crtc_state, plane_state); - plane->async_flip(NULL, plane, crtc_state, plane_state, true); return;
nofb:
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Vicki Pfau vi@endrift.com
[ Upstream commit e53fc232a65f7488ab75d03a5b95f06aaada7262 ]
When a hid-steam device is removed it must clean up the client_hdev used for intercepting hidraw access. This can lead to scheduling deferred work to reattach the input device. Though the cleanup cancels the deferred work, the cancellation was done before the client_hdev itself was cleaned up, so the work could get rescheduled. This patch fixes the ordering to make sure the deferred work is properly canceled.
Reported-by: syzbot+0154da2d403396b2bd59@syzkaller.appspotmail.com Fixes: 79504249d7e2 ("HID: hid-steam: Move hidraw input (un)registering to work") Signed-off-by: Vicki Pfau vi@endrift.com Signed-off-by: Jiri Kosina jkosina@suse.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/hid/hid-steam.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/hid/hid-steam.c b/drivers/hid/hid-steam.c index 7b35966898785..19b7bb0c3d7f9 100644 --- a/drivers/hid/hid-steam.c +++ b/drivers/hid/hid-steam.c @@ -1327,11 +1327,11 @@ static void steam_remove(struct hid_device *hdev) return; }
+ hid_destroy_device(steam->client_hdev); cancel_delayed_work_sync(&steam->mode_switch); cancel_work_sync(&steam->work_connect); cancel_work_sync(&steam->rumble_work); cancel_work_sync(&steam->unregister_work); - hid_destroy_device(steam->client_hdev); steam->client_hdev = NULL; steam->client_opened = 0; if (steam->quirks & STEAM_QUIRK_WIRELESS) {
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Luca Weiss luca.weiss@fairphone.com
[ Upstream commit 5eb3dc1396aa7e315486b24df80df782912334b7 ]
In the downstream IPA driver there's only one group defined for source and destination, and the destination group doesn't have a _DPL suffix.
Fixes: b310de784bac ("net: ipa: add IPA v4.7 support") Signed-off-by: Luca Weiss luca.weiss@fairphone.com Reviewed-by: Alex Elder elder@riscstar.com Link: https://patch.msgid.link/20250227-ipa-v4-7-fixes-v1-1-a88dd8249d8a@fairphone... Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ipa/data/ipa_data-v4.7.c | 12 +++++------- 1 file changed, 5 insertions(+), 7 deletions(-)
diff --git a/drivers/net/ipa/data/ipa_data-v4.7.c b/drivers/net/ipa/data/ipa_data-v4.7.c index c8c23d9be961b..7e315779e6648 100644 --- a/drivers/net/ipa/data/ipa_data-v4.7.c +++ b/drivers/net/ipa/data/ipa_data-v4.7.c @@ -28,12 +28,10 @@ enum ipa_resource_type { enum ipa_rsrc_group_id { /* Source resource group identifiers */ IPA_RSRC_GROUP_SRC_UL_DL = 0, - IPA_RSRC_GROUP_SRC_UC_RX_Q, IPA_RSRC_GROUP_SRC_COUNT, /* Last in set; not a source group */
/* Destination resource group identifiers */ - IPA_RSRC_GROUP_DST_UL_DL_DPL = 0, - IPA_RSRC_GROUP_DST_UNUSED_1, + IPA_RSRC_GROUP_DST_UL_DL = 0, IPA_RSRC_GROUP_DST_COUNT, /* Last; not a destination group */ };
@@ -81,7 +79,7 @@ static const struct ipa_gsi_endpoint_data ipa_gsi_endpoint_data[] = { }, .endpoint = { .config = { - .resource_group = IPA_RSRC_GROUP_DST_UL_DL_DPL, + .resource_group = IPA_RSRC_GROUP_DST_UL_DL, .aggregation = true, .status_enable = true, .rx = { @@ -128,7 +126,7 @@ static const struct ipa_gsi_endpoint_data ipa_gsi_endpoint_data[] = { }, .endpoint = { .config = { - .resource_group = IPA_RSRC_GROUP_DST_UL_DL_DPL, + .resource_group = IPA_RSRC_GROUP_DST_UL_DL, .qmap = true, .aggregation = true, .rx = { @@ -197,12 +195,12 @@ static const struct ipa_resource ipa_resource_src[] = { /* Destination resource configuration data for an SoC having IPA v4.7 */ static const struct ipa_resource ipa_resource_dst[] = { [IPA_RESOURCE_TYPE_DST_DATA_SECTORS] = { - .limits[IPA_RSRC_GROUP_DST_UL_DL_DPL] = { + .limits[IPA_RSRC_GROUP_DST_UL_DL] = { .min = 7, .max = 7, }, }, [IPA_RESOURCE_TYPE_DST_DPS_DMARS] = { - .limits[IPA_RSRC_GROUP_DST_UL_DL_DPL] = { + .limits[IPA_RSRC_GROUP_DST_UL_DL] = { .min = 2, .max = 2, }, },
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Luca Weiss luca.weiss@fairphone.com
[ Upstream commit 6a2843aaf551d87beb92d774f7d5b8ae007fe774 ]
As per downstream reference, max_writes should be 12 and max_reads should be 13.
Fixes: b310de784bac ("net: ipa: add IPA v4.7 support") Signed-off-by: Luca Weiss luca.weiss@fairphone.com Reviewed-by: Alex Elder elder@riscstar.com Link: https://patch.msgid.link/20250227-ipa-v4-7-fixes-v1-2-a88dd8249d8a@fairphone... Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ipa/data/ipa_data-v4.7.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ipa/data/ipa_data-v4.7.c b/drivers/net/ipa/data/ipa_data-v4.7.c index 7e315779e6648..e63dcf8d45567 100644 --- a/drivers/net/ipa/data/ipa_data-v4.7.c +++ b/drivers/net/ipa/data/ipa_data-v4.7.c @@ -38,8 +38,8 @@ enum ipa_rsrc_group_id { /* QSB configuration data for an SoC having IPA v4.7 */ static const struct ipa_qsb_data ipa_qsb_data[] = { [IPA_QSB_MASTER_DDR] = { - .max_writes = 8, - .max_reads = 0, /* no limit (hardware max) */ + .max_writes = 12, + .max_reads = 13, .max_reads_beats = 120, }, };
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Luca Weiss luca.weiss@fairphone.com
[ Upstream commit 934e69669e32eb653234898424ae007bae2f636e ]
Enable the checksum option for these two endpoints in order to allow mobile data to actually work. Without this, no packets seem to make it through the IPA.
Fixes: b310de784bac ("net: ipa: add IPA v4.7 support") Signed-off-by: Luca Weiss luca.weiss@fairphone.com Reviewed-by: Alex Elder elder@riscstar.com Link: https://patch.msgid.link/20250227-ipa-v4-7-fixes-v1-3-a88dd8249d8a@fairphone... Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ipa/data/ipa_data-v4.7.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/net/ipa/data/ipa_data-v4.7.c b/drivers/net/ipa/data/ipa_data-v4.7.c index e63dcf8d45567..41f212209993f 100644 --- a/drivers/net/ipa/data/ipa_data-v4.7.c +++ b/drivers/net/ipa/data/ipa_data-v4.7.c @@ -104,6 +104,7 @@ static const struct ipa_gsi_endpoint_data ipa_gsi_endpoint_data[] = { .filter_support = true, .config = { .resource_group = IPA_RSRC_GROUP_SRC_UL_DL, + .checksum = true, .qmap = true, .status_enable = true, .tx = { @@ -127,6 +128,7 @@ static const struct ipa_gsi_endpoint_data ipa_gsi_endpoint_data[] = { .endpoint = { .config = { .resource_group = IPA_RSRC_GROUP_DST_UL_DL, + .checksum = true, .qmap = true, .aggregation = true, .rx = {
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jiayuan Chen jiayuan.chen@linux.dev
[ Upstream commit 4c2d14c40a68678d885eab4008a0129646805bae ]
Syzbot caught a "KMSAN: uninit-value" warning [1], which is caused by the ppp driver not initializing a 2-byte header when using a socket filter.
The following code can generate a PPP filter BPF program:
'''
struct bpf_program fp;
pcap_t *handle;
handle = pcap_open_dead(DLT_PPP_PPPD, 65535);
pcap_compile(handle, &fp, "ip and outbound", 0, 0);
bpf_dump(&fp, 1);
'''
Its output is:
'''
(000) ldh      [2]
(001) jeq      #0x21            jt 2    jf 5
(002) ldb      [0]
(003) jeq      #0x1             jt 4    jf 5
(004) ret      #65535
(005) ret      #0
'''
We can find similar code at the following link:
https://github.com/ppp-project/ppp/blob/master/pppd/options.c#L1680
The maintainer of this code repository is also the original maintainer of the ppp driver.
As you can see, the BPF program skips 2 bytes of data and then reads the 'Protocol' field to determine if it's an IP packet. Then it reads the first byte of the first 2 bytes to determine the direction.
The issue is that the current ppp driver code initializes only the first byte, which indicates the direction, while the second byte is left uninitialized.
For normal BPF programs generated by libpcap, uninitialized data won't be used, so it's not a problem. However, for carefully crafted BPF programs, such as those generated by syzkaller [2], which start reading from offset 0, the uninitialized data will be used and caught by KMSAN.
[1] https://syzkaller.appspot.com/bug?extid=853242d9c9917165d791 [2] https://syzkaller.appspot.com/text?tag=ReproC&x=11994913980000
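As an illustrative aside (a minimal userspace sketch, not part of the patch below): writing a single byte into a 2-byte pseudo-header leaves the second byte holding whatever was already in the buffer, while a 16-bit big-endian store defines both bytes, which is the shape of the fix.

#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	uint8_t hdr[2];
	uint16_t tag;

	/* Stand-in for stale data already sitting in the skb headroom. */
	memset(hdr, 0xaa, sizeof(hdr));

	/* Old behaviour: only the direction byte is written, hdr[1]
	 * keeps its old (uninitialized) contents. */
	hdr[0] = 1;
	printf("old: %02x %02x\n", hdr[0], hdr[1]);

	/* New behaviour: a 16-bit big-endian tag covers both bytes. */
	tag = htons(0x0100);
	memcpy(hdr, &tag, sizeof(tag));
	printf("new: %02x %02x\n", hdr[0], hdr[1]);

	return 0;
}

Built with any C compiler, this prints "old: 01 aa" (the stale byte survives) and "new: 01 00".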
Cc: Paul Mackerras paulus@samba.org Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Reported-by: syzbot+853242d9c9917165d791@syzkaller.appspotmail.com Closes: https://lore.kernel.org/bpf/000000000000dea025060d6bc3bc@google.com/ Signed-off-by: Jiayuan Chen jiayuan.chen@linux.dev Reviewed-by: Simon Horman horms@kernel.org Link: https://patch.msgid.link/20250228141408.393864-1-jiayuan.chen@linux.dev Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ppp/ppp_generic.c | 28 +++++++++++++++++++--------- 1 file changed, 19 insertions(+), 9 deletions(-)
diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c index 4583e15ad03a0..1420c4efa48e6 100644 --- a/drivers/net/ppp/ppp_generic.c +++ b/drivers/net/ppp/ppp_generic.c @@ -72,6 +72,17 @@ #define PPP_PROTO_LEN 2 #define PPP_LCP_HDRLEN 4
+/* The filter instructions generated by libpcap are constructed + * assuming a four-byte PPP header on each packet, where the last + * 2 bytes are the protocol field defined in the RFC and the first + * byte of the first 2 bytes indicates the direction. + * The second byte is currently unused, but we still need to initialize + * it to prevent crafted BPF programs from reading them which would + * cause reading of uninitialized data. + */ +#define PPP_FILTER_OUTBOUND_TAG 0x0100 +#define PPP_FILTER_INBOUND_TAG 0x0000 + /* * An instance of /dev/ppp can be associated with either a ppp * interface unit or a ppp channel. In both cases, file->private_data @@ -1762,10 +1773,10 @@ ppp_send_frame(struct ppp *ppp, struct sk_buff *skb)
if (proto < 0x8000) { #ifdef CONFIG_PPP_FILTER - /* check if we should pass this packet */ - /* the filter instructions are constructed assuming - a four-byte PPP header on each packet */ - *(u8 *)skb_push(skb, 2) = 1; + /* check if the packet passes the pass and active filters. + * See comment for PPP_FILTER_OUTBOUND_TAG above. + */ + *(__be16 *)skb_push(skb, 2) = htons(PPP_FILTER_OUTBOUND_TAG); if (ppp->pass_filter && bpf_prog_run(ppp->pass_filter, skb) == 0) { if (ppp->debug & 1) @@ -2482,14 +2493,13 @@ ppp_receive_nonmp_frame(struct ppp *ppp, struct sk_buff *skb) /* network protocol frame - give it to the kernel */
#ifdef CONFIG_PPP_FILTER - /* check if the packet passes the pass and active filters */ - /* the filter instructions are constructed assuming - a four-byte PPP header on each packet */ if (ppp->pass_filter || ppp->active_filter) { if (skb_unclone(skb, GFP_ATOMIC)) goto err; - - *(u8 *)skb_push(skb, 2) = 0; + /* Check if the packet passes the pass and active filters. + * See comment for PPP_FILTER_INBOUND_TAG above. + */ + *(__be16 *)skb_push(skb, 2) = htons(PPP_FILTER_INBOUND_TAG); if (ppp->pass_filter && bpf_prog_run(ppp->pass_filter, skb) == 0) { if (ppp->debug & 1)
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Oleksij Rempel o.rempel@pengutronix.de
[ Upstream commit fe55b1d401c697c2ef126fe3ebbcaa6885fced5a ]
Adapt linkstate_get_sqi() and linkstate_get_sqi_max() to take a phy_device argument directly, enabling support for setups with multiple PHYs. The previous assumption of a single PHY attached to a net_device no longer holds.
Use ethnl_req_get_phydev() to identify the appropriate PHY device for the operation. Update linkstate_prepare_data() and related logic to accommodate this change, ensuring compatibility with multi-PHY configurations.
Signed-off-by: Oleksij Rempel o.rempel@pengutronix.de Signed-off-by: Paolo Abeni pabeni@redhat.com Stable-dep-of: 637399bf7e77 ("net: ethtool: netlink: Allow NULL nlattrs when getting a phy_device") Signed-off-by: Sasha Levin sashal@kernel.org --- net/ethtool/linkstate.c | 23 +++++++++++++++-------- 1 file changed, 15 insertions(+), 8 deletions(-)
diff --git a/net/ethtool/linkstate.c b/net/ethtool/linkstate.c index 34d76e87847d0..459cfea7652d4 100644 --- a/net/ethtool/linkstate.c +++ b/net/ethtool/linkstate.c @@ -26,9 +26,8 @@ const struct nla_policy ethnl_linkstate_get_policy[] = { NLA_POLICY_NESTED(ethnl_header_policy_stats), };
-static int linkstate_get_sqi(struct net_device *dev) +static int linkstate_get_sqi(struct phy_device *phydev) { - struct phy_device *phydev = dev->phydev; int ret;
if (!phydev) @@ -46,9 +45,8 @@ static int linkstate_get_sqi(struct net_device *dev) return ret; }
-static int linkstate_get_sqi_max(struct net_device *dev) +static int linkstate_get_sqi_max(struct phy_device *phydev) { - struct phy_device *phydev = dev->phydev; int ret;
if (!phydev) @@ -100,19 +98,28 @@ static int linkstate_prepare_data(const struct ethnl_req_info *req_base, { struct linkstate_reply_data *data = LINKSTATE_REPDATA(reply_base); struct net_device *dev = reply_base->dev; + struct nlattr **tb = info->attrs; + struct phy_device *phydev; int ret;
+ phydev = ethnl_req_get_phydev(req_base, tb[ETHTOOL_A_LINKSTATE_HEADER], + info->extack); + if (IS_ERR(phydev)) { + ret = PTR_ERR(phydev); + goto out; + } + ret = ethnl_ops_begin(dev); if (ret < 0) return ret; data->link = __ethtool_get_link(dev);
- ret = linkstate_get_sqi(dev); + ret = linkstate_get_sqi(phydev); if (linkstate_sqi_critical_error(ret)) goto out; data->sqi = ret;
- ret = linkstate_get_sqi_max(dev); + ret = linkstate_get_sqi_max(phydev); if (linkstate_sqi_critical_error(ret)) goto out; data->sqi_max = ret; @@ -127,9 +134,9 @@ static int linkstate_prepare_data(const struct ethnl_req_info *req_base, sizeof(data->link_stats) / 8);
if (req_base->flags & ETHTOOL_FLAG_STATS) { - if (dev->phydev) + if (phydev) data->link_stats.link_down_events = - READ_ONCE(dev->phydev->link_down_events); + READ_ONCE(phydev->link_down_events);
if (dev->ethtool_ops->get_link_ext_stats) dev->ethtool_ops->get_link_ext_stats(dev,
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jakub Kicinski kuba@kernel.org
[ Upstream commit b7a2c1fe6b55364e61b4b54b991eb43a47bb1104 ]
Introduce support for standardized PHY statistics reporting in ethtool by extending the PHYLIB framework. Add the functions phy_ethtool_get_phy_stats() and phy_ethtool_get_link_ext_stats() to provide a consistent interface for retrieving PHY-level and link-specific statistics. These functions are used within the ethtool implementation to avoid direct access to the phy_device structure outside of the PHYLIB framework.
A new structure, ethtool_phy_stats, is introduced to standardize PHY statistics such as packet counts, byte counts, and error counters. Drivers are updated to include callbacks for retrieving PHY and link-specific statistics, ensuring values are explicitly set only for supported fields, while the remaining fields stay initialized to ETHTOOL_STAT_NOT_SET to avoid ambiguity.
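As an illustrative aside (hypothetical names, not the kernel code): the sentinel convention means the caller pre-fills every counter with the all-ones ETHTOOL_STAT_NOT_SET value, the driver callback touches only what it supports, and the reporting side skips anything still holding the sentinel. A minimal userspace sketch of that pattern:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define STAT_NOT_SET (~0ULL)	/* mirrors the kernel's ETHTOOL_STAT_NOT_SET */

struct demo_phy_stats {
	uint64_t rx_packets;
	uint64_t rx_errors;
	uint64_t tx_packets;
};

/* Hypothetical driver callback: only rx counters are supported here. */
static void demo_get_phy_stats(struct demo_phy_stats *s)
{
	s->rx_packets = 12345;
	s->rx_errors = 2;
	/* tx_packets intentionally left untouched. */
}

int main(void)
{
	struct demo_phy_stats s = {
		.rx_packets = STAT_NOT_SET,
		.rx_errors = STAT_NOT_SET,
		.tx_packets = STAT_NOT_SET,
	};

	demo_get_phy_stats(&s);

	if (s.rx_packets != STAT_NOT_SET)
		printf("rx_packets: %" PRIu64 "\n", s.rx_packets);
	if (s.tx_packets != STAT_NOT_SET)
		printf("tx_packets: %" PRIu64 "\n", s.tx_packets);
	else
		printf("tx_packets: not reported\n");

	return 0;
}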
Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Oleksij Rempel o.rempel@pengutronix.de Signed-off-by: Paolo Abeni pabeni@redhat.com Stable-dep-of: 637399bf7e77 ("net: ethtool: netlink: Allow NULL nlattrs when getting a phy_device") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/phy/phy.c | 43 ++++++++++++++++++++++++++++++++++++ drivers/net/phy/phy_device.c | 2 ++ include/linux/ethtool.h | 23 +++++++++++++++++++ include/linux/phy.h | 36 ++++++++++++++++++++++++++++++ include/linux/phylib_stubs.h | 42 +++++++++++++++++++++++++++++++++++ net/ethtool/linkstate.c | 5 +++-- net/ethtool/stats.c | 18 +++++++++++++++ 7 files changed, 167 insertions(+), 2 deletions(-)
diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c index 4f3e742907cb6..c9cfdc33fc5f1 100644 --- a/drivers/net/phy/phy.c +++ b/drivers/net/phy/phy.c @@ -615,6 +615,49 @@ int phy_ethtool_get_stats(struct phy_device *phydev, } EXPORT_SYMBOL(phy_ethtool_get_stats);
+/** + * __phy_ethtool_get_phy_stats - Retrieve standardized PHY statistics + * @phydev: Pointer to the PHY device + * @phy_stats: Pointer to ethtool_eth_phy_stats structure + * @phydev_stats: Pointer to ethtool_phy_stats structure + * + * Fetches PHY statistics using a kernel-defined interface for consistent + * diagnostics. Unlike phy_ethtool_get_stats(), which allows custom stats, + * this function enforces a standardized format for better interoperability. + */ +void __phy_ethtool_get_phy_stats(struct phy_device *phydev, + struct ethtool_eth_phy_stats *phy_stats, + struct ethtool_phy_stats *phydev_stats) +{ + if (!phydev->drv || !phydev->drv->get_phy_stats) + return; + + mutex_lock(&phydev->lock); + phydev->drv->get_phy_stats(phydev, phy_stats, phydev_stats); + mutex_unlock(&phydev->lock); +} + +/** + * __phy_ethtool_get_link_ext_stats - Retrieve extended link statistics for a PHY + * @phydev: Pointer to the PHY device + * @link_stats: Pointer to the structure to store extended link statistics + * + * Populates the ethtool_link_ext_stats structure with link down event counts + * and additional driver-specific link statistics, if available. + */ +void __phy_ethtool_get_link_ext_stats(struct phy_device *phydev, + struct ethtool_link_ext_stats *link_stats) +{ + link_stats->link_down_events = READ_ONCE(phydev->link_down_events); + + if (!phydev->drv || !phydev->drv->get_link_stats) + return; + + mutex_lock(&phydev->lock); + phydev->drv->get_link_stats(phydev, link_stats); + mutex_unlock(&phydev->lock); +} + /** * phy_ethtool_get_plca_cfg - Get PLCA RS configuration * @phydev: the phy_device struct diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c index 499797646580e..119dfa2d6643a 100644 --- a/drivers/net/phy/phy_device.c +++ b/drivers/net/phy/phy_device.c @@ -3776,6 +3776,8 @@ static const struct ethtool_phy_ops phy_ethtool_phy_ops = { static const struct phylib_stubs __phylib_stubs = { .hwtstamp_get = __phy_hwtstamp_get, .hwtstamp_set = __phy_hwtstamp_set, + .get_phy_stats = __phy_ethtool_get_phy_stats, + .get_link_ext_stats = __phy_ethtool_get_link_ext_stats, };
static void phylib_register_stubs(void) diff --git a/include/linux/ethtool.h b/include/linux/ethtool.h index b8b935b526033..b0ed740ca749b 100644 --- a/include/linux/ethtool.h +++ b/include/linux/ethtool.h @@ -412,6 +412,29 @@ struct ethtool_eth_phy_stats { ); };
+/** + * struct ethtool_phy_stats - PHY-level statistics counters + * @rx_packets: Total successfully received frames + * @rx_bytes: Total successfully received bytes + * @rx_errors: Total received frames with errors (e.g., CRC errors) + * @tx_packets: Total successfully transmitted frames + * @tx_bytes: Total successfully transmitted bytes + * @tx_errors: Total transmitted frames with errors + * + * This structure provides a standardized interface for reporting + * PHY-level statistics counters. It is designed to expose statistics + * commonly provided by PHYs but not explicitly defined in the IEEE + * 802.3 standard. + */ +struct ethtool_phy_stats { + u64 rx_packets; + u64 rx_bytes; + u64 rx_errors; + u64 tx_packets; + u64 tx_bytes; + u64 tx_errors; +}; + /* Basic IEEE 802.3 MAC Ctrl statistics (30.3.3.*), not otherwise exposed * via a more targeted API. */ diff --git a/include/linux/phy.h b/include/linux/phy.h index a98bc91a0cde9..945264f457d8a 100644 --- a/include/linux/phy.h +++ b/include/linux/phy.h @@ -1090,6 +1090,35 @@ struct phy_driver { int (*cable_test_get_status)(struct phy_device *dev, bool *finished);
/* Get statistics from the PHY using ethtool */ + /** + * @get_phy_stats: Retrieve PHY statistics. + * @dev: The PHY device for which the statistics are retrieved. + * @eth_stats: structure where Ethernet PHY stats will be stored. + * @stats: structure where additional PHY-specific stats will be stored. + * + * Retrieves the supported PHY statistics and populates the provided + * structures. The input structures are pre-initialized with + * `ETHTOOL_STAT_NOT_SET`, and the driver must only modify members + * corresponding to supported statistics. Unmodified members will remain + * set to `ETHTOOL_STAT_NOT_SET` and will not be returned to userspace. + */ + void (*get_phy_stats)(struct phy_device *dev, + struct ethtool_eth_phy_stats *eth_stats, + struct ethtool_phy_stats *stats); + + /** + * @get_link_stats: Retrieve link statistics. + * @dev: The PHY device for which the statistics are retrieved. + * @link_stats: structure where link-specific stats will be stored. + * + * Retrieves link-related statistics for the given PHY device. The input + * structure is pre-initialized with `ETHTOOL_STAT_NOT_SET`, and the + * driver must only modify members corresponding to supported + * statistics. Unmodified members will remain set to + * `ETHTOOL_STAT_NOT_SET` and will not be returned to userspace. + */ + void (*get_link_stats)(struct phy_device *dev, + struct ethtool_link_ext_stats *link_stats); /** @get_sset_count: Number of statistic counters */ int (*get_sset_count)(struct phy_device *dev); /** @get_strings: Names of the statistic counters */ @@ -2055,6 +2084,13 @@ int phy_ethtool_get_strings(struct phy_device *phydev, u8 *data); int phy_ethtool_get_sset_count(struct phy_device *phydev); int phy_ethtool_get_stats(struct phy_device *phydev, struct ethtool_stats *stats, u64 *data); + +void __phy_ethtool_get_phy_stats(struct phy_device *phydev, + struct ethtool_eth_phy_stats *phy_stats, + struct ethtool_phy_stats *phydev_stats); +void __phy_ethtool_get_link_ext_stats(struct phy_device *phydev, + struct ethtool_link_ext_stats *link_stats); + int phy_ethtool_get_plca_cfg(struct phy_device *phydev, struct phy_plca_cfg *plca_cfg); int phy_ethtool_set_plca_cfg(struct phy_device *phydev, diff --git a/include/linux/phylib_stubs.h b/include/linux/phylib_stubs.h index 1279f48c8a707..9d2d6090c86d1 100644 --- a/include/linux/phylib_stubs.h +++ b/include/linux/phylib_stubs.h @@ -5,6 +5,9 @@
#include <linux/rtnetlink.h>
+struct ethtool_eth_phy_stats; +struct ethtool_link_ext_stats; +struct ethtool_phy_stats; struct kernel_hwtstamp_config; struct netlink_ext_ack; struct phy_device; @@ -19,6 +22,11 @@ struct phylib_stubs { int (*hwtstamp_set)(struct phy_device *phydev, struct kernel_hwtstamp_config *config, struct netlink_ext_ack *extack); + void (*get_phy_stats)(struct phy_device *phydev, + struct ethtool_eth_phy_stats *phy_stats, + struct ethtool_phy_stats *phydev_stats); + void (*get_link_ext_stats)(struct phy_device *phydev, + struct ethtool_link_ext_stats *link_stats); };
static inline int phy_hwtstamp_get(struct phy_device *phydev, @@ -50,6 +58,29 @@ static inline int phy_hwtstamp_set(struct phy_device *phydev, return phylib_stubs->hwtstamp_set(phydev, config, extack); }
+static inline void phy_ethtool_get_phy_stats(struct phy_device *phydev, + struct ethtool_eth_phy_stats *phy_stats, + struct ethtool_phy_stats *phydev_stats) +{ + ASSERT_RTNL(); + + if (!phylib_stubs) + return; + + phylib_stubs->get_phy_stats(phydev, phy_stats, phydev_stats); +} + +static inline void phy_ethtool_get_link_ext_stats(struct phy_device *phydev, + struct ethtool_link_ext_stats *link_stats) +{ + ASSERT_RTNL(); + + if (!phylib_stubs) + return; + + phylib_stubs->get_link_ext_stats(phydev, link_stats); +} + #else
static inline int phy_hwtstamp_get(struct phy_device *phydev, @@ -65,4 +96,15 @@ static inline int phy_hwtstamp_set(struct phy_device *phydev, return -EOPNOTSUPP; }
+static inline void phy_ethtool_get_phy_stats(struct phy_device *phydev, + struct ethtool_eth_phy_stats *phy_stats, + struct ethtool_phy_stats *phydev_stats) +{ +} + +static inline void phy_ethtool_get_link_ext_stats(struct phy_device *phydev, + struct ethtool_link_ext_stats *link_stats) +{ +} + #endif diff --git a/net/ethtool/linkstate.c b/net/ethtool/linkstate.c index 459cfea7652d4..af19e1bed303f 100644 --- a/net/ethtool/linkstate.c +++ b/net/ethtool/linkstate.c @@ -3,6 +3,7 @@ #include "netlink.h" #include "common.h" #include <linux/phy.h> +#include <linux/phylib_stubs.h>
struct linkstate_req_info { struct ethnl_req_info base; @@ -135,8 +136,8 @@ static int linkstate_prepare_data(const struct ethnl_req_info *req_base,
if (req_base->flags & ETHTOOL_FLAG_STATS) { if (phydev) - data->link_stats.link_down_events = - READ_ONCE(phydev->link_down_events); + phy_ethtool_get_link_ext_stats(phydev, + &data->link_stats);
if (dev->ethtool_ops->get_link_ext_stats) dev->ethtool_ops->get_link_ext_stats(dev, diff --git a/net/ethtool/stats.c b/net/ethtool/stats.c index 912f0c4fff2fb..f4d822c225db6 100644 --- a/net/ethtool/stats.c +++ b/net/ethtool/stats.c @@ -1,5 +1,8 @@ // SPDX-License-Identifier: GPL-2.0-only
+#include <linux/phy.h> +#include <linux/phylib_stubs.h> + #include "netlink.h" #include "common.h" #include "bitset.h" @@ -20,6 +23,7 @@ struct stats_reply_data { struct ethtool_eth_mac_stats mac_stats; struct ethtool_eth_ctrl_stats ctrl_stats; struct ethtool_rmon_stats rmon_stats; + struct ethtool_phy_stats phydev_stats; ); const struct ethtool_rmon_hist_range *rmon_ranges; }; @@ -120,8 +124,15 @@ static int stats_prepare_data(const struct ethnl_req_info *req_base, struct stats_reply_data *data = STATS_REPDATA(reply_base); enum ethtool_mac_stats_src src = req_info->src; struct net_device *dev = reply_base->dev; + struct nlattr **tb = info->attrs; + struct phy_device *phydev; int ret;
+ phydev = ethnl_req_get_phydev(req_base, tb[ETHTOOL_A_STATS_HEADER], + info->extack); + if (IS_ERR(phydev)) + return PTR_ERR(phydev); + ret = ethnl_ops_begin(dev); if (ret < 0) return ret; @@ -145,6 +156,13 @@ static int stats_prepare_data(const struct ethnl_req_info *req_base, data->ctrl_stats.src = src; data->rmon_stats.src = src;
+ if (test_bit(ETHTOOL_STATS_ETH_PHY, req_info->stat_mask) && + src == ETHTOOL_MAC_STATS_SRC_AGGREGATE) { + if (phydev) + phy_ethtool_get_phy_stats(phydev, &data->phy_stats, + &data->phydev_stats); + } + if (test_bit(ETHTOOL_STATS_ETH_PHY, req_info->stat_mask) && dev->ethtool_ops->get_eth_phy_stats) dev->ethtool_ops->get_eth_phy_stats(dev, &data->phy_stats);
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Maxime Chevallier maxime.chevallier@bootlin.com
[ Upstream commit 637399bf7e77797811adf340090b561a8f9d1213 ]
ethnl_req_get_phydev() is used to look up a phy_device, in the case an ethtool netlink command targets a specific phydev within a netdev's topology.
It takes as a parameter a const struct nlattr *header that's used for error handling:
if (!phydev) { NL_SET_ERR_MSG_ATTR(extack, header, "no phy matching phyindex"); return ERR_PTR(-ENODEV); }
In the notify path after a ->set operation, however, there are no request attributes available.
The typical callsite for the above function looks like:
phydev = ethnl_req_get_phydev(req_base, tb[ETHTOOL_A_XXX_HEADER], info->extack);
So, when tb is NULL (such as in the ethnl notify path), we have a nice crash.
It turns out that only the PLCA command is in that case, as the other phydev-specific commands don't have a notification.
This commit fixes the crash by passing the cmd index and the nlattr array separately, allowing NULL-checking it directly inside the helper.
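As an illustrative aside (simplified userspace analogue, hypothetical names): passing the whole attribute array plus an index, instead of a pre-indexed attribute, lets the helper bail out before tb[header] is ever evaluated on a NULL array:

#include <stddef.h>
#include <stdio.h>

struct nlattr;	/* opaque for this sketch */

/* Old shape: the caller indexes tb before calling, which crashes when
 * tb is NULL (the notify path has no request attributes):
 *	const struct nlattr *attr = tb[HEADER_IDX];
 * New shape: pass the array and the index, NULL-check inside. */
static const struct nlattr *get_header_attr(struct nlattr **tb,
					    unsigned int header)
{
	if (!tb)
		return NULL;	/* notify path: nothing to report against */
	return tb[header];
}

int main(void)
{
	/* Notify path: no attribute array at all. */
	const struct nlattr *attr = get_header_attr(NULL, 3);

	printf("attr is %s\n", attr ? "set" : "NULL (no crash)");
	return 0;
}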
Fixes: c15e065b46dc ("net: ethtool: Allow passing a phy index for some commands") Signed-off-by: Maxime Chevallier maxime.chevallier@bootlin.com Reviewed-by: Kory Maincent kory.maincent@bootlin.com Reported-by: Parthiban Veerasooran parthiban.veerasooran@microchip.com Link: https://patch.msgid.link/20250301141114.97204-1-maxime.chevallier@bootlin.co... Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/ethtool/cabletest.c | 8 ++++---- net/ethtool/linkstate.c | 2 +- net/ethtool/netlink.c | 6 +++--- net/ethtool/netlink.h | 5 +++-- net/ethtool/phy.c | 2 +- net/ethtool/plca.c | 6 +++--- net/ethtool/pse-pd.c | 4 ++-- net/ethtool/stats.c | 2 +- net/ethtool/strset.c | 2 +- 9 files changed, 19 insertions(+), 18 deletions(-)
diff --git a/net/ethtool/cabletest.c b/net/ethtool/cabletest.c index f22051f33868a..84096f6b0236e 100644 --- a/net/ethtool/cabletest.c +++ b/net/ethtool/cabletest.c @@ -72,8 +72,8 @@ int ethnl_act_cable_test(struct sk_buff *skb, struct genl_info *info) dev = req_info.dev;
rtnl_lock(); - phydev = ethnl_req_get_phydev(&req_info, - tb[ETHTOOL_A_CABLE_TEST_HEADER], + phydev = ethnl_req_get_phydev(&req_info, tb, + ETHTOOL_A_CABLE_TEST_HEADER, info->extack); if (IS_ERR_OR_NULL(phydev)) { ret = -EOPNOTSUPP; @@ -339,8 +339,8 @@ int ethnl_act_cable_test_tdr(struct sk_buff *skb, struct genl_info *info) goto out_dev_put;
rtnl_lock(); - phydev = ethnl_req_get_phydev(&req_info, - tb[ETHTOOL_A_CABLE_TEST_TDR_HEADER], + phydev = ethnl_req_get_phydev(&req_info, tb, + ETHTOOL_A_CABLE_TEST_TDR_HEADER, info->extack); if (IS_ERR_OR_NULL(phydev)) { ret = -EOPNOTSUPP; diff --git a/net/ethtool/linkstate.c b/net/ethtool/linkstate.c index af19e1bed303f..05a5f72c99fab 100644 --- a/net/ethtool/linkstate.c +++ b/net/ethtool/linkstate.c @@ -103,7 +103,7 @@ static int linkstate_prepare_data(const struct ethnl_req_info *req_base, struct phy_device *phydev; int ret;
- phydev = ethnl_req_get_phydev(req_base, tb[ETHTOOL_A_LINKSTATE_HEADER], + phydev = ethnl_req_get_phydev(req_base, tb, ETHTOOL_A_LINKSTATE_HEADER, info->extack); if (IS_ERR(phydev)) { ret = PTR_ERR(phydev); diff --git a/net/ethtool/netlink.c b/net/ethtool/netlink.c index 4d18dc29b3043..e233dfc8ca4be 100644 --- a/net/ethtool/netlink.c +++ b/net/ethtool/netlink.c @@ -210,7 +210,7 @@ int ethnl_parse_header_dev_get(struct ethnl_req_info *req_info, }
struct phy_device *ethnl_req_get_phydev(const struct ethnl_req_info *req_info, - const struct nlattr *header, + struct nlattr **tb, unsigned int header, struct netlink_ext_ack *extack) { struct phy_device *phydev; @@ -224,8 +224,8 @@ struct phy_device *ethnl_req_get_phydev(const struct ethnl_req_info *req_info, return req_info->dev->phydev;
phydev = phy_link_topo_get_phy(req_info->dev, req_info->phy_index); - if (!phydev) { - NL_SET_ERR_MSG_ATTR(extack, header, + if (!phydev && tb) { + NL_SET_ERR_MSG_ATTR(extack, tb[header], "no phy matching phyindex"); return ERR_PTR(-ENODEV); } diff --git a/net/ethtool/netlink.h b/net/ethtool/netlink.h index 203b08eb6c6f6..5e176938d6d22 100644 --- a/net/ethtool/netlink.h +++ b/net/ethtool/netlink.h @@ -275,7 +275,8 @@ static inline void ethnl_parse_header_dev_put(struct ethnl_req_info *req_info) * ethnl_req_get_phydev() - Gets the phy_device targeted by this request, * if any. Must be called under rntl_lock(). * @req_info: The ethnl request to get the phy from. - * @header: The netlink header, used for error reporting. + * @tb: The netlink attributes array, for error reporting. + * @header: The netlink header index, used for error reporting. * @extack: The netlink extended ACK, for error reporting. * * The caller must hold RTNL, until it's done interacting with the returned @@ -289,7 +290,7 @@ static inline void ethnl_parse_header_dev_put(struct ethnl_req_info *req_info) * is returned. */ struct phy_device *ethnl_req_get_phydev(const struct ethnl_req_info *req_info, - const struct nlattr *header, + struct nlattr **tb, unsigned int header, struct netlink_ext_ack *extack);
/** diff --git a/net/ethtool/phy.c b/net/ethtool/phy.c index ed8f690f6bac8..e067cc234419d 100644 --- a/net/ethtool/phy.c +++ b/net/ethtool/phy.c @@ -125,7 +125,7 @@ static int ethnl_phy_parse_request(struct ethnl_req_info *req_base, struct phy_req_info *req_info = PHY_REQINFO(req_base); struct phy_device *phydev;
- phydev = ethnl_req_get_phydev(req_base, tb[ETHTOOL_A_PHY_HEADER], + phydev = ethnl_req_get_phydev(req_base, tb, ETHTOOL_A_PHY_HEADER, extack); if (!phydev) return 0; diff --git a/net/ethtool/plca.c b/net/ethtool/plca.c index d95d92f173a6d..e1f7820a6158f 100644 --- a/net/ethtool/plca.c +++ b/net/ethtool/plca.c @@ -62,7 +62,7 @@ static int plca_get_cfg_prepare_data(const struct ethnl_req_info *req_base, struct phy_device *phydev; int ret;
- phydev = ethnl_req_get_phydev(req_base, tb[ETHTOOL_A_PLCA_HEADER], + phydev = ethnl_req_get_phydev(req_base, tb, ETHTOOL_A_PLCA_HEADER, info->extack); // check that the PHY device is available and connected if (IS_ERR_OR_NULL(phydev)) { @@ -152,7 +152,7 @@ ethnl_set_plca(struct ethnl_req_info *req_info, struct genl_info *info) bool mod = false; int ret;
- phydev = ethnl_req_get_phydev(req_info, tb[ETHTOOL_A_PLCA_HEADER], + phydev = ethnl_req_get_phydev(req_info, tb, ETHTOOL_A_PLCA_HEADER, info->extack); // check that the PHY device is available and connected if (IS_ERR_OR_NULL(phydev)) @@ -211,7 +211,7 @@ static int plca_get_status_prepare_data(const struct ethnl_req_info *req_base, struct phy_device *phydev; int ret;
- phydev = ethnl_req_get_phydev(req_base, tb[ETHTOOL_A_PLCA_HEADER], + phydev = ethnl_req_get_phydev(req_base, tb, ETHTOOL_A_PLCA_HEADER, info->extack); // check that the PHY device is available and connected if (IS_ERR_OR_NULL(phydev)) { diff --git a/net/ethtool/pse-pd.c b/net/ethtool/pse-pd.c index a0705edca22a1..71843de832cca 100644 --- a/net/ethtool/pse-pd.c +++ b/net/ethtool/pse-pd.c @@ -64,7 +64,7 @@ static int pse_prepare_data(const struct ethnl_req_info *req_base, if (ret < 0) return ret;
- phydev = ethnl_req_get_phydev(req_base, tb[ETHTOOL_A_PSE_HEADER], + phydev = ethnl_req_get_phydev(req_base, tb, ETHTOOL_A_PSE_HEADER, info->extack); if (IS_ERR(phydev)) return -ENODEV; @@ -261,7 +261,7 @@ ethnl_set_pse(struct ethnl_req_info *req_info, struct genl_info *info) struct phy_device *phydev; int ret;
- phydev = ethnl_req_get_phydev(req_info, tb[ETHTOOL_A_PSE_HEADER], + phydev = ethnl_req_get_phydev(req_info, tb, ETHTOOL_A_PSE_HEADER, info->extack); ret = ethnl_set_pse_validate(phydev, info); if (ret) diff --git a/net/ethtool/stats.c b/net/ethtool/stats.c index f4d822c225db6..273ae4ff343fe 100644 --- a/net/ethtool/stats.c +++ b/net/ethtool/stats.c @@ -128,7 +128,7 @@ static int stats_prepare_data(const struct ethnl_req_info *req_base, struct phy_device *phydev; int ret;
- phydev = ethnl_req_get_phydev(req_base, tb[ETHTOOL_A_STATS_HEADER], + phydev = ethnl_req_get_phydev(req_base, tb, ETHTOOL_A_STATS_HEADER, info->extack); if (IS_ERR(phydev)) return PTR_ERR(phydev); diff --git a/net/ethtool/strset.c b/net/ethtool/strset.c index b3382b3cf325c..b9400d18f01d5 100644 --- a/net/ethtool/strset.c +++ b/net/ethtool/strset.c @@ -299,7 +299,7 @@ static int strset_prepare_data(const struct ethnl_req_info *req_base, return 0; }
- phydev = ethnl_req_get_phydev(req_base, tb[ETHTOOL_A_HEADER_FLAGS], + phydev = ethnl_req_get_phydev(req_base, tb, ETHTOOL_A_HEADER_FLAGS, info->extack);
/* phydev can be NULL, check for errors only */
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Oscar Maes oscmaes92@gmail.com
[ Upstream commit b33a534610067ade2bdaf2052900aaad99701353 ]
Currently, VLAN devices can be created on top of non-ethernet devices.
Besides the fact that it doesn't make much sense, this also causes a bug which leaks the address of a kernel function to usermode.
When creating a VLAN device, we initialize GARP (garp_init_applicant) and MRP (mrp_init_applicant) for the underlying device.
As part of the initialization process, we add the multicast address of each applicant to the underlying device, by calling dev_mc_add.
__dev_mc_add uses dev->addr_len to determine the length of the new multicast address.
This causes an out-of-bounds read if dev->addr_len is greater than 6, since the multicast addresses provided by GARP and MRP are only 6 bytes long.
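As an illustrative aside (userspace sketch with made-up values, not the kernel code): GARP/MRP provide a 6-byte address, but the copy length comes from dev->addr_len, which is 16 for an ip6gre device, so the extra 10 bytes are taken from whatever happens to follow the address in memory:

#include <stdio.h>
#include <string.h>

#define GARP_ADDR_LEN	6
#define GRE6_ADDR_LEN	16	/* dev->addr_len of an ip6gre device */

int main(void)
{
	/* The 6 meaningful bytes plus the adjacent memory that follows
	 * them; in the real bug the adjacent bytes are kernel memory
	 * (e.g. part of a function pointer). */
	unsigned char source[GRE6_ADDR_LEN] = {
		0x01, 0x80, 0xc2, 0x00, 0x00, 0x21,	/* GARP address */
		0xde, 0xad, 0xbe, 0xef, 0xde, 0xad,	/* "adjacent" data */
		0xbe, 0xef, 0xde, 0xad,
	};
	unsigned char stored[GRE6_ADDR_LEN];

	/* What the mc-list code effectively does: copy addr_len bytes,
	 * not the 6 bytes the caller actually provided. */
	memcpy(stored, source, GRE6_ADDR_LEN);

	for (int i = 0; i < GRE6_ADDR_LEN; i++)
		printf("%02x%s", stored[i], i == GRE6_ADDR_LEN - 1 ? "\n" : ":");
	return 0;
}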
This behaviour can be reproduced using the following commands:
ip tunnel add gretest mode ip6gre local ::1 remote ::2 dev lo ip l set up dev gretest ip link add link gretest name vlantest type vlan id 100
Then, the following command will display the address of garp_pdu_rcv:
ip maddr show | grep 01:80:c2:00:00:21
Fix the bug by enforcing the type of the underlying device during VLAN device initialization.
Fixes: 22bedad3ce11 ("net: convert multicast list to list_head") Reported-by: syzbot+91161fe81857b396c8a0@syzkaller.appspotmail.com Closes: https://lore.kernel.org/netdev/000000000000ca9a81061a01ec20@google.com/ Signed-off-by: Oscar Maes oscmaes92@gmail.com Reviewed-by: Jiri Pirko jiri@nvidia.com Link: https://patch.msgid.link/20250303155619.8918-1-oscmaes92@gmail.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/8021q/vlan.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/net/8021q/vlan.c b/net/8021q/vlan.c index e45187b882206..41be38264493d 100644 --- a/net/8021q/vlan.c +++ b/net/8021q/vlan.c @@ -131,7 +131,8 @@ int vlan_check_real_dev(struct net_device *real_dev, { const char *name = real_dev->name;
- if (real_dev->features & NETIF_F_VLAN_CHALLENGED) { + if (real_dev->features & NETIF_F_VLAN_CHALLENGED || + real_dev->type != ARPHRD_ETHER) { pr_info("VLANs not supported on %s\n", name); NL_SET_ERR_MSG_MOD(extack, "VLANs not supported on device"); return -EOPNOTSUPP;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jarkko Sakkinen jarkko@kernel.org
[ Upstream commit 0d3e0dfd68fb9e6b0ec865be9f3377cc3ff55733 ]
The total size calculated for the EPC can overflow u64 given the added page for the SECS. Further, the total size calculated for shmem can overflow even when the EPC size stays within the limits of u64, given that it adds the extra space for the 128-byte PCMD structures (one for each page).
Address this by pre-evaluating the micro-architectural requirement of SGX: the address space size must be a power of two. This is eventually checked by ECREATE, but the pre-check has the additional benefit of making sure that there is some space for additional data.
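As an illustrative aside (userspace sketch, assuming the enclave size is formed as size plus one SECS page and the shmem backing adds roughly 1/32 of that for PCMD): the wraparound and the effect of the power-of-two pre-check can be seen with plain u64 arithmetic:

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096ULL

/* Same test the kernel's is_power_of_2() performs. */
static bool is_power_of_2(uint64_t n)
{
	return n != 0 && (n & (n - 1)) == 0;
}

int main(void)
{
	/* A hostile, non-power-of-two enclave size close to U64_MAX. */
	uint64_t secs_size = UINT64_MAX - PAGE_SIZE + 1;

	uint64_t encl_size = secs_size + PAGE_SIZE;	/* wraps to 0 */
	uint64_t backing = encl_size + (encl_size >> 5);	/* PCMD space */

	printf("encl_size = %" PRIu64 " (wrapped)\n", encl_size);
	printf("backing   = %" PRIu64 "\n", backing);

	/* The pre-check rejects this before any sums are formed: a
	 * power of two fits in at most 2^63, so adding the SECS page
	 * and the 1/32 PCMD overhead can no longer overflow u64. */
	printf("is_power_of_2(secs_size) = %d\n", is_power_of_2(secs_size));
	return 0;
}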
Fixes: 888d24911787 ("x86/sgx: Add SGX_IOC_ENCLAVE_CREATE") Reported-by: Dan Carpenter dan.carpenter@linaro.org Signed-off-by: Jarkko Sakkinen jarkko@kernel.org Signed-off-by: Ingo Molnar mingo@kernel.org Acked-by: Dave Hansen dave.hansen@intel.com Cc: Peter Zijlstra peterz@infradead.org Cc: "H. Peter Anvin" hpa@zytor.com Link: https://lore.kernel.org/r/20250305050006.43896-1-jarkko@kernel.org
Closes: https://lore.kernel.org/linux-sgx/c87e01a0-e7dd-4749-a348-0980d3444f04@stanl... Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/kernel/cpu/sgx/ioctl.c | 7 +++++++ 1 file changed, 7 insertions(+)
diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c index b65ab214bdf57..776a20172867e 100644 --- a/arch/x86/kernel/cpu/sgx/ioctl.c +++ b/arch/x86/kernel/cpu/sgx/ioctl.c @@ -64,6 +64,13 @@ static int sgx_encl_create(struct sgx_encl *encl, struct sgx_secs *secs) struct file *backing; long ret;
+ /* + * ECREATE would detect this too, but checking here also ensures + * that the 'encl_size' calculations below can never overflow. + */ + if (!is_power_of_2(secs->size)) + return -EINVAL; + va_page = sgx_encl_grow(encl, true); if (IS_ERR(va_page)) return PTR_ERR(va_page);
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Yuezhang Mo Yuezhang.Mo@sony.com
[ Upstream commit 6697f819a10b238ccf01998c3f203d65d8374696 ]
This commit fixes the condition for allocating a cluster to the parent directory, so that no new cluster is allocated when there are just enough empty directory entries at the end of the parent directory.
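As an illustrative aside (a simplified reading of the empty-slot search bound, not the driver code): the boundary case is an exact fit. With total_entries = 10, dentry = 7 and num_entries = 3, the new entries occupy slots 7..9, which still fit, but the old '<' comparison rejects this and forces a needless cluster allocation:

#include <stdio.h>

int main(void)
{
	int total_entries = 10, dentry = 7, num_entries = 3;

	/* Old bound: 7 + 3 < 10 is false, so the exact fit is missed. */
	printf("old: %s\n", dentry + num_entries < total_entries
	       ? "reuse existing slots" : "allocate a new cluster");
	/* Fixed bound: 7 + 3 <= 10 accepts the exact fit. */
	printf("new: %s\n", dentry + num_entries <= total_entries
	       ? "reuse existing slots" : "allocate a new cluster");
	return 0;
}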
Fixes: af02c72d0b62 ("exfat: convert exfat_find_empty_entry() to use dentry cache") Signed-off-by: Yuezhang Mo Yuezhang.Mo@sony.com Signed-off-by: Namjae Jeon linkinjeon@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- fs/exfat/namei.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/exfat/namei.c b/fs/exfat/namei.c index 337197ece5995..e47a5ddfc79b3 100644 --- a/fs/exfat/namei.c +++ b/fs/exfat/namei.c @@ -237,7 +237,7 @@ static int exfat_search_empty_slot(struct super_block *sb, dentry = 0; }
- while (dentry + num_entries < total_entries && + while (dentry + num_entries <= total_entries && clu.dir != EXFAT_EOF_CLUSTER) { i = dentry & (dentries_per_clu - 1);
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Namjae Jeon linkinjeon@kernel.org
[ Upstream commit 9da33619e0ca53627641bc97d1b93ec741299111 ]
The bitmap clear loop in __exfat_free_cluster() will take a long time if the data size of the file/dir entry is invalid. If the cluster bit in the bitmap is already clear, stop clearing the bitmap and break out of the loop.
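As an illustrative aside (userspace analogue with a plain boolean array instead of the driver's sector-backed bitmap): treating an already-clear bit as a corrupted chain and breaking out keeps a bogus cluster count from turning into a near-endless clearing loop:

#include <stdbool.h>
#include <stdio.h>

#define NCLUSTERS 64

static bool bitmap[NCLUSTERS];	/* one flag per cluster, for clarity */

int main(void)
{
	unsigned int clu, freed = 0;
	unsigned int bogus_count = 1000000;	/* from a corrupted dir entry */

	/* Only clusters 10..19 are really allocated. */
	for (clu = 10; clu < 20; clu++)
		bitmap[clu] = true;

	for (clu = 10; freed < bogus_count && clu < NCLUSTERS; clu++) {
		if (!bitmap[clu])
			break;	/* already clear: corrupted chain, stop */
		bitmap[clu] = false;
		freed++;
	}
	printf("freed %u clusters before bailing out\n", freed);
	return 0;
}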
Fixes: 31023864e67a ("exfat: add fat entry operations") Reported-by: Kun Hu huk23@m.fudan.edu.cn, Jiaji Qin jjtan24@m.fudan.edu.cn Reviewed-by: Sungjong Seo sj1557.seo@samsung.com Signed-off-by: Namjae Jeon linkinjeon@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- fs/exfat/balloc.c | 10 ++++++++-- fs/exfat/exfat_fs.h | 2 +- fs/exfat/fatent.c | 11 +++++++---- 3 files changed, 16 insertions(+), 7 deletions(-)
diff --git a/fs/exfat/balloc.c b/fs/exfat/balloc.c index ce9be95c9172f..9ff825f1502d5 100644 --- a/fs/exfat/balloc.c +++ b/fs/exfat/balloc.c @@ -141,7 +141,7 @@ int exfat_set_bitmap(struct inode *inode, unsigned int clu, bool sync) return 0; }
-void exfat_clear_bitmap(struct inode *inode, unsigned int clu, bool sync) +int exfat_clear_bitmap(struct inode *inode, unsigned int clu, bool sync) { int i, b; unsigned int ent_idx; @@ -150,13 +150,17 @@ void exfat_clear_bitmap(struct inode *inode, unsigned int clu, bool sync) struct exfat_mount_options *opts = &sbi->options;
if (!is_valid_cluster(sbi, clu)) - return; + return -EIO;
ent_idx = CLUSTER_TO_BITMAP_ENT(clu); i = BITMAP_OFFSET_SECTOR_INDEX(sb, ent_idx); b = BITMAP_OFFSET_BIT_IN_SECTOR(sb, ent_idx);
+ if (!test_bit_le(b, sbi->vol_amap[i]->b_data)) + return -EIO; + clear_bit_le(b, sbi->vol_amap[i]->b_data); + exfat_update_bh(sbi->vol_amap[i], sync);
if (opts->discard) { @@ -171,6 +175,8 @@ void exfat_clear_bitmap(struct inode *inode, unsigned int clu, bool sync) opts->discard = 0; } } + + return 0; }
/* diff --git a/fs/exfat/exfat_fs.h b/fs/exfat/exfat_fs.h index 3cdc1de362a94..d2ba8e2d0c398 100644 --- a/fs/exfat/exfat_fs.h +++ b/fs/exfat/exfat_fs.h @@ -452,7 +452,7 @@ int exfat_count_num_clusters(struct super_block *sb, int exfat_load_bitmap(struct super_block *sb); void exfat_free_bitmap(struct exfat_sb_info *sbi); int exfat_set_bitmap(struct inode *inode, unsigned int clu, bool sync); -void exfat_clear_bitmap(struct inode *inode, unsigned int clu, bool sync); +int exfat_clear_bitmap(struct inode *inode, unsigned int clu, bool sync); unsigned int exfat_find_free_bitmap(struct super_block *sb, unsigned int clu); int exfat_count_used_clusters(struct super_block *sb, unsigned int *ret_count); int exfat_trim_fs(struct inode *inode, struct fstrim_range *range); diff --git a/fs/exfat/fatent.c b/fs/exfat/fatent.c index 9e5492ac409b0..6f3651c6ca91e 100644 --- a/fs/exfat/fatent.c +++ b/fs/exfat/fatent.c @@ -175,6 +175,7 @@ static int __exfat_free_cluster(struct inode *inode, struct exfat_chain *p_chain BITMAP_OFFSET_SECTOR_INDEX(sb, CLUSTER_TO_BITMAP_ENT(clu));
if (p_chain->flags == ALLOC_NO_FAT_CHAIN) { + int err; unsigned int last_cluster = p_chain->dir + p_chain->size - 1; do { bool sync = false; @@ -189,7 +190,9 @@ static int __exfat_free_cluster(struct inode *inode, struct exfat_chain *p_chain cur_cmap_i = next_cmap_i; }
- exfat_clear_bitmap(inode, clu, (sync && IS_DIRSYNC(inode))); + err = exfat_clear_bitmap(inode, clu, (sync && IS_DIRSYNC(inode))); + if (err) + break; clu++; num_clusters++; } while (num_clusters < p_chain->size); @@ -210,12 +213,13 @@ static int __exfat_free_cluster(struct inode *inode, struct exfat_chain *p_chain cur_cmap_i = next_cmap_i; }
- exfat_clear_bitmap(inode, clu, (sync && IS_DIRSYNC(inode))); + if (exfat_clear_bitmap(inode, clu, (sync && IS_DIRSYNC(inode)))) + break; clu = n_clu; num_clusters++;
if (err) - goto dec_used_clus; + break;
if (num_clusters >= sbi->num_clusters - EXFAT_FIRST_CLUSTER) { /* @@ -229,7 +233,6 @@ static int __exfat_free_cluster(struct inode *inode, struct exfat_chain *p_chain } while (clu != EXFAT_EOF_CLUSTER); }
-dec_used_clus: sbi->used_clusters -= num_clusters; return 0; }
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Sandeen sandeen@redhat.com
[ Upstream commit fda94a9919fd632033979ad7765a99ae3cab9289 ]
When generic_write_checks() returns zero, it means that iov_iter_count() is zero, and there is no work to do.
Simply return success like all other filesystems do, rather than proceeding down the write path, which today yields an -EFAULT in generic_perform_write() via the (fault_in_iov_iter_readable(i, bytes) == bytes) check when bytes == 0.
Fixes: 11a347fb6cef ("exfat: change to get file size from DataLength") Reported-by: Noah kernel-org-10@maxgrass.eu Signed-off-by: Eric Sandeen sandeen@redhat.com Reviewed-by: Yuezhang Mo Yuezhang.Mo@sony.com Signed-off-by: Namjae Jeon linkinjeon@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- fs/exfat/file.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/exfat/file.c b/fs/exfat/file.c index 05b51e7217838..807349d8ea050 100644 --- a/fs/exfat/file.c +++ b/fs/exfat/file.c @@ -587,7 +587,7 @@ static ssize_t exfat_file_write_iter(struct kiocb *iocb, struct iov_iter *iter) valid_size = ei->valid_size;
ret = generic_write_checks(iocb, iter); - if (ret < 0) + if (ret <= 0) goto unlock;
if (iocb->ki_flags & IOCB_DIRECT) {
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jason Xing kerneljasonxing@gmail.com
[ Upstream commit 3c9231ea6497dfc50ac0ef69fff484da27d0df66 ]
When I read through the TSO code, I found out that we probably miss initializing the tx_flags of the last segment when TSO is turned off, which means that at the subsequent points no timestamp (for this last one) will be generated. There are three flags to be handled in this patch:
1. SKBTX_HW_TSTAMP
2. SKBTX_BPF
3. SKBTX_SCHED_TSTAMP
Note that SKBTX_BPF[1] was added in 6.14.0-rc2 by commit 6b98ec7e882af ("bpf: Add BPF_SOCK_OPS_TSTAMP_SCHED_CB callback") and only belongs to net-next branch material for now. The common issue of the above three flags can be fixed by this single patch.
This patch initializes the tx_flags using the SKBTX_ANY_TSTAMP mask, like UDP GSO does, so that the newly segmented last skb inherits the tx_flags and the requested timestamp is generated in each relevant layer; otherwise that last skb has a tx_flags value of zero, which leads to no timestamp at all.
Fixes: 4ed2d765dfacc ("net-timestamp: TCP timestamping") Signed-off-by: Jason Xing kerneljasonxing@gmail.com Reviewed-by: Willem de Bruijn willemb@google.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/tcp_offload.c | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/net/ipv4/tcp_offload.c b/net/ipv4/tcp_offload.c index 2308665b51c53..2dfac79dc78b8 100644 --- a/net/ipv4/tcp_offload.c +++ b/net/ipv4/tcp_offload.c @@ -13,12 +13,15 @@ #include <net/tcp.h> #include <net/protocol.h>
-static void tcp_gso_tstamp(struct sk_buff *skb, unsigned int ts_seq, +static void tcp_gso_tstamp(struct sk_buff *skb, struct sk_buff *gso_skb, unsigned int seq, unsigned int mss) { + u32 flags = skb_shinfo(gso_skb)->tx_flags & SKBTX_ANY_TSTAMP; + u32 ts_seq = skb_shinfo(gso_skb)->tskey; + while (skb) { if (before(ts_seq, seq + mss)) { - skb_shinfo(skb)->tx_flags |= SKBTX_SW_TSTAMP; + skb_shinfo(skb)->tx_flags |= flags; skb_shinfo(skb)->tskey = ts_seq; return; } @@ -193,8 +196,8 @@ struct sk_buff *tcp_gso_segment(struct sk_buff *skb, th = tcp_hdr(skb); seq = ntohl(th->seq);
- if (unlikely(skb_shinfo(gso_skb)->tx_flags & SKBTX_SW_TSTAMP)) - tcp_gso_tstamp(segs, skb_shinfo(gso_skb)->tskey, seq, mss); + if (unlikely(skb_shinfo(gso_skb)->tx_flags & SKBTX_ANY_TSTAMP)) + tcp_gso_tstamp(segs, gso_skb, seq, mss);
newcheck = ~csum_fold(csum_add(csum_unfold(th->check), delta));
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Uday Shankar ushankar@purestorage.com
[ Upstream commit 5ac60242b0173be83709603ebaf27a473f16c4e4 ]
The parameters set by the set_params call are only applied to the block device in the start_dev call. So if a device has already been started, a subsequently issued set_params on that device will not have the desired effect, and should return an error. There is an existing check for this - set_params fails on devices in the LIVE state. But this check is not sufficient to cover the recovery case. In this case, the device will be in the QUIESCED or FAIL_IO states, so set_params will succeed. But this success is misleading, because the parameters will not be applied, since the device has already been started (by a previous ublk server). The bit UB_STATE_USED is set on completion of the start_dev; use it to detect and fail set_params commands which arrive too late to be applied (after start_dev).
Signed-off-by: Uday Shankar ushankar@purestorage.com Fixes: 0aa73170eba5 ("ublk_drv: add SET_PARAMS/GET_PARAMS control command") Reviewed-by: Ming Lei ming.lei@redhat.com Link: https://lore.kernel.org/r/20250304-set_params-v1-1-17b5e0887606@purestorage.... Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/block/ublk_drv.c | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c index 458ac54e7b201..c7d728d686e5a 100644 --- a/drivers/block/ublk_drv.c +++ b/drivers/block/ublk_drv.c @@ -2665,9 +2665,12 @@ static int ublk_ctrl_set_params(struct ublk_device *ub, if (ph.len > sizeof(struct ublk_params)) ph.len = sizeof(struct ublk_params);
- /* parameters can only be changed when device isn't live */ mutex_lock(&ub->mutex); - if (ub->dev_info.state == UBLK_S_DEV_LIVE) { + if (test_bit(UB_STATE_USED, &ub->state)) { + /* + * Parameters can only be changed when device hasn't + * been started yet + */ ret = -EACCES; } else if (copy_from_user(&ub->params, argp, ph.len)) { ret = -EFAULT;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Zecheng Li zecheng@google.com
[ Upstream commit 3b4035ddbfc8e4521f85569998a7569668cccf51 ]
child_cfs_rq_on_list attempts to convert a 'prev' pointer to a cfs_rq. This 'prev' pointer can originate from struct rq's leaf_cfs_rq_list, making the conversion invalid and potentially leading to memory corruption. Depending on the relative positions of leaf_cfs_rq_list and the task group (tg) pointer within the struct, this can cause a memory fault or access garbage data.
The issue arises in list_add_leaf_cfs_rq, where both cfs_rq->leaf_cfs_rq_list and rq->leaf_cfs_rq_list are added to the same leaf list. Also, rq->tmp_alone_branch can be set to rq->leaf_cfs_rq_list.
This adds a check `if (prev == &rq->leaf_cfs_rq_list)` after the main conditional in child_cfs_rq_on_list. This ensures that the container_of operation will convert a correct cfs_rq struct.
This check is sufficient because only cfs_rqs on the same CPU are added to the list, so verifying the 'prev' pointer against the current rq's list head is enough.
This fixes a potential memory corruption issue that, due to the current struct layout, might not manifest as a crash, but could lead to unpredictable behavior when the layout changes.
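As an illustrative aside (userspace sketch with hypothetical struct names): container_of() is plain pointer arithmetic, so applying it to the list anchor embedded in the rq, rather than to a node embedded in a cfs_rq, produces a pointer into the wrong structure; comparing against the anchor first avoids that:

#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct list_head { struct list_head *next, *prev; };

struct demo_cfs_rq {
	int tg_id;
	struct list_head leaf_cfs_rq_list;
};

struct demo_rq {
	struct list_head leaf_cfs_rq_list;	/* the list anchor */
	int other_field;
};

int main(void)
{
	struct demo_rq rq = { .other_field = 42 };
	struct demo_cfs_rq cfs = { .tg_id = 7 };
	struct list_head *prev;

	/* Legitimate case: prev points at a node embedded in a cfs_rq. */
	prev = &cfs.leaf_cfs_rq_list;
	printf("good: tg_id = %d\n",
	       container_of(prev, struct demo_cfs_rq, leaf_cfs_rq_list)->tg_id);

	/* Buggy case: prev is the anchor inside the rq, so the conversion
	 * would point at garbage; the fix bails out before converting. */
	prev = &rq.leaf_cfs_rq_list;
	if (prev == &rq.leaf_cfs_rq_list)
		printf("bad: prev is the rq anchor, skip container_of()\n");

	return 0;
}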
Fixes: fdaba61ef8a2 ("sched/fair: Ensure that the CFS parent is added after unthrottling") Signed-off-by: Zecheng Li zecheng@google.com Reviewed-and-tested-by: K Prateek Nayak kprateek.nayak@amd.com Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Reviewed-by: Vincent Guittot vincent.guittot@linaro.org Link: https://lore.kernel.org/r/20250304214031.2882646-1-zecheng@google.com Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/sched/fair.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index ddc096d6b0c20..58ba14ed8fbcb 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -4155,15 +4155,17 @@ static inline bool child_cfs_rq_on_list(struct cfs_rq *cfs_rq) { struct cfs_rq *prev_cfs_rq; struct list_head *prev; + struct rq *rq = rq_of(cfs_rq);
if (cfs_rq->on_list) { prev = cfs_rq->leaf_cfs_rq_list.prev; } else { - struct rq *rq = rq_of(cfs_rq); - prev = rq->tmp_alone_branch; }
+ if (prev == &rq->leaf_cfs_rq_list) + return false; + prev_cfs_rq = container_of(prev, struct cfs_rq, leaf_cfs_rq_list);
return (prev_cfs_rq->tg->parent == cfs_rq->tg);
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dan Carpenter dan.carpenter@linaro.org
[ Upstream commit 528361c49962708a60f51a1afafeb00987cebedf ]
The kernel_recvmsg() function returns an int which can be either a negative error code or the number of bytes received. The problem is that in the condition:
if (ret < sizeof(*icresp)) {
the int is promoted to unsigned long for the comparison, so negative values are treated as high positive values, i.e. as success, when they should be treated as failure. Handle invalid positive returns separately from negative error codes to avoid this problem.
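The promotion pitfall is easy to reproduce in plain userspace C; the struct below is only a stand-in for the real icresp PDU, kept for its size:

#include <errno.h>
#include <stdio.h>

struct icresp { char bytes[128]; };	/* stand-in: only the size matters */

int main(void)
{
	struct icresp *icresp = NULL;
	int ret = -ECONNRESET;		/* a kernel_recvmsg()-style error return */

	/* sizeof() is unsigned, so ret is promoted for the comparison and the
	 * negative value becomes a huge positive one: the "too short" check is
	 * skipped and the error is silently treated as success. */
	if (ret < sizeof(*icresp))
		printf("old check: short read detected\n");
	else
		printf("old check: treated as success\n");	/* this prints */

	/* Fixed ordering: map short-but-positive reads to an error first, then
	 * handle all negative values; the ret >= 0 guard makes the unsigned
	 * comparison harmless. */
	if (ret >= 0 && ret < sizeof(*icresp))
		ret = -ECONNRESET;
	if (ret < 0)
		printf("new check: error %d\n", ret);		/* this prints */
	return 0;
}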
Fixes: 578539e09690 ("nvme-tcp: fix connect failure on receiving partial ICResp PDU") Signed-off-by: Dan Carpenter dan.carpenter@linaro.org Reviewed-by: Caleb Sander Mateos csander@purestorage.com Reviewed-by: Sagi Grimberg sagi@grimberg.me Reviewed-by: Chaitanya Kulkarni kch@nvidia.com Signed-off-by: Keith Busch kbusch@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/nvme/host/tcp.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c index 0bcc9bf57d1d0..3ee35f94660f9 100644 --- a/drivers/nvme/host/tcp.c +++ b/drivers/nvme/host/tcp.c @@ -1521,11 +1521,11 @@ static int nvme_tcp_init_connection(struct nvme_tcp_queue *queue) msg.msg_flags = MSG_WAITALL; ret = kernel_recvmsg(queue->sock, &msg, &iov, 1, iov.iov_len, msg.msg_flags); - if (ret < sizeof(*icresp)) { + if (ret >= 0 && ret < sizeof(*icresp)) + ret = -ECONNRESET; + if (ret < 0) { pr_warn("queue %d: failed to receive icresp, error %d\n", nvme_tcp_queue_id(queue), ret); - if (ret >= 0) - ret = -ECONNRESET; goto free_icresp; } ret = -ENOTCONN;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Lorenzo Bianconi lorenzo@kernel.org
[ Upstream commit ccc2f5a436fbb0ae1fb598932a9b8e48423c1959 ]
On MMIO devices (e.g. MT7988 or EN7581), unicast traffic received on a lanX port is flooded to all other user ports if the DSA switch is configured without VLAN support, since PORT_MATRIX in the PCR registers contains all user ports. Similar to MDIO devices (e.g. MT7530 and MT7531), fix the issue by defining default VLAN ID 0 for MT7530 MMIO devices.
Fixes: 110c18bfed414 ("net: dsa: mt7530: introduce driver for MT7988 built-in switch") Signed-off-by: Lorenzo Bianconi lorenzo@kernel.org Reviewed-by: Chester A. Unal chester.a.unal@arinc9.com Link: https://patch.msgid.link/20250304-mt7988-flooding-fix-v1-1-905523ae83e9@kern... Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/dsa/mt7530.c | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c index d84ee1b419a61..abc979fbb45d1 100644 --- a/drivers/net/dsa/mt7530.c +++ b/drivers/net/dsa/mt7530.c @@ -2590,7 +2590,8 @@ mt7531_setup_common(struct dsa_switch *ds) if (ret < 0) return ret;
- return 0; + /* Setup VLAN ID 0 for VLAN-unaware bridges */ + return mt7530_setup_vlan0(priv); }
static int @@ -2686,11 +2687,6 @@ mt7531_setup(struct dsa_switch *ds) if (ret) return ret;
- /* Setup VLAN ID 0 for VLAN-unaware bridges */ - ret = mt7530_setup_vlan0(priv); - if (ret) - return ret; - ds->assisted_learning_on_cpu_port = true; ds->mtu_enforcement_ingress = true;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Matt Johnston matt@codeconstruct.com.au
[ Upstream commit cf7ee25e70c6edfac4553d6b671e8b19db1d9573 ]
daddr can be NULL if there is no neighbour table entry present, in that case the tx packet should be dropped.
saddr will usually be set by MCTP core, but check for NULL in case a packet is transmitted by a different protocol.
Signed-off-by: Matt Johnston matt@codeconstruct.com.au Fixes: c8755b29b58e ("mctp i3c: MCTP I3C driver") Link: https://patch.msgid.link/20250304-mctp-i3c-null-v1-1-4416bbd56540@codeconstr... Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/mctp/mctp-i3c.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/drivers/net/mctp/mctp-i3c.c b/drivers/net/mctp/mctp-i3c.c index ee9d562f0817c..a2b15cddf46e6 100644 --- a/drivers/net/mctp/mctp-i3c.c +++ b/drivers/net/mctp/mctp-i3c.c @@ -507,6 +507,9 @@ static int mctp_i3c_header_create(struct sk_buff *skb, struct net_device *dev, { struct mctp_i3c_internal_hdr *ihdr;
+ if (!daddr || !saddr) + return -EINVAL; + skb_push(skb, sizeof(struct mctp_i3c_internal_hdr)); skb_reset_mac_header(skb); ihdr = (void *)skb_mac_header(skb);
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Justin Iurman justin.iurman@uliege.be
[ Upstream commit 0e7633d7b95b67f1758aea19f8e85621c5f506a3 ]
This patch follows commit 92191dd10730 ("net: ipv6: fix dst ref loops in rpl, seg6 and ioam6 lwtunnels"); on second thought, the same fix is also needed for ila (even though the config that triggered the issue was pathological, we still don't want that to happen).
Fixes: 79ff2fc31e0f ("ila: Cache a route to translated address") Cc: Tom Herbert tom@herbertland.com Signed-off-by: Justin Iurman justin.iurman@uliege.be Link: https://patch.msgid.link/20250304181039.35951-1-justin.iurman@uliege.be Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv6/ila/ila_lwt.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/net/ipv6/ila/ila_lwt.c b/net/ipv6/ila/ila_lwt.c index ff7e734e335b0..ac4bcc623603a 100644 --- a/net/ipv6/ila/ila_lwt.c +++ b/net/ipv6/ila/ila_lwt.c @@ -88,7 +88,8 @@ static int ila_output(struct net *net, struct sock *sk, struct sk_buff *skb) goto drop; }
- if (ilwt->connected) { + /* cache only if we don't create a dst reference loop */ + if (ilwt->connected && orig_dst->lwtstate != dst->lwtstate) { local_bh_disable(); dst_cache_set_ip6(&ilwt->dst_cache, dst, &fl6.saddr); local_bh_enable();
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Justin Iurman justin.iurman@uliege.be
[ Upstream commit 5da15a9c11c1c47ef573e6805b60a7d8a1687a2a ]
Add missing skb_dst_drop() to drop reference to the old dst before adding the new dst to the skb.
Fixes: 79ff2fc31e0f ("ila: Cache a route to translated address") Cc: Tom Herbert tom@herbertland.com Signed-off-by: Justin Iurman justin.iurman@uliege.be Link: https://patch.msgid.link/20250305081655.19032-1-justin.iurman@uliege.be Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv6/ila/ila_lwt.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/net/ipv6/ila/ila_lwt.c b/net/ipv6/ila/ila_lwt.c index ac4bcc623603a..7d574f5132e2f 100644 --- a/net/ipv6/ila/ila_lwt.c +++ b/net/ipv6/ila/ila_lwt.c @@ -96,6 +96,7 @@ static int ila_output(struct net *net, struct sock *sk, struct sk_buff *skb) } }
+ skb_dst_drop(skb); skb_dst_set(skb, dst); return dst_output(net, sk, skb);
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Fabrizio Castro fabrizio.castro.jz@renesas.com
[ Upstream commit 391b41f983bf7ff853de44704d8e14e7cc648a9b ]
of_parse_phandle_with_fixed_args() requires its caller to call into of_node_put() on the node pointer from the output structure, but such a call is currently missing.
Call into of_node_put() to rectify that.
Fixes: 159f8a0209af ("gpio-rcar: Add DT support") Signed-off-by: Fabrizio Castro fabrizio.castro.jz@renesas.com Reviewed-by: Lad Prabhakar prabhakar.mahadev-lad.rj@bp.renesas.com Reviewed-by: Geert Uytterhoeven geert+renesas@glider.be Link: https://lore.kernel.org/r/20250305163753.34913-2-fabrizio.castro.jz@renesas.... Signed-off-by: Bartosz Golaszewski bartosz.golaszewski@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpio/gpio-rcar.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/drivers/gpio/gpio-rcar.c b/drivers/gpio/gpio-rcar.c index 436ff763341ae..6641ed5cd8e1c 100644 --- a/drivers/gpio/gpio-rcar.c +++ b/drivers/gpio/gpio-rcar.c @@ -468,7 +468,12 @@ static int gpio_rcar_parse_dt(struct gpio_rcar_priv *p, unsigned int *npins) p->info = *info;
ret = of_parse_phandle_with_fixed_args(np, "gpio-ranges", 3, 0, &args); - *npins = ret == 0 ? args.args[2] : RCAR_MAX_GPIO_PER_BANK; + if (ret) { + *npins = RCAR_MAX_GPIO_PER_BANK; + } else { + *npins = args.args[2]; + of_node_put(args.np); + }
if (*npins == 0 || *npins > RCAR_MAX_GPIO_PER_BANK) { dev_warn(p->dev, "Invalid number of gpio lines %u, using %u\n",
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Christian Heusel christian@heusel.eu
commit 2397d61ee45cddb8f3bd3a3a9840ef0f0b5aa843 upstream.
This reverts commit 235b630eda072d7e7b102ab346d6b8a2c028a772.
This commit was found to be responsible for issues with SD card recognition: users had to re-insert their cards into the readers and wait for a while. Since for some people the SD card was involved in the boot process, it also caused boot failures.
Cc: stable@vger.kernel.org Link: https://bbs.archlinux.org/viewtopic.php?id=303321 Fixes: 235b630eda07 ("drivers/card_reader/rtsx_usb: Restore interrupt based detection") Reported-by: qf quintafeira@tutanota.com Closes: https://lore.kernel.org/all/1de87dfa-1e81-45b7-8dcb-ad86c21d5352@heusel.eu Signed-off-by: Christian Heusel christian@heusel.eu Link: https://lore.kernel.org/r/20250224-revert-sdcard-patch-v1-1-d1a457fbb796@heu... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/misc/cardreader/rtsx_usb.c | 15 --------------- 1 file changed, 15 deletions(-)
--- a/drivers/misc/cardreader/rtsx_usb.c +++ b/drivers/misc/cardreader/rtsx_usb.c @@ -286,7 +286,6 @@ static int rtsx_usb_get_status_with_bulk int rtsx_usb_get_card_status(struct rtsx_ucr *ucr, u16 *status) { int ret; - u8 interrupt_val = 0; u16 *buf;
if (!status) @@ -309,20 +308,6 @@ int rtsx_usb_get_card_status(struct rtsx ret = rtsx_usb_get_status_with_bulk(ucr, status); }
- rtsx_usb_read_register(ucr, CARD_INT_PEND, &interrupt_val); - /* Cross check presence with interrupts */ - if (*status & XD_CD) - if (!(interrupt_val & XD_INT)) - *status &= ~XD_CD; - - if (*status & SD_CD) - if (!(interrupt_val & SD_INT)) - *status &= ~SD_CD; - - if (*status & MS_CD) - if (!(interrupt_val & MS_INT)) - *status &= ~MS_CD; - /* usb_control_msg may return positive when success */ if (ret < 0) return ret;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Claudiu Beznea claudiu.beznea.uj@bp.renesas.com
commit b5ea08aa883da05106fcc683d12489a4292d1122 upstream.
Clocks acquired with of_clk_get() need to be freed with clk_put(). Call clk_put() on priv->clks[0] on error path.
Fixes: 3df0e240caba ("usb: renesas_usbhs: Add multiple clocks management") Cc: stable stable@kernel.org Reviewed-by: Yoshihiro Shimoda yoshihiro.shimoda.uh@renesas.com Tested-by: Yoshihiro Shimoda yoshihiro.shimoda.uh@renesas.com Signed-off-by: Claudiu Beznea claudiu.beznea.uj@bp.renesas.com Link: https://lore.kernel.org/r/20250225110248.870417-2-claudiu.beznea.uj@bp.renes... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/renesas_usbhs/common.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
--- a/drivers/usb/renesas_usbhs/common.c +++ b/drivers/usb/renesas_usbhs/common.c @@ -312,8 +312,10 @@ static int usbhsc_clk_get(struct device priv->clks[1] = of_clk_get(dev_of_node(dev), 1); if (PTR_ERR(priv->clks[1]) == -ENOENT) priv->clks[1] = NULL; - else if (IS_ERR(priv->clks[1])) + else if (IS_ERR(priv->clks[1])) { + clk_put(priv->clks[0]); return PTR_ERR(priv->clks[1]); + }
return 0; }
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Marc Zyngier maz@kernel.org
commit 487cfd4a8e3dc42d34a759017978a4edaf85fce0 upstream.
When adding support for USB3-over-USB4 tunnelling detection, a check for an Intel-specific capability was added. This capability, which goes by ID 206, is used without any check that we are actually dealing with an Intel host.
As it turns out, the Cadence XHCI controller *also* exposes an extended capability numbered 206 (for unknown purposes), but of course doesn't have the Intel-specific registers that the tunnelling code is trying to access. Fun follows.
The core of the problem is that the tunnelling code blindly uses vendor-specific capabilities without any check (the Intel-provided documentation I have at hand indicates that 192-255 are indeed vendor-specific).
Restrict the detection code to Intel HW for real, preventing any further explosion on my (non-Intel) HW.
Cc: stable stable@kernel.org Fixes: 948ce83fbb7df ("xhci: Add USB4 tunnel detection for USB3 devices on Intel hosts") Signed-off-by: Marc Zyngier maz@kernel.org Acked-by: Mathias Nyman mathias.nyman@linux.intel.com Link: https://lore.kernel.org/r/20250227194529.2288718-1-maz@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/host/xhci-hub.c | 8 ++++++++ 1 file changed, 8 insertions(+)
diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c index 9693464c0520..69c278b64084 100644 --- a/drivers/usb/host/xhci-hub.c +++ b/drivers/usb/host/xhci-hub.c @@ -12,6 +12,7 @@ #include <linux/slab.h> #include <linux/unaligned.h> #include <linux/bitfield.h> +#include <linux/pci.h>
#include "xhci.h" #include "xhci-trace.h" @@ -770,9 +771,16 @@ static int xhci_exit_test_mode(struct xhci_hcd *xhci) enum usb_link_tunnel_mode xhci_port_is_tunneled(struct xhci_hcd *xhci, struct xhci_port *port) { + struct usb_hcd *hcd; void __iomem *base; u32 offset;
+ /* Don't try and probe this capability for non-Intel hosts */ + hcd = xhci_to_hcd(xhci); + if (!dev_is_pci(hcd->self.controller) || + to_pci_dev(hcd->self.controller)->vendor != PCI_VENDOR_ID_INTEL) + return USB_LINK_UNKNOWN; + base = &xhci->cap_regs->hc_capbase; offset = xhci_find_next_ext_cap(base, 0, XHCI_EXT_CAPS_INTEL_SPR_SHADOW);
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Claudiu Beznea claudiu.beznea.uj@bp.renesas.com
commit e0c92440938930e7fa7aa6362780d39cdea34449 upstream.
The gpriv->transceiver is retrieved in probe() through usb_get_phy() but never released. Use devm_usb_get_phy() to handle this scenario.
This issue was identified through code investigation. No issue was found without this change.
Fixes: b5a2875605ca ("usb: renesas_usbhs: Allow an OTG PHY driver to provide VBUS") Cc: stable stable@kernel.org Reviewed-by: Yoshihiro Shimoda yoshihiro.shimoda.uh@renesas.com Tested-by: Yoshihiro Shimoda yoshihiro.shimoda.uh@renesas.com Signed-off-by: Claudiu Beznea claudiu.beznea.uj@bp.renesas.com Link: https://lore.kernel.org/r/20250225110248.870417-3-claudiu.beznea.uj@bp.renes... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/renesas_usbhs/mod_gadget.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/usb/renesas_usbhs/mod_gadget.c +++ b/drivers/usb/renesas_usbhs/mod_gadget.c @@ -1094,7 +1094,7 @@ int usbhs_mod_gadget_probe(struct usbhs_ goto usbhs_mod_gadget_probe_err_gpriv; }
- gpriv->transceiver = usb_get_phy(USB_PHY_TYPE_UNDEFINED); + gpriv->transceiver = devm_usb_get_phy(dev, USB_PHY_TYPE_UNDEFINED); dev_info(dev, "%stransceiver found\n", !IS_ERR(gpriv->transceiver) ? "" : "no ");
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Pawel Laszczak pawell@cadence.com
commit 2b66ef84d0d2a0ea955b40bd306f5e3abbc5cf9c upstream.
The xHC resources allocated for USB devices are not released in the correct order after resuming, in the case where a device was reconnected while the host was suspended.
This issue has been detected during the following scenario:
- connect HS hub to root port
- connect LS/FS device to hub port
- wait for enumeration to finish
- force host to suspend
- reconnect hub attached to root port
- wake host
For this scenario, during enumeration of the USB LS/FS device the Cadence xHC reports a completion error code for xHC commands because the xHC resources used for devices have not been properly released. The xHCI specification doesn't mention that devices can be reset in any order, so we should not treat this issue as a Cadence xHC controller bug. Similar to disconnecting, in this case the device resources should be cleared starting from the last USB device in the tree toward the root hub. To fix this issue, the usbcore driver should call hcd->driver->reset_device for all USB devices connected to a hub which was reconnected while suspended.
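The ordering requirement can be pictured with a tiny userspace sketch; the tree type and names are made up for illustration, while the real code recurses over the usb_hub/usb_device tree and calls hcd->driver->reset_device:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct node {
	char name[16];
	struct node *child[8];
	int nchild;
};

static struct node *mknode(const char *name)
{
	struct node *n = calloc(1, sizeof(*n));

	if (n)
		strncpy(n->name, name, sizeof(n->name) - 1);
	return n;
}

/* Release children first, then the node itself, mirroring the children-first
 * order enforced by hub_hc_release_resources() in the patch below. */
static void release_resources(struct node *n)
{
	for (int i = 0; i < n->nchild; i++)
		release_resources(n->child[i]);
	printf("release %s\n", n->name);
	free(n);
}

int main(void)
{
	struct node *root = mknode("root-hub");
	struct node *hub = mknode("hs-hub");
	struct node *dev = mknode("ls-device");

	if (!root || !hub || !dev)
		return 1;
	hub->child[hub->nchild++] = dev;
	root->child[root->nchild++] = hub;
	release_resources(root);	/* prints: ls-device, hs-hub, root-hub */
	return 0;
}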
Fixes: 3d82904559f4 ("usb: cdnsp: cdns3 Add main part of Cadence USBSSP DRD Driver") Cc: stable stable@kernel.org Signed-off-by: Pawel Laszczak pawell@cadence.com Reviewed-by: Alan Stern stern@rowland.harvard.edu Link: https://lore.kernel.org/r/PH7PR07MB953841E38C088678ACDCF6EEDDCC2@PH7PR07MB95... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/core/hub.c | 33 +++++++++++++++++++++++++++++++++ 1 file changed, 33 insertions(+)
--- a/drivers/usb/core/hub.c +++ b/drivers/usb/core/hub.c @@ -6066,6 +6066,36 @@ void usb_hub_cleanup(void) } /* usb_hub_cleanup() */
/** + * hub_hc_release_resources - clear resources used by host controller + * @udev: pointer to device being released + * + * Context: task context, might sleep + * + * Function releases the host controller resources in correct order before + * making any operation on resuming usb device. The host controller resources + * allocated for devices in tree should be released starting from the last + * usb device in tree toward the root hub. This function is used only during + * resuming device when usb device require reinitialization – that is, when + * flag udev->reset_resume is set. + * + * This call is synchronous, and may not be used in an interrupt context. + */ +static void hub_hc_release_resources(struct usb_device *udev) +{ + struct usb_hub *hub = usb_hub_to_struct_hub(udev); + struct usb_hcd *hcd = bus_to_hcd(udev->bus); + int i; + + /* Release up resources for all children before this device */ + for (i = 0; i < udev->maxchild; i++) + if (hub->ports[i]->child) + hub_hc_release_resources(hub->ports[i]->child); + + if (hcd->driver->reset_device) + hcd->driver->reset_device(hcd, udev); +} + +/** * usb_reset_and_verify_device - perform a USB port reset to reinitialize a device * @udev: device to reset (not in SUSPENDED or NOTATTACHED state) * @@ -6129,6 +6159,9 @@ static int usb_reset_and_verify_device(s bos = udev->bos; udev->bos = NULL;
+ if (udev->reset_resume) + hub_hc_release_resources(udev); + mutex_lock(hcd->address0_mutex);
for (i = 0; i < PORT_INIT_TRIES; ++i) {
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Miao Li limiao@kylinos.cn
commit ff712188daa3fe3ce7e11e530b4dca3826dae14a upstream.
When used on Huawei HiSilicon platforms, the Prolific Mass Storage Card Reader with VID:PID 067b:2731 might fail to enumerate at boot time and doesn't work well with LPM enabled. The combination of quirks USB_QUIRK_DELAY_INIT + USB_QUIRK_NO_LPM fixed the problems.
Signed-off-by: Miao Li limiao@kylinos.cn Cc: stable stable@kernel.org Link: https://lore.kernel.org/r/20250304070757.139473-1-limiao870622@163.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/core/quirks.c | 4 ++++ 1 file changed, 4 insertions(+)
--- a/drivers/usb/core/quirks.c +++ b/drivers/usb/core/quirks.c @@ -341,6 +341,10 @@ static const struct usb_device_id usb_qu { USB_DEVICE(0x0638, 0x0a13), .driver_info = USB_QUIRK_STRING_FETCH_255 },
+ /* Prolific Single-LUN Mass Storage Card Reader */ + { USB_DEVICE(0x067b, 0x2731), .driver_info = USB_QUIRK_DELAY_INIT | + USB_QUIRK_NO_LPM }, + /* Saitek Cyborg Gold Joystick */ { USB_DEVICE(0x06a3, 0x0006), .driver_info = USB_QUIRK_CONFIG_INTF_STRINGS },
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andrei Kuchynski akuchynski@chromium.org
commit b13abcb7ddd8d38de769486db5bd917537b32ab1 upstream.
Resources should be released only after all threads that utilize them have been destroyed. This commit ensures that resources are not released prematurely by waiting for the associated workqueue to complete before deallocating them.
Cc: stable stable@kernel.org Fixes: b9aa02ca39a4 ("usb: typec: ucsi: Add polling mechanism for partner tasks like alt mode checking") Signed-off-by: Andrei Kuchynski akuchynski@chromium.org Reviewed-by: Heikki Krogerus heikki.krogerus@linux.intel.com Link: https://lore.kernel.org/r/20250305111739.1489003-2-akuchynski@chromium.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/typec/ucsi/ucsi.c | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-)
--- a/drivers/usb/typec/ucsi/ucsi.c +++ b/drivers/usb/typec/ucsi/ucsi.c @@ -1809,11 +1809,11 @@ static int ucsi_init(struct ucsi *ucsi)
err_unregister: for (con = connector; con->port; con++) { + if (con->wq) + destroy_workqueue(con->wq); ucsi_unregister_partner(con); ucsi_unregister_altmodes(con, UCSI_RECIPIENT_CON); ucsi_unregister_port_psy(con); - if (con->wq) - destroy_workqueue(con->wq);
usb_power_delivery_unregister_capabilities(con->port_sink_caps); con->port_sink_caps = NULL; @@ -1997,10 +1997,6 @@ void ucsi_unregister(struct ucsi *ucsi)
for (i = 0; i < ucsi->cap.num_connectors; i++) { cancel_work_sync(&ucsi->connector[i].work); - ucsi_unregister_partner(&ucsi->connector[i]); - ucsi_unregister_altmodes(&ucsi->connector[i], - UCSI_RECIPIENT_CON); - ucsi_unregister_port_psy(&ucsi->connector[i]);
if (ucsi->connector[i].wq) { struct ucsi_work *uwork; @@ -2016,6 +2012,11 @@ void ucsi_unregister(struct ucsi *ucsi) destroy_workqueue(ucsi->connector[i].wq); }
+ ucsi_unregister_partner(&ucsi->connector[i]); + ucsi_unregister_altmodes(&ucsi->connector[i], + UCSI_RECIPIENT_CON); + ucsi_unregister_port_psy(&ucsi->connector[i]); + usb_power_delivery_unregister_capabilities(ucsi->connector[i].port_sink_caps); ucsi->connector[i].port_sink_caps = NULL; usb_power_delivery_unregister_capabilities(ucsi->connector[i].port_source_caps);
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Claudiu Beznea claudiu.beznea.uj@bp.renesas.com
commit 552ca6b87e3778f3dd5b87842f95138162e16c82 upstream.
When performing continuous unbind/bind operations on the USB drivers available on the Renesas RZ/G2L SoC, a kernel crash with the message "Unable to handle kernel NULL pointer dereference at virtual address" may occur. This issue points to the usbhsc_notify_hotplug() function.
Flush the delayed work to avoid its execution when driver resources are unavailable.
Fixes: bc57381e6347 ("usb: renesas_usbhs: use delayed_work instead of work_struct") Cc: stable stable@kernel.org Reviewed-by: Yoshihiro Shimoda yoshihiro.shimoda.uh@renesas.com Tested-by: Yoshihiro Shimoda yoshihiro.shimoda.uh@renesas.com Signed-off-by: Claudiu Beznea claudiu.beznea.uj@bp.renesas.com Link: https://lore.kernel.org/r/20250225110248.870417-4-claudiu.beznea.uj@bp.renes... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/renesas_usbhs/common.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/drivers/usb/renesas_usbhs/common.c +++ b/drivers/usb/renesas_usbhs/common.c @@ -781,6 +781,8 @@ static void usbhs_remove(struct platform
dev_dbg(&pdev->dev, "usb remove\n");
+ flush_delayed_work(&priv->notify_hotplug_work); + /* power off */ if (!usbhs_get_dparam(priv, runtime_pwctrl)) usbhsc_power_ctrl(priv, 0);
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Prashanth K prashanth.k@oss.qualcomm.com
commit 17c2c87c37862c3e95b55f660681cc6e8d66660e upstream.
Currently, while the UDC suspends, u_ether attempts a remote wakeup of the host if there are any pending transfers. However, if the remote wakeup fails, the UDC remains suspended but the is_suspend flag is not set. And since the is_suspend flag isn't set, a subsequent eth_start_xmit() would queue USB requests to the suspended UDC.
To fix this, bail out from gether_suspend() only if remote wakeup operation is successful.
Cc: stable stable@kernel.org Fixes: 0a1af6dfa077 ("usb: gadget: f_ecm: Add suspend/resume and remote wakeup support") Signed-off-by: Prashanth K prashanth.k@oss.qualcomm.com Link: https://lore.kernel.org/r/20250212100840.3812153-1-prashanth.k@oss.qualcomm.... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/gadget/function/u_ether.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/usb/gadget/function/u_ether.c +++ b/drivers/usb/gadget/function/u_ether.c @@ -1052,8 +1052,8 @@ void gether_suspend(struct gether *link) * There is a transfer in progress. So we trigger a remote * wakeup to inform the host. */ - ether_wakeup_host(dev->port_usb); - return; + if (!ether_wakeup_host(dev->port_usb)) + return; } spin_lock_irqsave(&dev->lock, flags); link->is_suspend = true;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nikita Zhandarovich n.zhandarovich@fintech.ru
commit c90aad369899a607cfbc002bebeafd51e31900cd upstream.
Syzbot once again identified a flaw in usb endpoint checking, see [1]. This time the issue stems from a commit authored by me (2eabb655a968 ("usb: atm: cxacru: fix endpoint checking in cxacru_bind()")).
While using usb_find_common_endpoints() may usually be enough to discard devices with wrong endpoints, in this case one needs more than just finding and identifying a sufficient number of endpoints of the correct types - one needs to check the endpoint's address as well.
Since cxacru_bind() fills URBs with CXACRU_EP_CMD address in mind, switch the endpoint verification approach to usb_check_XXX_endpoints() instead to fix incomplete ep testing.
[1] Syzbot report: usb 5-1: BOGUS urb xfer, pipe 3 != type 1 WARNING: CPU: 0 PID: 1378 at drivers/usb/core/urb.c:504 usb_submit_urb+0xc4e/0x18c0 drivers/usb/core/urb.c:503 ... RIP: 0010:usb_submit_urb+0xc4e/0x18c0 drivers/usb/core/urb.c:503 ... Call Trace: <TASK> cxacru_cm+0x3c8/0xe50 drivers/usb/atm/cxacru.c:649 cxacru_card_status drivers/usb/atm/cxacru.c:760 [inline] cxacru_bind+0xcf9/0x1150 drivers/usb/atm/cxacru.c:1223 usbatm_usb_probe+0x314/0x1d30 drivers/usb/atm/usbatm.c:1058 cxacru_usb_probe+0x184/0x220 drivers/usb/atm/cxacru.c:1377 usb_probe_interface+0x641/0xbb0 drivers/usb/core/driver.c:396 really_probe+0x2b9/0xad0 drivers/base/dd.c:658 __driver_probe_device+0x1a2/0x390 drivers/base/dd.c:800 driver_probe_device+0x50/0x430 drivers/base/dd.c:830 ...
Reported-and-tested-by: syzbot+ccbbc229a024fa3e13b5@syzkaller.appspotmail.com Closes: https://syzkaller.appspot.com/bug?extid=ccbbc229a024fa3e13b5 Fixes: 2eabb655a968 ("usb: atm: cxacru: fix endpoint checking in cxacru_bind()") Cc: stable@kernel.org Signed-off-by: Nikita Zhandarovich n.zhandarovich@fintech.ru Link: https://lore.kernel.org/r/20250213122259.730772-1-n.zhandarovich@fintech.ru Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/atm/cxacru.c | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-)
--- a/drivers/usb/atm/cxacru.c +++ b/drivers/usb/atm/cxacru.c @@ -1131,7 +1131,10 @@ static int cxacru_bind(struct usbatm_dat struct cxacru_data *instance; struct usb_device *usb_dev = interface_to_usbdev(intf); struct usb_host_endpoint *cmd_ep = usb_dev->ep_in[CXACRU_EP_CMD]; - struct usb_endpoint_descriptor *in, *out; + static const u8 ep_addrs[] = { + CXACRU_EP_CMD + USB_DIR_IN, + CXACRU_EP_CMD + USB_DIR_OUT, + 0}; int ret;
/* instance init */ @@ -1179,13 +1182,11 @@ static int cxacru_bind(struct usbatm_dat }
if (usb_endpoint_xfer_int(&cmd_ep->desc)) - ret = usb_find_common_endpoints(intf->cur_altsetting, - NULL, NULL, &in, &out); + ret = usb_check_int_endpoints(intf, ep_addrs); else - ret = usb_find_common_endpoints(intf->cur_altsetting, - &in, &out, NULL, NULL); + ret = usb_check_bulk_endpoints(intf, ep_addrs);
- if (ret) { + if (!ret) { usb_err(usbatm_instance, "cxacru_bind: interface has incorrect endpoints\n"); ret = -ENODEV; goto fail;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thinh Nguyen Thinh.Nguyen@synopsys.com
commit cc5bfc4e16fc1d1c520cd7bb28646e82b6e69217 upstream.
After phy initialization, some phy operations can only be executed while in lower P states. Ensure GUSB3PIPECTL.SUSPENDENABLE and GUSB2PHYCFG.SUSPHY are set soon after initialization to avoid blocking phy ops.
Previously, the SUSPENDENABLE bits were only set after controller initialization, which may not happen right away if there's no gadget driver or xhci driver bound. Revise this to clear the SUSPENDENABLE bits only when there's mode switching (a change in GCTL.PRTCAPDIR).
Fixes: 6d735722063a ("usb: dwc3: core: Prevent phy suspend during init") Cc: stable stable@kernel.org Signed-off-by: Thinh Nguyen Thinh.Nguyen@synopsys.com Link: https://lore.kernel.org/r/633aef0afee7d56d2316f7cc3e1b2a6d518a8cc9.173828091... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/dwc3/core.c | 69 +++++++++++++++++++++++++++++------------------- drivers/usb/dwc3/core.h | 2 - drivers/usb/dwc3/drd.c | 4 +- 3 files changed, 45 insertions(+), 30 deletions(-)
--- a/drivers/usb/dwc3/core.c +++ b/drivers/usb/dwc3/core.c @@ -131,11 +131,24 @@ void dwc3_enable_susphy(struct dwc3 *dwc } }
-void dwc3_set_prtcap(struct dwc3 *dwc, u32 mode) +void dwc3_set_prtcap(struct dwc3 *dwc, u32 mode, bool ignore_susphy) { + unsigned int hw_mode; u32 reg;
reg = dwc3_readl(dwc->regs, DWC3_GCTL); + + /* + * For DRD controllers, GUSB3PIPECTL.SUSPENDENABLE and + * GUSB2PHYCFG.SUSPHY should be cleared during mode switching, + * and they can be set after core initialization. + */ + hw_mode = DWC3_GHWPARAMS0_MODE(dwc->hwparams.hwparams0); + if (hw_mode == DWC3_GHWPARAMS0_MODE_DRD && !ignore_susphy) { + if (DWC3_GCTL_PRTCAP(reg) != mode) + dwc3_enable_susphy(dwc, false); + } + reg &= ~(DWC3_GCTL_PRTCAPDIR(DWC3_GCTL_PRTCAP_OTG)); reg |= DWC3_GCTL_PRTCAPDIR(mode); dwc3_writel(dwc->regs, DWC3_GCTL, reg); @@ -216,7 +229,7 @@ static void __dwc3_set_mode(struct work_
spin_lock_irqsave(&dwc->lock, flags);
- dwc3_set_prtcap(dwc, desired_dr_role); + dwc3_set_prtcap(dwc, desired_dr_role, false);
spin_unlock_irqrestore(&dwc->lock, flags);
@@ -658,16 +671,7 @@ static int dwc3_ss_phy_setup(struct dwc3 */ reg &= ~DWC3_GUSB3PIPECTL_UX_EXIT_PX;
- /* - * Above DWC_usb3.0 1.94a, it is recommended to set - * DWC3_GUSB3PIPECTL_SUSPHY to '0' during coreConsultant configuration. - * So default value will be '0' when the core is reset. Application - * needs to set it to '1' after the core initialization is completed. - * - * Similarly for DRD controllers, GUSB3PIPECTL.SUSPENDENABLE must be - * cleared after power-on reset, and it can be set after core - * initialization. - */ + /* Ensure the GUSB3PIPECTL.SUSPENDENABLE is cleared prior to phy init. */ reg &= ~DWC3_GUSB3PIPECTL_SUSPHY;
if (dwc->u2ss_inp3_quirk) @@ -747,15 +751,7 @@ static int dwc3_hs_phy_setup(struct dwc3 break; }
- /* - * Above DWC_usb3.0 1.94a, it is recommended to set - * DWC3_GUSB2PHYCFG_SUSPHY to '0' during coreConsultant configuration. - * So default value will be '0' when the core is reset. Application - * needs to set it to '1' after the core initialization is completed. - * - * Similarly for DRD controllers, GUSB2PHYCFG.SUSPHY must be cleared - * after power-on reset, and it can be set after core initialization. - */ + /* Ensure the GUSB2PHYCFG.SUSPHY is cleared prior to phy init. */ reg &= ~DWC3_GUSB2PHYCFG_SUSPHY;
if (dwc->dis_enblslpm_quirk) @@ -830,6 +826,25 @@ static int dwc3_phy_init(struct dwc3 *dw goto err_exit_usb3_phy; }
+ /* + * Above DWC_usb3.0 1.94a, it is recommended to set + * DWC3_GUSB3PIPECTL_SUSPHY and DWC3_GUSB2PHYCFG_SUSPHY to '0' during + * coreConsultant configuration. So default value will be '0' when the + * core is reset. Application needs to set it to '1' after the core + * initialization is completed. + * + * Certain phy requires to be in P0 power state during initialization. + * Make sure GUSB3PIPECTL.SUSPENDENABLE and GUSB2PHYCFG.SUSPHY are clear + * prior to phy init to maintain in the P0 state. + * + * After phy initialization, some phy operations can only be executed + * while in lower P states. Ensure GUSB3PIPECTL.SUSPENDENABLE and + * GUSB2PHYCFG.SUSPHY are set soon after initialization to avoid + * blocking phy ops. + */ + if (!DWC3_VER_IS_WITHIN(DWC3, ANY, 194A)) + dwc3_enable_susphy(dwc, true); + return 0;
err_exit_usb3_phy: @@ -1564,7 +1579,7 @@ static int dwc3_core_init_mode(struct dw
switch (dwc->dr_mode) { case USB_DR_MODE_PERIPHERAL: - dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_DEVICE); + dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_DEVICE, false);
if (dwc->usb2_phy) otg_set_vbus(dwc->usb2_phy->otg, false); @@ -1576,7 +1591,7 @@ static int dwc3_core_init_mode(struct dw return dev_err_probe(dev, ret, "failed to initialize gadget\n"); break; case USB_DR_MODE_HOST: - dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_HOST); + dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_HOST, false);
if (dwc->usb2_phy) otg_set_vbus(dwc->usb2_phy->otg, true); @@ -1621,7 +1636,7 @@ static void dwc3_core_exit_mode(struct d }
/* de-assert DRVVBUS for HOST and OTG mode */ - dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_DEVICE); + dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_DEVICE, true); }
static void dwc3_get_software_properties(struct dwc3 *dwc) @@ -2433,7 +2448,7 @@ static int dwc3_resume_common(struct dwc if (ret) return ret;
- dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_DEVICE); + dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_DEVICE, true); dwc3_gadget_resume(dwc); break; case DWC3_GCTL_PRTCAP_HOST: @@ -2441,7 +2456,7 @@ static int dwc3_resume_common(struct dwc ret = dwc3_core_init_for_resume(dwc); if (ret) return ret; - dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_HOST); + dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_HOST, true); break; } /* Restore GUSB2PHYCFG bits that were modified in suspend */ @@ -2470,7 +2485,7 @@ static int dwc3_resume_common(struct dwc if (ret) return ret;
- dwc3_set_prtcap(dwc, dwc->current_dr_role); + dwc3_set_prtcap(dwc, dwc->current_dr_role, true);
dwc3_otg_init(dwc); if (dwc->current_otg_role == DWC3_OTG_ROLE_HOST) { --- a/drivers/usb/dwc3/core.h +++ b/drivers/usb/dwc3/core.h @@ -1562,7 +1562,7 @@ struct dwc3_gadget_ep_cmd_params { #define DWC3_HAS_OTG BIT(3)
/* prototypes */ -void dwc3_set_prtcap(struct dwc3 *dwc, u32 mode); +void dwc3_set_prtcap(struct dwc3 *dwc, u32 mode, bool ignore_susphy); void dwc3_set_mode(struct dwc3 *dwc, u32 mode); u32 dwc3_core_fifo_space(struct dwc3_ep *dep, u8 type);
--- a/drivers/usb/dwc3/drd.c +++ b/drivers/usb/dwc3/drd.c @@ -173,7 +173,7 @@ void dwc3_otg_init(struct dwc3 *dwc) * block "Initialize GCTL for OTG operation". */ /* GCTL.PrtCapDir=2'b11 */ - dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_OTG); + dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_OTG, true); /* GUSB2PHYCFG0.SusPHY=0 */ reg = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0)); reg &= ~DWC3_GUSB2PHYCFG_SUSPHY; @@ -556,7 +556,7 @@ int dwc3_drd_init(struct dwc3 *dwc)
dwc3_drd_update(dwc); } else { - dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_OTG); + dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_OTG, true);
/* use OTG block to get ID event */ irq = dwc3_otg_get_irq(dwc);
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Badhri Jagan Sridharan badhri@google.com
commit 69c58deec19628c8a686030102176484eb94fed4 upstream.
While commit d325a1de49d6 ("usb: dwc3: gadget: Prevent losing events in event cache") makes sure that the top half (TH) does not end up overwriting the cached events before processing them when the TH gets invoked more than once, returning IRQ_HANDLED results in an occasional irq storm where the TH hogs the CPU. The irq storm can be prevented by clearing the DWC3_EVENT_PENDING flag before the event handler busy (EHB) bit is cleared. Enable interrupt moderation by default in all versions which support it.
ftrace event stub during dwc3 irq storm: irq/504_dwc3-1111 ( 1111) [000] .... 70.000866: irq_handler_exit: irq=14 ret=handled irq/504_dwc3-1111 ( 1111) [000] .... 70.000872: irq_handler_entry: irq=504 name=dwc3 irq/504_dwc3-1111 ( 1111) [000] .... 70.000874: irq_handler_exit: irq=504 ret=handled irq/504_dwc3-1111 ( 1111) [000] .... 70.000881: irq_handler_entry: irq=504 name=dwc3 irq/504_dwc3-1111 ( 1111) [000] .... 70.000883: irq_handler_exit: irq=504 ret=handled irq/504_dwc3-1111 ( 1111) [000] .... 70.000889: irq_handler_entry: irq=504 name=dwc3 irq/504_dwc3-1111 ( 1111) [000] .... 70.000892: irq_handler_exit: irq=504 ret=handled irq/504_dwc3-1111 ( 1111) [000] .... 70.000898: irq_handler_entry: irq=504 name=dwc3 irq/504_dwc3-1111 ( 1111) [000] .... 70.000901: irq_handler_exit: irq=504 ret=handled irq/504_dwc3-1111 ( 1111) [000] .... 70.000907: irq_handler_entry: irq=504 name=dwc3 irq/504_dwc3-1111 ( 1111) [000] .... 70.000909: irq_handler_exit: irq=504 ret=handled irq/504_dwc3-1111 ( 1111) [000] .... 70.000915: irq_handler_entry: irq=504 name=dwc3 irq/504_dwc3-1111 ( 1111) [000] .... 70.000918: irq_handler_exit: irq=504 ret=handled irq/504_dwc3-1111 ( 1111) [000] .... 70.000924: irq_handler_entry: irq=504 name=dwc3 irq/504_dwc3-1111 ( 1111) [000] .... 70.000927: irq_handler_exit: irq=504 ret=handled irq/504_dwc3-1111 ( 1111) [000] .... 70.000933: irq_handler_entry: irq=504 name=dwc3 irq/504_dwc3-1111 ( 1111) [000] .... 70.000935: irq_handler_exit: irq=504 ret=handled ....
Cc: stable stable@kernel.org Suggested-by: Thinh Nguyen Thinh.Nguyen@synopsys.com Fixes: d325a1de49d6 ("usb: dwc3: gadget: Prevent losing events in event cache") Signed-off-by: Badhri Jagan Sridharan badhri@google.com Acked-by: Thinh Nguyen Thinh.Nguyen@synopsys.com Link: https://lore.kernel.org/r/20250216223003.3568039-1-badhri@google.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/dwc3/core.c | 16 ++++++---------- drivers/usb/dwc3/gadget.c | 10 +++++++--- 2 files changed, 13 insertions(+), 13 deletions(-)
--- a/drivers/usb/dwc3/core.c +++ b/drivers/usb/dwc3/core.c @@ -1826,8 +1826,6 @@ static void dwc3_get_properties(struct d dwc->tx_thr_num_pkt_prd = tx_thr_num_pkt_prd; dwc->tx_max_burst_prd = tx_max_burst_prd;
- dwc->imod_interval = 0; - dwc->tx_fifo_resize_max_num = tx_fifo_resize_max_num; }
@@ -1845,21 +1843,19 @@ static void dwc3_check_params(struct dwc unsigned int hwparam_gen = DWC3_GHWPARAMS3_SSPHY_IFC(dwc->hwparams.hwparams3);
- /* Check for proper value of imod_interval */ - if (dwc->imod_interval && !dwc3_has_imod(dwc)) { - dev_warn(dwc->dev, "Interrupt moderation not supported\n"); - dwc->imod_interval = 0; - } - /* + * Enable IMOD for all supporting controllers. + * + * Particularly, DWC_usb3 v3.00a must enable this feature for + * the following reason: + * * Workaround for STAR 9000961433 which affects only version * 3.00a of the DWC_usb3 core. This prevents the controller * interrupt from being masked while handling events. IMOD * allows us to work around this issue. Enable it for the * affected version. */ - if (!dwc->imod_interval && - DWC3_VER_IS(DWC3, 300A)) + if (dwc3_has_imod((dwc))) dwc->imod_interval = 1;
/* Check the maximum_speed parameter */ --- a/drivers/usb/dwc3/gadget.c +++ b/drivers/usb/dwc3/gadget.c @@ -4507,14 +4507,18 @@ static irqreturn_t dwc3_process_event_bu dwc3_writel(dwc->regs, DWC3_GEVNTSIZ(0), DWC3_GEVNTSIZ_SIZE(evt->length));
+ evt->flags &= ~DWC3_EVENT_PENDING; + /* + * Add an explicit write memory barrier to make sure that the update of + * clearing DWC3_EVENT_PENDING is observed in dwc3_check_event_buf() + */ + wmb(); + if (dwc->imod_interval) { dwc3_writel(dwc->regs, DWC3_GEVNTCOUNT(0), DWC3_GEVNTCOUNT_EHB); dwc3_writel(dwc->regs, DWC3_DEV_IMOD(0), dwc->imod_interval); }
- /* Keep the clearing of DWC3_EVENT_PENDING at the end */ - evt->flags &= ~DWC3_EVENT_PENDING; - return ret; }
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Fedor Pchelkin boddah8794@gmail.com
commit bf4f9ae1cb08ccaafbe6874be6c46f59b83ae778 upstream.
It is observed that on some systems an initial PPM reset during the boot phase can trigger a timeout:
[ 6.482546] ucsi_acpi USBC000:00: failed to reset PPM! [ 6.482551] ucsi_acpi USBC000:00: error -ETIMEDOUT: PPM init failed
Still, increasing the timeout value, albeit the most straightforward solution, eliminates the problem: the initial PPM reset may take up to ~8000-10000ms on some Lenovo laptops. When the PPM is reset after the above period of time (or even if ucsi_reset_ppm() is not called at all), UCSI works as expected.
Moreover, if the ucsi_acpi module is loaded/unloaded manually after the system has booted, reading the CCI values and resetting the PPM works perfectly, without any timeout. Thus it's only a boot-time issue.
The reason for this behavior is not clear but it may be the consequence of some tricks that the firmware performs or be an actual firmware bug. As a workaround, increase the timeout to avoid failing the UCSI initialization prematurely.
Fixes: b1b59e16075f ("usb: typec: ucsi: Increase command completion timeout value") Cc: stable stable@kernel.org Signed-off-by: Fedor Pchelkin boddah8794@gmail.com Reviewed-by: Heikki Krogerus heikki.krogerus@linux.intel.com Link: https://lore.kernel.org/r/20250217105442.113486-3-boddah8794@gmail.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/typec/ucsi/ucsi.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/usb/typec/ucsi/ucsi.c +++ b/drivers/usb/typec/ucsi/ucsi.c @@ -25,7 +25,7 @@ * difficult to estimate the time it takes for the system to process the command * before it is actually passed to the PPM. */ -#define UCSI_TIMEOUT_MS 5000 +#define UCSI_TIMEOUT_MS 10000
/* * UCSI_SWAP_TIMEOUT_MS - Timeout for role swap requests
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: AngeloGioacchino Del Regno angelogioacchino.delregno@collabora.com
commit d6b82dafd17db0658f089b9cdec573982ca82bc5 upstream.
During probe, the TCPC alert interrupts are masked to avoid unwanted interrupts during chip setup. That is fine, but there is no unmasking at any later time, which means the chip will never raise an interrupt: while internally it performs all of the intended functions, it won't signal anything to the outside, essentially making it non-functional.
Unmask the alert interrupts to fix functionality.
Fixes: ce08eaeb6388 ("staging: typec: rt1711h typec chip driver") Cc: stable stable@kernel.org Signed-off-by: AngeloGioacchino Del Regno angelogioacchino.delregno@collabora.com Link: https://lore.kernel.org/r/20250219114700.41700-1-angelogioacchino.delregno@c... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/typec/tcpm/tcpci_rt1711h.c | 11 +++++++++++ 1 file changed, 11 insertions(+)
--- a/drivers/usb/typec/tcpm/tcpci_rt1711h.c +++ b/drivers/usb/typec/tcpm/tcpci_rt1711h.c @@ -334,6 +334,11 @@ static int rt1711h_probe(struct i2c_clie { int ret; struct rt1711h_chip *chip; + const u16 alert_mask = TCPC_ALERT_TX_SUCCESS | TCPC_ALERT_TX_DISCARDED | + TCPC_ALERT_TX_FAILED | TCPC_ALERT_RX_HARD_RST | + TCPC_ALERT_RX_STATUS | TCPC_ALERT_POWER_STATUS | + TCPC_ALERT_CC_STATUS | TCPC_ALERT_RX_BUF_OVF | + TCPC_ALERT_FAULT;
chip = devm_kzalloc(&client->dev, sizeof(*chip), GFP_KERNEL); if (!chip) @@ -382,6 +387,12 @@ static int rt1711h_probe(struct i2c_clie dev_name(chip->dev), chip); if (ret < 0) return ret; + + /* Enable alert interrupts */ + ret = rt1711h_write16(chip, TCPC_ALERT_MASK, alert_mask); + if (ret < 0) + return ret; + enable_irq_wake(client->irq);
return 0;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Prashanth K prashanth.k@oss.qualcomm.com
commit 40e89ff5750fca2c1d6da93f98a2038716bba86c upstream.
Currently the USB gadget is set as bus-powered based solely on whether its bMaxPower is greater than 100mA, but this may miss devices that legitimately draw less than 100mA yet still want to report as bus-powered. Similarly, during suspend & resume the USB gadget is incorrectly marked as bus/self-powered without checking the bmAttributes field. Fix these by configuring the USB gadget as self- or bus-powered based on bmAttributes, and explicitly set it as bus-powered if it draws more than 100mA.
Cc: stable stable@kernel.org Fixes: 5e5caf4fa8d3 ("usb: gadget: composite: Inform controller driver of self-powered") Signed-off-by: Prashanth K prashanth.k@oss.qualcomm.com Link: https://lore.kernel.org/r/20250217120328.2446639-1-prashanth.k@oss.qualcomm.... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/gadget/composite.c | 16 +++++++++++----- 1 file changed, 11 insertions(+), 5 deletions(-)
--- a/drivers/usb/gadget/composite.c +++ b/drivers/usb/gadget/composite.c @@ -1050,10 +1050,11 @@ static int set_config(struct usb_composi else usb_gadget_set_remote_wakeup(gadget, 0); done: - if (power <= USB_SELF_POWER_VBUS_MAX_DRAW) - usb_gadget_set_selfpowered(gadget); - else + if (power > USB_SELF_POWER_VBUS_MAX_DRAW || + !(c->bmAttributes & USB_CONFIG_ATT_SELFPOWER)) usb_gadget_clear_selfpowered(gadget); + else + usb_gadget_set_selfpowered(gadget);
usb_gadget_vbus_draw(gadget, power); if (result >= 0 && cdev->delayed_status) @@ -2615,7 +2616,9 @@ void composite_suspend(struct usb_gadget
cdev->suspended = 1;
- usb_gadget_set_selfpowered(gadget); + if (cdev->config->bmAttributes & USB_CONFIG_ATT_SELFPOWER) + usb_gadget_set_selfpowered(gadget); + usb_gadget_vbus_draw(gadget, 2); }
@@ -2649,8 +2652,11 @@ void composite_resume(struct usb_gadget else maxpower = min(maxpower, 900U);
- if (maxpower > USB_SELF_POWER_VBUS_MAX_DRAW) + if (maxpower > USB_SELF_POWER_VBUS_MAX_DRAW || + !(cdev->config->bmAttributes & USB_CONFIG_ATT_SELFPOWER)) usb_gadget_clear_selfpowered(gadget); + else + usb_gadget_set_selfpowered(gadget);
usb_gadget_vbus_draw(gadget, maxpower); } else {
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Marek Szyprowski m.szyprowski@samsung.com
commit c783e1258f29c5caac9eea0aea6b172870f1baf8 upstream.
cdev->config might be NULL, so check it before dereferencing.
CC: stable stable@kernel.org Fixes: 40e89ff5750f ("usb: gadget: Set self-powered based on MaxPower and bmAttributes") Signed-off-by: Marek Szyprowski m.szyprowski@samsung.com Link: https://lore.kernel.org/r/20250220120314.3614330-1-m.szyprowski@samsung.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/gadget/composite.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/drivers/usb/gadget/composite.c +++ b/drivers/usb/gadget/composite.c @@ -2616,7 +2616,8 @@ void composite_suspend(struct usb_gadget
cdev->suspended = 1;
- if (cdev->config->bmAttributes & USB_CONFIG_ATT_SELFPOWER) + if (cdev->config && + cdev->config->bmAttributes & USB_CONFIG_ATT_SELFPOWER) usb_gadget_set_selfpowered(gadget);
usb_gadget_vbus_draw(gadget, 2);
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Prashanth K prashanth.k@oss.qualcomm.com
commit 8e812e9355a6f14dffd54a33d951ca403b9732f5 upstream.
If the USB configuration is not valid, then avoid checking for bmAttributes to prevent null pointer deference.
Cc: stable stable@kernel.org Fixes: 40e89ff5750f ("usb: gadget: Set self-powered based on MaxPower and bmAttributes") Signed-off-by: Prashanth K prashanth.k@oss.qualcomm.com Link: https://lore.kernel.org/r/20250224085604.417327-1-prashanth.k@oss.qualcomm.c... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/gadget/composite.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/usb/gadget/composite.c +++ b/drivers/usb/gadget/composite.c @@ -1051,7 +1051,7 @@ static int set_config(struct usb_composi usb_gadget_set_remote_wakeup(gadget, 0); done: if (power > USB_SELF_POWER_VBUS_MAX_DRAW || - !(c->bmAttributes & USB_CONFIG_ATT_SELFPOWER)) + (c && !(c->bmAttributes & USB_CONFIG_ATT_SELFPOWER))) usb_gadget_clear_selfpowered(gadget); else usb_gadget_set_selfpowered(gadget);
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Weißschuh thomas.weissschuh@linutronix.de
commit dfc1b168a8c4b376fa222b27b97c2c4ad4b786e1 upstream.
The userprog infrastructure links object files through $(CC), either explicitly by manually calling $(CC) on multiple object files, or implicitly by directly compiling a source file to an executable. The documentation at Documentation/kbuild/llvm.rst indicates that ld.lld would be used for linking if LLVM=1 is specified. However, clang will instead use either a globally installed cross linker from $PATH called ${target}-ld, or fall back to the system linker, which probably does not support cross-linking. For the normal kernel build this is not an issue because the linker is always executed directly, without the compiler being involved.
Explicitly pass --ld-path to clang so $(LD) is respected. As clang 13.0.1 is required to build the kernel, this option is available.
Fixes: 7f3a59db274c ("kbuild: add infrastructure to build userspace programs") Cc: stable@vger.kernel.org # needs wrapping in $(cc-option) for < 6.9 Signed-off-by: Thomas Weißschuh thomas.weissschuh@linutronix.de Reviewed-by: Nathan Chancellor nathan@kernel.org Signed-off-by: Masahiro Yamada masahiroy@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- Makefile | 5 +++++ 1 file changed, 5 insertions(+)
--- a/Makefile +++ b/Makefile @@ -1067,6 +1067,11 @@ endif KBUILD_USERCFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS)) KBUILD_USERLDFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS))
+# userspace programs are linked via the compiler, use the correct linker +ifeq ($(CONFIG_CC_IS_CLANG)$(CONFIG_LD_IS_LLD),yy) +KBUILD_USERLDFLAGS += --ld-path=$(LD) +endif + # make the checker run with the right architecture CHECKFLAGS += --arch=$(ARCH)
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Christian A. Ehrhardt lk@c--e.de
commit 976e7e9bdc7719a023a4ecccd2e3daec9ab20a40 upstream.
For the ACPI backend of UCSI the UCSI "registers" are just a memory copy of the register values in an opregion. The ACPI implementation in the BIOS ensures that the opregion contents are synced to the embedded controller and it ensures that the registers (in particular CCI) are synced back to the opregion on notifications. While there is an ACPI call that syncs the actual registers to the opregion there is rarely a need to do this and on some ACPI implementations it actually breaks in various interesting ways.
The only reason to force a sync from the embedded controller is to poll CCI while notifications are disabled. Only the ucsi core knows if this is the case and guessing based on the current command is suboptimal, i.e. leading to the following spurious assertion splat:
WARNING: CPU: 3 PID: 76 at drivers/usb/typec/ucsi/ucsi.c:1388 ucsi_reset_ppm+0x1b4/0x1c0 [typec_ucsi] CPU: 3 UID: 0 PID: 76 Comm: kworker/3:0 Not tainted 6.12.11-200.fc41.x86_64 #1 Hardware name: LENOVO 21D0/LNVNB161216, BIOS J6CN45WW 03/17/2023 Workqueue: events_long ucsi_init_work [typec_ucsi] RIP: 0010:ucsi_reset_ppm+0x1b4/0x1c0 [typec_ucsi] Call Trace: <TASK> ucsi_init_work+0x3c/0xac0 [typec_ucsi] process_one_work+0x179/0x330 worker_thread+0x252/0x390 kthread+0xd2/0x100 ret_from_fork+0x34/0x50 ret_from_fork_asm+0x1a/0x30 </TASK>
Thus introduce a ->poll_cci() method that works like ->read_cci() with an additional forced sync, and document that it should be used when polling with notifications disabled. For all other backends, which presumably don't have this issue, use the same implementation for both methods.
Fixes: fa48d7e81624 ("usb: typec: ucsi: Do not call ACPI _DSM method for UCSI read operations") Cc: stable stable@kernel.org Signed-off-by: Christian A. Ehrhardt lk@c--e.de Tested-by: Fedor Pchelkin boddah8794@gmail.com Signed-off-by: Fedor Pchelkin boddah8794@gmail.com Reviewed-by: Heikki Krogerus heikki.krogerus@linux.intel.com Link: https://lore.kernel.org/r/20250217105442.113486-2-boddah8794@gmail.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/typec/ucsi/ucsi.c | 10 +++++----- drivers/usb/typec/ucsi/ucsi.h | 2 ++ drivers/usb/typec/ucsi/ucsi_acpi.c | 21 ++++++++++++++------- drivers/usb/typec/ucsi/ucsi_ccg.c | 1 + drivers/usb/typec/ucsi/ucsi_glink.c | 1 + drivers/usb/typec/ucsi/ucsi_stm32g0.c | 1 + drivers/usb/typec/ucsi/ucsi_yoga_c630.c | 1 + 7 files changed, 25 insertions(+), 12 deletions(-)
--- a/drivers/usb/typec/ucsi/ucsi.c +++ b/drivers/usb/typec/ucsi/ucsi.c @@ -1330,7 +1330,7 @@ static int ucsi_reset_ppm(struct ucsi *u
mutex_lock(&ucsi->ppm_lock);
- ret = ucsi->ops->read_cci(ucsi, &cci); + ret = ucsi->ops->poll_cci(ucsi, &cci); if (ret < 0) goto out;
@@ -1348,7 +1348,7 @@ static int ucsi_reset_ppm(struct ucsi *u
tmo = jiffies + msecs_to_jiffies(UCSI_TIMEOUT_MS); do { - ret = ucsi->ops->read_cci(ucsi, &cci); + ret = ucsi->ops->poll_cci(ucsi, &cci); if (ret < 0) goto out; if (cci & UCSI_CCI_COMMAND_COMPLETE) @@ -1377,7 +1377,7 @@ static int ucsi_reset_ppm(struct ucsi *u /* Give the PPM time to process a reset before reading CCI */ msleep(20);
- ret = ucsi->ops->read_cci(ucsi, &cci); + ret = ucsi->ops->poll_cci(ucsi, &cci); if (ret) goto out;
@@ -1913,8 +1913,8 @@ struct ucsi *ucsi_create(struct device * struct ucsi *ucsi;
if (!ops || - !ops->read_version || !ops->read_cci || !ops->read_message_in || - !ops->sync_control || !ops->async_control) + !ops->read_version || !ops->read_cci || !ops->poll_cci || + !ops->read_message_in || !ops->sync_control || !ops->async_control) return ERR_PTR(-EINVAL);
ucsi = kzalloc(sizeof(*ucsi), GFP_KERNEL); --- a/drivers/usb/typec/ucsi/ucsi.h +++ b/drivers/usb/typec/ucsi/ucsi.h @@ -60,6 +60,7 @@ struct dentry; * struct ucsi_operations - UCSI I/O operations * @read_version: Read implemented UCSI version * @read_cci: Read CCI register + * @poll_cci: Read CCI register while polling with notifications disabled * @read_message_in: Read message data from UCSI * @sync_control: Blocking control operation * @async_control: Non-blocking control operation @@ -74,6 +75,7 @@ struct dentry; struct ucsi_operations { int (*read_version)(struct ucsi *ucsi, u16 *version); int (*read_cci)(struct ucsi *ucsi, u32 *cci); + int (*poll_cci)(struct ucsi *ucsi, u32 *cci); int (*read_message_in)(struct ucsi *ucsi, void *val, size_t val_len); int (*sync_control)(struct ucsi *ucsi, u64 command); int (*async_control)(struct ucsi *ucsi, u64 command); --- a/drivers/usb/typec/ucsi/ucsi_acpi.c +++ b/drivers/usb/typec/ucsi/ucsi_acpi.c @@ -59,19 +59,24 @@ static int ucsi_acpi_read_version(struct static int ucsi_acpi_read_cci(struct ucsi *ucsi, u32 *cci) { struct ucsi_acpi *ua = ucsi_get_drvdata(ucsi); - int ret; - - if (UCSI_COMMAND(ua->cmd) == UCSI_PPM_RESET) { - ret = ucsi_acpi_dsm(ua, UCSI_DSM_FUNC_READ); - if (ret) - return ret; - }
memcpy(cci, ua->base + UCSI_CCI, sizeof(*cci));
return 0; }
+static int ucsi_acpi_poll_cci(struct ucsi *ucsi, u32 *cci) +{ + struct ucsi_acpi *ua = ucsi_get_drvdata(ucsi); + int ret; + + ret = ucsi_acpi_dsm(ua, UCSI_DSM_FUNC_READ); + if (ret) + return ret; + + return ucsi_acpi_read_cci(ucsi, cci); +} + static int ucsi_acpi_read_message_in(struct ucsi *ucsi, void *val, size_t val_len) { struct ucsi_acpi *ua = ucsi_get_drvdata(ucsi); @@ -94,6 +99,7 @@ static int ucsi_acpi_async_control(struc static const struct ucsi_operations ucsi_acpi_ops = { .read_version = ucsi_acpi_read_version, .read_cci = ucsi_acpi_read_cci, + .poll_cci = ucsi_acpi_poll_cci, .read_message_in = ucsi_acpi_read_message_in, .sync_control = ucsi_sync_control_common, .async_control = ucsi_acpi_async_control @@ -145,6 +151,7 @@ static int ucsi_gram_sync_control(struct static const struct ucsi_operations ucsi_gram_ops = { .read_version = ucsi_acpi_read_version, .read_cci = ucsi_acpi_read_cci, + .poll_cci = ucsi_acpi_poll_cci, .read_message_in = ucsi_gram_read_message_in, .sync_control = ucsi_gram_sync_control, .async_control = ucsi_acpi_async_control --- a/drivers/usb/typec/ucsi/ucsi_ccg.c +++ b/drivers/usb/typec/ucsi/ucsi_ccg.c @@ -664,6 +664,7 @@ err_put: static const struct ucsi_operations ucsi_ccg_ops = { .read_version = ucsi_ccg_read_version, .read_cci = ucsi_ccg_read_cci, + .poll_cci = ucsi_ccg_read_cci, .read_message_in = ucsi_ccg_read_message_in, .sync_control = ucsi_ccg_sync_control, .async_control = ucsi_ccg_async_control, --- a/drivers/usb/typec/ucsi/ucsi_glink.c +++ b/drivers/usb/typec/ucsi/ucsi_glink.c @@ -201,6 +201,7 @@ static void pmic_glink_ucsi_connector_st static const struct ucsi_operations pmic_glink_ucsi_ops = { .read_version = pmic_glink_ucsi_read_version, .read_cci = pmic_glink_ucsi_read_cci, + .poll_cci = pmic_glink_ucsi_read_cci, .read_message_in = pmic_glink_ucsi_read_message_in, .sync_control = ucsi_sync_control_common, .async_control = pmic_glink_ucsi_async_control, --- a/drivers/usb/typec/ucsi/ucsi_stm32g0.c +++ b/drivers/usb/typec/ucsi/ucsi_stm32g0.c @@ -424,6 +424,7 @@ static irqreturn_t ucsi_stm32g0_irq_hand static const struct ucsi_operations ucsi_stm32g0_ops = { .read_version = ucsi_stm32g0_read_version, .read_cci = ucsi_stm32g0_read_cci, + .poll_cci = ucsi_stm32g0_read_cci, .read_message_in = ucsi_stm32g0_read_message_in, .sync_control = ucsi_sync_control_common, .async_control = ucsi_stm32g0_async_control, --- a/drivers/usb/typec/ucsi/ucsi_yoga_c630.c +++ b/drivers/usb/typec/ucsi/ucsi_yoga_c630.c @@ -74,6 +74,7 @@ static int yoga_c630_ucsi_async_control( const struct ucsi_operations yoga_c630_ucsi_ops = { .read_version = yoga_c630_ucsi_read_version, .read_cci = yoga_c630_ucsi_read_cci, + .poll_cci = yoga_c630_ucsi_read_cci, .read_message_in = yoga_c630_ucsi_read_message_in, .sync_control = ucsi_sync_control_common, .async_control = yoga_c630_ucsi_async_control,
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Miguel Ojeda ojeda@kernel.org
commit 27c7518e7f1ccaaa43eb5f25dc362779d2dc2ccb upstream.
In the last kernel cycle we migrated most of the `core::ffi` cases in commit d072acda4862 ("rust: use custom FFI integer types"):
Currently FFI integer types are defined in libcore. This commit creates the `ffi` crate and asks bindgen to use that crate for FFI integer types instead of `core::ffi`.
This commit is preparatory and no type changes are made in this commit yet.
Finish now the few remaining/new cases so that we perform the actual remapping in the next commit as planned.
Acked-by: Jocelyn Falempe jfalempe@redhat.com # drm Link: https://lore.kernel.org/rust-for-linux/CANiq72m_rg42SvZK=bF2f0yEoBLVA33UBhiA... Link: https://lore.kernel.org/all/cc9253fa-9d5f-460b-9841-94948fb6580c@redhat.com/ Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/drm_panic_qr.rs | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/gpu/drm/drm_panic_qr.rs +++ b/drivers/gpu/drm/drm_panic_qr.rs @@ -931,7 +931,7 @@ impl QrImage<'_> { /// They must remain valid for the duration of the function call. #[no_mangle] pub unsafe extern "C" fn drm_panic_qr_generate( - url: *const i8, + url: *const kernel::ffi::c_char, data: *mut u8, data_len: usize, data_size: usize,
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Gary Guo gary@garyguo.net
commit 1bae8729e50a900f41e9a1c17ae81113e4cf62b8 upstream.
The following FFI types are replaced compared to `core::ffi`:
1. `char` type is now always mapped to `u8`, since kernel uses `-funsigned-char` on the C code. `core::ffi` maps it to platform default ABI, which can be either signed or unsigned.
2. `long` is now always mapped to `isize`. It's very common in the kernel to use `long` to represent a pointer-sized integer, and in fact `intptr_t` is a typedef of `long` in the kernel. Enforcing this mapping, rather than mapping to `i32`/`i64` depending on the platform, saves us a lot of unnecessary casts.
Signed-off-by: Gary Guo gary@garyguo.net Reviewed-by: Alice Ryhl aliceryhl@google.com Link: https://lore.kernel.org/r/20240913213041.395655-5-gary@garyguo.net [ Moved `uaccess` changes from the next commit, since they were irrefutable patterns that Rust >= 1.82.0 warns about. Reworded slightly and reformatted a few documentation comments. Rebased on top of `rust-next`. Added the removal of two casts to avoid Clippy warnings. - Miguel ] Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/ffi.rs | 37 ++++++++++++++++++++++++++++++++++++- rust/kernel/error.rs | 5 +---- rust/kernel/firmware.rs | 2 +- rust/kernel/uaccess.rs | 27 +++++++-------------------- 4 files changed, 45 insertions(+), 26 deletions(-)
--- a/rust/ffi.rs +++ b/rust/ffi.rs @@ -10,4 +10,39 @@
#![no_std]
-pub use core::ffi::*; +macro_rules! alias { + ($($name:ident = $ty:ty;)*) => {$( + #[allow(non_camel_case_types, missing_docs)] + pub type $name = $ty; + + // Check size compatibility with `core`. + const _: () = assert!( + core::mem::size_of::<$name>() == core::mem::size_of::<core::ffi::$name>() + ); + )*} +} + +alias! { + // `core::ffi::c_char` is either `i8` or `u8` depending on architecture. In the kernel, we use + // `-funsigned-char` so it's always mapped to `u8`. + c_char = u8; + + c_schar = i8; + c_uchar = u8; + + c_short = i16; + c_ushort = u16; + + c_int = i32; + c_uint = u32; + + // In the kernel, `intptr_t` is defined to be `long` in all platforms, so we can map the type to + // `isize`. + c_long = isize; + c_ulong = usize; + + c_longlong = i64; + c_ulonglong = u64; +} + +pub use core::ffi::c_void; --- a/rust/kernel/error.rs +++ b/rust/kernel/error.rs @@ -153,11 +153,8 @@ impl Error {
/// Returns the error encoded as a pointer. pub fn to_ptr<T>(self) -> *mut T { - #[cfg_attr(target_pointer_width = "32", allow(clippy::useless_conversion))] // SAFETY: `self.0` is a valid error due to its invariant. - unsafe { - bindings::ERR_PTR(self.0.get().into()) as *mut _ - } + unsafe { bindings::ERR_PTR(self.0.get() as _) as *mut _ } }
/// Returns a string representing the error, if one exists. --- a/rust/kernel/firmware.rs +++ b/rust/kernel/firmware.rs @@ -12,7 +12,7 @@ use core::ptr::NonNull; /// One of the following: `bindings::request_firmware`, `bindings::firmware_request_nowarn`, /// `bindings::firmware_request_platform`, `bindings::request_firmware_direct`. struct FwFunc( - unsafe extern "C" fn(*mut *const bindings::firmware, *const i8, *mut bindings::device) -> i32, + unsafe extern "C" fn(*mut *const bindings::firmware, *const u8, *mut bindings::device) -> i32, );
impl FwFunc { --- a/rust/kernel/uaccess.rs +++ b/rust/kernel/uaccess.rs @@ -8,7 +8,7 @@ use crate::{ alloc::Flags, bindings, error::Result, - ffi::{c_ulong, c_void}, + ffi::c_void, prelude::*, types::{AsBytes, FromBytes}, }; @@ -224,13 +224,9 @@ impl UserSliceReader { if len > self.length { return Err(EFAULT); } - let Ok(len_ulong) = c_ulong::try_from(len) else { - return Err(EFAULT); - }; - // SAFETY: `out_ptr` points into a mutable slice of length `len_ulong`, so we may write + // SAFETY: `out_ptr` points into a mutable slice of length `len`, so we may write // that many bytes to it. - let res = - unsafe { bindings::copy_from_user(out_ptr, self.ptr as *const c_void, len_ulong) }; + let res = unsafe { bindings::copy_from_user(out_ptr, self.ptr as *const c_void, len) }; if res != 0 { return Err(EFAULT); } @@ -259,9 +255,6 @@ impl UserSliceReader { if len > self.length { return Err(EFAULT); } - let Ok(len_ulong) = c_ulong::try_from(len) else { - return Err(EFAULT); - }; let mut out: MaybeUninit<T> = MaybeUninit::uninit(); // SAFETY: The local variable `out` is valid for writing `size_of::<T>()` bytes. // @@ -272,7 +265,7 @@ impl UserSliceReader { bindings::_copy_from_user( out.as_mut_ptr().cast::<c_void>(), self.ptr as *const c_void, - len_ulong, + len, ) }; if res != 0 { @@ -335,12 +328,9 @@ impl UserSliceWriter { if len > self.length { return Err(EFAULT); } - let Ok(len_ulong) = c_ulong::try_from(len) else { - return Err(EFAULT); - }; - // SAFETY: `data_ptr` points into an immutable slice of length `len_ulong`, so we may read + // SAFETY: `data_ptr` points into an immutable slice of length `len`, so we may read // that many bytes from it. - let res = unsafe { bindings::copy_to_user(self.ptr as *mut c_void, data_ptr, len_ulong) }; + let res = unsafe { bindings::copy_to_user(self.ptr as *mut c_void, data_ptr, len) }; if res != 0 { return Err(EFAULT); } @@ -359,9 +349,6 @@ impl UserSliceWriter { if len > self.length { return Err(EFAULT); } - let Ok(len_ulong) = c_ulong::try_from(len) else { - return Err(EFAULT); - }; // SAFETY: The reference points to a value of type `T`, so it is valid for reading // `size_of::<T>()` bytes. // @@ -372,7 +359,7 @@ impl UserSliceWriter { bindings::_copy_to_user( self.ptr as *mut c_void, (value as *const T).cast::<c_void>(), - len_ulong, + len, ) }; if res != 0 {
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andy Shevchenko andriy.shevchenko@linux.intel.com
commit 0309ed83791c079f239c13e0c605210425cd1a61 upstream.
Some of the definitions are missing one TAB of alignment; add it to them.
Signed-off-by: Andy Shevchenko andriy.shevchenko@linux.intel.com Link: https://lore.kernel.org/r/20241106101459.775897-23-mathias.nyman@linux.intel... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/host/xhci-pci.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-)
--- a/drivers/usb/host/xhci-pci.c +++ b/drivers/usb/host/xhci-pci.c @@ -28,8 +28,8 @@ #define SPARSE_CNTL_ENABLE 0xC12C
/* Device for a quirk */ -#define PCI_VENDOR_ID_FRESCO_LOGIC 0x1b73 -#define PCI_DEVICE_ID_FRESCO_LOGIC_PDK 0x1000 +#define PCI_VENDOR_ID_FRESCO_LOGIC 0x1b73 +#define PCI_DEVICE_ID_FRESCO_LOGIC_PDK 0x1000 #define PCI_DEVICE_ID_FRESCO_LOGIC_FL1009 0x1009 #define PCI_DEVICE_ID_FRESCO_LOGIC_FL1100 0x1100 #define PCI_DEVICE_ID_FRESCO_LOGIC_FL1400 0x1400 @@ -38,8 +38,8 @@ #define PCI_DEVICE_ID_EJ168 0x7023 #define PCI_DEVICE_ID_EJ188 0x7052
-#define PCI_DEVICE_ID_INTEL_LYNXPOINT_XHCI 0x8c31 -#define PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI 0x9c31 +#define PCI_DEVICE_ID_INTEL_LYNXPOINT_XHCI 0x8c31 +#define PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI 0x9c31 #define PCI_DEVICE_ID_INTEL_WILDCATPOINT_LP_XHCI 0x9cb1 #define PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI 0x22b5 #define PCI_DEVICE_ID_INTEL_SUNRISEPOINT_H_XHCI 0xa12f
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Michal Pecio michal.pecio@gmail.com
commit c133ec0e5717868c9967fa3df92a55e537b1aead upstream.
Raspberry Pi is a major user of those chips and they discovered a bug - when the end of a transfer ring segment is reached, up to four TRBs can be prefetched from the next page even if the segment ends with a link TRB and on a page boundary (the chip claims to support standard 4KB pages).
It also appears that if the prefetched TRBs belong to a different ring whose doorbell is later rung, they may be used without refreshing from system RAM and the endpoint will stay idle if their cycle bit is stale.
Other users complain about IOMMU faults on x86 systems, unsurprisingly.
Deal with it by using existing quirk which allocates a dummy page after each transfer ring segment. This was seen to resolve both problems. RPi came up with a more efficient solution, shortening each segment by four TRBs, but it complicated the driver and they ditched it for this quirk.
Also rename the quirk and add VL805 device ID macro.
Signed-off-by: Michal Pecio michal.pecio@gmail.com Link: https://github.com/raspberrypi/linux/issues/4685 Closes: https://bugzilla.kernel.org/show_bug.cgi?id=215906 CC: stable@vger.kernel.org Signed-off-by: Mathias Nyman mathias.nyman@linux.intel.com Link: https://lore.kernel.org/r/20250225095927.2512358-2-mathias.nyman@linux.intel... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/host/xhci-mem.c | 3 ++- drivers/usb/host/xhci-pci.c | 10 +++++++--- drivers/usb/host/xhci.h | 2 +- 3 files changed, 10 insertions(+), 5 deletions(-)
--- a/drivers/usb/host/xhci-mem.c +++ b/drivers/usb/host/xhci-mem.c @@ -2443,7 +2443,8 @@ int xhci_mem_init(struct xhci_hcd *xhci, * and our use of dma addresses in the trb_address_map radix tree needs * TRB_SEGMENT_SIZE alignment, so we pick the greater alignment need. */ - if (xhci->quirks & XHCI_ZHAOXIN_TRB_FETCH) + if (xhci->quirks & XHCI_TRB_OVERFETCH) + /* Buggy HC prefetches beyond segment bounds - allocate dummy space at the end */ xhci->segment_pool = dma_pool_create("xHCI ring segments", dev, TRB_SEGMENT_SIZE * 2, TRB_SEGMENT_SIZE * 2, xhci->page_size * 2); else --- a/drivers/usb/host/xhci-pci.c +++ b/drivers/usb/host/xhci-pci.c @@ -38,6 +38,8 @@ #define PCI_DEVICE_ID_EJ168 0x7023 #define PCI_DEVICE_ID_EJ188 0x7052
+#define PCI_DEVICE_ID_VIA_VL805 0x3483 + #define PCI_DEVICE_ID_INTEL_LYNXPOINT_XHCI 0x8c31 #define PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI 0x9c31 #define PCI_DEVICE_ID_INTEL_WILDCATPOINT_LP_XHCI 0x9cb1 @@ -424,8 +426,10 @@ static void xhci_pci_quirks(struct devic pdev->device == 0x3432) xhci->quirks |= XHCI_BROKEN_STREAMS;
- if (pdev->vendor == PCI_VENDOR_ID_VIA && pdev->device == 0x3483) + if (pdev->vendor == PCI_VENDOR_ID_VIA && pdev->device == PCI_DEVICE_ID_VIA_VL805) { xhci->quirks |= XHCI_LPM_SUPPORT; + xhci->quirks |= XHCI_TRB_OVERFETCH; + }
if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA && pdev->device == PCI_DEVICE_ID_ASMEDIA_1042_XHCI) { @@ -473,11 +477,11 @@ static void xhci_pci_quirks(struct devic
if (pdev->device == 0x9202) { xhci->quirks |= XHCI_RESET_ON_RESUME; - xhci->quirks |= XHCI_ZHAOXIN_TRB_FETCH; + xhci->quirks |= XHCI_TRB_OVERFETCH; }
if (pdev->device == 0x9203) - xhci->quirks |= XHCI_ZHAOXIN_TRB_FETCH; + xhci->quirks |= XHCI_TRB_OVERFETCH; }
if (pdev->vendor == PCI_DEVICE_ID_CADENCE && --- a/drivers/usb/host/xhci.h +++ b/drivers/usb/host/xhci.h @@ -1621,7 +1621,7 @@ struct xhci_hcd { #define XHCI_EP_CTX_BROKEN_DCS BIT_ULL(42) #define XHCI_SUSPEND_RESUME_CLKS BIT_ULL(43) #define XHCI_RESET_TO_DEFAULT BIT_ULL(44) -#define XHCI_ZHAOXIN_TRB_FETCH BIT_ULL(45) +#define XHCI_TRB_OVERFETCH BIT_ULL(45) #define XHCI_ZHAOXIN_HOST BIT_ULL(46) #define XHCI_WRITE_64_HI_LO BIT_ULL(47) #define XHCI_CDNS_SCTX_QUIRK BIT_ULL(48)
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Sean Christopherson seanjc@google.com
commit be45bc4eff33d9a7dae84a2150f242a91a617402 upstream.
Enable/disable local IRQs, i.e. set/clear RFLAGS.IF, in the common svm_vcpu_enter_exit() just after/before guest_state_{enter,exit}_irqoff() so that VMRUN is not executed in an STI shadow. AMD CPUs have a quirk (some would say "bug"), where the STI shadow bleeds into the guest's int_state field if a #VMEXIT occurs during injection of an event, i.e. if the VMRUN doesn't complete before the subsequent #VMEXIT.
The spurious "interrupts masked" state is relatively benign, as it only occurs during event injection and is transient. Because KVM is already injecting an event, the guest can't be in HLT, and if KVM is querying IRQ blocking for injection, then KVM would need to force an immediate exit anyways since injecting multiple events is impossible.
However, because KVM copies int_state verbatim from vmcb02 to vmcb12, the spurious STI shadow is visible to L1 when running a nested VM, which can trip sanity checks, e.g. in VMware's VMM.
Hoist the STI+CLI all the way to C code, as the aforementioned calls to guest_state_{enter,exit}_irqoff() already inform lockdep that IRQs are enabled/disabled, and taking a fault on VMRUN with RFLAGS.IF=1 is already possible. I.e. if there's kernel code that is confused by running with RFLAGS.IF=1, then it's already a problem. In practice, since GIF=0 also blocks NMIs, the only change in exposure to non-KVM code (relative to surrounding VMRUN with STI+CLI) is exception handling code, and except for the kvm_rebooting=1 case, all exceptions in the core VM-Enter/VM-Exit path are fatal.
Use the "raw" variants to enable/disable IRQs to avoid tracing in the "no instrumentation" code; the guest state helpers also take care of tracing IRQ state.
Opportunistically document why KVM needs to do STI in the first place.
Reported-by: Doug Covelli doug.covelli@broadcom.com Closes: https://lore.kernel.org/all/CADH9ctBs1YPmE4aCfGPNBwA10cA8RuAk2gO7542DjMZgs4u... Fixes: f14eec0a3203 ("KVM: SVM: move more vmentry code to assembly") Cc: stable@vger.kernel.org Reviewed-by: Jim Mattson jmattson@google.com Link: https://lore.kernel.org/r/20250224165442.2338294-2-seanjc@google.com Signed-off-by: Sean Christopherson seanjc@google.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kvm/svm/svm.c | 14 ++++++++++++++ arch/x86/kvm/svm/vmenter.S | 10 +--------- 2 files changed, 15 insertions(+), 9 deletions(-)
--- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -4176,6 +4176,18 @@ static noinstr void svm_vcpu_enter_exit(
guest_state_enter_irqoff();
+ /* + * Set RFLAGS.IF prior to VMRUN, as the host's RFLAGS.IF at the time of + * VMRUN controls whether or not physical IRQs are masked (KVM always + * runs with V_INTR_MASKING_MASK). Toggle RFLAGS.IF here to avoid the + * temptation to do STI+VMRUN+CLI, as AMD CPUs bleed the STI shadow + * into guest state if delivery of an event during VMRUN triggers a + * #VMEXIT, and the guest_state transitions already tell lockdep that + * IRQs are being enabled/disabled. Note! GIF=0 for the entirety of + * this path, so IRQs aren't actually unmasked while running host code. + */ + raw_local_irq_enable(); + amd_clear_divider();
if (sev_es_guest(vcpu->kvm)) @@ -4184,6 +4196,8 @@ static noinstr void svm_vcpu_enter_exit( else __svm_vcpu_run(svm, spec_ctrl_intercepted);
+ raw_local_irq_disable(); + guest_state_exit_irqoff(); }
--- a/arch/x86/kvm/svm/vmenter.S +++ b/arch/x86/kvm/svm/vmenter.S @@ -170,12 +170,8 @@ SYM_FUNC_START(__svm_vcpu_run) mov VCPU_RDI(%_ASM_DI), %_ASM_DI
/* Enter guest mode */ - sti - 3: vmrun %_ASM_AX 4: - cli - /* Pop @svm to RAX while it's the only available register. */ pop %_ASM_AX
@@ -340,12 +336,8 @@ SYM_FUNC_START(__svm_sev_es_vcpu_run) mov KVM_VMCB_pa(%rax), %rax
/* Enter guest mode */ - sti - 1: vmrun %rax - -2: cli - +2: /* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */ FILL_RETURN_BUFFER %rax, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_VMEXIT
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Sean Christopherson seanjc@google.com
commit b2653cd3b75f62f29b72df4070e20357acb52bc4 upstream.
When running SEV-SNP guests on a CPU that supports DebugSwap, always save the host's DR0..DR3 mask MSR values irrespective of whether or not DebugSwap is enabled, to ensure the host values aren't clobbered by the CPU. And for now, also save DR0..DR3, even though doing so isn't necessary (see below).
SVM_VMGEXIT_AP_CREATE is deeply flawed in that it allows the *guest* to create a VMSA with guest-controlled SEV_FEATURES. A well behaved guest can inform the hypervisor, i.e. KVM, of its "requested" features, but on CPUs without ALLOWED_SEV_FEATURES support, nothing prevents the guest from lying about which SEV features are being enabled (or not!).
If a misbehaving guest enables DebugSwap in a secondary vCPU's VMSA, the CPU will load the DR0..DR3 mask MSRs on #VMEXIT, i.e. will clobber the MSRs with '0' if KVM doesn't save its desired value.
Note, DR0..DR3 themselves are "ok", as DR7 is reset on #VMEXIT, and KVM restores all DRs in common x86 code as needed via hw_breakpoint_restore(). I.e. there is no risk of host DR0..DR3 being clobbered (when it matters). However, there is a flaw in the opposite direction; because the guest can lie about enabling DebugSwap, i.e. can *disable* DebugSwap without KVM's knowledge, KVM must not rely on the CPU to restore DRs. Defer fixing that wart, as it's more of a documentation issue than a bug in the code.
Note, KVM added support for DebugSwap on commit d1f85fbe836e ("KVM: SEV: Enable data breakpoints in SEV-ES"), but that is not an appropriate Fixes, as the underlying flaw exists in hardware, not in KVM. I.e. all kernels that support SEV-SNP need to be patched, not just kernels with KVM's full support for DebugSwap (ignoring that DebugSwap support landed first).
Opportunistically fix an incorrect statement in the comment; on CPUs without DebugSwap, the CPU does NOT save or load debug registers.
Fixes: e366f92ea99e ("KVM: SEV: Support SEV-SNP AP Creation NAE event") Cc: stable@vger.kernel.org Cc: Naveen N Rao naveen@kernel.org Cc: Kim Phillips kim.phillips@amd.com Cc: Tom Lendacky thomas.lendacky@amd.com Cc: Alexey Kardashevskiy aik@amd.com Reviewed-by: Tom Lendacky thomas.lendacky@amd.com Link: https://lore.kernel.org/r/20250227012541.3234589-2-seanjc@google.com Signed-off-by: Sean Christopherson seanjc@google.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kvm/svm/sev.c | 13 ++++++++++--- 1 file changed, 10 insertions(+), 3 deletions(-)
--- a/arch/x86/kvm/svm/sev.c +++ b/arch/x86/kvm/svm/sev.c @@ -4579,6 +4579,8 @@ void sev_es_vcpu_reset(struct vcpu_svm *
void sev_es_prepare_switch_to_guest(struct vcpu_svm *svm, struct sev_es_save_area *hostsa) { + struct kvm *kvm = svm->vcpu.kvm; + /* * All host state for SEV-ES guests is categorized into three swap types * based on how it is handled by hardware during a world switch: @@ -4602,10 +4604,15 @@ void sev_es_prepare_switch_to_guest(stru
/* * If DebugSwap is enabled, debug registers are loaded but NOT saved by - * the CPU (Type-B). If DebugSwap is disabled/unsupported, the CPU both - * saves and loads debug registers (Type-A). + * the CPU (Type-B). If DebugSwap is disabled/unsupported, the CPU does + * not save or load debug registers. Sadly, on CPUs without + * ALLOWED_SEV_FEATURES, KVM can't prevent SNP guests from enabling + * DebugSwap on secondary vCPUs without KVM's knowledge via "AP Create". + * Save all registers if DebugSwap is supported to prevent host state + * from being clobbered by a misbehaving guest. */ - if (sev_vcpu_has_debug_swap(svm)) { + if (sev_vcpu_has_debug_swap(svm) || + (sev_snp_guest(kvm) && cpu_feature_enabled(X86_FEATURE_DEBUG_SWAP))) { hostsa->dr0 = native_get_debugreg(0); hostsa->dr1 = native_get_debugreg(1); hostsa->dr2 = native_get_debugreg(2);
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Sean Christopherson seanjc@google.com
commit ee89e8013383d50a27ea9bf3c8a69eed6799856f upstream.
Drop bits 5:2 from the guest's effective DEBUGCTL value, as AMD changed the architectural behavior of the bits and broke backwards compatibility. On CPUs without BusLockTrap (or at least, in APMs from before ~2023), bits 5:2 controlled the behavior of external pins:
Performance-Monitoring/Breakpoint Pin-Control (PBi)—Bits 5:2, read/write. Software uses these bits to control the type of information reported by the four external performance-monitoring/breakpoint pins on the processor. When a PBi bit is cleared to 0, the corresponding external pin (BPi) reports performance-monitor information. When a PBi bit is set to 1, the corresponding external pin (BPi) reports breakpoint information.
With the introduction of BusLockTrap, presumably to be compatible with Intel CPUs, AMD redefined bit 2 to be BLCKDB:
Bus Lock #DB Trap (BLCKDB)—Bit 2, read/write. Software sets this bit to enable generation of a #DB trap following successful execution of a bus lock when CPL is > 0.
and redefined bits 5:3 (and bit 6) as "6:3 Reserved MBZ".
Ideally, KVM would treat bits 5:2 as reserved. Defer that change to a feature cleanup to avoid breaking existing guests in LTS kernels. For now, drop the bits to retain backwards compatibility (of a sort).
Note, dropping bits 5:2 is still a guest-visible change, e.g. if the guest is enabling LBRs *and* the legacy PBi bits, then the state of the PBi bits is visible to the guest, whereas now the guest will always see '0'.
Reported-by: Ravi Bangoria ravi.bangoria@amd.com Cc: stable@vger.kernel.org Reviewed-and-tested-by: Ravi Bangoria ravi.bangoria@amd.com Link: https://lore.kernel.org/r/20250227222411.3490595-2-seanjc@google.com Signed-off-by: Sean Christopherson seanjc@google.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kvm/svm/svm.c | 12 ++++++++++++ arch/x86/kvm/svm/svm.h | 2 +- 2 files changed, 13 insertions(+), 1 deletion(-)
--- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -3167,6 +3167,18 @@ static int svm_set_msr(struct kvm_vcpu * kvm_pr_unimpl_wrmsr(vcpu, ecx, data); break; } + + /* + * AMD changed the architectural behavior of bits 5:2. On CPUs + * without BusLockTrap, bits 5:2 control "external pins", but + * on CPUs that support BusLockDetect, bit 2 enables BusLockTrap + * and bits 5:3 are reserved-to-zero. Sadly, old KVM allowed + * the guest to set bits 5:2 despite not actually virtualizing + * Performance-Monitoring/Breakpoint external pins. Drop bits + * 5:2 for backwards compatibility. + */ + data &= ~GENMASK(5, 2); + if (data & DEBUGCTL_RESERVED_BITS) return 1;
--- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -591,7 +591,7 @@ static inline bool is_vnmi_enabled(struc /* svm.c */ #define MSR_INVALID 0xffffffffU
-#define DEBUGCTL_RESERVED_BITS (~(0x3fULL)) +#define DEBUGCTL_RESERVED_BITS (~(DEBUGCTLMSR_BTF | DEBUGCTLMSR_LBR))
extern bool dump_invalid_vmcb;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Sean Christopherson seanjc@google.com
commit d0eac42f5cecce009d315655bee341304fbe075e upstream.
Mark BTF as reserved in DEBUGCTL on AMD, as KVM doesn't actually support BTF, and fully enabling BTF virtualization is non-trivial due to interactions with the emulator, guest_debug, #DB interception, nested SVM, etc.
Don't inject #GP if the guest attempts to set BTF, as there's no way to communicate lack of support to the guest, and instead suppress the flag and treat the WRMSR as (partially) unsupported.
In short, make KVM behave the same on AMD and Intel (VMX already squashes BTF).
Note, due to other bugs in KVM's handling of DEBUGCTL, the only way BTF has "worked" in any capacity is if the guest simultaneously enables LBRs.
Reported-by: Ravi Bangoria ravi.bangoria@amd.com Cc: stable@vger.kernel.org Reviewed-and-tested-by: Ravi Bangoria ravi.bangoria@amd.com Link: https://lore.kernel.org/r/20250227222411.3490595-3-seanjc@google.com Signed-off-by: Sean Christopherson seanjc@google.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kvm/svm/svm.c | 9 +++++++++ arch/x86/kvm/svm/svm.h | 2 +- 2 files changed, 10 insertions(+), 1 deletion(-)
--- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -3179,6 +3179,15 @@ static int svm_set_msr(struct kvm_vcpu * */ data &= ~GENMASK(5, 2);
+ /* + * Suppress BTF as KVM doesn't virtualize BTF, but there's no + * way to communicate lack of support to the guest. + */ + if (data & DEBUGCTLMSR_BTF) { + kvm_pr_unimpl_wrmsr(vcpu, MSR_IA32_DEBUGCTLMSR, data); + data &= ~DEBUGCTLMSR_BTF; + } + if (data & DEBUGCTL_RESERVED_BITS) return 1;
--- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -591,7 +591,7 @@ static inline bool is_vnmi_enabled(struc /* svm.c */ #define MSR_INVALID 0xffffffffU
-#define DEBUGCTL_RESERVED_BITS (~(DEBUGCTLMSR_BTF | DEBUGCTLMSR_LBR)) +#define DEBUGCTL_RESERVED_BITS (~DEBUGCTLMSR_LBR)
extern bool dump_invalid_vmcb;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Sean Christopherson seanjc@google.com
commit fb71c795935652fa20eaf9517ca9547f5af99a76 upstream.
Move KVM's snapshot of DEBUGCTL to kvm_vcpu_arch and take the snapshot in common x86, so that SVM can also use the snapshot.
Opportunistically change the field to a u64. While bits 63:32 are reserved on AMD, not mentioned at all in Intel's SDM, and managed as an "unsigned long" by the kernel, DEBUGCTL is an MSR and therefore a 64-bit value.
Reviewed-by: Xiaoyao Li xiaoyao.li@intel.com Cc: stable@vger.kernel.org Reviewed-and-tested-by: Ravi Bangoria ravi.bangoria@amd.com Link: https://lore.kernel.org/r/20250227222411.3490595-4-seanjc@google.com Signed-off-by: Sean Christopherson seanjc@google.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/include/asm/kvm_host.h | 1 + arch/x86/kvm/vmx/vmx.c | 8 ++------ arch/x86/kvm/vmx/vmx.h | 2 -- arch/x86/kvm/x86.c | 1 + 4 files changed, 4 insertions(+), 8 deletions(-)
--- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -761,6 +761,7 @@ struct kvm_vcpu_arch { u32 pkru; u32 hflags; u64 efer; + u64 host_debugctl; u64 apic_base; struct kvm_lapic *apic; /* kernel irqchip context */ bool load_eoi_exitmap_pending; --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -1515,16 +1515,12 @@ void vmx_vcpu_load_vmcs(struct kvm_vcpu */ void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu) { - struct vcpu_vmx *vmx = to_vmx(vcpu); - if (vcpu->scheduled_out && !kvm_pause_in_guest(vcpu->kvm)) shrink_ple_window(vcpu);
vmx_vcpu_load_vmcs(vcpu, cpu, NULL);
vmx_vcpu_pi_load(vcpu, cpu); - - vmx->host_debugctlmsr = get_debugctlmsr(); }
void vmx_vcpu_put(struct kvm_vcpu *vcpu) @@ -7454,8 +7450,8 @@ fastpath_t vmx_vcpu_run(struct kvm_vcpu }
/* MSR_IA32_DEBUGCTLMSR is zeroed on vmexit. Restore it if needed */ - if (vmx->host_debugctlmsr) - update_debugctlmsr(vmx->host_debugctlmsr); + if (vcpu->arch.host_debugctl) + update_debugctlmsr(vcpu->arch.host_debugctl);
#ifndef CONFIG_X86_64 /* --- a/arch/x86/kvm/vmx/vmx.h +++ b/arch/x86/kvm/vmx/vmx.h @@ -339,8 +339,6 @@ struct vcpu_vmx { /* apic deadline value in host tsc */ u64 hv_deadline_tsc;
- unsigned long host_debugctlmsr; - /* * Only bits masked by msr_ia32_feature_control_valid_bits can be set in * msr_ia32_feature_control. FEAT_CTL_LOCKED is always included --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -4993,6 +4993,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu
/* Save host pkru register if supported */ vcpu->arch.host_pkru = read_pkru(); + vcpu->arch.host_debugctl = get_debugctlmsr();
/* Apply any externally detected TSC adjustments (due to suspend) */ if (unlikely(vcpu->arch.tsc_offset_adjustment)) {
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Sean Christopherson seanjc@google.com
commit 433265870ab3455b418885bff48fa5fd02f7e448 upstream.
Manually load the guest's DEBUGCTL prior to VMRUN (and restore the host's value on #VMEXIT) if it diverges from the host's value and LBR virtualization is disabled, as hardware only context switches DEBUGCTL if LBR virtualization is fully enabled. Running the guest with the host's value has likely been mildly problematic for quite some time, e.g. it will result in undesirable behavior if BTF diverges (with the caveat that KVM now suppresses guest BTF due to lack of support).
But the bug became fatal with the introduction of Bus Lock Trap ("Detect" in kernel parlance) support for AMD (commit 408eb7417a92 ("x86/bus_lock: Add support for AMD")), as a bus lock in the guest will trigger an unexpected #DB.
Note, suppressing the bus lock #DB, i.e. simply resuming the guest without injecting a #DB, is not an option. It wouldn't address the general issue with DEBUGCTL, e.g. for things like BTF, and there are other guest-visible side effects if BusLockTrap is left enabled.
If BusLockTrap is disabled, then DR6.BLD is reserved-to-1; any attempts to clear it by software are ignored. But if BusLockTrap is enabled, software can clear DR6.BLD:
Software enables bus lock trap by setting DebugCtl MSR[BLCKDB] (bit 2) to 1. When bus lock trap is enabled, ... The processor indicates that this #DB was caused by a bus lock by clearing DR6[BLD] (bit 11). DR6[11] previously had been defined to be always 1.
and clearing DR6.BLD is "sticky" in that it's not set (i.e. lowered) by other #DBs:
All other #DB exceptions leave DR6[BLD] unmodified
E.g. leaving BusLockTrap enabled can confuse a legacy guest that writes '0' to reset DR6.
Reported-by: rangemachine@gmail.com Reported-by: whanos@sergal.fun Closes: https://bugzilla.kernel.org/show_bug.cgi?id=219787 Closes: https://lore.kernel.org/all/bug-219787-28872@https.bugzilla.kernel.org%2F Cc: Ravi Bangoria ravi.bangoria@amd.com Cc: stable@vger.kernel.org Reviewed-and-tested-by: Ravi Bangoria ravi.bangoria@amd.com Link: https://lore.kernel.org/r/20250227222411.3490595-5-seanjc@google.com Signed-off-by: Sean Christopherson seanjc@google.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kvm/svm/svm.c | 14 ++++++++++++++ 1 file changed, 14 insertions(+)
--- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -4275,6 +4275,16 @@ static __no_kcsan fastpath_t svm_vcpu_ru clgi(); kvm_load_guest_xsave_state(vcpu);
+ /* + * Hardware only context switches DEBUGCTL if LBR virtualization is + * enabled. Manually load DEBUGCTL if necessary (and restore it after + * VM-Exit), as running with the host's DEBUGCTL can negatively affect + * guest state and can even be fatal, e.g. due to Bus Lock Detect. + */ + if (!(svm->vmcb->control.virt_ext & LBR_CTL_ENABLE_MASK) && + vcpu->arch.host_debugctl != svm->vmcb->save.dbgctl) + update_debugctlmsr(svm->vmcb->save.dbgctl); + kvm_wait_lapic_expire(vcpu);
/* @@ -4302,6 +4312,10 @@ static __no_kcsan fastpath_t svm_vcpu_ru if (unlikely(svm->vmcb->control.exit_code == SVM_EXIT_NMI)) kvm_before_interrupt(vcpu, KVM_HANDLING_NMI);
+ if (!(svm->vmcb->control.virt_ext & LBR_CTL_ENABLE_MASK) && + vcpu->arch.host_debugctl != svm->vmcb->save.dbgctl) + update_debugctlmsr(vcpu->arch.host_debugctl); + kvm_load_host_xsave_state(vcpu); stgi();
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Sean Christopherson seanjc@google.com
commit 189ecdb3e112da703ac0699f4ec76aa78122f911 upstream.
Snapshot the host's DEBUGCTL after disabling IRQs, as perf can toggle debugctl bits from IRQ context, e.g. when enabling/disabling events via smp_call_function_single(). Taking the snapshot (long) before IRQs are disabled could result in KVM effectively clobbering DEBUGCTL due to using a stale snapshot.
Cc: stable@vger.kernel.org Reviewed-and-tested-by: Ravi Bangoria ravi.bangoria@amd.com Link: https://lore.kernel.org/r/20250227222411.3490595-6-seanjc@google.com Signed-off-by: Sean Christopherson seanjc@google.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kvm/x86.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -4993,7 +4993,6 @@ void kvm_arch_vcpu_load(struct kvm_vcpu
/* Save host pkru register if supported */ vcpu->arch.host_pkru = read_pkru(); - vcpu->arch.host_debugctl = get_debugctlmsr();
/* Apply any externally detected TSC adjustments (due to suspend) */ if (unlikely(vcpu->arch.tsc_offset_adjustment)) { @@ -10965,6 +10964,8 @@ static int vcpu_enter_guest(struct kvm_v set_debugreg(0, 7); }
+ vcpu->arch.host_debugctl = get_debugctlmsr(); + guest_timing_enter_irqoff();
for (;;) {
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Xiaoyao Li xiaoyao.li@intel.com
commit f9dc8fb3afc968042bdaf4b6e445a9272071c9f3 upstream.
Fix a goof where KVM sets CPUID.0x80000022.EAX to CPUID.0x80000022.EBX instead of zeroing both when PERFMON_V2 isn't supported by KVM. In practice, barring a buggy CPU (or vCPU model when running nested) only the !enable_pmu case is affected, as KVM always supports PERFMON_V2 if it's available in hardware, i.e. CPUID.0x80000022.EBX will be '0' if PERFMON_V2 is unsupported.
For the !enable_pmu case, the bug is relatively benign as KVM will refuse to enable PMU capabilities, but a VMM that reflects KVM's supported CPUID into the guest could inadvertently induce #GPs in the guest due to advertising support for MSRs that KVM refuses to emulate.
Fixes: 94cdeebd8211 ("KVM: x86/cpuid: Add AMD CPUID ExtPerfMonAndDbg leaf 0x80000022") Signed-off-by: Xiaoyao Li xiaoyao.li@intel.com Link: https://lore.kernel.org/r/20250304082314.472202-3-xiaoyao.li@intel.com [sean: massage shortlog and changelog, tag for stable] Cc: stable@vger.kernel.org Signed-off-by: Sean Christopherson seanjc@google.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kvm/cpuid.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/x86/kvm/cpuid.c +++ b/arch/x86/kvm/cpuid.c @@ -1387,7 +1387,7 @@ static inline int __do_cpuid_func(struct
entry->ecx = entry->edx = 0; if (!enable_pmu || !kvm_cpu_cap_has(X86_FEATURE_PERFMON_V2)) { - entry->eax = entry->ebx; + entry->eax = entry->ebx = 0; break; }
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Qiu-ji Chen chenqiuji666@gmail.com
commit 91d44c1afc61a2fec37a9c7a3485368309391e0b upstream.
Fix a possible UAF problem in driver_override_show() in drivers/cdx/cdx.c.
This function driver_override_show() is part of DEVICE_ATTR_RW, which includes both driver_override_show() and driver_override_store(). These functions can be executed concurrently in sysfs.
The driver_override_store() function uses driver_set_override() to update the driver_override value, and driver_set_override() internally locks the device (device_lock(dev)). If driver_override_show() reads cdx_dev->driver_override without locking, it could potentially access a freed pointer if driver_override_store() frees the string concurrently. This could lead to printing a kernel address, which is a security risk since DEVICE_ATTR can be read by all users.
Additionally, a similar pattern is used in drivers/amba/bus.c, as well as many other bus drivers, where device_lock() is taken in the show function, and it has been working without issues.
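For illustration only, here is a minimal sketch of that locked show-handler pattern; the struct and helper names are hypothetical, not taken from cdx.c or amba/bus.c:

#include <linux/device.h>
#include <linux/sysfs.h>

/* Hypothetical bus device wrapper, standing in for struct cdx_device etc. */
struct foo_device {
	struct device dev;
	const char *driver_override;
};
#define to_foo_device(d) container_of(d, struct foo_device, dev)

static ssize_t driver_override_show(struct device *dev,
				    struct device_attribute *attr, char *buf)
{
	struct foo_device *fdev = to_foo_device(dev);
	ssize_t len;

	/*
	 * driver_set_override() frees/replaces the string while holding
	 * device_lock(), so taking the same lock here guarantees the pointer
	 * cannot be freed while it is being formatted.
	 */
	device_lock(dev);
	len = sysfs_emit(buf, "%s\n", fdev->driver_override);
	device_unlock(dev);

	return len;
}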
This potential bug was detected by our experimental static analysis tool, which analyzes locking APIs and paired functions to identify data races and atomicity violations.
Fixes: 1f86a00c1159 ("bus/fsl-mc: add support for 'driver_override' in the mc-bus") Cc: stable stable@kernel.org Signed-off-by: Qiu-ji Chen chenqiuji666@gmail.com Link: https://lore.kernel.org/r/20250118070833.27201-1-chenqiuji666@gmail.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/cdx/cdx.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
--- a/drivers/cdx/cdx.c +++ b/drivers/cdx/cdx.c @@ -470,8 +470,12 @@ static ssize_t driver_override_show(stru struct device_attribute *attr, char *buf) { struct cdx_device *cdx_dev = to_cdx_device(dev); + ssize_t len;
- return sysfs_emit(buf, "%s\n", cdx_dev->driver_override); + device_lock(dev); + len = sysfs_emit(buf, "%s\n", cdx_dev->driver_override); + device_unlock(dev); + return len; } static DEVICE_ATTR_RW(driver_override);
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alexander Usyskin alexander.usyskin@intel.com
commit a8e8ffcc3afce2ee5fb70162aeaef3f03573ee1e upstream.
Add Panther Lake P device id.
Cc: stable stable@kernel.org Co-developed-by: Tomas Winkler tomasw@gmail.com Signed-off-by: Tomas Winkler tomasw@gmail.com Signed-off-by: Alexander Usyskin alexander.usyskin@intel.com Link: https://lore.kernel.org/r/20250209110550.1582982-1-alexander.usyskin@intel.c... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/misc/mei/hw-me-regs.h | 2 ++ drivers/misc/mei/pci-me.c | 2 ++ 2 files changed, 4 insertions(+)
--- a/drivers/misc/mei/hw-me-regs.h +++ b/drivers/misc/mei/hw-me-regs.h @@ -117,6 +117,8 @@
#define MEI_DEV_ID_LNL_M 0xA870 /* Lunar Lake Point M */
+#define MEI_DEV_ID_PTL_P 0xE470 /* Panther Lake P */ + /* * MEI HW Section */ --- a/drivers/misc/mei/pci-me.c +++ b/drivers/misc/mei/pci-me.c @@ -124,6 +124,8 @@ static const struct pci_device_id mei_me
{MEI_PCI_DEVICE(MEI_DEV_ID_LNL_M, MEI_ME_PCH15_CFG)},
+ {MEI_PCI_DEVICE(MEI_DEV_ID_PTL_P, MEI_ME_PCH15_CFG)}, + /* required last entry */ {0, } };
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hans de Goede hdegoede@redhat.com
commit fdb1ada57cf8b8752cdf54f08709d76d74999544 upstream.
The _CRS ACPI resources table has 2 entries for the host wakeup GPIO, the first one being a regular GpioIo () resource while the second one is a GpioInt () resource for the same pin.
The acpi_gpio_mapping table used by vsc-tp.c maps the first GpioIo () resource to "wakeuphost-gpios", whereas the second GpioInt () entry is mapped to "wakeuphostint-gpios".
Using "wakeuphost" to request the GPIO, as was done until now, means that the gpiolib-acpi code does not know that the GPIO is active-low, as that info is only available in the GpioInt () entry.
Things were still working before due to the following happening:
1. Since the 2 entries point to the same pin they share a struct gpio_desc
2. The SPI core creates the SPI device vsc-tp.c binds to and calls acpi_dev_gpio_irq_get(). This does use the second entry and sets FLAG_ACTIVE_LOW in gpio_desc.flags.
3. vsc_tp_probe() requests the "wakeuphost" GPIO and inherits the active-low flag set by acpi_dev_gpio_irq_get()
But there is a possible scenario where things do not work:
1. - 3. happen as above
4. After requesting the "wakeuphost" GPIO, the "resetfw" GPIO is requested next, but its USB GPIO controller is not available yet, so this call returns -EPROBE_DEFER.
5. The gpio_desc for "wakeuphost" is put() and during this the active-low flag is cleared from gpio_desc.flags.
6. Later on vsc_tp_probe() requests the "wakeuphost" GPIO again, but now it is not marked active-low.
The difference can also be seen in /sys/kernel/debug/gpio, which contains the following line for this GPIO:
gpio-535 ( |wakeuphost ) in hi IRQ ACTIVE LOW
If the second scenario is hit the "ACTIVE LOW" at the end disappears and things do not work.
Fix this by requesting the GPIO through the "wakeuphostint" mapping instead which provides active-low info without relying on acpi_dev_gpio_irq_get() pre-populating this info in the gpio_desc.
Link: https://bugzilla.redhat.com/show_bug.cgi?id=2316918 Signed-off-by: Hans de Goede hdegoede@redhat.com Reviewed-by: Stanislaw Gruszka stanislaw.gruszka@linux.intel.com Tested-by: Sakari Ailus sakari.ailus@linux.intel.com Fixes: 566f5ca97680 ("mei: Add transport driver for IVSC device") Cc: stable stable@kernel.org Link: https://lore.kernel.org/r/20250214212425.84021-1-hdegoede@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/misc/mei/vsc-tp.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/misc/mei/vsc-tp.c +++ b/drivers/misc/mei/vsc-tp.c @@ -504,7 +504,7 @@ static int vsc_tp_probe(struct spi_devic if (ret) return ret;
- tp->wakeuphost = devm_gpiod_get(dev, "wakeuphost", GPIOD_IN); + tp->wakeuphost = devm_gpiod_get(dev, "wakeuphostint", GPIOD_IN); if (IS_ERR(tp->wakeuphost)) return PTR_ERR(tp->wakeuphost);
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Pawel Chmielewski pawel.chmielewski@intel.com
commit b5edccae9f447a92d475267d94c33f4926963eec upstream.
Add support for the Trace Hub in Arrow Lake.
Signed-off-by: Pawel Chmielewski pawel.chmielewski@intel.com Signed-off-by: Alexander Shishkin alexander.shishkin@linux.intel.com Reviewed-by: Andy Shevchenko andriy.shevchenko@linux.intel.com Cc: stable@kernel.org Link: https://lore.kernel.org/r/20250211185017.1759193-4-alexander.shishkin@linux.... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/hwtracing/intel_th/pci.c | 5 +++++ 1 file changed, 5 insertions(+)
--- a/drivers/hwtracing/intel_th/pci.c +++ b/drivers/hwtracing/intel_th/pci.c @@ -330,6 +330,11 @@ static const struct pci_device_id intel_ .driver_data = (kernel_ulong_t)&intel_th_2x, }, { + /* Arrow Lake */ + PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7724), + .driver_data = (kernel_ulong_t)&intel_th_2x, + }, + { /* Alder Lake CPU */ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x466f), .driver_data = (kernel_ulong_t)&intel_th_2x,
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alexander Shishkin alexander.shishkin@linux.intel.com
commit a70034d6c0d5f3cdee40bb00a578e17fd2ebe426 upstream.
Add support for the Trace Hub in Panther Lake-H.
Signed-off-by: Alexander Shishkin alexander.shishkin@linux.intel.com Reviewed-by: Andy Shevchenko andriy.shevchenko@linux.intel.com Cc: stable@kernel.org Link: https://lore.kernel.org/r/20250211185017.1759193-5-alexander.shishkin@linux.... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/hwtracing/intel_th/pci.c | 5 +++++ 1 file changed, 5 insertions(+)
--- a/drivers/hwtracing/intel_th/pci.c +++ b/drivers/hwtracing/intel_th/pci.c @@ -335,6 +335,11 @@ static const struct pci_device_id intel_ .driver_data = (kernel_ulong_t)&intel_th_2x, }, { + /* Panther Lake-H */ + PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xe324), + .driver_data = (kernel_ulong_t)&intel_th_2x, + }, + { /* Alder Lake CPU */ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x466f), .driver_data = (kernel_ulong_t)&intel_th_2x,
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alexander Shishkin alexander.shishkin@linux.intel.com
commit 49114ff05770264ae233f50023fc64a719a9dcf9 upstream.
Add support for the Trace Hub in Panther Lake-P/U.
Signed-off-by: Alexander Shishkin alexander.shishkin@linux.intel.com Reviewed-by: Andy Shevchenko andriy.shevchenko@linux.intel.com Cc: stable@kernel.org Link: https://lore.kernel.org/r/20250211185017.1759193-6-alexander.shishkin@linux.... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/hwtracing/intel_th/pci.c | 5 +++++ 1 file changed, 5 insertions(+)
--- a/drivers/hwtracing/intel_th/pci.c +++ b/drivers/hwtracing/intel_th/pci.c @@ -340,6 +340,11 @@ static const struct pci_device_id intel_ .driver_data = (kernel_ulong_t)&intel_th_2x, }, { + /* Panther Lake-P/U */ + PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xe424), + .driver_data = (kernel_ulong_t)&intel_th_2x, + }, + { /* Alder Lake CPU */ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x466f), .driver_data = (kernel_ulong_t)&intel_th_2x,
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thadeu Lima de Souza Cascardo cascardo@igalia.com
commit 6d991f569c5ef6eaeadf1238df2c36e3975233ad upstream.
When creating sysfs files fails, the allocated minor must be freed so that it can be reused later. That is especially harmful for static minor numbers, since those would always fail to register later on.
Fixes: 6d04d2b554b1 ("misc: misc_minor_alloc to use ida for all dynamic/misc dynamic minors") Cc: stable stable@kernel.org Signed-off-by: Thadeu Lima de Souza Cascardo cascardo@igalia.com Link: https://lore.kernel.org/r/20250123123249.4081674-5-cascardo@igalia.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/char/misc.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/char/misc.c +++ b/drivers/char/misc.c @@ -264,8 +264,8 @@ int misc_register(struct miscdevice *mis device_create_with_groups(&misc_class, misc->parent, dev, misc, misc->groups, "%s", misc->name); if (IS_ERR(misc->this_device)) { + misc_minor_free(misc->minor); if (is_dynamic) { - misc_minor_free(misc->minor); misc->minor = MISC_DYNAMIC_MINOR; } err = PTR_ERR(misc->this_device);
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Luca Ceresoli luca.ceresoli@bootlin.com
commit 78eb41f518f414378643ab022241df2a9dcd008b upstream.
Commit bac3b10b78e5 ("driver core: fw_devlink: Stop trying to optimize cycle detection logic") introduced a new struct device *con_dev and a get_dev_from_fwnode() call to get it, but without adding a corresponding put_device().
Closes: https://lore.kernel.org/all/20241204124826.2e055091@booty/ Fixes: bac3b10b78e5 ("driver core: fw_devlink: Stop trying to optimize cycle detection logic") Cc: stable@vger.kernel.org Reviewed-by: Saravana Kannan saravanak@google.com Signed-off-by: Luca Ceresoli luca.ceresoli@bootlin.com Link: https://lore.kernel.org/r/20250213-fix__fw_devlink_relax_cycles_missing_devi... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/base/core.c | 1 + 1 file changed, 1 insertion(+)
--- a/drivers/base/core.c +++ b/drivers/base/core.c @@ -2079,6 +2079,7 @@ static bool __fw_devlink_relax_cycles(st out: sup_handle->flags &= ~FWNODE_FLAG_VISITED; put_device(sup_dev); + put_device(con_dev); put_device(par_dev); return ret; }
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Visweswara Tanuku quic_vtanuku@quicinc.com
commit dcb0d43ba8eb9517e70b1a0e4b0ae0ab657a0e5a upstream.
If the interrupt is delayed for any reason, slim_do_transfer() returns a timeout error but the transaction ID (TID) is not freed. This results in an invalid memory access inside qcom_slim_ngd_rx_msgq_cb() due to the invalid TID.
Fix the issue by freeing the TID in slim_do_transfer() before returning the timeout error, to avoid the invalid memory access.
Call trace:
 __memcpy_fromio+0x20/0x190
 qcom_slim_ngd_rx_msgq_cb+0x130/0x290 [slim_qcom_ngd_ctrl]
 vchan_complete+0x2a0/0x4a0
 tasklet_action_common+0x274/0x700
 tasklet_action+0x28/0x3c
 _stext+0x188/0x620
 run_ksoftirqd+0x34/0x74
 smpboot_thread_fn+0x1d8/0x464
 kthread+0x178/0x238
 ret_from_fork+0x10/0x20
Code: aa0003e8 91000429 f100044a 3940002b (3800150b)
---[ end trace 0fe00bec2b975c99 ]---
Kernel panic - not syncing: Oops: Fatal exception in interrupt.
Fixes: afbdcc7c384b ("slimbus: Add messaging APIs to slimbus framework") Cc: stable stable@kernel.org Signed-off-by: Visweswara Tanuku quic_vtanuku@quicinc.com Link: https://lore.kernel.org/r/20250124125740.16897-1-quic_vtanuku@quicinc.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/slimbus/messaging.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)
--- a/drivers/slimbus/messaging.c +++ b/drivers/slimbus/messaging.c @@ -148,8 +148,9 @@ int slim_do_transfer(struct slim_control }
ret = ctrl->xfer_msg(ctrl, txn); - - if (!ret && need_tid && !txn->msg->comp) { + if (ret == -ETIMEDOUT) { + slim_free_txn_tid(ctrl, txn); + } else if (!ret && need_tid && !txn->msg->comp) { unsigned long ms = txn->rl + HZ;
time_left = wait_for_completion_timeout(txn->comp,
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Manivannan Sadhasivam manivannan.sadhasivam@linaro.org
commit a321d163de3d8aa38a6449ab2becf4b1581aed96 upstream.
There are multiple places from where the recovery work gets scheduled asynchronously. Also, there are multiple places where the caller waits synchronously for the recovery to be completed. One such place is during the PM shutdown() callback.
If the device is not alive during recovery_work, it will try to reset the device using pci_reset_function(). This function internally will take the device_lock() first before resetting the device. By this time, if the lock has already been acquired, then recovery_work will get stalled while waiting for the lock. And if the lock was already acquired by the caller which waits for the recovery_work to be completed, it will lead to deadlock.
This is what happened on the X1E80100 CRD device when the device died before shutdown() callback. Driver core calls the driver's shutdown() callback while holding the device_lock() leading to deadlock.
And this deadlock scenario can occur on other paths as well, like during the PM suspend() callback, where the driver core would hold the device_lock() before calling driver's suspend() callback. And if the recovery_work was already started, it could lead to deadlock. This is also observed on the X1E80100 CRD.
So to fix both issues, use pci_try_reset_function() in recovery_work. This function first checks for the availability of the device_lock() before trying to reset the device. If the lock is available, it will acquire it and reset the device. Otherwise, it will return -EAGAIN. If that happens, recovery_work will fail with the error message "Recovery failed" as not much could be done.
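To illustrate the lock ordering involved (a simplified sketch, not the actual pci_generic code; the struct and function names below are made up):

#include <linux/pci.h>
#include <linux/workqueue.h>

/* Hypothetical controller wrapper for the sketch. */
struct foo_ctrl {
	struct pci_dev *pdev;
	struct work_struct recovery_work;
};

/*
 * The driver core calls ->shutdown()/->suspend() with device_lock() held on
 * the PCI device and may then wait for recovery_work to finish.  If
 * recovery_work calls pci_reset_function(), which also takes device_lock(),
 * the two contexts deadlock.  pci_try_reset_function() only resets when the
 * lock can be taken and otherwise bails out with an error.
 */
static void foo_recovery_work(struct work_struct *work)
{
	struct foo_ctrl *ctrl = container_of(work, struct foo_ctrl, recovery_work);
	int err;

	/* ... attempt normal recovery first, as the real driver does ... */

	err = pci_try_reset_function(ctrl->pdev);
	if (err)
		dev_err(&ctrl->pdev->dev, "Recovery failed: %d\n", err);
}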
Cc: stable@vger.kernel.org # 5.12 Reported-by: Johan Hovold johan@kernel.org Closes: https://lore.kernel.org/mhi/Z1me8iaK7cwgjL92@hovoldconsulting.com Fixes: 7389337f0a78 ("mhi: pci_generic: Add suspend/resume/recovery procedure") Reviewed-by: Johan Hovold johan+linaro@kernel.org Tested-by: Johan Hovold johan+linaro@kernel.org Analyzed-by: Johan Hovold johan@kernel.org Link: https://lore.kernel.org/mhi/Z2KKjWY2mPen6GPL@hovoldconsulting.com/ Reviewed-by: Loic Poulain loic.poulain@linaro.org Link: https://lore.kernel.org/r/20250108-mhi_recovery_fix-v1-1-a0a00a17da46@linaro... Signed-off-by: Manivannan Sadhasivam manivannan.sadhasivam@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/bus/mhi/host/pci_generic.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)
--- a/drivers/bus/mhi/host/pci_generic.c +++ b/drivers/bus/mhi/host/pci_generic.c @@ -1040,8 +1040,9 @@ static void mhi_pci_recovery_work(struct err_unprepare: mhi_unprepare_after_power_down(mhi_cntrl); err_try_reset: - if (pci_reset_function(pdev)) - dev_err(&pdev->dev, "Recovery failed\n"); + err = pci_try_reset_function(pdev); + if (err) + dev_err(&pdev->dev, "Recovery failed: %d\n", err); }
static void health_check(struct timer_list *t)
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andy Shevchenko andriy.shevchenko@linux.intel.com
commit 038ef0754aae76f79b147b8867f9250e6a976872 upstream.
The dev_id value in the GPIO lookup table must match the device instance name, which in this case is a combination of the driver name and the platform device ID, i.e. "spi_gpio.1". But the table assumed that no platform device ID was defined, which is wrong. Fix the dev_id value accordingly.
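A minimal sketch of why the instance name ends up as "spi_gpio.1" (the GPIO chip label and pin number below are placeholders, not the board's actual wiring):

#include <linux/platform_device.h>
#include <linux/gpio/machine.h>

/* The platform device is registered with an explicit ID ... */
static struct platform_device foo_spi_gpio_dev = {
	.name = "spi_gpio",
	.id   = 1,			/* device instance name becomes "spi_gpio.1" */
};

/* ... so the lookup table must name that exact instance. */
static struct gpiod_lookup_table foo_eeprom_gpiod_table = {
	.dev_id = "spi_gpio.1",		/* "spi_gpio" alone never matches */
	.table = {
		GPIO_LOOKUP("gpio-bank-a", 3, "sck", GPIO_ACTIVE_HIGH),
		{ }
	},
};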
Fixes: 9b00bc7b901f ("spi: spi-gpio: Rewrite to use GPIO descriptors") Cc: stable stable@kernel.org Signed-off-by: Andy Shevchenko andriy.shevchenko@linux.intel.com Link: https://lore.kernel.org/r/20250206220311.1554075-1-andriy.shevchenko@linux.i... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/misc/eeprom/digsy_mtc_eeprom.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/misc/eeprom/digsy_mtc_eeprom.c +++ b/drivers/misc/eeprom/digsy_mtc_eeprom.c @@ -50,7 +50,7 @@ static struct platform_device digsy_mtc_ };
static struct gpiod_lookup_table eeprom_spi_gpiod_table = { - .dev_id = "spi_gpio", + .dev_id = "spi_gpio.1", .table = { GPIO_LOOKUP("gpio@b00", GPIO_EEPROM_CLK, "sck", GPIO_ACTIVE_HIGH),
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Haoyu Li lihaoyu499@gmail.com
commit 819cec1dc47cdeac8f5dd6ba81c1dbee2a68c3bb upstream.
In the "pmcmd_ioctl" function, three memory objects allocated by kmalloc are initialized by "hcall_get_cpu_state", which are then copied to user space. The initializer is indeed implemented in "acrn_hypercall2" (arch/x86/include/asm/acrn.h). There is a risk of information leakage due to uninitialized bytes.
Fixes: 3d679d5aec64 ("virt: acrn: Introduce interfaces to query C-states and P-states allowed by hypervisor") Signed-off-by: Haoyu Li lihaoyu499@gmail.com Cc: stable stable@kernel.org Acked-by: Fei Li fei1.li@intel.com Link: https://lore.kernel.org/r/20250130115811.92424-1-lihaoyu499@gmail.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/virt/acrn/hsm.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
--- a/drivers/virt/acrn/hsm.c +++ b/drivers/virt/acrn/hsm.c @@ -49,7 +49,7 @@ static int pmcmd_ioctl(u64 cmd, void __u switch (cmd & PMCMD_TYPE_MASK) { case ACRN_PMCMD_GET_PX_CNT: case ACRN_PMCMD_GET_CX_CNT: - pm_info = kmalloc(sizeof(u64), GFP_KERNEL); + pm_info = kzalloc(sizeof(u64), GFP_KERNEL); if (!pm_info) return -ENOMEM;
@@ -64,7 +64,7 @@ static int pmcmd_ioctl(u64 cmd, void __u kfree(pm_info); break; case ACRN_PMCMD_GET_PX_DATA: - px_data = kmalloc(sizeof(*px_data), GFP_KERNEL); + px_data = kzalloc(sizeof(*px_data), GFP_KERNEL); if (!px_data) return -ENOMEM;
@@ -79,7 +79,7 @@ static int pmcmd_ioctl(u64 cmd, void __u kfree(px_data); break; case ACRN_PMCMD_GET_CX_DATA: - cx_data = kmalloc(sizeof(*cx_data), GFP_KERNEL); + cx_data = kzalloc(sizeof(*cx_data), GFP_KERNEL); if (!cx_data) return -ENOMEM;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Sam Winchenbach swinchenbach@arka.org
commit cc2c3540d9477a9931fb0fd851fcaeba524a5b35 upstream.
When a weak pull-up is present on the SDO line, regmap_update_bits fails to write both the SOFTRESET and SDOACTIVE bits because it incorrectly reads them as already set.
Since the soft reset disables the SDO line, performing a read-modify-write operation on ADI_SPI_CONFIG_A to enable the SDO line doesn't make sense. This change directly writes to the register instead of using regmap_update_bits.
Fixes: f34fe888ad05 ("iio:filter:admv8818: add support for ADMV8818") Signed-off-by: Sam Winchenbach swinchenbach@arka.org Link: https://patch.msgid.link/SA1P110MB106904C961B0F3FAFFED74C0BCF5A@SA1P110MB106... Cc: Stable@vger.kernel.org Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/iio/filter/admv8818.c | 14 ++++---------- 1 file changed, 4 insertions(+), 10 deletions(-)
--- a/drivers/iio/filter/admv8818.c +++ b/drivers/iio/filter/admv8818.c @@ -574,21 +574,15 @@ static int admv8818_init(struct admv8818 struct spi_device *spi = st->spi; unsigned int chip_id;
- ret = regmap_update_bits(st->regmap, ADMV8818_REG_SPI_CONFIG_A, - ADMV8818_SOFTRESET_N_MSK | - ADMV8818_SOFTRESET_MSK, - FIELD_PREP(ADMV8818_SOFTRESET_N_MSK, 1) | - FIELD_PREP(ADMV8818_SOFTRESET_MSK, 1)); + ret = regmap_write(st->regmap, ADMV8818_REG_SPI_CONFIG_A, + ADMV8818_SOFTRESET_N_MSK | ADMV8818_SOFTRESET_MSK); if (ret) { dev_err(&spi->dev, "ADMV8818 Soft Reset failed.\n"); return ret; }
- ret = regmap_update_bits(st->regmap, ADMV8818_REG_SPI_CONFIG_A, - ADMV8818_SDOACTIVE_N_MSK | - ADMV8818_SDOACTIVE_MSK, - FIELD_PREP(ADMV8818_SDOACTIVE_N_MSK, 1) | - FIELD_PREP(ADMV8818_SDOACTIVE_MSK, 1)); + ret = regmap_write(st->regmap, ADMV8818_REG_SPI_CONFIG_A, + ADMV8818_SDOACTIVE_N_MSK | ADMV8818_SDOACTIVE_MSK); if (ret) { dev_err(&spi->dev, "ADMV8818 SDO Enable failed.\n"); return ret;
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Javier Carrasco javier.carrasco.cruz@gmail.com
commit a96d3e2beca0e51c8444d0a3b6b3ec484c4c5a8f upstream.
The two provided max_scale_nano values must be multiplied by 100 and 10 respectively to achieve nano units. According to the comments:
Max scale for apds0306 is 16.326432 → the fractional part is 0.326432, which is 326432000 in NANO. The current value is 3264320.
Max scale for apds0306-065 is 14.09712 → the fractional part is 0.09712, which is 97120000 in NANO. The current value is 9712000.
Update max_scale_nano initialization to use the right NANO fractional parts.
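As a quick check of the arithmetic (using only the numbers above): 0.326432 * 10^9 = 326432000 and 0.09712 * 10^9 = 97120000, so the previous values 3264320 and 9712000 were short by factors of 100 and 10 respectively.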
Cc: stable@vger.kernel.org Fixes: 620d1e6c7a3f ("iio: light: Add support for APDS9306 Light Sensor") Signed-off-by: Javier Carrasco javier.carrasco.cruz@gmail.com Tested-by: subhajit.ghosh@tweaklogic.com Link: https://patch.msgid.link/20250112-apds9306_nano_vals-v1-1-82fb145d0b16@gmail... Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/iio/light/apds9306.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/iio/light/apds9306.c +++ b/drivers/iio/light/apds9306.c @@ -108,11 +108,11 @@ static const struct part_id_gts_multipli { .part_id = 0xB1, .max_scale_int = 16, - .max_scale_nano = 3264320, + .max_scale_nano = 326432000, }, { .part_id = 0xB3, .max_scale_int = 14, - .max_scale_nano = 9712000, + .max_scale_nano = 97120000, }, };
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Angelo Dureghello adureghello@baylibre.com
commit e17b9f20da7d2bc1f48878ab2230523b2512d965 upstream.
Clear reset status flag, to keep error status register clean after reset (ad3552r manual, rev B table 38).
The reset error flag was left set to 1, so when debugging the registers, the "Error Status Register" showed up dirty (0x01). It is important to clear this bit so that, if any reset event occurs during normal operation, it can be detected.
Fixes: 8f2b54824b28 ("drivers:iio:dac: Add AD3552R driver support") Signed-off-by: Angelo Dureghello adureghello@baylibre.com Link: https://patch.msgid.link/20250125-wip-bl-ad3552r-clear-reset-v2-1-aa3a27f3ff... Cc: Stable@vger.kernel.org Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/iio/dac/ad3552r.c | 6 ++++++ 1 file changed, 6 insertions(+)
--- a/drivers/iio/dac/ad3552r.c +++ b/drivers/iio/dac/ad3552r.c @@ -714,6 +714,12 @@ static int ad3552r_reset(struct ad3552r_ return ret; }
+ /* Clear reset error flag, see ad3552r manual, rev B table 38. */ + ret = ad3552r_write_reg(dac, AD3552R_REG_ADDR_ERR_STATUS, + AD3552R_MASK_RESET_STATUS); + if (ret) + return ret; + return ad3552r_update_reg_field(dac, addr_mask_map[AD3552R_ADDR_ASCENSION][0], addr_mask_map[AD3552R_ADDR_ASCENSION][1],
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Markus Burri markus.burri@mt.com
commit 21d7241faf406e8aee3ce348451cc362d5db6a02 upstream.
Channel configuration doesn't work as expected: FIELD_PREP() needs the bit mask, not the bit number.
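A small sketch of the semantics (the mask below is hypothetical, chosen only to illustrate FIELD_PREP()):

    #include <linux/bitfield.h>

    #define EXAMPLE_CHAN_MASK   GENMASK(15, 8)   /* hypothetical channel-select field */

    /* FIELD_PREP(mask, val) shifts val into the field described by mask, so the
     * value passed must already be the channel bit, not the channel number. */
    conf |= FIELD_PREP(EXAMPLE_CHAN_MASK, BIT(i));   /* selects channel i */
    conf |= FIELD_PREP(EXAMPLE_CHAN_MASK, i);        /* wrong: stores the index value */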
Fixes: 874bbd1219c7 ("iio: adc: ad7192: Use bitfield access macros") Signed-off-by: Markus Burri markus.burri@mt.com Reviewed-by: Nuno Sá nuno.sa@analog.com Link: https://patch.msgid.link/20250124150703.97848-1-markus.burri@mt.com Cc: Stable@vger.kernel.org Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/iio/adc/ad7192.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/iio/adc/ad7192.c +++ b/drivers/iio/adc/ad7192.c @@ -1082,7 +1082,7 @@ static int ad7192_update_scan_mode(struc
conf &= ~AD7192_CONF_CHAN_MASK; for_each_set_bit(i, scan_mask, 8) - conf |= FIELD_PREP(AD7192_CONF_CHAN_MASK, i); + conf |= FIELD_PREP(AD7192_CONF_CHAN_MASK, BIT(i));
ret = ad_sd_write_reg(&st->sd, AD7192_REG_CONF, 3, conf); if (ret < 0)
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nayab Sayed nayabbasha.sayed@microchip.com
commit aa5119c36d19639397d29ef305aa53a5ecd72b27 upstream.
The number of valid bits in the SAMA7G5 ADC channel data register is 16. Hence change the realbits value to 16.
Fixes: 840bf6cb983f ("iio: adc: at91-sama5d2_adc: add support for sama7g5 device") Signed-off-by: Nayab Sayed nayabbasha.sayed@microchip.com Link: https://patch.msgid.link/20250115-fix-sama7g5-adc-realbits-v2-1-58a6e4087584... Cc: Stable@vger.kernel.org Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/iio/adc/at91-sama5d2_adc.c | 68 +++++++++++++++++++++---------------- 1 file changed, 40 insertions(+), 28 deletions(-)
--- a/drivers/iio/adc/at91-sama5d2_adc.c +++ b/drivers/iio/adc/at91-sama5d2_adc.c @@ -329,7 +329,7 @@ static const struct at91_adc_reg_layout #define AT91_HWFIFO_MAX_SIZE_STR "128" #define AT91_HWFIFO_MAX_SIZE 128
-#define AT91_SAMA5D2_CHAN_SINGLE(index, num, addr) \ +#define AT91_SAMA_CHAN_SINGLE(index, num, addr, rbits) \ { \ .type = IIO_VOLTAGE, \ .channel = num, \ @@ -337,7 +337,7 @@ static const struct at91_adc_reg_layout .scan_index = index, \ .scan_type = { \ .sign = 'u', \ - .realbits = 14, \ + .realbits = rbits, \ .storagebits = 16, \ }, \ .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), \ @@ -350,7 +350,13 @@ static const struct at91_adc_reg_layout .indexed = 1, \ }
-#define AT91_SAMA5D2_CHAN_DIFF(index, num, num2, addr) \ +#define AT91_SAMA5D2_CHAN_SINGLE(index, num, addr) \ + AT91_SAMA_CHAN_SINGLE(index, num, addr, 14) + +#define AT91_SAMA7G5_CHAN_SINGLE(index, num, addr) \ + AT91_SAMA_CHAN_SINGLE(index, num, addr, 16) + +#define AT91_SAMA_CHAN_DIFF(index, num, num2, addr, rbits) \ { \ .type = IIO_VOLTAGE, \ .differential = 1, \ @@ -360,7 +366,7 @@ static const struct at91_adc_reg_layout .scan_index = index, \ .scan_type = { \ .sign = 's', \ - .realbits = 14, \ + .realbits = rbits, \ .storagebits = 16, \ }, \ .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), \ @@ -373,6 +379,12 @@ static const struct at91_adc_reg_layout .indexed = 1, \ }
+#define AT91_SAMA5D2_CHAN_DIFF(index, num, num2, addr) \ + AT91_SAMA_CHAN_DIFF(index, num, num2, addr, 14) + +#define AT91_SAMA7G5_CHAN_DIFF(index, num, num2, addr) \ + AT91_SAMA_CHAN_DIFF(index, num, num2, addr, 16) + #define AT91_SAMA5D2_CHAN_TOUCH(num, name, mod) \ { \ .type = IIO_POSITIONRELATIVE, \ @@ -666,30 +678,30 @@ static const struct iio_chan_spec at91_s };
static const struct iio_chan_spec at91_sama7g5_adc_channels[] = { - AT91_SAMA5D2_CHAN_SINGLE(0, 0, 0x60), - AT91_SAMA5D2_CHAN_SINGLE(1, 1, 0x64), - AT91_SAMA5D2_CHAN_SINGLE(2, 2, 0x68), - AT91_SAMA5D2_CHAN_SINGLE(3, 3, 0x6c), - AT91_SAMA5D2_CHAN_SINGLE(4, 4, 0x70), - AT91_SAMA5D2_CHAN_SINGLE(5, 5, 0x74), - AT91_SAMA5D2_CHAN_SINGLE(6, 6, 0x78), - AT91_SAMA5D2_CHAN_SINGLE(7, 7, 0x7c), - AT91_SAMA5D2_CHAN_SINGLE(8, 8, 0x80), - AT91_SAMA5D2_CHAN_SINGLE(9, 9, 0x84), - AT91_SAMA5D2_CHAN_SINGLE(10, 10, 0x88), - AT91_SAMA5D2_CHAN_SINGLE(11, 11, 0x8c), - AT91_SAMA5D2_CHAN_SINGLE(12, 12, 0x90), - AT91_SAMA5D2_CHAN_SINGLE(13, 13, 0x94), - AT91_SAMA5D2_CHAN_SINGLE(14, 14, 0x98), - AT91_SAMA5D2_CHAN_SINGLE(15, 15, 0x9c), - AT91_SAMA5D2_CHAN_DIFF(16, 0, 1, 0x60), - AT91_SAMA5D2_CHAN_DIFF(17, 2, 3, 0x68), - AT91_SAMA5D2_CHAN_DIFF(18, 4, 5, 0x70), - AT91_SAMA5D2_CHAN_DIFF(19, 6, 7, 0x78), - AT91_SAMA5D2_CHAN_DIFF(20, 8, 9, 0x80), - AT91_SAMA5D2_CHAN_DIFF(21, 10, 11, 0x88), - AT91_SAMA5D2_CHAN_DIFF(22, 12, 13, 0x90), - AT91_SAMA5D2_CHAN_DIFF(23, 14, 15, 0x98), + AT91_SAMA7G5_CHAN_SINGLE(0, 0, 0x60), + AT91_SAMA7G5_CHAN_SINGLE(1, 1, 0x64), + AT91_SAMA7G5_CHAN_SINGLE(2, 2, 0x68), + AT91_SAMA7G5_CHAN_SINGLE(3, 3, 0x6c), + AT91_SAMA7G5_CHAN_SINGLE(4, 4, 0x70), + AT91_SAMA7G5_CHAN_SINGLE(5, 5, 0x74), + AT91_SAMA7G5_CHAN_SINGLE(6, 6, 0x78), + AT91_SAMA7G5_CHAN_SINGLE(7, 7, 0x7c), + AT91_SAMA7G5_CHAN_SINGLE(8, 8, 0x80), + AT91_SAMA7G5_CHAN_SINGLE(9, 9, 0x84), + AT91_SAMA7G5_CHAN_SINGLE(10, 10, 0x88), + AT91_SAMA7G5_CHAN_SINGLE(11, 11, 0x8c), + AT91_SAMA7G5_CHAN_SINGLE(12, 12, 0x90), + AT91_SAMA7G5_CHAN_SINGLE(13, 13, 0x94), + AT91_SAMA7G5_CHAN_SINGLE(14, 14, 0x98), + AT91_SAMA7G5_CHAN_SINGLE(15, 15, 0x9c), + AT91_SAMA7G5_CHAN_DIFF(16, 0, 1, 0x60), + AT91_SAMA7G5_CHAN_DIFF(17, 2, 3, 0x68), + AT91_SAMA7G5_CHAN_DIFF(18, 4, 5, 0x70), + AT91_SAMA7G5_CHAN_DIFF(19, 6, 7, 0x78), + AT91_SAMA7G5_CHAN_DIFF(20, 8, 9, 0x80), + AT91_SAMA7G5_CHAN_DIFF(21, 10, 11, 0x88), + AT91_SAMA7G5_CHAN_DIFF(22, 12, 13, 0x90), + AT91_SAMA7G5_CHAN_DIFF(23, 14, 15, 0x98), IIO_CHAN_SOFT_TIMESTAMP(24), AT91_SAMA5D2_CHAN_TEMP(AT91_SAMA7G5_ADC_TEMP_CHANNEL, "temp", 0xdc), };
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ryan Roberts ryan.roberts@arm.com
commit 02410ac72ac3707936c07ede66e94360d0d65319 upstream.
In order to fix a bug, arm64 needs to be told the size of the huge page for which the huge_pte is being cleared in huge_ptep_get_and_clear(). Provide for this by adding an `unsigned long sz` parameter to the function. This follows the same pattern as huge_pte_clear() and set_huge_pte_at().
This commit makes the required interface modifications to the core mm as well as all arches that implement this function (arm64, loongarch, mips, parisc, powerpc, riscv, s390, sparc). The actual arm64 bug will be fixed in a separate commit.
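On the caller side the resulting pattern (as used in the hunks below) is roughly:

    unsigned long sz = huge_page_size(hstate_vma(vma));

    pte = huge_ptep_get_and_clear(vma->vm_mm, addr, ptep, sz);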
Cc: stable@vger.kernel.org Fixes: 66b3923a1a0f ("arm64: hugetlb: add support for PTE contiguous bit") Acked-by: David Hildenbrand david@redhat.com Reviewed-by: Alexandre Ghiti alexghiti@rivosinc.com # riscv Reviewed-by: Christophe Leroy christophe.leroy@csgroup.eu Reviewed-by: Catalin Marinas catalin.marinas@arm.com Reviewed-by: Anshuman Khandual anshuman.khandual@arm.com Signed-off-by: Ryan Roberts ryan.roberts@arm.com Acked-by: Alexander Gordeev agordeev@linux.ibm.com # s390 Link: https://lore.kernel.org/r/20250226120656.2400136-2-ryan.roberts@arm.com Signed-off-by: Will Deacon will@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/arm64/include/asm/hugetlb.h | 4 ++-- arch/arm64/mm/hugetlbpage.c | 8 +++++--- arch/loongarch/include/asm/hugetlb.h | 6 ++++-- arch/mips/include/asm/hugetlb.h | 6 ++++-- arch/parisc/include/asm/hugetlb.h | 2 +- arch/parisc/mm/hugetlbpage.c | 2 +- arch/powerpc/include/asm/hugetlb.h | 6 ++++-- arch/riscv/include/asm/hugetlb.h | 3 ++- arch/riscv/mm/hugetlbpage.c | 2 +- arch/s390/include/asm/hugetlb.h | 17 ++++++++++++----- arch/s390/mm/hugetlbpage.c | 4 ++-- arch/sparc/include/asm/hugetlb.h | 2 +- arch/sparc/mm/hugetlbpage.c | 2 +- include/asm-generic/hugetlb.h | 2 +- include/linux/hugetlb.h | 4 +++- mm/hugetlb.c | 4 ++-- 16 files changed, 46 insertions(+), 28 deletions(-)
--- a/arch/arm64/include/asm/hugetlb.h +++ b/arch/arm64/include/asm/hugetlb.h @@ -34,8 +34,8 @@ extern int huge_ptep_set_access_flags(st unsigned long addr, pte_t *ptep, pte_t pte, int dirty); #define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR -extern pte_t huge_ptep_get_and_clear(struct mm_struct *mm, - unsigned long addr, pte_t *ptep); +extern pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, unsigned long sz); #define __HAVE_ARCH_HUGE_PTEP_SET_WRPROTECT extern void huge_ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr, pte_t *ptep); --- a/arch/arm64/mm/hugetlbpage.c +++ b/arch/arm64/mm/hugetlbpage.c @@ -385,8 +385,8 @@ void huge_pte_clear(struct mm_struct *mm __pte_clear(mm, addr, ptep); }
-pte_t huge_ptep_get_and_clear(struct mm_struct *mm, - unsigned long addr, pte_t *ptep) +pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, unsigned long sz) { int ncontig; size_t pgsize; @@ -538,6 +538,8 @@ bool __init arch_hugetlb_valid_size(unsi
pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep) { + unsigned long psize = huge_page_size(hstate_vma(vma)); + if (alternative_has_cap_unlikely(ARM64_WORKAROUND_2645198)) { /* * Break-before-make (BBM) is required for all user space mappings @@ -547,7 +549,7 @@ pte_t huge_ptep_modify_prot_start(struct if (pte_user_exec(__ptep_get(ptep))) return huge_ptep_clear_flush(vma, addr, ptep); } - return huge_ptep_get_and_clear(vma->vm_mm, addr, ptep); + return huge_ptep_get_and_clear(vma->vm_mm, addr, ptep, psize); }
void huge_ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep, --- a/arch/loongarch/include/asm/hugetlb.h +++ b/arch/loongarch/include/asm/hugetlb.h @@ -41,7 +41,8 @@ static inline void huge_pte_clear(struct
#define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm, - unsigned long addr, pte_t *ptep) + unsigned long addr, pte_t *ptep, + unsigned long sz) { pte_t clear; pte_t pte = ptep_get(ptep); @@ -56,8 +57,9 @@ static inline pte_t huge_ptep_clear_flus unsigned long addr, pte_t *ptep) { pte_t pte; + unsigned long sz = huge_page_size(hstate_vma(vma));
- pte = huge_ptep_get_and_clear(vma->vm_mm, addr, ptep); + pte = huge_ptep_get_and_clear(vma->vm_mm, addr, ptep, sz); flush_tlb_page(vma, addr); return pte; } --- a/arch/mips/include/asm/hugetlb.h +++ b/arch/mips/include/asm/hugetlb.h @@ -32,7 +32,8 @@ static inline int prepare_hugepage_range
#define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm, - unsigned long addr, pte_t *ptep) + unsigned long addr, pte_t *ptep, + unsigned long sz) { pte_t clear; pte_t pte = *ptep; @@ -47,13 +48,14 @@ static inline pte_t huge_ptep_clear_flus unsigned long addr, pte_t *ptep) { pte_t pte; + unsigned long sz = huge_page_size(hstate_vma(vma));
/* * clear the huge pte entry firstly, so that the other smp threads will * not get old pte entry after finishing flush_tlb_page and before * setting new huge pte entry */ - pte = huge_ptep_get_and_clear(vma->vm_mm, addr, ptep); + pte = huge_ptep_get_and_clear(vma->vm_mm, addr, ptep, sz); flush_tlb_page(vma, addr); return pte; } --- a/arch/parisc/include/asm/hugetlb.h +++ b/arch/parisc/include/asm/hugetlb.h @@ -10,7 +10,7 @@ void set_huge_pte_at(struct mm_struct *m
#define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr, - pte_t *ptep); + pte_t *ptep, unsigned long sz);
/* * If the arch doesn't supply something else, assume that hugepage --- a/arch/parisc/mm/hugetlbpage.c +++ b/arch/parisc/mm/hugetlbpage.c @@ -147,7 +147,7 @@ void set_huge_pte_at(struct mm_struct *m
pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr, - pte_t *ptep) + pte_t *ptep, unsigned long sz) { pte_t entry;
--- a/arch/powerpc/include/asm/hugetlb.h +++ b/arch/powerpc/include/asm/hugetlb.h @@ -45,7 +45,8 @@ void set_huge_pte_at(struct mm_struct *m
#define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm, - unsigned long addr, pte_t *ptep) + unsigned long addr, pte_t *ptep, + unsigned long sz) { return __pte(pte_update(mm, addr, ptep, ~0UL, 0, 1)); } @@ -55,8 +56,9 @@ static inline pte_t huge_ptep_clear_flus unsigned long addr, pte_t *ptep) { pte_t pte; + unsigned long sz = huge_page_size(hstate_vma(vma));
- pte = huge_ptep_get_and_clear(vma->vm_mm, addr, ptep); + pte = huge_ptep_get_and_clear(vma->vm_mm, addr, ptep, sz); flush_hugetlb_page(vma, addr); return pte; } --- a/arch/riscv/include/asm/hugetlb.h +++ b/arch/riscv/include/asm/hugetlb.h @@ -28,7 +28,8 @@ void set_huge_pte_at(struct mm_struct *m
#define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR pte_t huge_ptep_get_and_clear(struct mm_struct *mm, - unsigned long addr, pte_t *ptep); + unsigned long addr, pte_t *ptep, + unsigned long sz);
#define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH pte_t huge_ptep_clear_flush(struct vm_area_struct *vma, --- a/arch/riscv/mm/hugetlbpage.c +++ b/arch/riscv/mm/hugetlbpage.c @@ -293,7 +293,7 @@ int huge_ptep_set_access_flags(struct vm
pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr, - pte_t *ptep) + pte_t *ptep, unsigned long sz) { pte_t orig_pte = ptep_get(ptep); int pte_num; --- a/arch/s390/include/asm/hugetlb.h +++ b/arch/s390/include/asm/hugetlb.h @@ -20,8 +20,15 @@ void set_huge_pte_at(struct mm_struct *m void __set_huge_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, pte_t pte); pte_t huge_ptep_get(struct mm_struct *mm, unsigned long addr, pte_t *ptep); -pte_t huge_ptep_get_and_clear(struct mm_struct *mm, - unsigned long addr, pte_t *ptep); +pte_t __huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr, + pte_t *ptep); + +static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm, + unsigned long addr, pte_t *ptep, + unsigned long sz) +{ + return __huge_ptep_get_and_clear(mm, addr, ptep); +}
/* * If the arch doesn't supply something else, assume that hugepage @@ -57,7 +64,7 @@ static inline void huge_pte_clear(struct static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma, unsigned long address, pte_t *ptep) { - return huge_ptep_get_and_clear(vma->vm_mm, address, ptep); + return __huge_ptep_get_and_clear(vma->vm_mm, address, ptep); }
static inline int huge_ptep_set_access_flags(struct vm_area_struct *vma, @@ -66,7 +73,7 @@ static inline int huge_ptep_set_access_f { int changed = !pte_same(huge_ptep_get(vma->vm_mm, addr, ptep), pte); if (changed) { - huge_ptep_get_and_clear(vma->vm_mm, addr, ptep); + __huge_ptep_get_and_clear(vma->vm_mm, addr, ptep); __set_huge_pte_at(vma->vm_mm, addr, ptep, pte); } return changed; @@ -75,7 +82,7 @@ static inline int huge_ptep_set_access_f static inline void huge_ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr, pte_t *ptep) { - pte_t pte = huge_ptep_get_and_clear(mm, addr, ptep); + pte_t pte = __huge_ptep_get_and_clear(mm, addr, ptep); __set_huge_pte_at(mm, addr, ptep, pte_wrprotect(pte)); }
--- a/arch/s390/mm/hugetlbpage.c +++ b/arch/s390/mm/hugetlbpage.c @@ -174,8 +174,8 @@ pte_t huge_ptep_get(struct mm_struct *mm return __rste_to_pte(pte_val(*ptep)); }
-pte_t huge_ptep_get_and_clear(struct mm_struct *mm, - unsigned long addr, pte_t *ptep) +pte_t __huge_ptep_get_and_clear(struct mm_struct *mm, + unsigned long addr, pte_t *ptep) { pte_t pte = huge_ptep_get(mm, addr, ptep); pmd_t *pmdp = (pmd_t *) ptep; --- a/arch/sparc/include/asm/hugetlb.h +++ b/arch/sparc/include/asm/hugetlb.h @@ -20,7 +20,7 @@ void __set_huge_pte_at(struct mm_struct
#define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr, - pte_t *ptep); + pte_t *ptep, unsigned long sz);
#define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma, --- a/arch/sparc/mm/hugetlbpage.c +++ b/arch/sparc/mm/hugetlbpage.c @@ -368,7 +368,7 @@ void set_huge_pte_at(struct mm_struct *m }
pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr, - pte_t *ptep) + pte_t *ptep, unsigned long sz) { unsigned int i, nptes, orig_shift, shift; unsigned long size; --- a/include/asm-generic/hugetlb.h +++ b/include/asm-generic/hugetlb.h @@ -84,7 +84,7 @@ static inline void set_huge_pte_at(struc
#ifndef __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm, - unsigned long addr, pte_t *ptep) + unsigned long addr, pte_t *ptep, unsigned long sz) { return ptep_get_and_clear(mm, addr, ptep); } --- a/include/linux/hugetlb.h +++ b/include/linux/hugetlb.h @@ -1009,7 +1009,9 @@ static inline void hugetlb_count_sub(lon static inline pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep) { - return huge_ptep_get_and_clear(vma->vm_mm, addr, ptep); + unsigned long psize = huge_page_size(hstate_vma(vma)); + + return huge_ptep_get_and_clear(vma->vm_mm, addr, ptep, psize); } #endif
--- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -5395,7 +5395,7 @@ static void move_huge_pte(struct vm_area if (src_ptl != dst_ptl) spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
- pte = huge_ptep_get_and_clear(mm, old_addr, src_pte); + pte = huge_ptep_get_and_clear(mm, old_addr, src_pte, sz);
if (need_clear_uffd_wp && pte_marker_uffd_wp(pte)) huge_pte_clear(mm, new_addr, dst_pte, sz); @@ -5570,7 +5570,7 @@ void __unmap_hugepage_range(struct mmu_g set_vma_resv_flags(vma, HPAGE_RESV_UNMAPPED); }
- pte = huge_ptep_get_and_clear(mm, address, ptep); + pte = huge_ptep_get_and_clear(mm, address, ptep, sz); tlb_remove_huge_tlb_entry(h, tlb, ptep, address); if (huge_pte_dirty(pte)) set_page_dirty(page);
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ryan Roberts ryan.roberts@arm.com
commit 49c87f7677746f3c5bd16c81b23700bb6b88bfd4 upstream.
arm64 supports multiple huge_pte sizes. Some of the sizes are covered by a single pte entry at a particular level (PMD_SIZE, PUD_SIZE), and some are covered by multiple ptes at a particular level (CONT_PTE_SIZE, CONT_PMD_SIZE). So the function has to figure out the size from the huge_pte pointer. This was previously done by walking the pgtable to determine the level and by using the PTE_CONT bit to determine the number of ptes at the level.
But the PTE_CONT bit is only valid when the pte is present. For non-present pte values (e.g. markers, migration entries), the previous implementation was therefore erroneously determining the size. There is at least one known caller in core-mm, move_huge_pte(), which may call huge_ptep_get_and_clear() for a non-present pte. So we must be robust to this case. Additionally the "regular" ptep_get_and_clear() is robust to being called for non-present ptes so it makes sense to follow the behavior.
Fix this by using the new sz parameter which is now provided to the function. Additionally when clearing each pte in a contig range, don't gather the access and dirty bits if the pte is not present.
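For reference, a sketch of the size-to-geometry mapping num_contig_ptes() now relies on (values as in the arm64 hugetlb code; listed here only as an illustration):

    /*
     *  size            pgsize      ncontig
     *  PMD_SIZE        PMD_SIZE    1
     *  PUD_SIZE        PUD_SIZE    1
     *  CONT_PTE_SIZE   PAGE_SIZE   CONT_PTES
     *  CONT_PMD_SIZE   PMD_SIZE    CONT_PMDS
     *  anything else   size        1 (with a WARN_ON for invalid sizes)
     */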
An alternative approach that would not require API changes would be to store the PTE_CONT bit in a spare bit in the swap entry pte for the non-present case. But it felt cleaner to follow other APIs' lead and just pass in the size.
As an aside, PTE_CONT is bit 52, which corresponds to bit 40 in the swap entry offset field (layout of non-present pte). Since hugetlb is never swapped to disk, this field will only be populated for markers (which always set this bit to 0) and for hwpoison swap entries (which set the offset field to a PFN); so it would only ever be 1 for a 52-bit PVA system where memory in that high half was poisoned (I think!). So in practice, this bit would almost always be zero for non-present ptes and we would only clear the first entry if it was actually a contiguous block. That's probably a less severe symptom than if it was always interpreted as 1 and cleared out potentially-present neighboring PTEs.
Cc: stable@vger.kernel.org Fixes: 66b3923a1a0f ("arm64: hugetlb: add support for PTE contiguous bit") Reviewed-by: Catalin Marinas catalin.marinas@arm.com Signed-off-by: Ryan Roberts ryan.roberts@arm.com Link: https://lore.kernel.org/r/20250226120656.2400136-3-ryan.roberts@arm.com Signed-off-by: Will Deacon will@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/arm64/mm/hugetlbpage.c | 51 ++++++++++++++++---------------------------- 1 file changed, 19 insertions(+), 32 deletions(-)
--- a/arch/arm64/mm/hugetlbpage.c +++ b/arch/arm64/mm/hugetlbpage.c @@ -100,20 +100,11 @@ static int find_num_contig(struct mm_str
static inline int num_contig_ptes(unsigned long size, size_t *pgsize) { - int contig_ptes = 0; + int contig_ptes = 1;
*pgsize = size;
switch (size) { -#ifndef __PAGETABLE_PMD_FOLDED - case PUD_SIZE: - if (pud_sect_supported()) - contig_ptes = 1; - break; -#endif - case PMD_SIZE: - contig_ptes = 1; - break; case CONT_PMD_SIZE: *pgsize = PMD_SIZE; contig_ptes = CONT_PMDS; @@ -122,6 +113,8 @@ static inline int num_contig_ptes(unsign *pgsize = PAGE_SIZE; contig_ptes = CONT_PTES; break; + default: + WARN_ON(!__hugetlb_valid_size(size)); }
return contig_ptes; @@ -163,24 +156,23 @@ static pte_t get_clear_contig(struct mm_ unsigned long pgsize, unsigned long ncontig) { - pte_t orig_pte = __ptep_get(ptep); - unsigned long i; + pte_t pte, tmp_pte; + bool present;
- for (i = 0; i < ncontig; i++, addr += pgsize, ptep++) { - pte_t pte = __ptep_get_and_clear(mm, addr, ptep); - - /* - * If HW_AFDBM is enabled, then the HW could turn on - * the dirty or accessed bit for any page in the set, - * so check them all. - */ - if (pte_dirty(pte)) - orig_pte = pte_mkdirty(orig_pte); - - if (pte_young(pte)) - orig_pte = pte_mkyoung(orig_pte); + pte = __ptep_get_and_clear(mm, addr, ptep); + present = pte_present(pte); + while (--ncontig) { + ptep++; + addr += pgsize; + tmp_pte = __ptep_get_and_clear(mm, addr, ptep); + if (present) { + if (pte_dirty(tmp_pte)) + pte = pte_mkdirty(pte); + if (pte_young(tmp_pte)) + pte = pte_mkyoung(pte); + } } - return orig_pte; + return pte; }
static pte_t get_clear_contig_flush(struct mm_struct *mm, @@ -390,13 +382,8 @@ pte_t huge_ptep_get_and_clear(struct mm_ { int ncontig; size_t pgsize; - pte_t orig_pte = __ptep_get(ptep); - - if (!pte_cont(orig_pte)) - return __ptep_get_and_clear(mm, addr, ptep); - - ncontig = find_num_contig(mm, addr, ptep, &pgsize);
+ ncontig = num_contig_ptes(sz, &pgsize); return get_clear_contig(mm, addr, ptep, pgsize, ncontig); }
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Max Kellermann max.kellermann@ionos.com
[ Note this patch doesn't apply to v6.14 as it was obsoleted by commit e2d46f2ec332 ("netfs: Change the read result collector to only use one work item"). ]
At the beginning of the function, folio queues with marks3==0 are skipped, but after that, the `marks3` field is ignored. If one such queue is found, `slot` is set to 64 (because `__ffs(0)==64`), leading to a buffer overflow in the folioq_folio() call. The resulting crash may look like this:
BUG: kernel NULL pointer dereference, address: 0000000000000000 #PF: supervisor read access in kernel mode #PF: error_code(0x0000) - not-present page PGD 0 P4D 0 Oops: Oops: 0000 [#1] SMP PTI CPU: 11 UID: 0 PID: 2909 Comm: kworker/u262:1 Not tainted 6.13.1-cm4all2-vm #415 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014 Workqueue: events_unbound netfs_read_termination_worker RIP: 0010:netfs_pgpriv2_write_to_the_cache+0x15a/0x3f0 Code: 48 85 c0 48 89 44 24 08 0f 84 24 01 00 00 48 8b 80 40 01 00 00 48 8b 7c 24 08 f3 48 0f bc c0 89 44 24 18 89 c0 48 8b 74 c7 08 <48> 8b 06 48 c7 04 24 00 10 00 00 a8 40 74 10 0f b6 4e 40 b8 00 10 RSP: 0018:ffffbbc440effe18 EFLAGS: 00010203 RAX: 0000000000000040 RBX: ffff96f8fc034000 RCX: 0000000000000000 RDX: 0000000000000040 RSI: 0000000000000000 RDI: ffff96f8fc036400 RBP: 0000000000001000 R08: ffff96f9132bb400 R09: 0000000000001000 R10: ffff96f8c1263c80 R11: 0000000000000003 R12: 0000000000001000 R13: ffff96f8fb75ade8 R14: fffffaaf5ca90000 R15: ffff96f8fb75ad00 FS: 0000000000000000(0000) GS:ffff9703cf0c0000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000000 CR3: 000000010c9ca003 CR4: 00000000001706b0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: <TASK> ? __die+0x1f/0x60 ? page_fault_oops+0x158/0x450 ? search_extable+0x22/0x30 ? netfs_pgpriv2_write_to_the_cache+0x15a/0x3f0 ? search_module_extables+0xe/0x40 ? exc_page_fault+0x62/0x120 ? asm_exc_page_fault+0x22/0x30 ? netfs_pgpriv2_write_to_the_cache+0x15a/0x3f0 ? netfs_pgpriv2_write_to_the_cache+0xf6/0x3f0 netfs_read_termination_worker+0x1f/0x60 process_one_work+0x138/0x2d0 worker_thread+0x2a5/0x3b0 ? __pfx_worker_thread+0x10/0x10 kthread+0xba/0xe0 ? __pfx_kthread+0x10/0x10 ret_from_fork+0x30/0x50 ? __pfx_kthread+0x10/0x10 ret_from_fork_asm+0x1a/0x30 </TASK>
Fixes: ee4cdf7ba857 ("netfs: Speed up buffered reading") Cc: stable@vger.kernel.org Signed-off-by: Max Kellermann max.kellermann@ionos.com Signed-off-by: David Howells dhowells@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/netfs/read_pgpriv2.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)
--- a/fs/netfs/read_pgpriv2.c +++ b/fs/netfs/read_pgpriv2.c @@ -181,16 +181,17 @@ void netfs_pgpriv2_write_to_the_cache(st break;
folioq_unmark3(folioq, slot); - if (!folioq->marks3) { + while (!folioq->marks3) { folioq = folioq->next; if (!folioq) - break; + goto end_of_queue; }
slot = __ffs(folioq->marks3); folio = folioq_folio(folioq, slot); }
+end_of_queue: netfs_issue_write(wreq, &wreq->io_streams[1]); smp_wmb(); /* Write lists before ALL_QUEUED. */ set_bit(NETFS_RREQ_ALL_QUEUED, &wreq->flags);
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Max Kellermann max.kellermann@ionos.com
[ Just like the other two netfs patches I sent yesterday, this one doesn't apply to v6.14 as it was obsoleted by commit e2d46f2ec332 ("netfs: Change the read result collector to only use one work item"). ]
When checking whether the edges of adjacent subrequests touch, the `prev` variable is dereferenced, but it might not have been initialized. This causes crashes like this one:
BUG: unable to handle page fault for address: 0000000181343843 #PF: supervisor read access in kernel mode #PF: error_code(0x0000) - not-present page PGD 8000001c66db0067 P4D 8000001c66db0067 PUD 0 Oops: Oops: 0000 [#1] SMP PTI CPU: 1 UID: 33333 PID: 24424 Comm: php-cgi8.2 Kdump: loaded Not tainted 6.13.2-cm4all0-hp+ #427 Hardware name: HP ProLiant DL380 Gen9/ProLiant DL380 Gen9, BIOS P89 11/23/2021 RIP: 0010:netfs_consume_read_data.isra.0+0x5ef/0xb00 Code: fe ff ff 48 8b 83 88 00 00 00 48 8b 4c 24 30 4c 8b 43 78 48 85 c0 48 8d 51 70 75 20 48 8b 73 30 48 39 d6 74 17 48 8b 7c 24 40 <48> 8b 4f 78 48 03 4f 68 48 39 4b 68 0f 84 ab 02 00 00 49 29 c0 48 RSP: 0000:ffffc90037adbd00 EFLAGS: 00010283 RAX: 0000000000000000 RBX: ffff88811bda0600 RCX: ffff888620e7b980 RDX: ffff888620e7b9f0 RSI: ffff88811bda0428 RDI: 00000001813437cb RBP: 0000000000000000 R08: 0000000000004000 R09: 0000000000000000 R10: ffffffff82e070c0 R11: 0000000007ffffff R12: 0000000000004000 R13: ffff888620e7bb68 R14: 0000000000008000 R15: ffff888620e7bb68 FS: 00007ff2e0e7ddc0(0000) GS:ffff88981f840000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000181343843 CR3: 0000001bc10ba006 CR4: 00000000001706f0 Call Trace: <TASK> ? __die+0x1f/0x60 ? page_fault_oops+0x15c/0x450 ? search_extable+0x22/0x30 ? netfs_consume_read_data.isra.0+0x5ef/0xb00 ? search_module_extables+0xe/0x40 ? exc_page_fault+0x5e/0x100 ? asm_exc_page_fault+0x22/0x30 ? netfs_consume_read_data.isra.0+0x5ef/0xb00 ? intel_iommu_unmap_pages+0xaa/0x190 ? __pfx_cachefiles_read_complete+0x10/0x10 netfs_read_subreq_terminated+0x24f/0x390 cachefiles_read_complete+0x48/0xf0 iomap_dio_bio_end_io+0x125/0x160 blk_update_request+0xea/0x3e0 scsi_end_request+0x27/0x190 scsi_io_completion+0x43/0x6c0 blk_complete_reqs+0x40/0x50 handle_softirqs+0xd1/0x280 irq_exit_rcu+0x91/0xb0 common_interrupt+0x3b/0xa0 asm_common_interrupt+0x22/0x40 RIP: 0033:0x55fe8470d2ab Code: 00 00 3c 7e 74 3b 3c b6 0f 84 dd 03 00 00 3c 1e 74 2f 83 c1 01 48 83 c2 38 48 83 c7 30 44 39 d1 74 3e 48 63 42 08 85 c0 79 a3 <49> 8b 46 48 8b 04 38 f6 c4 04 75 0b 0f b6 42 30 83 e0 0c 3c 04 75 RSP: 002b:00007ffca5ef2720 EFLAGS: 00000216 RAX: 0000000000000023 RBX: 0000000000000008 RCX: 000000000000001b RDX: 00007ff2e0cdb6f8 RSI: 0000000000000006 RDI: 0000000000000510 RBP: 00007ffca5ef27a0 R08: 00007ffca5ef2720 R09: 0000000000000001 R10: 000000000000001e R11: 00007ff2e0c10d08 R12: 0000000000000001 R13: 0000000000000120 R14: 00007ff2e0cb1ed0 R15: 00000000000000b0 </TASK>
Fixes: ee4cdf7ba857 ("netfs: Speed up buffered reading") Cc: stable@vger.kernel.org Signed-off-by: Max Kellermann max.kellermann@ionos.com Signed-off-by: David Howells dhowells@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/netfs/read_collect.c | 21 +++++++++++---------- 1 file changed, 11 insertions(+), 10 deletions(-)
--- a/fs/netfs/read_collect.c +++ b/fs/netfs/read_collect.c @@ -258,17 +258,18 @@ donation_changed: */ if (!subreq->consumed && !prev_donated && - !list_is_first(&subreq->rreq_link, &rreq->subrequests) && - subreq->start == prev->start + prev->len) { + !list_is_first(&subreq->rreq_link, &rreq->subrequests)) { prev = list_prev_entry(subreq, rreq_link); - WRITE_ONCE(prev->next_donated, prev->next_donated + subreq->len); - subreq->start += subreq->len; - subreq->len = 0; - subreq->transferred = 0; - trace_netfs_donate(rreq, subreq, prev, subreq->len, - netfs_trace_donate_to_prev); - trace_netfs_sreq(subreq, netfs_sreq_trace_donate_to_prev); - goto remove_subreq_locked; + if (subreq->start == prev->start + prev->len) { + WRITE_ONCE(prev->next_donated, prev->next_donated + subreq->len); + subreq->start += subreq->len; + subreq->len = 0; + subreq->transferred = 0; + trace_netfs_donate(rreq, subreq, prev, subreq->len, + netfs_trace_donate_to_prev); + trace_netfs_sreq(subreq, netfs_sreq_trace_donate_to_prev); + goto remove_subreq_locked; + } }
/* If we can't donate down the chain, donate up the chain instead. */
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Arnd Bergmann arnd@arndb.de
[ Upstream commit 02e9a22ceef0227175e391902d8760425fa072c6 ]
The headercheck tries to call clang with a mix of compiler arguments that don't include the target architecture. When building e.g. x86 headers on arm64, this produces a warning like
clang: warning: unknown platform, assuming -mfloat-abi=soft
Add in the KBUILD_CPPFLAGS, which contain the target, in order to make it build properly.
See also 1b71c2fb04e7 ("kbuild: userprogs: fix bitsize and target detection on clang").
Reviewed-by: Nathan Chancellor nathan@kernel.org Fixes: feb843a469fb ("kbuild: add $(CLANG_FLAGS) to KBUILD_CPPFLAGS") Signed-off-by: Arnd Bergmann arnd@arndb.de Signed-off-by: Sasha Levin sashal@kernel.org --- usr/include/Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/usr/include/Makefile b/usr/include/Makefile index 771e32872b2ab..58173cfe5ff17 100644 --- a/usr/include/Makefile +++ b/usr/include/Makefile @@ -10,7 +10,7 @@ UAPI_CFLAGS := -std=c90 -Wall -Werror=implicit-function-declaration
# In theory, we do not care -m32 or -m64 for header compile tests. # It is here just because CONFIG_CC_CAN_LINK is tested with -m32 or -m64. -UAPI_CFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CFLAGS)) +UAPI_CFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS))
# USERCFLAGS might contain sysroot location for CC. UAPI_CFLAGS += $(USERCFLAGS)
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Arnd Bergmann arnd@arndb.de
commit d0bbe332669c5db32c8c92bc967f8e7f8d460ddf upstream.
The alternative path leads to a build error after a recent change:
sound/pci/hda/patch_realtek.c: In function 'alc233_fixup_lenovo_low_en_micmute_led': include/linux/stddef.h:9:14: error: called object is not a function or function pointer 9 | #define NULL ((void *)0) | ^ sound/pci/hda/patch_realtek.c:5041:49: note: in expansion of macro 'NULL' 5041 | #define alc233_fixup_lenovo_line2_mic_hotkey NULL | ^~~~ sound/pci/hda/patch_realtek.c:5063:9: note: in expansion of macro 'alc233_fixup_lenovo_line2_mic_hotkey' 5063 | alc233_fixup_lenovo_line2_mic_hotkey(codec, fix, action);
Using IS_REACHABLE() is somewhat questionable here anyway since it leads to the input code not working when the HDA driver is builtin but input is in a loadable module. Replace this with a hard compile-time dependency on CONFIG_INPUT. In practice this won't change much other than solving the compiler error, because it is rare to require sound output but no input support.
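For context, a rough description of the kconfig helper involved (paraphrasing include/linux/kconfig.h):

    /*
     * IS_REACHABLE(CONFIG_INPUT) is true when INPUT is built-in, or when INPUT
     * is a module and the referencing code is itself built as a module.  With
     * CONFIG_SND_HDA_CODEC_REALTEK=y and CONFIG_INPUT=m it evaluates to false,
     * so the hotkey handlers were compiled out even though input support is
     * available at runtime.
     */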
Fixes: f603b159231b ("ALSA: hda/realtek - add supported Mic Mute LED for Lenovo platform") Signed-off-by: Arnd Bergmann arnd@arndb.de Link: https://patch.msgid.link/20250304142620.582191-1-arnd@kernel.org Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- sound/pci/hda/Kconfig | 1 + sound/pci/hda/patch_realtek.c | 5 ----- 2 files changed, 1 insertion(+), 5 deletions(-)
--- a/sound/pci/hda/Kconfig +++ b/sound/pci/hda/Kconfig @@ -208,6 +208,7 @@ comment "Set to Y if you want auto-loadi
config SND_HDA_CODEC_REALTEK tristate "Build Realtek HD-audio codec support" + depends on INPUT select SND_HDA_GENERIC select SND_HDA_GENERIC_LEDS select SND_HDA_SCODEC_COMPONENT --- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -5002,7 +5002,6 @@ static void alc298_fixup_samsung_amp_v2_ alc298_samsung_v2_init_amps(codec, 4); }
-#if IS_REACHABLE(CONFIG_INPUT) static void gpio2_mic_hotkey_event(struct hda_codec *codec, struct hda_jack_callback *event) { @@ -5111,10 +5110,6 @@ static void alc233_fixup_lenovo_line2_mi spec->kb_dev = NULL; } } -#else /* INPUT */ -#define alc280_fixup_hp_gpio2_mic_hotkey NULL -#define alc233_fixup_lenovo_line2_mic_hotkey NULL -#endif /* INPUT */
static void alc269_fixup_hp_line1_mic1_led(struct hda_codec *codec, const struct hda_fixup *fix, int action)
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Maurizio Lombardi mlombard@redhat.com
commit afb41b08c44e5386f2f52fa859010ac4afd2b66f upstream.
In H2CTermReq, a FES with value 0x05 means "R2T Limit Exceeded"; but in C2HTermReq the same value has a different meaning (Data Transfer Limit Exceeded).
Fixes: 84e009042d0f ("nvme-tcp: add basic support for the C2HTermReq PDU") Signed-off-by: Maurizio Lombardi mlombard@redhat.com Reviewed-by: Sagi Grimberg sagi@grimberg.me Signed-off-by: Keith Busch kbusch@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/nvme/host/tcp.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/nvme/host/tcp.c +++ b/drivers/nvme/host/tcp.c @@ -788,7 +788,7 @@ static void nvme_tcp_handle_c2h_term(str [NVME_TCP_FES_PDU_SEQ_ERR] = "PDU Sequence Error", [NVME_TCP_FES_HDR_DIGEST_ERR] = "Header Digest Error", [NVME_TCP_FES_DATA_OUT_OF_RANGE] = "Data Transfer Out Of Range", - [NVME_TCP_FES_R2T_LIMIT_EXCEEDED] = "R2T Limit Exceeded", + [NVME_TCP_FES_DATA_LIMIT_EXCEEDED] = "Data Transfer Limit Exceeded", [NVME_TCP_FES_UNSUPPORTED_PARAM] = "Unsupported Parameter", };
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Miguel Ojeda ojeda@kernel.org
commit b160dc46dd9af4001c802cc9c7d68b6ba58d27c4 upstream.
This list started as a "when to prefer `expect`" list, but at some point during writing I changed it to a "prefer `expect` unless..." one. However, the first bullet remained, which does not make sense anymore.
Thus remove it. In addition, fix nearby typo.
Fixes: 04866494e936 ("Documentation: rust: discuss `#[expect(...)]` in the guidelines") Reviewed-by: Alice Ryhl aliceryhl@google.com Reviewed-by: Andreas Hindborg a.hindborg@kernel.org Link: https://lore.kernel.org/r/20241117133127.473937-1-ojeda@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- Documentation/rust/coding-guidelines.rst | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-)
--- a/Documentation/rust/coding-guidelines.rst +++ b/Documentation/rust/coding-guidelines.rst @@ -296,9 +296,7 @@ may happen in several situations, e.g.: It also increases the visibility of the remaining ``allow``\ s and reduces the chance of misapplying one.
-Thus prefer ``except`` over ``allow`` unless: - -- The lint attribute is intended to be temporary, e.g. while developing. +Thus prefer ``expect`` over ``allow`` unless:
- Conditional compilation triggers the warning in some cases but not others.
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Greg Kroah-Hartman gregkh@linuxfoundation.org
This reverts commit 48fe216d7db6b651972c1c1d8e3180cd699971b0 which is commit 87ecfdbc699cc95fac73291b52650283ddcf929d upstream.
It should not have been applied.
Link: https://lore.kernel.org/r/CABgObfb5U9zwTQBPkPB=mKu-vMrRspPCm4wfxoQpB+SyAnb5W... Reported-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/powerpc/kvm/e500_mmu_host.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/arch/powerpc/kvm/e500_mmu_host.c +++ b/arch/powerpc/kvm/e500_mmu_host.c @@ -479,6 +479,7 @@ static inline int kvmppc_e500_shadow_map if (pte_present(pte)) { wimg = (pte_val(pte) >> PTE_WIMGE_SHIFT) & MAS2_WIMGE_MASK; + local_irq_restore(flags); } else { local_irq_restore(flags); pr_err_ratelimited("%s: pte not present: gfn %lx,pfn %lx\n", @@ -487,9 +488,8 @@ static inline int kvmppc_e500_shadow_map goto out; } } - local_irq_restore(flags); - writable = kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg); + kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize, ref, gvaddr, stlbe);
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Greg Kroah-Hartman gregkh@linuxfoundation.org
This reverts commit 833f69be62ac366b5c23b4a6434389e470dd5c7f which is commit 419cfb983ca93e75e905794521afefcfa07988bb upstream.
It should not have been applied.
Link: https://lore.kernel.org/r/CABgObfb5U9zwTQBPkPB=mKu-vMrRspPCm4wfxoQpB+SyAnb5W... Reported-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/powerpc/kvm/e500_mmu_host.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-)
--- a/arch/powerpc/kvm/e500_mmu_host.c +++ b/arch/powerpc/kvm/e500_mmu_host.c @@ -322,7 +322,6 @@ static inline int kvmppc_e500_shadow_map { struct kvm_memory_slot *slot; unsigned long pfn = 0; /* silence GCC warning */ - struct page *page = NULL; unsigned long hva; int pfnmap = 0; int tsize = BOOK3E_PAGESZ_4K; @@ -444,7 +443,7 @@ static inline int kvmppc_e500_shadow_map
if (likely(!pfnmap)) { tsize_pages = 1UL << (tsize + 10 - PAGE_SHIFT); - pfn = __kvm_faultin_pfn(slot, gfn, FOLL_WRITE, NULL, &page); + pfn = gfn_to_pfn_memslot(slot, gfn); if (is_error_noslot_pfn(pfn)) { if (printk_ratelimit()) pr_err("%s: real page not found for gfn %lx\n", @@ -489,6 +488,8 @@ static inline int kvmppc_e500_shadow_map } } writable = kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg); + if (writable) + kvm_set_pfn_dirty(pfn);
kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize, ref, gvaddr, stlbe); @@ -497,7 +498,8 @@ static inline int kvmppc_e500_shadow_map kvmppc_mmu_flush_icache(pfn);
out: - kvm_release_faultin_page(kvm, page, !!ret, writable); + /* Drop refcount on page, so that mmu notifiers can clear it */ + kvm_release_pfn_clean(pfn); spin_unlock(&kvm->mmu_lock); return ret; }
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Greg Kroah-Hartman gregkh@linuxfoundation.org
This reverts commit f2623aec7fdc2675667042c85f87502c9139c098 which is commit 84cf78dcd9d65c45ab73998d4ad50f433d53fb93 upstream.
It should not have been applied.
Link: https://lore.kernel.org/r/CABgObfb5U9zwTQBPkPB=mKu-vMrRspPCm4wfxoQpB+SyAnb5W... Reported-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/powerpc/kvm/e500_mmu_host.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
--- a/arch/powerpc/kvm/e500_mmu_host.c +++ b/arch/powerpc/kvm/e500_mmu_host.c @@ -498,9 +498,11 @@ static inline int kvmppc_e500_shadow_map kvmppc_mmu_flush_icache(pfn);
out: + spin_unlock(&kvm->mmu_lock); + /* Drop refcount on page, so that mmu notifiers can clear it */ kvm_release_pfn_clean(pfn); - spin_unlock(&kvm->mmu_lock); + return ret; }
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Greg Kroah-Hartman gregkh@linuxfoundation.org
This reverts commit dec857329fb9a66a5bce4f9db14c97ef64725a32 which is commit c9be85dabb376299504e0d391d15662c0edf8273 upstream.
It should not have been applied.
Link: https://lore.kernel.org/r/CABgObfb5U9zwTQBPkPB=mKu-vMrRspPCm4wfxoQpB+SyAnb5W... Reported-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/powerpc/kvm/e500_mmu_host.c | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-)
--- a/arch/powerpc/kvm/e500_mmu_host.c +++ b/arch/powerpc/kvm/e500_mmu_host.c @@ -242,7 +242,7 @@ static inline int tlbe_is_writable(struc return tlbe->mas7_3 & (MAS3_SW|MAS3_UW); }
-static inline bool kvmppc_e500_ref_setup(struct tlbe_ref *ref, +static inline void kvmppc_e500_ref_setup(struct tlbe_ref *ref, struct kvm_book3e_206_tlb_entry *gtlbe, kvm_pfn_t pfn, unsigned int wimg) { @@ -252,7 +252,11 @@ static inline bool kvmppc_e500_ref_setup /* Use guest supplied MAS2_G and MAS2_E */ ref->flags |= (gtlbe->mas2 & MAS2_ATTRIB_MASK) | wimg;
- return tlbe_is_writable(gtlbe); + /* Mark the page accessed */ + kvm_set_pfn_accessed(pfn); + + if (tlbe_is_writable(gtlbe)) + kvm_set_pfn_dirty(pfn); }
static inline void kvmppc_e500_ref_release(struct tlbe_ref *ref) @@ -333,7 +337,6 @@ static inline int kvmppc_e500_shadow_map unsigned int wimg = 0; pgd_t *pgdir; unsigned long flags; - bool writable = false;
/* used to check for invalidations in progress */ mmu_seq = kvm->mmu_invalidate_seq; @@ -487,9 +490,7 @@ static inline int kvmppc_e500_shadow_map goto out; } } - writable = kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg); - if (writable) - kvm_set_pfn_dirty(pfn); + kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg);
kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize, ref, gvaddr, stlbe);
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Paolo Bonzini pbonzini@redhat.com
commit 87ecfdbc699cc95fac73291b52650283ddcf929d upstream.
If find_linux_pte fails, IRQs will not be restored. This is unlikely to happen in practice since it would have been reported as hanging hosts, but it should of course be fixed anyway.
Cc: stable@vger.kernel.org Reported-by: Sean Christopherson seanjc@google.com Signed-off-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/powerpc/kvm/e500_mmu_host.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/arch/powerpc/kvm/e500_mmu_host.c +++ b/arch/powerpc/kvm/e500_mmu_host.c @@ -481,7 +481,6 @@ static inline int kvmppc_e500_shadow_map if (pte_present(pte)) { wimg = (pte_val(pte) >> PTE_WIMGE_SHIFT) & MAS2_WIMGE_MASK; - local_irq_restore(flags); } else { local_irq_restore(flags); pr_err_ratelimited("%s: pte not present: gfn %lx,pfn %lx\n", @@ -490,8 +489,9 @@ static inline int kvmppc_e500_shadow_map goto out; } } - kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg); + local_irq_restore(flags);
+ kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg); kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize, ref, gvaddr, stlbe);
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jiri Olsa jolsa@kernel.org
commit b583ef82b671c9a752fbe3e95bd4c1c51eab764d upstream.
Max Makarov reported kernel panic [1] in perf user callchain code.
The reason for that is the race between uprobe_free_utask and the bpf profiler code doing the perf user stack unwind; it is triggered within the uprobe_free_utask function:
- after current->utask is freed and
- before current->utask is set to NULL
general protection fault, probably for non-canonical address 0x9e759c37ee555c76: 0000 [#1] SMP PTI RIP: 0010:is_uprobe_at_func_entry+0x28/0x80 ... ? die_addr+0x36/0x90 ? exc_general_protection+0x217/0x420 ? asm_exc_general_protection+0x26/0x30 ? is_uprobe_at_func_entry+0x28/0x80 perf_callchain_user+0x20a/0x360 get_perf_callchain+0x147/0x1d0 bpf_get_stackid+0x60/0x90 bpf_prog_9aac297fb833e2f5_do_perf_event+0x434/0x53b ? __smp_call_single_queue+0xad/0x120 bpf_overflow_handler+0x75/0x110 ... asm_sysvec_apic_timer_interrupt+0x1a/0x20 RIP: 0010:__kmem_cache_free+0x1cb/0x350 ... ? uprobe_free_utask+0x62/0x80 ? acct_collect+0x4c/0x220 uprobe_free_utask+0x62/0x80 mm_release+0x12/0xb0 do_exit+0x26b/0xaa0 __x64_sys_exit+0x1b/0x20 do_syscall_64+0x5a/0x80
It can be easily reproduced by running following commands in separate terminals:
# while :; do bpftrace -e 'uprobe:/bin/ls:_start { printf("hit\n"); }' -c ls; done # bpftrace -e 'profile:hz:100000 { @[ustack()] = count(); }'
Fix this by making sure the current->utask pointer is set to NULL before we start to release the utask object.
[1] https://github.com/grafana/pyroscope/issues/3673
Fixes: cfa7f3d2c526 ("perf,x86: avoid missing caller address in stack traces captured in uprobe") Reported-by: Max Makarov maxpain@linux.com Signed-off-by: Jiri Olsa jolsa@kernel.org Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Acked-by: Oleg Nesterov oleg@redhat.com Acked-by: Andrii Nakryiko andrii@kernel.org Link: https://lore.kernel.org/r/20250109141440.2692173-1-jolsa@kernel.org [Christian Simon: Rebased for 6.12.y, due to mainline change https://lore.kernel.org/all/20240929144239.GA9475@redhat.com/] Signed-off-by: Christian Simon simon@swine.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- kernel/events/uprobes.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/kernel/events/uprobes.c +++ b/kernel/events/uprobes.c @@ -1775,6 +1775,7 @@ void uprobe_free_utask(struct task_struc if (!utask) return;
+ t->utask = NULL; if (utask->active_uprobe) put_uprobe(utask->active_uprobe);
@@ -1784,7 +1785,6 @@ void uprobe_free_utask(struct task_struc
xol_free_insn_slot(t); kfree(utask); - t->utask = NULL; }
/*
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kumar Kartikeya Dwivedi memxor@gmail.com
commit 0e2fb011a0ba8e2258ce776fdf89fbd589c2a3a6 upstream.
Availability of the gettid definition across glibc versions supported by BPF selftests is not certain. Currently, all users in the tree open-code syscall to gettid. Convert them to a common macro definition.
Reviewed-by: Jiri Olsa jolsa@kernel.org Signed-off-by: Kumar Kartikeya Dwivedi memxor@gmail.com Link: https://lore.kernel.org/r/20241104171959.2938862-3-memxor@gmail.com Signed-off-by: Alexei Starovoitov ast@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Reported-by: Colm Harrington colm.harrington@oracle.com Signed-off-by: Alan Maguire alan.maguire@oracle.com --- tools/testing/selftests/bpf/benchs/bench_trigger.c | 3 ++- tools/testing/selftests/bpf/bpf_util.h | 9 +++++++++ tools/testing/selftests/bpf/map_tests/task_storage_map.c | 3 ++- tools/testing/selftests/bpf/prog_tests/bpf_cookie.c | 2 +- tools/testing/selftests/bpf/prog_tests/bpf_iter.c | 6 +++--- tools/testing/selftests/bpf/prog_tests/cgrp_local_storage.c | 10 +++++----- tools/testing/selftests/bpf/prog_tests/core_reloc.c | 2 +- tools/testing/selftests/bpf/prog_tests/linked_funcs.c | 2 +- tools/testing/selftests/bpf/prog_tests/ns_current_pid_tgid.c | 2 +- tools/testing/selftests/bpf/prog_tests/rcu_read_lock.c | 4 ++-- tools/testing/selftests/bpf/prog_tests/task_local_storage.c | 8 ++++---- tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c | 2 +- 12 files changed, 32 insertions(+), 21 deletions(-)
--- a/tools/testing/selftests/bpf/benchs/bench_trigger.c +++ b/tools/testing/selftests/bpf/benchs/bench_trigger.c @@ -4,6 +4,7 @@ #include <argp.h> #include <unistd.h> #include <stdint.h> +#include "bpf_util.h" #include "bench.h" #include "trigger_bench.skel.h" #include "trace_helpers.h" @@ -72,7 +73,7 @@ static __always_inline void inc_counter( unsigned slot;
if (unlikely(tid == 0)) - tid = syscall(SYS_gettid); + tid = sys_gettid();
/* multiplicative hashing, it's fast */ slot = 2654435769U * tid; --- a/tools/testing/selftests/bpf/bpf_util.h +++ b/tools/testing/selftests/bpf/bpf_util.h @@ -6,6 +6,7 @@ #include <stdlib.h> #include <string.h> #include <errno.h> +#include <syscall.h> #include <bpf/libbpf.h> /* libbpf_num_possible_cpus */
static inline unsigned int bpf_num_possible_cpus(void) @@ -59,4 +60,12 @@ static inline void bpf_strlcpy(char *dst (offsetof(TYPE, MEMBER) + sizeof_field(TYPE, MEMBER)) #endif
+/* Availability of gettid across glibc versions is hit-and-miss, therefore + * fallback to syscall in this macro and use it everywhere. + */ +#ifndef sys_gettid +#define sys_gettid() syscall(SYS_gettid) +#endif + + #endif /* __BPF_UTIL__ */ --- a/tools/testing/selftests/bpf/map_tests/task_storage_map.c +++ b/tools/testing/selftests/bpf/map_tests/task_storage_map.c @@ -12,6 +12,7 @@ #include <bpf/bpf.h> #include <bpf/libbpf.h>
+#include "bpf_util.h" #include "test_maps.h" #include "task_local_storage_helpers.h" #include "read_bpf_task_storage_busy.skel.h" @@ -115,7 +116,7 @@ void test_task_storage_map_stress_lookup CHECK(err, "attach", "error %d\n", err);
/* Trigger program */ - syscall(SYS_gettid); + sys_gettid(); skel->bss->pid = 0;
CHECK(skel->bss->busy != 0, "bad bpf_task_storage_busy", "got %d\n", skel->bss->busy); --- a/tools/testing/selftests/bpf/prog_tests/bpf_cookie.c +++ b/tools/testing/selftests/bpf/prog_tests/bpf_cookie.c @@ -690,7 +690,7 @@ void test_bpf_cookie(void) if (!ASSERT_OK_PTR(skel, "skel_open")) return;
- skel->bss->my_tid = syscall(SYS_gettid); + skel->bss->my_tid = sys_gettid();
if (test__start_subtest("kprobe")) kprobe_subtest(skel); --- a/tools/testing/selftests/bpf/prog_tests/bpf_iter.c +++ b/tools/testing/selftests/bpf/prog_tests/bpf_iter.c @@ -226,7 +226,7 @@ static void test_task_common_nocheck(str ASSERT_OK(pthread_create(&thread_id, NULL, &do_nothing_wait, NULL), "pthread_create");
- skel->bss->tid = syscall(SYS_gettid); + skel->bss->tid = sys_gettid();
do_dummy_read_opts(skel->progs.dump_task, opts);
@@ -255,10 +255,10 @@ static void *run_test_task_tid(void *arg
 	union bpf_iter_link_info linfo;
 	int num_unknown_tid, num_known_tid;
 
-	ASSERT_NEQ(getpid(), syscall(SYS_gettid), "check_new_thread_id");
+	ASSERT_NEQ(getpid(), sys_gettid(), "check_new_thread_id");
 
 	memset(&linfo, 0, sizeof(linfo));
-	linfo.task.tid = syscall(SYS_gettid);
+	linfo.task.tid = sys_gettid();
 	opts.link_info = &linfo;
 	opts.link_info_len = sizeof(linfo);
 	test_task_common(&opts, 0, 1);
--- a/tools/testing/selftests/bpf/prog_tests/cgrp_local_storage.c
+++ b/tools/testing/selftests/bpf/prog_tests/cgrp_local_storage.c
@@ -63,14 +63,14 @@ static void test_tp_btf(int cgroup_fd)
 	if (!ASSERT_OK(err, "map_delete_elem"))
 		goto out;
 
-	skel->bss->target_pid = syscall(SYS_gettid);
+	skel->bss->target_pid = sys_gettid();
 
 	err = cgrp_ls_tp_btf__attach(skel);
 	if (!ASSERT_OK(err, "skel_attach"))
 		goto out;
 
-	syscall(SYS_gettid);
-	syscall(SYS_gettid);
+	sys_gettid();
+	sys_gettid();
 
 	skel->bss->target_pid = 0;
 
@@ -154,7 +154,7 @@ static void test_recursion(int cgroup_fd
 		goto out;
 
 	/* trigger sys_enter, make sure it does not cause deadlock */
-	syscall(SYS_gettid);
+	sys_gettid();
 
 out:
 	cgrp_ls_recursion__destroy(skel);
@@ -224,7 +224,7 @@ static void test_yes_rcu_lock(__u64 cgro
 		return;
 
 	CGROUP_MODE_SET(skel);
-	skel->bss->target_pid = syscall(SYS_gettid);
+	skel->bss->target_pid = sys_gettid();
 
 	bpf_program__set_autoload(skel->progs.yes_rcu_lock, true);
 	err = cgrp_ls_sleepable__load(skel);
--- a/tools/testing/selftests/bpf/prog_tests/core_reloc.c
+++ b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
@@ -1010,7 +1010,7 @@ static void run_core_reloc_tests(bool us
 	struct data *data;
 	void *mmap_data = NULL;
 
-	my_pid_tgid = getpid() | ((uint64_t)syscall(SYS_gettid) << 32);
+	my_pid_tgid = getpid() | ((uint64_t)sys_gettid() << 32);
 
 	for (i = 0; i < ARRAY_SIZE(test_cases); i++) {
 		char btf_file[] = "/tmp/core_reloc.btf.XXXXXX";
--- a/tools/testing/selftests/bpf/prog_tests/linked_funcs.c
+++ b/tools/testing/selftests/bpf/prog_tests/linked_funcs.c
@@ -20,7 +20,7 @@ void test_linked_funcs(void)
 	bpf_program__set_autoload(skel->progs.handler1, true);
 	bpf_program__set_autoload(skel->progs.handler2, true);
 
-	skel->rodata->my_tid = syscall(SYS_gettid);
+	skel->rodata->my_tid = sys_gettid();
 	skel->bss->syscall_id = SYS_getpgid;
 
 	err = linked_funcs__load(skel);
--- a/tools/testing/selftests/bpf/prog_tests/ns_current_pid_tgid.c
+++ b/tools/testing/selftests/bpf/prog_tests/ns_current_pid_tgid.c
@@ -23,7 +23,7 @@ static int get_pid_tgid(pid_t *pid, pid_
 	struct stat st;
 	int err;
 
-	*pid = syscall(SYS_gettid);
+	*pid = sys_gettid();
 	*tgid = getpid();
 
 	err = stat("/proc/self/ns/pid", &st);
--- a/tools/testing/selftests/bpf/prog_tests/rcu_read_lock.c
+++ b/tools/testing/selftests/bpf/prog_tests/rcu_read_lock.c
@@ -21,7 +21,7 @@ static void test_success(void)
 	if (!ASSERT_OK_PTR(skel, "skel_open"))
 		return;
 
-	skel->bss->target_pid = syscall(SYS_gettid);
+	skel->bss->target_pid = sys_gettid();
 
 	bpf_program__set_autoload(skel->progs.get_cgroup_id, true);
 	bpf_program__set_autoload(skel->progs.task_succ, true);
@@ -58,7 +58,7 @@ static void test_rcuptr_acquire(void)
 	if (!ASSERT_OK_PTR(skel, "skel_open"))
 		return;
 
-	skel->bss->target_pid = syscall(SYS_gettid);
+	skel->bss->target_pid = sys_gettid();
 
 	bpf_program__set_autoload(skel->progs.task_acquire, true);
 	err = rcu_read_lock__load(skel);
--- a/tools/testing/selftests/bpf/prog_tests/task_local_storage.c
+++ b/tools/testing/selftests/bpf/prog_tests/task_local_storage.c
@@ -23,14 +23,14 @@ static void test_sys_enter_exit(void)
 	if (!ASSERT_OK_PTR(skel, "skel_open_and_load"))
 		return;
 
-	skel->bss->target_pid = syscall(SYS_gettid);
+	skel->bss->target_pid = sys_gettid();
 
 	err = task_local_storage__attach(skel);
 	if (!ASSERT_OK(err, "skel_attach"))
 		goto out;
 
-	syscall(SYS_gettid);
-	syscall(SYS_gettid);
+	sys_gettid();
+	sys_gettid();
 
 	/* 3x syscalls: 1x attach and 2x gettid */
 	ASSERT_EQ(skel->bss->enter_cnt, 3, "enter_cnt");
@@ -99,7 +99,7 @@ static void test_recursion(void)
 
 	/* trigger sys_enter, make sure it does not cause deadlock */
 	skel->bss->test_pid = getpid();
-	syscall(SYS_gettid);
+	sys_gettid();
 	skel->bss->test_pid = 0;
 	task_ls_recursion__detach(skel);
 
--- a/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c
+++ b/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c
@@ -125,7 +125,7 @@ static void *child_thread(void *ctx)
 	struct child *child = ctx;
 	int c = 0, err;
 
-	child->tid = syscall(SYS_gettid);
+	child->tid = sys_gettid();
 
 	/* let parent know we are ready */
 	err = write(child->c2p[1], &c, 1);
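For readers skimming the hunks above: they replace open-coded syscall(SYS_gettid) invocations with a sys_gettid() helper. The helper's definition is not part of the quoted context; a minimal sketch of what such a wrapper usually looks like (illustrative only, not necessarily the exact definition this patch relies on) is:

#include <unistd.h>
#include <sys/syscall.h>

/* Minimal gettid() wrapper of the kind the hunks above switch to;
 * glibc only gained a native gettid() in version 2.30, so test code
 * commonly carries its own. Illustrative sketch, not this patch's code.
 */
static inline pid_t sys_gettid(void)
{
	return (pid_t)syscall(SYS_gettid);
}

Funneling every caller through one helper keeps the tests readable and gives a single place to adjust if the wrapper ever changes.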
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Xi Ruoyao xry111@xry111.site
commit f24f669d03f884a6ef95cca84317d0f329e93961 upstream.
Per the "Processor Specification Update" documentations referred by the intel-microcode-20240312 release note, this microcode release has fixed the issue for all affected models.
So don't disable PCID if the microcode is new enough. The precise minimum microcode revisions fixing the issue were provided by Pawan Gupta of Intel.
[ dhansen: comment and changelog tweaks ]
Signed-off-by: Xi Ruoyao xry111@xry111.site
Signed-off-by: Dave Hansen dave.hansen@linux.intel.com
Acked-by: Pawan Gupta pawan.kumar.gupta@linux.intel.com
Link: https://lore.kernel.org/all/168436059559.404.13934972543631851306.tip-bot2@t...
Link: https://github.com/intel/Intel-Linux-Processor-Microcode-Data-Files/releases...
Link: https://cdrdv2.intel.com/v1/dl/getContent/740518 # RPL042, rev. 13
Link: https://cdrdv2.intel.com/v1/dl/getContent/682436 # ADL063, rev. 24
Link: https://lore.kernel.org/all/20240325231300.qrltbzf6twm43ftb@desk/
Link: https://lore.kernel.org/all/20240522020625.69418-1-xry111%40xry111.site
Signed-off-by: Pawan Gupta pawan.kumar.gupta@linux.intel.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 arch/x86/mm/init.c | 23 ++++++++++++++---------
 1 file changed, 14 insertions(+), 9 deletions(-)
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -263,28 +263,33 @@ static void __init probe_page_size_mask(
 }
 
 /*
- * INVLPG may not properly flush Global entries
- * on these CPUs when PCIDs are enabled.
+ * INVLPG may not properly flush Global entries on
+ * these CPUs.  New microcode fixes the issue.
  */
 static const struct x86_cpu_id invlpg_miss_ids[] = {
-	X86_MATCH_VFM(INTEL_ALDERLAKE,		0),
-	X86_MATCH_VFM(INTEL_ALDERLAKE_L,	0),
-	X86_MATCH_VFM(INTEL_ATOM_GRACEMONT,	0),
-	X86_MATCH_VFM(INTEL_RAPTORLAKE,		0),
-	X86_MATCH_VFM(INTEL_RAPTORLAKE_P,	0),
-	X86_MATCH_VFM(INTEL_RAPTORLAKE_S,	0),
+	X86_MATCH_VFM(INTEL_ALDERLAKE,		0x2e),
+	X86_MATCH_VFM(INTEL_ALDERLAKE_L,	0x42c),
+	X86_MATCH_VFM(INTEL_ATOM_GRACEMONT,	0x11),
+	X86_MATCH_VFM(INTEL_RAPTORLAKE,		0x118),
+	X86_MATCH_VFM(INTEL_RAPTORLAKE_P,	0x4117),
+	X86_MATCH_VFM(INTEL_RAPTORLAKE_S,	0x2e),
 	{}
 };
 
 static void setup_pcid(void)
 {
+	const struct x86_cpu_id *invlpg_miss_match;
+
 	if (!IS_ENABLED(CONFIG_X86_64))
 		return;
 
 	if (!boot_cpu_has(X86_FEATURE_PCID))
 		return;
 
-	if (x86_match_cpu(invlpg_miss_ids)) {
+	invlpg_miss_match = x86_match_cpu(invlpg_miss_ids);
+
+	if (invlpg_miss_match &&
+	    boot_cpu_data.microcode < invlpg_miss_match->driver_data) {
 		pr_info("Incomplete global flushes, disabling PCID");
 		setup_clear_cpu_cap(X86_FEATURE_PCID);
 		return;
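On a running system, the loaded microcode revision that the new table is compared against is also visible in the "microcode" field of /proc/cpuinfo. As a quick userspace sketch (illustrative only, not part of this patch; the 0x42c threshold is just the INTEL_ALDERLAKE_L value from the table above, so substitute the entry matching your CPU model), one could check it like this:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	/* Example threshold taken from the invlpg_miss_ids table above
	 * (INTEL_ALDERLAKE_L); use the value matching your CPU model.
	 */
	unsigned long min_rev = 0x42c;
	unsigned long rev = 0;
	char line[256];
	FILE *f = fopen("/proc/cpuinfo", "r");

	if (!f) {
		perror("/proc/cpuinfo");
		return 1;
	}

	/* Only the first CPU is inspected; all CPUs normally report the
	 * same microcode revision.
	 */
	while (fgets(line, sizeof(line), f)) {
		if (!strncmp(line, "microcode", strlen("microcode"))) {
			char *colon = strchr(line, ':');

			if (colon)
				rev = strtoul(colon + 1, NULL, 0);
			break;
		}
	}
	fclose(f);

	printf("microcode revision 0x%lx: %s\n", rev,
	       rev >= min_rev ? "INVLPG erratum fixed, PCID can stay enabled"
			      : "older than the fixed revision");
	return 0;
}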
Hello,
On Mon, 10 Mar 2025 18:02:33 +0100 Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This is the start of the stable review cycle for the 6.12.19 release. There are 269 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 12 Mar 2025 17:04:00 +0000. Anything received after that time might be too late.
This rc kernel passes DAMON functionality test[1] on my test machine. Attaching the test results summary below. Please note that I retrieved the kernel from linux-stable-rc tree[2].
Tested-by: SeongJae Park sj@kernel.org
[1] https://github.com/damonitor/damon-tests/tree/next/corr
[2] 912e4f3f1c16 ("Linux 6.12.19-rc1")
Thanks, SJ
[...]
---
ok 9 selftests: damon: damos_tried_regions.py
ok 10 selftests: damon: damon_nr_regions.py
ok 11 selftests: damon: reclaim.sh
ok 12 selftests: damon: lru_sort.sh
ok 13 selftests: damon: debugfs_empty_targets.sh
ok 14 selftests: damon: debugfs_huge_count_read_write.sh
ok 15 selftests: damon: debugfs_duplicate_context_creation.sh
ok 16 selftests: damon: debugfs_rm_non_contexts.sh
ok 17 selftests: damon: debugfs_target_ids_read_before_terminate_race.sh
ok 18 selftests: damon: debugfs_target_ids_pid_leak.sh
ok 19 selftests: damon: sysfs_update_removed_scheme_dir.sh
ok 20 selftests: damon: sysfs_update_schemes_tried_regions_hang.py
ok 1 selftests: damon-tests: kunit.sh
ok 2 selftests: damon-tests: huge_count_read_write.sh
ok 3 selftests: damon-tests: buffer_overflow.sh
ok 4 selftests: damon-tests: rm_contexts.sh
ok 5 selftests: damon-tests: record_null_deref.sh
ok 6 selftests: damon-tests: dbgfs_target_ids_read_before_terminate_race.sh
ok 7 selftests: damon-tests: dbgfs_target_ids_pid_leak.sh
ok 8 selftests: damon-tests: damo_tests.sh
ok 9 selftests: damon-tests: masim-record.sh
ok 10 selftests: damon-tests: build_i386.sh
ok 11 selftests: damon-tests: build_arm64.sh # SKIP
ok 12 selftests: damon-tests: build_m68k.sh # SKIP
ok 13 selftests: damon-tests: build_i386_idle_flag.sh
ok 14 selftests: damon-tests: build_i386_highpte.sh
ok 15 selftests: damon-tests: build_nomemcg.sh
PASS
On 3/10/25 10:02, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 6.12.19 release. There are 269 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 12 Mar 2025 17:04:00 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v6.x/stable-review/patch-6.12.19-rc1... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-6.12.y and the diffstat can be found below.
thanks,
greg k-h
On ARCH_BRCMSTB using 32-bit and 64-bit ARM kernels, build tested on BMIPS_GENERIC:
Tested-by: Florian Fainelli florian.fainelli@broadcom.com
Hi Greg,
On 10/03/25 22:32, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 6.12.19 release. There are 269 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
No problems seen on x86_64 and aarch64 with our testing.
Tested-by: Harshit Mogalapalli harshit.m.mogalapalli@oracle.com
Thanks, Harshit
On Mon, 10 Mar 2025 18:02:33 +0100, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 6.12.19 release. There are 269 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 12 Mar 2025 17:04:00 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v6.x/stable-review/patch-6.12.19-rc1... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-6.12.y and the diffstat can be found below.
thanks,
greg k-h
All tests passing for Tegra ...
Test results for stable-v6.12:
    10 builds:	10 pass, 0 fail
    28 boots:	28 pass, 0 fail
   116 tests:	116 pass, 0 fail

Linux version:	6.12.19-rc1-g53db7cb59db6
Boards tested:	tegra124-jetson-tk1, tegra186-p2771-0000,
		tegra186-p3509-0000+p3636-0001, tegra194-p2972-0000,
		tegra194-p3509-0000+p3668-0000, tegra20-ventana,
		tegra210-p2371-2180, tegra210-p3450-0000, tegra30-cardhu-a04
Tested-by: Jon Hunter jonathanh@nvidia.com
Jon
On 3/10/25 10:02, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 6.12.19 release. There are 269 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 12 Mar 2025 17:04:00 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v6.x/stable-review/patch-6.12.19-rc1... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-6.12.y and the diffstat can be found below.
thanks,
greg k-h
Built and booted successfully on RISC-V RV64 (HiFive Unmatched).
Tested-by: Ron Economos re@w6rz.net
On Mon, 10 Mar 2025 at 22:48, Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This is the start of the stable review cycle for the 6.12.19 release. There are 269 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 12 Mar 2025 17:04:00 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v6.x/stable-review/patch-6.12.19-rc1... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-6.12.y and the diffstat can be found below.
thanks,
greg k-h
Results from Linaro’s test farm. No regressions on arm64, arm, x86_64, and i386.
Tested-by: Linux Kernel Functional Testing lkft@linaro.org
## Build
* kernel: 6.12.19-rc1
* git: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git
* git commit: 53db7cb59db64055714b1a7fd1cadd8d5ef6e2be
* git describe: v6.12.18-270-g53db7cb59db6
* test details: https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-6.12.y/build/v6.12....
## Test Regressions (compared to v6.12.15-529-g7d0ba26d4036)
## Metric Regressions (compared to v6.12.15-529-g7d0ba26d4036)
## Test Fixes (compared to v6.12.15-529-g7d0ba26d4036)
## Metric Fixes (compared to v6.12.15-529-g7d0ba26d4036)
## Test result summary
total: 77388, pass: 61961, fail: 2961, skip: 12426, xfail: 40
## Build Summary
* arc: 6 total, 5 passed, 1 failed
* arm: 143 total, 137 passed, 6 failed
* arm64: 58 total, 54 passed, 4 failed
* i386: 22 total, 19 passed, 3 failed
* mips: 38 total, 33 passed, 5 failed
* parisc: 5 total, 3 passed, 2 failed
* powerpc: 44 total, 43 passed, 1 failed
* riscv: 27 total, 24 passed, 3 failed
* s390: 26 total, 22 passed, 4 failed
* sh: 6 total, 5 passed, 1 failed
* sparc: 5 total, 3 passed, 2 failed
* x86_64: 50 total, 49 passed, 1 failed
## Test suites summary
* boot
* commands
* kselftest-arm64
* kselftest-breakpoints
* kselftest-capabilities
* kselftest-clone3
* kselftest-core
* kselftest-cpu-hotplug
* kselftest-exec
* kselftest-fpu
* kselftest-ftrace
* kselftest-futex
* kselftest-gpio
* kselftest-intel_pstate
* kselftest-ipc
* kselftest-kcmp
* kselftest-kvm
* kselftest-membarrier
* kselftest-memfd
* kselftest-mincore
* kselftest-mqueue
* kselftest-net
* kselftest-net-mptcp
* kselftest-openat2
* kselftest-ptrace
* kselftest-rseq
* kselftest-rtc
* kselftest-seccomp
* kselftest-sigaltstack
* kselftest-size
* kselftest-tc-testing
* kselftest-timers
* kselftest-tmpfs
* kselftest-tpm2
* kselftest-user_events
* kselftest-vDSO
* kselftest-x86
* kunit
* kvm-unit-tests
* libgpiod
* libhugetlbfs
* log-parser-boot
* log-parser-build-clang
* log-parser-build-gcc
* log-parser-test
* ltp-capability
* ltp-commands
* ltp-containers
* ltp-controllers
* ltp-crypto
* ltp-cve
* ltp-fcntl-locktests
* ltp-fs
* ltp-fs_bind
* ltp-fs_perms_simple
* ltp-hugetlb
* ltp-ipc
* ltp-math
* ltp-mm
* ltp-nptl
* ltp-pty
* ltp-sched
* ltp-smoke
* ltp-syscalls
* ltp-tracing
* perf
* rcutorture
-- Linaro LKFT https://lkft.linaro.org
On Mon, Mar 10, 2025 at 06:02:33PM +0100, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 6.12.19 release. There are 269 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Tested-by: Mark Brown broonie@kernel.org
On 3/10/25 11:02, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 6.12.19 release. There are 269 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 12 Mar 2025 17:04:00 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v6.x/stable-review/patch-6.12.19-rc1... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-6.12.y and the diffstat can be found below.
thanks,
greg k-h
Compiled and booted on my test system. No dmesg regressions.
Tested-by: Shuah Khan skhan@linuxfoundation.org
thanks,
-- Shuah
* Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This is the start of the stable review cycle for the 6.12.19 release. There are 269 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 12 Mar 2025 17:04:00 +0000. Anything received after that time might be too late.
Hi Greg
6.12.19-rc1 compiles, boots and runs here on x86_64 (AMD Ryzen 5 7520U, Slackware64-current), no regressions observed.
Tested-by: Markus Reichelt lkt+2023@mareichelt.com
On Mon, 10 Mar 2025 18:02:33 +0100 Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This is the start of the stable review cycle for the 6.12.19 release. There are 269 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 12 Mar 2025 17:04:00 +0000. Anything received after that time might be too late.
Boot-tested under QEMU for Rust x86_64, arm64 and riscv64; build-tested for loongarch64:
Tested-by: Miguel Ojeda ojeda@kernel.org
Thanks!
Cheers, Miguel
Am 10.03.2025 um 18:02 schrieb Greg Kroah-Hartman:
This is the start of the stable review cycle for the 6.12.19 release. There are 269 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Builds, boots and works on my 2-socket Ivy Bridge Xeon E5-2697 v2 server. No dmesg oddities or regressions found.
Tested-by: Peter Schneider pschneider1968@googlemail.com
Beste Grüße, Peter Schneider
The kernel, bpf tool, perf tool, and kselftest build fine for v6.12.19-rc1 on x86 and arm64 Azure VMs.
Kernel binary size for x86 build:
    text      data       bss       dec      hex  filename
27756160  17713706   6397952  51867818  31770aa  vmlinux

Kernel binary size for arm64 build:
    text      data       bss       dec      hex  filename
36399457  14996921   1052816  52449194  3204faa  vmlinux
Tested-by: Hardik Garg hargar@linux.microsoft.com
Thanks, Hardik